OpenAI Tweaks GPT-5 UI after User Reactions: Last week’s long-awaited GPT-5 had a big idea: simplify the models served to mainstream users. Instead of having 700-million-plus weekly users pick their model from over half a dozen OpenAI LLM AIs, GPT-5 now acts as a traffic cop (aka a router), selecting the best model for each user query. But many users pushed back, wanting direct choice. So OpenAI implemented a selector with ‘Auto’, ‘Fast’, and ‘Thinking’ modes. This highlights the difficulty of creating easier user interfaces and experiences (UI/UX) for LLM AIs. They’re probably the most opaque technology products to date, and yet have the most open-ended, general uses for businesses and consumers. Lots of teaching moments for OpenAI and the industry from the launch. A rough sketch of the routing idea follows below. More here.
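For readers who want a concrete picture of what a ‘router plus selector’ might look like, here is a minimal sketch. The model names and the complexity heuristic are assumptions for illustration only; OpenAI has not published its actual routing logic.

```swift
// Hypothetical sketch of routing a query to a model based on the selected mode.
// Model names and the length-based heuristic are illustrative assumptions only.
enum ChatMode {
    case auto      // let the router decide
    case fast      // prioritize low latency
    case thinking  // prioritize deeper reasoning
}

func pickModel(for query: String, mode: ChatMode) -> String {
    switch mode {
    case .fast:
        return "gpt-5-fast"       // hypothetical low-latency model
    case .thinking:
        return "gpt-5-thinking"   // hypothetical reasoning model
    case .auto:
        // A real router would weigh many signals; query length stands in here.
        return query.count > 200 ? "gpt-5-thinking" : "gpt-5-fast"
    }
}

print(pickModel(for: "What's 2 + 2?", mode: .auto))  // -> "gpt-5-fast"
```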
Users Split on OpenAI GPT-5 ‘Personality’: Another unexpected pushback on the GPT-5 launch concerned GPT-5’s ‘personality’. OpenAI made the general ‘voice’ more to the point, and less ‘empathetic’ and ‘supportive’. This generated lots of feedback from users who’d gotten used to ‘their’ ChatGPT, with its own ‘voice’ and ‘personality’, especially in the context of ‘companionship’ and ‘friendship’. On the other hand, many business users found the change welcome, with a more professional and matter-of-fact approach to answering their queries. Looks like this capability will likely also need a ‘selector’ like the model picker above. This highlights the inclination of mainstream users to ‘humanize’ AIs and approach these services by anthropomorphizing them. The issue becomes even more acute in the voice modality, with users getting emotionally ‘attached’ to their AIs. Again, another industry ‘teaching moment’. More here.
Amazon’s AI-enabled Alexa+ Out: Amazon’s long-awaited, LLM AI-revamped Alexa is out, branded as Alexa+. The service is being rolled out on Amazon’s display-based Echo units (the Echo Show line), and comes free for Amazon Prime users, with a $20/month fee for non-Prime users. The initial reviews are mixed, with teething problems ranging from response quality to capabilities that still need further improvement. Part of the issue remains blending the legacy and generative AI code bases for the product, as well as making sure the collection of LLM AI models behind the service can scale to millions of users while providing accurate, reliable answers at a latency users find acceptable. The good news is that Amazon should be able to roll out software updates on an iterative basis. More here.
Apple’s AI Siri and AI Hardware Roadmap: Apple is the other ‘legacy’ voice assistant company, with its industry-pioneering Siri service also undergoing a set of LLM AI upgrades. Apple is tackling the more ambitious challenge of integrating its ‘App Intents’ software framework, which lets its millions of developers build bottom-up AI functionality into their apps and services on Apple’s iPhone, Mac, and other platforms. This promises a suite of creative AI applications that could be industry-defining. Apple also has a robust AI hardware roadmap that includes home tabletop robots, home security appliances, and other devices that can manage and deliver customized services optimized for families to use together in a collaborative fashion. This strategy will roll out over the next 2-3 years. A small sketch of what an App Intent looks like follows below. More here and here.
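For context, App Intents is Apple’s Swift framework for exposing discrete pieces of app functionality to the system (and to Siri). Here is a minimal sketch of what a developer-defined intent roughly looks like; the intent name, parameter, and order-status logic are made up for illustration.

```swift
import AppIntents

// Hypothetical example intent; the name, parameter, and lookup are illustrative only.
struct CheckOrderStatusIntent: AppIntent {
    static var title: LocalizedStringResource = "Check Order Status"

    @Parameter(title: "Order Number")
    var orderNumber: String

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // A real app would look this up in its own data store or backend.
        let status = "shipped"
        return .result(dialog: "Order \(orderNumber) is \(status).")
    }
}
```

The promise described above is that once apps declare intents like this, an LLM-powered Siri could discover and chain them on the user’s behalf.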
Anthropic & Google Ramp Up AI Memory: One popular core feature of OpenAI’s ChatGPT service, for its hundreds of millions of users, is the robust memory capability introduced last year. Now Anthropic and Google are rolling out their own versions of user memory in their products, with differentiated approaches by each company. The core intent is for the AIs to become a lot more relevant and responsive to millions of end users, with customized and personalized AI services. And of course to make it more difficult for users to switch to competing products. The memory capabilities being rolled out are still fairly basic, and a lot of innovation is likely here from the industry at large. A toy sketch of the basic idea follows below. More here.
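At its simplest, ‘memory’ means facts saved from earlier conversations get carried into later prompts. The toy sketch below illustrates that idea only; it is not any vendor’s actual implementation, and all names are hypothetical.

```swift
// Toy illustration of chat "memory": remembered facts get prepended to later prompts.
// Hypothetical sketch only; not how any vendor actually implements memory.
struct MemoryStore {
    private(set) var facts: [String] = []

    mutating func remember(_ fact: String) {
        facts.append(fact)
    }

    func personalize(_ prompt: String) -> String {
        guard !facts.isEmpty else { return prompt }
        let context = facts.map { "- \($0)" }.joined(separator: "\n")
        return "Known about this user:\n\(context)\n\n\(prompt)"
    }
}

var memory = MemoryStore()
memory.remember("Prefers concise, bulleted answers")
print(memory.personalize("Summarize this week's AI news."))
```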
Other AI Readings for the Weekend:
US Immigration Policy revisions and Silicon Valley AI Talent Hires. More here and here.
AI neocloud CoreWeave holds steady through lockup release. More here.
(Additional Note: Doing a new podcast series on AI from a Gen Z and later perspective called AI Ramblings, co-hosted with my nephew Neal Makwana. Now 16 weekly episodes and counting. More in the latest AI Ramblings Episode 16 here, on AI issues of the day, as well as our latest ‘Reads’ and ‘Obsessions’ of the week. Neal’s latest piece, ‘The New Terms of Service’, is here.)
Up next, the Sunday ‘The Bigger Picture’ tomorrow. Stay tuned.
(NOTE: The discussions here are for information purposes only, and not meant as investment advice at any time. Thanks for joining us here)