AI: New Weekly Summary
...week ending July 21, 2023
Starting a new feature for Saturdays here on AI: Reset to Zero (AI: RTZ). AI developments are building fast and furious every week, even in these record-hot summer days. Thought it’d be useful to have a weekly summary of the key AI events of note in one place, with useful links for longer, leisurely weekend reading.
It’s not meant to be a comprehensive look at the week’s AI developments, just the highlights from my perspective. Will also have a separate section noting AI research papers and developments of note, from the dozens and more flowing through each week. And pointers to pieces you may have missed on AI: RTZ this week, with brief summaries.
Here we go for this week ending July 21, 2023. Beverage of choice recommended:
Meta-Microsoft Llama 2 open source LLM AI launch: Meta’s long-anticipated Llama 2 launch was well executed, and the Meta team should be congratulating itself on a job well done. Lots of well-coordinated support from luminary investment firms. (Small note, they officially changed the spelling from LLaMA to Llama…easier to type!). A publicly highlighted partnership with Microsoft, with some shade at their partner OpenAI. They released three versions of Llama 2 models at 7, 13 and 70 billion parameters (vs. 175 billion for OpenAI GPT-3 and a reported 1.8 trillion for GPT-4). It’s an open-source release with weights and commercial-use options, ready for businesses to use on Microsoft’s Azure Cloud. Notably, Meta made licenses for any company with over 700 million users available only at Meta’s ‘sole discretion’. That of course is targeted at Apple, Snap and others around the world. Amazon AWS, Hugging Face and other cloud datacenter hubs are also distribution partners. Also notable was Meta’s partnership with Qualcomm to make Llama 2 work on phones (‘AI to the Edge’ as I’ve been discussing). Lots of technical details on Llama 2 here. My in-depth take here with additional links. Earlier background piece here on Meta AI open source efforts.
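For the technically inclined, trying one of the released models is now a few lines of code via Hugging Face (a minimal sketch, assuming you’ve requested and been granted access to the gated Llama 2 weights; the model ID shown is the 7B chat variant as listed on Hugging Face):

```python
# Minimal sketch: loading the 7B chat variant of Llama 2 via Hugging Face transformers.
# Assumes access to the gated weights has been granted and `transformers` is installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # 13B and 70B variants follow the same pattern

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain the difference between the 7B, 13B and 70B Llama 2 models in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```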
Microsoft CoPilot for Office 365 ‘Sticker Shock’ Pricing: Microsoft also had a big week beyond Llama 2, with new details on CoPilot for Office 365 pricing. Industry watchers were surprised at $30/user/month, higher than the $5-$20 expectations. The price is driven both by higher underlying AI compute costs and by the opportunity for incremental revenues of $5 to over $15 billion a year, by street estimates. Google hasn’t yet announced pricing for AI-enabled Google Workspace. Enterprises also have options from Salesforce and others. Early days for AI product costs and price discovery. Have a more detailed take here on underlying cost and pricing issues, with additional links.
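As a rough back-of-envelope sketch (the seat counts below are my own illustration, not figures from Microsoft or the analysts), the street’s revenue range implies roughly 14 to over 40 million paid CoPilot seats at that price:

```python
# Back-of-envelope: how many paid CoPilot seats the street's revenue estimates imply.
# The $30/user/month price is from the announcement; the $5B-$15B/year range is the
# street estimate cited above; the seat math itself is just my own illustration.
price_per_user_per_year = 30 * 12  # $360/user/year

for annual_revenue in (5e9, 15e9):
    seats = annual_revenue / price_per_user_per_year
    print(f"${annual_revenue / 1e9:.0f}B/year implies ~{seats / 1e6:.0f} million paid seats")
```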
OpenAI ChatGPT ‘Behavior Drift’: Lots of discussion around OpenAI’s GPT-4 and ChatGPT showing anecdotal and new research-driven indications of ‘model drift’, or degradation of results for many users. It’s been framed by the media as “the decline of GPT-4”, amplified in Twitter memes. Analysis centers on post-prompt behaviors, with results varying on the same queries when repeated. Issues around ‘steerability’ are of particular interest. OpenAI proactively announced some fixes today that don’t address all the issues, but should be viewed by the community as a start. Have a more detailed look here at the underlying research paper and results, with more links.
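For a feel of how this kind of drift gets measured, here’s a minimal sketch of the basic idea (my own illustration, not the paper’s actual code): pin two dated GPT-4 snapshots, send the same prompt at temperature 0, and compare the answers.

```python
# Minimal sketch of the drift-checking idea: same prompt, temperature 0,
# against two dated GPT-4 snapshots, then compare the outputs.
# Illustration only, not the code from the paper discussed above.
# Uses the OpenAI Python library as it existed in mid-2023; assumes
# OPENAI_API_KEY is set in the environment.
import openai

PROMPT = "Is 17077 a prime number? Think step by step and answer yes or no."
SNAPSHOTS = ["gpt-4-0314", "gpt-4-0613"]  # March vs. June 2023 model versions

for model in SNAPSHOTS:
    response = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
        temperature=0,  # minimize sampling noise so differences reflect the model, not randomness
    )
    print(model, "->", response["choices"][0]["message"]["content"][:200])
```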
Google Founder back in the House on AI: The Wall Street Journal highlighted co-founder Sergey Brin being back on the Google campus working on AI. That makes two mega public tech giants, Google and Meta, with founders focused on the AI north star. Three of course, if one includes Demis Hassabis, Google’s new global AI head, who co-founded London-based DeepMind, now Google’s flagship AI center of gravity. Discussion centers on Google’s work on Gemini, their next LLM AI to go head-to-head with OpenAI’s GPT-4 and beyond. The story also covers Google’s recent consolidation of AI efforts under Hassabis at Google DeepMind in London. Additional reports and details here and here. On the flip side, this piece in The Information highlights Google’s tougher months of late: post-ChatGPT ‘Code Reds’, recent layoffs, and more.
Apple LLM AI strategy with ‘Apple GPT’, aka Ajax: Story by Bloomberg on Apple working hard on their LLM AI options. Discusses the effort being led by AI head John Giannandrea (‘JG’) and software head Craig Federighi. Also highlights Apple’s need to balance privacy with leveraging user data to reinforce AI models. The story makes clear they’re deeply focused on applying machine learning across a whole range of hardware and software products, including the upcoming Vision Pro platform. But it is still early days on their LLM AI strategy. Brief mention of discussions with OpenAI that don’t seem to have gone far. The story does highlight that their test product Ajax is built on JAX, Google’s machine learning framework, and that it runs on Google Cloud. Also highlights Apple’s use of Amazon AWS infrastructure for a wider range of Apple services. I view Apple as one of the best-positioned Big Tech companies as AI really gets going. My detailed take on Apple’s AI opportunities here, with more links.
Biden White House AI initiatives: The White House got 7 AI firms to take an AI safety pledge: Microsoft, OpenAI, Google, Meta, Amazon, Anthropic and Inflection AI. The plan is for them to allow examination of their AI applications and services by outside experts, and to share information back with government bodies and agencies. Axios also had an interview with WH Chief of Staff Jeff Zients on AI matters, detailing a push for more AI authority. All this of course supplements Senator Schumer’s Fall efforts around AI regulation to come. Finally on the regulatory front, The NY Times underlines that it is early days for AI regulation in the US vs. Europe, with its upcoming AI Act.
In AI Papers of note for those more technically inclined,
Would highlight again the ChatGPT drift paper discussed above. It provides an up-front perspective on using LLM AI models like GPT-4/ChatGPT, and on the perceived degradation issues around results.
Another paper of note is this one by Stanford researchers proposing an alternative to reinforcement learning with a different algorithmic approach called Direct Preference Optimization (DPO). Promising results on the 6-billion-parameter models tested. This one is notable because, as I’ve discussed before, reinforcement learning, human and/or AI driven (aka RLHF and RLAI), remains the key way today to make foundation LLM AI models more reliable and accurate, and perform ‘magical’ things. Alternative ways to do the same at scale with less expense and compute would be game-changing.
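For those who want the gist in code, here’s a minimal sketch of the DPO loss (my own restatement of the paper’s objective in PyTorch, not the authors’ code): instead of fitting a separate reward model and then running reinforcement learning against it, DPO directly pushes the model to assign higher likelihood to the preferred response than to the rejected one, relative to a frozen reference model.

```python
# Minimal sketch of the DPO loss from the paper discussed above (my own restatement).
# Inputs are summed log-probabilities of the chosen (preferred) and rejected responses,
# as batched tensors, under the policy being trained and under a frozen reference model.
import torch.nn.functional as F

def dpo_loss(policy_logp_chosen, policy_logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    # How much the policy prefers each response relative to the reference model
    chosen_ratio = policy_logp_chosen - ref_logp_chosen
    rejected_ratio = policy_logp_rejected - ref_logp_rejected
    # Push the preferred response's ratio above the rejected one's,
    # with no reward model or RL loop needed
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()
```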
Finally, I would highlight this paper proposing an alternative way to improve Transformers (the ‘T’ in GPT, and key to LLM AI models today), with what they call Retentive Networks (RetNet). Again, promising metrics on smaller models.
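For the technically curious, the core ‘retention’ idea can be sketched in a few lines (my own paraphrase of the paper’s recurrent formulation, not the authors’ code): each token folds its key/value outer product into a running state with a fixed decay, which is what lets RetNet generate with constant memory per step at inference time, unlike standard attention.

```python
# Minimal sketch (my paraphrase, not the paper's code) of the recurrent form of
# "retention" that RetNet proposes in place of attention: a fixed decay gamma lets
# the sequence be processed with an O(1) state per step at inference time.
import torch

def recurrent_retention(Q, K, V, gamma=0.9):
    # Q, K, V: (seq_len, d) single-head projections of the input sequence
    seq_len, d = Q.shape
    state = torch.zeros(d, d)  # running K^T V summary, decayed at each step
    outputs = []
    for n in range(seq_len):
        state = gamma * state + torch.outer(K[n], V[n])  # S_n = gamma * S_{n-1} + K_n^T V_n
        outputs.append(Q[n] @ state)                     # o_n = Q_n S_n
    return torch.stack(outputs)
```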
Additionally, in terms of AI: RTZ pieces you may have missed this week that could make good weekend reading,
I did a piece trying to simply explain the deep math of AI computing vs traditional software computing, with a number of background links.
Also had a deeper look at the AI and other issues driving the historic Hollywood writers and actors strike, and context against tech cycles. It even includes TikTok in the discussion!
And took a stab at framing Elon Musk’s xAI initiative going up against OpenAI and others. He’s a dark horse in AI, but as with Apple, it’s early days, and he’s always been a great Wild Card.
For ongoing reference, I’ll point to my recent thoughts on AI over the next three years, along with my second half 2023 overview, as we go through the Summer.
That makes for a busy week indeed, and hopefully useful fodder for weekend AI reading.
Finally, if you’re new here, a guidepost:
There are almost 60 posts here on AI, and counting. Posting daily. If you’re looking for more pieces of interest, here are ten of note. They will give you context on AI from my perspective (some overlap with pieces above), and a taste of what to expect.
1. AI Tech Wave vs PC and Internet
2. AI Deep Math Explained simply
5. Meta driving AI open source
7. Why the Apple Vision Pro is a historical bargain
9. Google Empire Kinda Strikes back
10. AI Reinforcement Learning Loops Drive it all
The direct link to the overall site is here. No passwords or login needed.
And no paid subscriptions required.
And if you want to know more about me and this site, please click here.
Thanks for reading thus far. Please join us. Stay tuned.


