The Bigger Picture, Sunday October 22, 2023
In last Sunday’s ‘The Bigger Picture’, I made the point that the tech industry is designed and destined to chase new technologies organically, for their own sake, raising and investing billions in the process. In this week’s bigger picture, I’d like to extend that point. With new technologies, we often become more enamored of what they’ll do eventually than of what they do already. And we need to build fast on those near-term things, and not let our far-term goals distract us from what the new technologies are capable of NOW, and in the near-NOW. Let me explain.
We got safer ‘Driver Assist’ in our cars while waiting for full self-driving cars. We got free audio and video communication with anyone and everyone on a planet with over five billion people on smartphones, while waiting for a metaverse where we can all live a second life. We are currently fearful of, and enamored with, AI surpassing our intelligence (aka AGI and ‘superintelligence’), while we could be getting ChatGPT and AI-assisted Google Search to do the uniquely useful things they already do. And they’re just now starting to go multimodal, with voice on top of text. Just try ChatGPT Plus with Voice when you have a chance.
This point about the near and the far came to mind reading about Bill Gates’ takes to date, and his latest take on OpenAI and ChatGPT. Remember that he has a front-row seat here, given his Microsoft pedigree and relationships:
“While Gates is no longer officially involved in Microsoft's day-to-day operations, he continues to serve as an advisor and is familiar with OpenAI's leadership team and ideas. Microsoft is OpenAI's largest shareholder with a 49 percent stake.”
“In February 2023, Gates told Forbes that he didn't believe OpenAI's approach of developing AI models without explicit symbolic logic would scale. However, OpenAI had convinced him that scaling could lead to significant emergent capabilities.”
And as a hard-core software coder himself, Bill understands better than most what LLM AI and generative AI are potentially capable of, even though coders themselves can’t truly grok how the underlying AI math really works. Hence his ultimate optimism on AI ‘Explainability’ and ‘Transparency’:
“Another important milestone, according to Gates, is the development of understandable AI. ‘It's weird, we know the algorithm, but we don't really know how it works,’ says Gates. He believes this task will be fully solved in ‘the next decade.’”
But to get to the point of tech near and far, Bill left his interviewers with this take on OpenAI’s breathtakingly impressive GPT-4 vs an upcoming GPT-5 and beyond:
“Bill Gates does not expect GPT-5 to be much better than GPT-4”
“In an interview with the German business newspaper Handelsblatt, Microsoft founder Bill Gates says there are many reasons to believe that GPT technology has reached a plateau.”
“There are "many good people" working at OpenAI who are convinced that GPT-5 will be significantly better than GPT-4, including OpenAI CEO Sam Altman, Gates says. But he believes that current generative AI has reached a ceiling - though he admits he could be wrong.”
“As a benchmark for what he sees as a major quality improvement, he cited the big jump in quality from GPT-2 to GPT-4, which he described as ‘incredible.’”
“It is currently unknown when OpenAI will begin training on GPT-5 or release the model. The company is rumored to be working on several prototypes with the primary goal of increasing model efficiency to reduce inference costs.”
“Two to five years to cheaper, more reliable AI”
“Still, Gates sees great potential in today's AI systems, especially if high development costs and error rates can be reduced and reliability improved. He believes this can be achieved in the next two to five years, making generative AI viable for medical applications such as drug development or health advice.”
Like most people, I have a high degree of respect for Bill, given his extraordinary accomplishments. I also had the chance to interact with him many times during my time heading up Internet Research at Goldman Sachs in the nineties.
So his assessments of AI today and in the future, via OpenAI and other companies doing AI big and small, are directionally right.
But the key point I’d emphasize is to focus in the near term on AI that can do wonders in areas where reliability of 80% or less is more than good enough. Which, for now, is NOT fields like healthcare, finance, and legal, or other areas where 100% accuracy is essential and not a ‘nice to have’.
Used with that handicap in mind, and while reminding users of that handicap, LLM AI as it works today offers amazing opportunities in so many fields, ranging from education to knowledge-based work, to yes, even drug development, where it helps humans CREATE and REASON faster through the ever-growing amount of information flooding into every discipline. Augmenting us, as it were.
This post today applies more to users finding bigger uses for the AI applications and services being released today than their designers intended. Much like how early Twitter users found novel ways to use Twitter’s core functionality by inventing ‘retweets’, hashtags and ‘tweet storms’. Or how TikTok became a gusher for Creators to reimagine music and songs licensed by TikTok from the music industry, to tell entirely new stories that entertain and inform. And how the same is happening with short video clips of movies and TV shows.
And that’s not even counting the practical, unassuming tweaks by new companies to old services that became the foundation of new internet empires. Things available near, that surpassed the far-distant dreams of the original services. Examples here include Snapchat making social media feeds ‘ephemeral’, where posts disappeared forever after being shared with intended friends and family for a short time. Meant to counter the ‘internet never forgets’ unintended consequence of posts on Facebook.
Or a little company like Instagram delivering humble photo filters that made digital pictures taken by new-fangled smartphone cameras so much cooler, without any training or editing. And how an emerging startup like WhatsApp hacked a way to do SMS-style messaging across hundreds of telecom operators worldwide with just a handful of talented employees, and created an unstoppable juggernaut that Facebook (now Meta) HAD to buy for $19 billion as soon as possible. By the way, with the same alacrity that Mark Zuckerberg of course bought Instagram.
Prosaic but useful tweaks that fundamentally changed the nature of the underlying platforms, far beyond what the companies that spawned them imagined or intended. AI has the potential to do the same with user and creator creativity. With the end result being unexpected mega-successes in AI far beyond ChatGPT, while we’re waiting for the FAR-away potential of superintelligent AI agents, and so many other wondrous AI capabilities and services.
And just keeping that in mind, and using AI optimized into the user interfaces (UIs) of applications and services, will help humans do so many things so much better, TODAY. Not tomorrow, while we wait for the FAR. Leverage the NEAR-now. Get maximum utility from the tech we have, not the tech we’re dreaming of. We’ll get THERE when we get there.
A bigger picture to keep in mind as we accelerate on our AI Tech Wave journey. Stay tuned.
(NOTE: The discussions here are for information purposes only, and not meant as investment advice at any time. Thanks for joining us here)