In my very first piece I published on ‘AI: Reset to Zero’ over 250 posts ago last year, I highlighted that “Today’s LLM AIs, and AI Chat Bots, have the ‘memory of a goldfish’”. Like Dory of ‘Finding Nemo’ fame.
Every series of prompts and queries into ChatGPT and other LLM AI models of its ilk is like re-introducing oneself to the AI ‘wonder’ product on almost every use. As I added back in that early post on May 12, 2023, titled “AI: Memory of a Goldfish”:
“Memory can meaningfully enable new LLM AI User experiences, changing current UI/UX at its core.”
“Next up is Persistent memory by user that remembers and tracks their needs and queries independent of the centralized foundation LLM models.”
“Much more to come here.”
And now almost 15 months after OpenAI’s ‘ChatGPT moment’ that kicked off this cycle of the AI Tech Wave, and ensuing AI infrastructure gold rush, we may have more progress on AI and memory. OpenAI today announced some new Memory features and controls for ChatGPT:
“We’re testing the ability for ChatGPT to remember things you discuss to make future chats more helpful.”
“You're in control of ChatGPT's memory. You can explicitly tell it to remember something, ask it what it remembers, and tell it to forget conversationally or through settings. You can also turn it off entirely.”
“We are rolling out to a small portion of ChatGPT free and Plus users this week to learn how useful it is. We will share plans for broader roll out soon.”
The Verge’s David Pierce provides more context in “ChatGPT is getting ‘memory’ to remember who you are and what you like”:
“Talking to an AI chatbot can feel a bit like Groundhog Day after a while, as you tell it for the umpteenth time how you like your emails formatted and which of those “fun things to do this weekend” you’ve already done six times. OpenAI is trying to fix that and personalize its own bot in a big way. It’s rolling out “memory” for ChatGPT, which will allow the bot to remember information about you and your conversations over time.”
So far so good. But the details show that these are but baby steps by OpenAI with its flagship ChatGPT, and that they currently require some effort from the user:
“Memory works in one of two ways. You can tell ChatGPT to remember something specific about you: you always write code in Javascript, your boss’s name is Anna, your kid is allergic to sweet potatoes. Or ChatGPT can simply try to pick up those details over time, storing information about you as you ask questions and get answers. In either case, the goal is for ChatGPT to feel a little more personal and a little smarter, without needing to be reminded every time.”
“Each custom GPT you use will have its own memory, too. OpenAI uses the Books GPT as an example: with memory turned on, it can automatically remember which books you’ve already read and which genres you like best. There are lots of places in the GPT Store you can imagine memory might be useful, for that matter. The Tutor Me could offer a much better long-term course load once it knows what you know; Kayak could go straight to your favorite airlines and hotels; GymStreak could track your progress over time.”
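To make the mechanics concrete, here is a purely hypothetical sketch of what a per-user, per-GPT memory layer could look like conceptually. None of the names below (MemoryStore, remember, recall, forget, set_enabled) come from OpenAI's actual implementation or API; they are only meant to illustrate the controls described above: explicitly remembering a fact, asking what is remembered, telling it to forget, turning memory off entirely, and giving each custom GPT its own memory.

```python
from collections import defaultdict


class MemoryStore:
    """Toy illustration of per-user, per-GPT memory controls.

    Hypothetical sketch only: not OpenAI's API or implementation,
    just the remember / recall / forget / opt-out surface described
    in the announcement and The Verge's coverage.
    """

    def __init__(self) -> None:
        # memories[(user_id, gpt_id)] -> list of remembered facts
        self._memories: dict[tuple[str, str], list[str]] = defaultdict(list)
        # per-user on/off switch; memory defaults to on
        self._enabled: dict[str, bool] = defaultdict(lambda: True)

    def set_enabled(self, user_id: str, enabled: bool) -> None:
        """'You can also turn it off entirely.'"""
        self._enabled[user_id] = enabled
        if not enabled:
            # Drop existing memories when the user opts out.
            for key in [k for k in self._memories if k[0] == user_id]:
                del self._memories[key]

    def remember(self, user_id: str, gpt_id: str, fact: str) -> None:
        """Explicit memory: 'your boss's name is Anna', and so on."""
        if self._enabled[user_id]:
            self._memories[(user_id, gpt_id)].append(fact)

    def recall(self, user_id: str, gpt_id: str) -> list[str]:
        """'Ask it what it remembers': stored facts for this user and GPT."""
        return list(self._memories.get((user_id, gpt_id), []))

    def forget(self, user_id: str, gpt_id: str, fact: str | None = None) -> None:
        """'Tell it to forget' one fact, or wipe this GPT's memory entirely."""
        key = (user_id, gpt_id)
        if fact is None:
            self._memories.pop(key, None)
        elif fact in self._memories.get(key, []):
            self._memories[key].remove(fact)


if __name__ == "__main__":
    store = MemoryStore()
    store.remember("user-1", "books-gpt", "prefers science fiction")
    store.remember("user-1", "tutor-gpt", "comfortable with basic calculus")

    print(store.recall("user-1", "books-gpt"))   # Books GPT memories only
    print(store.recall("user-1", "tutor-gpt"))   # Tutor GPT has its own memory

    store.forget("user-1", "books-gpt", "prefers science fiction")
    store.set_enabled("user-1", False)           # turn memory off entirely
    print(store.recall("user-1", "books-gpt"))   # []
```

The real system presumably also picks up memories implicitly from conversation, as The Verge notes, and stores them far more robustly; the point of the sketch is only that the control surface maps onto remember, recall, forget, and disable, scoped per user and per custom GPT.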
Lots of work to be done to make this easier and simpler. And competitors like Google with Gemini, Inflection with Pi, Anthropic with Claude 2.1, and others are hot on the same memory trail. It’s early days for all, as The Verge goes on to explain in terms of OpenAI’s ChatGPT memory enhancement plans:
“By default, memory will be turned on, and OpenAI says memories will be used to train its models going forward. (Companies using ChatGPT Enterprise and Teams won’t have their data sent back to the models.)”
“For now, memory is just a test, open to a “small portion” of users, the company said in its blog post announcing the feature. But it’s easy to imagine how quickly this might become a core part of the way we interact with ChatGPT, for better or for worse. The bots are getting smarter, and they’re getting to know us really fast.”
But as The Verge points out, there is a Privacy issue that all the Foundation LLM AI companies, with their centralized, hyper-scaled Frontier models, also have to address, starting with OpenAI’s current efforts to add personalized memory:
“In many ways, memory is a feature ChatGPT desperately needs. It’s also a total minefield. OpenAI’s strategy here sounds a lot like the way other internet services learn about you — they watch you operate their services, learn about what you search for or click on or like or whatever else, and develop a profile of you over time.”
Sounds like a déjà vu issue with how the web so far has tried to personalize things for user convenience, and of course for exponential monetization by advertisers and third parties at scale. These new, ‘memory-enhanced’ AI systems will have to deal with those issues on steroids, as The Verge goes on to discuss:
“But that approach, of course, makes a lot of people feel uncomfortable! Many users are already wary of having their questions and missives hoovered up by OpenAI and fed back into the system as training data to help personalize the bot even further; the idea of ChatGPT “knowing” users is both cool and creepy.”
Another major area for AI innovation, and very much another ‘Work in Progress’. Early days indeed, but it’ll definitely be useful for our AIs to have better memory of user queries, habits, intentions and preferences.
And they’ll need to balance that with Privacy issues as they feed the queries back into their critical “Reinforcement Learning Loops”, inference and all. Perhaps also an opportunity for Apple, with its core focus on Privacy playing an important role.
Especially if it’s able to provide ‘Small AI’ on local devices at scale, with privacy AND better memory. Presumably far better than Dory. Stay tuned.
(NOTE: The discussions here are for information purposes only, and not meant as investment advice at any time. Thanks for joining us here)