Over the coming months, we’re all going to be bombarded with an ever-growing, self-reinforcing flow of AI generated results, fed by data pools real, synthetic, and/or scraped from the far edges of the internet. And these results will be delivered increasingly by LLM AI models larger than ever, running on GPU ‘Compute’ hardware many times faster than what already has us ‘gob-smacked’ and fearful of how much better AI is likely to get very soon.
Oh, and the scientists building this stuff are amazed it all works, and are still not sure exactly how.
Increasing money and effort are being expended around AI ‘generated people’. These will be a combination of ‘authorized’ AI generated ‘Digital Twins’, via voice and/or video, and unauthorized fakes: AI generated ‘counterfeit people’, or ‘deepfakes’ by another name.
All of the above, from AI generated synthetic data to digital twins and ‘counterfeit people’, is going to get a lot more abundant in a hurry, and is likely to overwhelm our ability to absorb, digest, and adjust our emotions and habits. Especially since we’re all programmed to run on emotion over rational thought in the best of times. Let’s dig in.
Of note is a recent essay, “The Problem with Counterfeit People”, by Daniel C. Dennett, professor emeritus of philosophy at Tufts University. In the piece he likens the coming flood of AI generated ‘counterfeit people’ to counterfeit money: a potentially society-breaking risk for our world.
He highlights how counterfeit money, if not regulated and controlled, is a mortal threat to individual and institutional trust in the global system of trade and commerce, and argues that counterfeit people pose an analogous threat to the trust underlying our communications:
“It will be difficult—maybe impossible—to clean up the pollution of our media of communication that has already occurred, thanks to the arms race of algorithms that is spreading infection at an alarming rate. Another pandemic is coming, this time attacking the fragile control systems in our brains—namely, our capacity to reason with one another—that we have used so effectively to keep ourselves relatively safe in recent centuries.”
As these LLM AI models also go multi-modal to generate code, audio, voice, video and beyond, they’re increasingly outputting results that will be consumed by billions of people, processed with their emotional rather than their rational brains.
This emerging area of ‘Smart Agents’ and ‘Companions’ is one of intense industry investment and building, and one in which we’re likely to see a tsunami of offerings from companies large and small. Meta products featuring ‘Smart Agents’ are coming soon, led by founder/CEO Mark Zuckerberg. And we thought Facebook and Instagram were addictive today.
In the meantime, we are already seeing these technologies deployed in their prosaic, commercial form. As The Information outlines in a timely piece by Stephanie Palazzolo, there’s a flood of funding for AI video startups around ‘Digital Twins’, or ‘Synthetic People’:
“Tavus, a two-year-old startup whose software helps companies customize videos they can then send to prospective customers, job candidates and others.”
The goal of the company’s product is to let people market themselves in an AI generated, personalized way with ‘Digital Personas’ based on themselves: your video automatically addresses each recipient with their name and specific details, making a more emotional connection in a simulated one-on-one manner, but in the context of a mass mailing marketing program (a minimal sketch of this mail-merge-style mechanic appears after the funding details below).
“Tavus has raised about $18 million in Series A funding at an $80 million post-investment valuation, five months after closing its seed round, according to a person with direct knowledge of the fundraising.”
“The startup was part of Y Combinator’s summer 2021 cohort and previously raised a $6.1 million seed round from investors including Sequoia, Lightspeed and Index.”
“Despite the big name backers, it faces stiff competition. Synthesia, a London-based startup that recently raised a $90 million Series C round from Accel, also helps companies create AI-generated avatars and videos and says it serves more than 50,000 customers. And a number of smaller startups that also use AI to automatically create videos for businesses, including Maverick, BHuman and Reachout.AI, have popped up in recent months.”
Tavus’ product is in the early stages of its eventual capabilities, as reviewed here by user/realtor Travis Wood. It will presumably get better with the new funding, but the 20-minute video review is a good illustration of what the ‘product’ does now, and what it potentially needs to do to justify what it costs (hundreds to thousands of dollars).
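To make the mechanic concrete, here is a minimal, purely illustrative sketch of the mail-merge-style personalization these products perform. The names, fields, and the idea of rendering one script per recipient are assumptions for illustration only, not Tavus’ actual API; the hard part these startups actually sell, synthesizing the sender’s voice and video for each rendered script, is only noted in a comment.

```python
# Hypothetical sketch (not any vendor's real API): personalize one base
# "Digital Persona" script for a list of recipients, mail-merge style.
from dataclasses import dataclass


@dataclass
class Recipient:
    name: str
    company: str
    detail: str  # e.g., a listing they viewed or a role they applied for


# One base script, written (or generated) once by the sender.
BASE_SCRIPT = (
    "Hi {name}, I saw you were looking at {detail} over at {company}. "
    "I'd love to walk you through it personally."
)


def personalize(recipients: list[Recipient]) -> list[str]:
    """Render one personalized script per recipient; a real product would
    then synthesize matching audio/video of the sender for each script."""
    return [
        BASE_SCRIPT.format(name=r.name, company=r.company, detail=r.detail)
        for r in recipients
    ]


if __name__ == "__main__":
    batch = [
        Recipient("Alex", "Acme Realty", "the 3-bedroom listing on Oak St"),
        Recipient("Priya", "Northwind", "the senior analyst role"),
    ]
    for script in personalize(batch):
        print(script)
```

The point of the sketch is simply that the personalization itself is ordinary templating; the emotional punch, and the cost, comes from wrapping it in a convincing video of a real person.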
As we’ve seen, prices for early AI products and services are already evoking sticker shock in both business and consumer applications.
As we gird and brace ourselves for what’s ahead, it’s important to note that our emotional brains are about to be engaged even more frenetically by AI driven remakes of the traditional software that drives the algorithms serving us daily.
Again, here is Professor, Author, Philosopher Daniel Dennett, from a recent NYTimes interview with David Marchese:
“For more than 50 years, Daniel C. Dennett has been right in the thick of some of humankind’s most meaningful arguments: the nature and function of consciousness and religion, the development and dangers of artificial intelligence and the relationship between science and philosophy, to name a few. For Dennett, an éminence grise of American philosophy…”
As Professor Dennett frames it:
“Society depends on trust. Trust is now seriously endangered by the replicative power of AI and phony interactions. This is a grave danger. There’s a natural human tendency to think, If I can do it, I will do it, and not worry about whether I ought to.”
“The AI community has altogether too many people who just see the potential and aren’t willing to think about risks and responsibility. I would like to throw a pail of cold water on their heads and say, ‘Wait a minute, it’s not cool to make easily copied devices that will manipulate people in ways that will destroy their trust.’”
“It’s all — to use an old-fashioned term — driven by passion. The emotions rule. All control in human minds is via emotion. This is an important idea. Your laptop has an operating system. It’s dictatorial in how it runs things.”
“It’s the traffic cop. In your brain, there’s no operating system in that sense — it’s all the turmoil of emotions. Happily, we have learned how to harness those emotions. That is to say, the emotions have learned how to harness one another. But that “self” is at every level and all times driven by what we might call emotions and micro emotions.”
This framework is critical to keep in mind as we see an accelerated flow of AI driven data and input fed into ever larger LLM AI models and GPUs, much of it in the form of emotionally potent video. Be it real or fake, it’ll be hyper-personalized to influence us on everything, both at work and in our personal lives.
Perhaps we can eventually employ AI ‘Smart Agents’ of our own to help us negotiate these incoming sales pitches in a more rational than emotional way.
Now that’d be something worth paying for in a world flooded with AI data and advice. Real and synthetic. Stay tuned.
(NOTE: The discussions here are for information purposes only, and not meant as investment advice at any time. Thanks for joining us here.)