Yesterday we discussed the latest polls showing some wariness toward AI and the AI industry among the public and the AI academic community. While we wait for the AI Tech Wave and industry to build, deploy, and scale more capable, reliable, and safer apps and services, there are still headwinds ahead in the form of potential bad actions by bad actors. And of course, regulators are bracing for them and trying to build safeguards.
In an Axios piece titled “Behind the Curtain: AI architects’ greatest fear”, they note:
“OpenAI and other creators of artificial intelligence technologies are close to releasing tools that make easy — almost magical — creation of fake videos ubiquitous.
“One leading AI architect told us that in private tests, they can no longer distinguish fake from real — something they didn't expect would be possible so soon.”
“This technology will be available to everyone — including bad actors internationally — as soon as early 2024.”
“Making matters worse, this will hit when the biggest social platforms have cut the number of staff policing fake content. Most have weakened their policies to curb misinformation.”
I’ve already discussed AI tools getting good enough to make industry watchers ask questions like “What is a Photo?” and “What is a Video?”. And as LLM AI technologies go more multimodal with Voice and Universal Translation, among other capabilities, there will be opportunities for bad actors, especially those backed by adversarial state resources, to use them to the max — particularly going into a big election year in 2024. Again, as Axios highlights:
“The big picture: Just as the 2024 presidential race hits high gear, more people will have more tools to create more misinformation or fake content on more platforms — with less policing.”
“A former top national security official told us that Russia's Vladimir Putin sees these tools as an easy, low-cost, scalable way to help tear apart Americans.”
“U.S. intelligence shows Russia actively tried in 2020 to help re-elect former President Trump. Top U.S. and European officials fear Putin will push for a 2024 win by Trump, who wants to curtail U.S. aid to Ukraine.”
Lots of questions on how to get ready for AI.
As Axios continues:
“Yes, the White House and some congressional leaders want regulations to call out real versus fake videos. The top idea: mandating watermarking so it'll be clear what videos are AI-generated.”
“But researchers have tried that. The tech doesn't work yet.”
“In any case, deciding which content is "AI-generated" is rapidly becoming impossible, as the tech industry rolls AI into every product used to create and edit media.”
“Reality check: The best self-policing in the world won't stop the faucet of fake. The sludge will flow. Fast. Furiously.”
As a technology optimist, I think today’s generations, young and old, have been rapidly coming to terms with how technology can be used for bad and good over the last few tech waves, with the internet, social media, and now AI. As I noted in a recent piece, young people are already proactively taking action, slowly but surely, to protect themselves from the downsides of technology. Even as our best technologists figure out how to make it all work for good.
I think we’ll come through these potential AI storms in the near term, and the AI Tech Wave will do a lot of net societal good over time. But nevertheless, some bracing and anticipation of potential bad actors and their bad acts can also go a long way. Like boarding up windows ahead of a storm. Stay tuned.
(NOTE: The discussions here are for information purposes only, and not meant as investment advice at any time. Thanks for joining us here.)