In hindsight, it’s amazing they didn’t see this coming. The company that became a verb for online meetings didn’t anticipate that people would mind if Zoom trained its AI on customers’ Zoom call data. As Axios illustrates with its graphic above and reports:
“A change to Zoom's terms of service left customers confused and worried that the video conferencing company was seeking broad rights to use images, sound and other content from meetings to train its AI algorithms.”
“Why it matters: The pandemic made Zoom synonymous with online meetings. Now users and companies don't want to see their conversations and deliberations shared with the world.”
“Details: Zoom made changes to its terms of service back in March, but concern only spiked this past weekend after a Hacker News post highlighted that the changes appeared to give the company unbounded rights to use content to train its AI systems.”
NBC News explains further:
“The videoconferencing app Zoom said Monday it won’t use customers’ data without their consent to train artificial intelligence, addressing privacy concerns of a growing number of customers over new language in the app’s terms of service.”
The seeds of the kerfuffle were sown a few months earlier:
“In Section 10.4 of Zoom’s terms of service, updated in March, users agree to “grant Zoom a perpetual, worldwide, non-exclusive, royalty-free, sublicensable, and transferable license” for various purposes, including “machine learning, artificial intelligence, training, testing, improvement of the Services, Software, or Zoom’s other products, services, and software, or any combination thereof.”
“Among the ways Zoom now uses AI are the Zoom IQ Meeting Summary, which provides automated meeting summaries, and services like automated scanning of webinar invitations to detect spam activity, Chief Product Officer Smita Hashim said in a blog post Monday.”
“The blog post emphasized that meeting administrators can opt out of sharing meeting summaries data with Zoom. Non-administrator meeting members are notified about Zoom’s new data-sharing policies and are given the option to accept or leave meetings.”
Zoom’s management did correct course with alacrity once it became clear how bad it looked.
But the episode highlights the mistrust and sensitivities people have about AI well before they see any benefits from the technology, regardless of the intentions of the purveyor. As NBC News went on to highlight:
“The criticism underscores the growing public scrutiny of AI, specifically concerns over how people’s data and content could be used to train AI large language models without their consent or without their receiving compensation.”
“Janet Haven, the executive director of Data & Society, a nonprofit research institute, and a member of the National AI Initiative advisory committee, said concerns over the emerging tech go beyond Zoom’s terms of service and represent long-standing concerns over data privacy.”
“I think that the fundamental issue is that we don’t have those protections in law as a society in place and in a kind of robust way, which means that people are being asked to react at the individual level. And so that is the real problem with terms of service,” Haven said.”
I’ve already discussed the protracted threats of litigation and demands for compensation building over how Foundation LLM AI companies are constructing their services on ‘Extractive Data’ scraped from every corner of the Internet.
Every LLM AI company is scrambling to put itself in the best position to negotiate and/or litigate as needed. Google, for example, recently updated its Privacy Policy to allow collecting public data for AI training:
“A Shift from “Language” Models To “AI” Models
The updated policy marks a clear shift from Google’s previous terms of service. Before this weekend’s update, Google’s policy said it used people’s data to improve “language” models. Now, Google reserves the right to use people’s data to improve all its “AI” models and products, including translation systems, systems that generate text, and cloud AI services.”
OpenAI took similar measures in its terms of service. As TechCrunch recently summarized it:
“Microsoft-backed OpenAI, Google and Anthropic ban the use of their content to train other AI models.
However, these companies have been using other online content for their own model training. Can Big Tech have it both ways? Reddit and others are trying to stop this.”
Lots to sort out here for all parties. And the issues are going to prove gnarly for companies large and small. And lots of fine print will have to be read and signed.
As we’ve covered in detail, the accelerating investments in Foundation LLM AI models REQUIRE vast amounts of new Data, AND ‘reinforcement learning’ feedback loops drawn from users’ continuous interactions with every imaginable software product and service that could be augmented with AI. The rights, boundaries, permissions and any eventual compensation are going to be hard fought and negotiated every step of the way.
So it’s going to be important for all companies with any AI aspirations to watch their steps carefully, from beginning to end. And not try to Zoom through the process. Stay tuned.
(NOTE: The discussions here are for information purposes only, and not meant as investment advice at any time. Thanks for joining us here).