AI: Meta's Purple Llama to ease Security concerns
...more secure open source LLM AI complies with DC directives
As discussed here before, Meta has taken the lead amongst the big tech ‘Magnificent 7’ in open source Foundation LLM AI models. Llama 2 is their flagship there, and is currently one of the more widely used open source models by developers and businesses worldwide. Now this model has enhanced cybersecurity and safety features under the name Purple Llama. Meta is one of the largest LLM AI companies, and is currently one of Nvidia’s largest customers of its AI GPU infrastructure alongside Microsoft/OpenAI, Google, Amazon, Elon Musk’s xAI/Grok/X and Tesla, amongst others,
It’s designed to address some of the recent regulatory initiatives by the White House under its AI Executive order addressing AI safety. As Axios lays it out:
“Today Meta released benchmark cybersecurity practices for large language models, which it says is an effort to "level the playing field for developers to responsibly deploy generative AI models."
“Why it matters: The White House has urged AI companies to ramp up their safety efforts, and codify some safety requirements in its AI Executive Order, worried that AI chatbots and open source LLMs like Meta's Llama 2 will lead to dangerous misuse.”
“LLMs can serve as attack vectors, hacked to access proprietary information, or manipulated to produce harmful content, even when they've been designed not to.”
“The big picture: Cybersecurity risks around LLMs are "a pervasive problem that we need to mitigate," Joseph Spisak, Meta's director of product management for generative AI, tells Axios.”
"There's no real ground truth: We're still trying to find our way into how to evaluate these models" and need to "build a community to help standardize these things," he said.”
“What's happening: Meta's two key releases in its "Purple Llama" initiative are CyberSec Eval, a set of cybersecurity safety evaluation benchmarks for LLMs; and Llama Guard, which "provides developers with a pre-trained model to help defend against generating potentially risky outputs."
“Spisak tells Axios that Purple Llama will partner with members of a newly-formed AI Alliance that Meta is helping lead and others such as Microsoft, AWS, Nvidia and Google Cloud.”
“Flashback: Meta previously released a Llama 2 Responsible Use Guide, an approach that critics say is insufficient for managing how an open source model can be misused in the wild.”
As SiliconAngle explains the ‘purple’ in Purple Llama:
“Meta said Purple Llama is an appropriate name for its new AI safety initiative because mitigating the risks of generative AI requires developers to combine attack, in the shape of “red teaming,” and defense, through so-called “blue teaming.” In traditional cybersecurity, red teams consist of experts who carry out various attacks to try and overcome a company’s cybersecurity defenses, while the blue teams are focused on protecting and responding to those attacks.”
“Therefore, Meta is labeling its approach to generative AI security as “purple teaming,” with an aim to foster a collaborative approach to evaluating and mitigating the potential risks of the technology.”
The ongoing debate in the early stages of the AI Tech Wave between open and closed Foundation LLM AI models continues amongst developers, businesses and regulators. Meta as the current champion for open source AI software is poised to show it’s AI Safety chops.
The issue of AI safety and risks of course is a pivotal issue with a lot of recent drama, in the AI tech community. The tussle, some would say a ‘civil war’, is between the two camps battling over the right speed between AI Speed vs Safety, with the recent board battle at OpenAI being the latest example..
Hundreds of billions are at stake in this long AI race ahead. Now we have a Purple Llama in the mix. Stay tuned.
(NOTE: The discussions here are for information purposes only, and not meant as investment advice at any time. Thanks for joining us here)