Facebook continues to be the other major AI tech player to watch, alongside OpenAI/Microsoft, Google, and Amazon AWS.
WHAT'S OF NOTE:
Facebook/Meta ($META) has continued to play a different but important role in Large Language Model (LLM) AI technologies. The rollout of its quasi-open-source LLaMA models in March was a key catalyst for open source initiatives over the last few weeks, as was, of course, the now infamous "Google (and OpenAI) have no Moat" leaked memo of a few weeks ago (a must-read for $GOOG, $MSFT, and $AMZN followers).
Facebook/Meta Founder/CEO Mark Zuckerberg continues to use open source LLM AI initiatives to throw sand in the business-model gears of prime competitors OpenAI/Microsoft and Google.
Meta's business models aren't geared around Search, prompt queries, or API cloud usage fees. The company benefits from the content and services (and the ads that run against them) built on top of the AI tech stack (more on this in later posts).
Each of the large foundation LLM companies (Microsoft, Google, Facebook/Meta, Amazon, and others) runs its models in clouds built on hardware with GPUs from a variety of designers and vendors. Graphics Processing Units (GPUs) do the lion's share of the work, performing billions of operations in tiny slices of time to carry out the statistical computation behind LLM AI's "magic".
NVIDIA ($NVDA) is the leader in this category, with over half the global market. But for the largest AI companies, it's increasingly more efficient in their cloud "compute" cycles, in both cost and processing time, to design their own chips rather than use the A100s, H100s, or other variants from $NVDA. (The chip layer in the AI tech stack is a rich vein of discussion around the future of AI that we'll tackle in future posts.)
This week Facebook continued to show its LLM AI chip chops on the hardware and software infrastructure front:
“Meta announced the computer chips amid an AI infrastructure event pitched as the first time it has publicly detailed its internal silicon chip project.
“Investors have been closely watching Meta’s investments into AI and related data center hardware as the company embarks on a “year of efficiency” that will result in the firing of 21,000 employees and major cost cutting.
The chips will eventually power more advanced metaverse-related tasks, such as virtual reality and augmented reality, as well as the burgeoning field of generative AI, which generally refers to AI software that can create compelling text, images and videos.”
This announcement underscores the company's continued work on open source AI models, the latest being ImageBind. It's notable for adding entirely new modalities beyond text, images, video, and code, including audio and motion-sensor data, potentially making possible entirely new experiences with LLM AIs.
“Meta has announced a new open-source generative AI model called “ImageBind” that links together six different modalities; images, text, audio, depth, thermal, and IMU (sensor that measures angular rate, force, and magnetic field).
Facebook owner Meta says that ImageBind is the first AI model to combine six types of data into a single “embedding space.” For example, it can create an image from an audio clip — such as creating an image based on the sounds of a rainforest or a bustling market.”
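The "embedding space" idea behind that rainforest example can be sketched in a few lines. This is an illustrative toy only: ImageBind's real encoders are large neural networks, and the embeddings below are made-up vectors. The mechanism it shows is the real one, though: once every modality is mapped into the same vector space, any pair of modalities can be compared directly, for example retrieving the image whose embedding is closest to an audio clip's embedding.

```python
import numpy as np

# Toy sketch of a shared embedding space. In the real ImageBind model,
# per-modality encoders (audio, image, text, etc.) produce these vectors;
# here we use small hand-picked vectors as hypothetical encoder outputs.

# Hypothetical audio embeddings (what an audio encoder might output).
audio_embeddings = {
    "rainforest_sounds": np.array([0.9, 0.1, 0.0, 0.1]),
    "market_chatter":    np.array([0.0, 0.8, 0.5, 0.1]),
}

# Hypothetical image embeddings, living in the SAME 4-dim space.
image_embeddings = {
    "rainforest_photo": np.array([0.85, 0.15, 0.05, 0.1]),
    "market_photo":     np.array([0.05, 0.75, 0.55, 0.1]),
}

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def nearest_image(audio_key):
    """Cross-modal retrieval: the image embedded closest to an audio clip."""
    query = audio_embeddings[audio_key]
    return max(image_embeddings, key=lambda k: cosine(query, image_embeddings[k]))

print(nearest_image("rainforest_sounds"))  # → rainforest_photo
print(nearest_image("market_chatter"))     # → market_photo
```

The same nearest-neighbor lookup works in any direction between any two modalities in the space, which is what makes a single joint embedding space more powerful than six separate pairwise models.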
It will be interesting to see if this model has the same impact on the open source community as the LLaMA models did. Stay tuned.