I enjoyed writing my last article about Google (GOOG)(GOOGL) and its supposed “cheap” valuation – and the reasons why the market is keeping it cheap. But what I enjoyed more than the topic and writing process was the comments and discussion I had with several readers asking basic yet thought-provoking questions about AI – from both sides. Several onlookers commented on how much they appreciated the discourse and how it’s helped bring new data into their investing research. Seeking Alpha, you’ve done your part.
But it led to a discussion of who is getting AI “right” and the best way to invest in AI. Was it the companies utilizing AI to create a more efficient business? Was it the big guns like Microsoft (MSFT), Amazon (AMZN), and Google, hosting the AI resources after spending billions of dollars on hardware and data center expansions through years of capital expenditures? Or was it the ones designing and selling the hardware and the AI operating systems of sorts to the aforementioned AI cloud providers?
My answer up to this point has been the ones selling the hardware and processors like Nvidia (NVDA), the networking like Arista Networks (ANET), and the specialized memory like Micron (MU).
But I’m also seeing the possibility of investing in companies utilizing AI to make their businesses more efficient and their products better, with straightforward returns from those AI investments. And because it has become clear the major cloud providers are playing out vastly different strategies, a clear divide has formed as to which companies those are.
And the strategic difference comes down to the posture of the AI system code: closed source or open source. And, so far, it appears the open source model is the winning strategy.
Seeing Through The Strategies
About two weeks ago, I posted my earnings analysis of Google and Meta Platforms (NASDAQ:META) to Tech Cache members, discussing how companies are finding ways to get ahead in the AI race and monetize the use cases or concepts. After all, investing is about companies making money, so this AI investment needs to produce a return – an obvious point, but the point. In it, I told subscribers, “…in the tech industry, catching up means you’re very far behind and likely not to close the gap unless the company stumbles upon a better solution or finds a differentiated use case.”
Basically, once behind, a tech company has to find a “shortcut” – not a cheat, but a new strategy, to vie for leadership.
I found Google was lagging as it showcased an underwhelming demo of Bard months after OpenAI released the first public version of ChatGPT. I said, surely, Google has been investing in AI, but it hasn’t gotten itself ahead of the competition in this vein. Moreover, its ad revenue in the quarter (its primary revenue source) didn’t have any obvious AI enhancements to show for it, unlike Meta’s performance, where it clearly turned the corner to positive growth.
This was my first clue, leading to digging into the different AI strategies.
Google’s work on AI has been very guarded, including how it’s being optimized, but its search competitor Microsoft forced Google to show what it had once Microsoft incorporated ChatGPT into Bing. Sure, Google might have put together some original concepts still used in AI today, but it hasn’t done anything to monetize them, and if it has, management has communicated it poorly, if at all.
But that’s not to say Microsoft’s AI strategy isn’t guarded, either. Its investment in AI has come primarily through OpenAI, the owner of ChatGPT, which utilizes a proprietary release model. Nothing different here in terms of approach. However, Microsoft’s monetization strategy of investing in OpenAI, which charges for the latest version through metered usage, while hosting and providing LLMs (large language models) through Azure, is a few steps ahead of Google’s vision for ROI. Still, neither company has outlined a clear financial case, though Microsoft is showing it has a path.
But then there was Meta.
Meta said it wasn’t going the proprietary route. Instead, it went the open source route. And in my earnings analysis two weeks ago, I said it was letting the masses make the improvements, driving AI to the next level much quicker than a group of engineers and researchers at any one company.
Meta isn’t behind here (at least not by as much as many thought), but it thinks it has a differentiated use case with open-sourcing its AI so others can build on it and improve it.
Clue number two now clear.
Then last Thursday, I came across a fairly lengthy document from a Google researcher admitting what I had already known: Google is on the wrong path in terms of advancing its AI, and it needs to pivot. And according to the Google researcher, so does OpenAI.
Did Meta choose right?
This added insider information was crucial in solidifying the advantages of the open source approach.
The key point is what the researcher pointed out about open source versus ChatGPT after one month of the public tinkering with Meta’s LLaMA open source system:
Berkeley launches Koala, a dialogue model trained entirely using freely available data.
They take the crucial step of measuring real human preferences between their model and ChatGPT. While ChatGPT still holds a slight edge, more than 50% of the time users either prefer Koala or have no preference. Training Cost: $100.
Meta’s open source strategy has led to the flattening of the AI playing field.
All those closed models are now being run down by individuals and universities producing new models roughly as good as ChatGPT within 30 days of toying with it. And it has led to individuals training and running models on devices costing only hundreds of dollars, with a total training bill, in the Berkeley case, of just a hundred dollars.
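To put those economics in perspective, here’s a rough sketch of my own (not the Berkeley team’s actual code) of the kind of parameter-efficient fine-tune that keeps the bill in the hundreds of dollars: a small LoRA adapter trained over open LLaMA-style weights on a handful of curated dialogue examples. The model path and the example data below are placeholders.

```python
# Illustrative sketch only: LoRA fine-tuning an open LLaMA-style checkpoint on a
# tiny curated dataset. The model path and the data are placeholders, not Meta's
# or Berkeley's actual assets.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

BASE_MODEL = "path/to/open-llama-7b"  # placeholder for openly licensed weights on disk

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# LoRA trains only small low-rank adapter matrices (well under 1% of the weights),
# which is why a single consumer GPU and a ~$100 compute bill can be enough.
lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # prints trainable vs. total parameter counts

# One curated dialogue example stands in for the "freely available data" idea;
# in practice, the quality of a few thousand such examples matters more than scale.
text = "User: Why do curated datasets help?\nAssistant: They teach the desired behavior directly."
batch = tokenizer(text, return_tensors="pt")
batch["labels"] = batch["input_ids"].clone()

# Single training step: standard causal-LM loss, optimizing only the adapter weights.
optimizer = torch.optim.AdamW((p for p in model.parameters() if p.requires_grad), lr=2e-4)
loss = model(**batch).loss
loss.backward()
optimizer.step()
```

The scale is the whole point of the sketch: only the tiny adapter gets trained, so the compute bill looks more like a weekend project than a data center buildout.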
What’s even more interesting is Berkeley’s study sees the same results as the Google researcher: open source plus high-quality data sets, not ever-larger models, is the path forward.
We hope that these results contribute further to the discourse around the relative performance of large closed-source models to smaller public models. In particular, it suggests that models that are small enough to be run locally can capture much of the performance of their larger cousins if trained on carefully sourced data. This might imply, for example, that the community should put more effort into curating high-quality datasets, as this might do more to enable safer, more factual, and more capable models than simply increasing the size of existing systems.
OK, wait, slow down.
Did Meta negate the need for cloud-based AI infrastructure?
No, there are plenty of use cases where training large models is exponentially more complicated or requires exponentially more data than creating realistic renders and Darth Vader teapots. Just like the CPUs in our phones can now process things that used to take up five floors of a city building, AI models in the proper context can be run on inexpensive devices. Yet we still need more powerful resources for the things our smartphones can’t do. The concept simply shifts what it is you’re processing. There’s nothing new under the sun.
The point is the open source concept has allowed people to determine the best ways to get accurate and intelligent output from these LLMs. Many have found that high-quality data over high-quantity data leads to better results, as the Berkeley study indicates.
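To illustrate the “inexpensive devices” point, here’s a minimal sketch, assuming the llama-cpp-python bindings and a quantized open model file already downloaded to disk (the path is a placeholder):

```python
# Illustrative sketch: running a 4-bit quantized open LLaMA-family model locally on
# a CPU via the llama-cpp-python bindings. The file path is a placeholder for
# whichever openly licensed, quantized weights you have on disk.
from llama_cpp import Llama

llm = Llama(model_path="path/to/open-llama-7b-q4.gguf", n_ctx=2048)

prompt = "Q: In one sentence, why can small curated datasets beat bigger models?\nA:"
result = llm(prompt, max_tokens=64, stop=["Q:"])

# A 7B-parameter model quantized to 4 bits fits in roughly 4 GB of memory,
# which is why this class of model runs on a commodity laptop, not a data center.
print(result["choices"][0]["text"].strip())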
Meta Has A Business And Shareholder Driven Return Plan For AI
Google, for its part, is relying on proprietary improvements to share models through Google Cloud and, I assume, for products like ads and content algorithms (if not, I got nothing for Google). Microsoft is investing directly in a proprietary AI company (OpenAI) able to charge for its latest and greatest full-feature product and models. At the same time, it hosts the same capabilities on Azure – AI as a service – and its search product.
I can tangibly see Microsoft’s return; I haven’t seen Google’s yet.
But Meta said, why waste the cycles of doing things internally when the world could optimize it in 30 days? Of course, plenty more experiments and optimization will come, but LLaMA went from being released untuned to nearly matching ChatGPT’s output in a month.
Alright, who cares, though?
Meta does.
Meta can use those optimizations internally to enhance its advertising AI by improving all aspects, from targeting ads and ROAS (return on ad spend) results to creating the creative piece of the ad for marketers.
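For anyone unfamiliar with the metric, ROAS is simple arithmetic: the revenue the ads drove divided by what was spent on them. A quick sketch with made-up numbers:

```python
def roas(attributed_revenue: float, ad_spend: float) -> float:
    """Return on ad spend: revenue the ads drove per dollar spent on them."""
    return attributed_revenue / ad_spend

# Hypothetical campaign: $50,000 of attributed sales on $10,000 of spend -> ROAS of 5.0.
print(roas(50_000, 10_000))  # 5.0
```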
Let me spell it out in case you’re missing the concept.
Meta’s AI system, used internally, is also available for the public to download, improve, and tune. Meta then takes those improvements and feeds them back into its system, producing a better system and better business results.
It’s letting the world research, experiment, and optimize, while taking but a month to have a working model with competitive features. Imagine what the open source architecture can do in three months, six months, a year, or more. Meta’s capabilities in AI will likely jump exponentially as it incorporates the best methods to train and optimize.
Anything to create a more accurate, more intelligent AI system helps the company’s mission to create better ads, better ad placement, and better results for its advertisers, leading to better revenue growth for its shareholders.
We already know Meta is extensively utilizing its early AI efforts to generate better ad targeting and more accurate ROAS results while planning to do the creative piece.
We remain focused on continuing to improve ads ranking and measurement with our ongoing AI investments while also leveraging AI to power increased automation for advertisers through products like Advantage+ shopping, which continues to gain adoption and receive positive feedback from advertisers. These investments will help us develop and deploy privacy-enhancing technologies and build new innovative tools that make it easier for businesses to not only find the right audience for their ad, but also optimize and eventually develop their ad creative.
– Susan Li, CFO, Q1 ’23 Earnings Call
This will turn into a direct return on investment for Meta as it regains its competitive edge in advertising. Since open source makes a product better for everyone, and since everyone includes Meta, Meta gains from it both in terms of collaboration efforts – including public interest and donated learnings – and advancements to its main product: advertising.
Meta Is An Investment In AI – For Shareholders
AI is the reason I saw a clear difference in Meta’s ad revenue growth from Google’s in Q1. I’m very interested to see how this open source AI experiment adds to Meta’s future ad returns with its AI system out in the wild. With individuals, groups, and universities finding unique and ingenious ways of tuning AI for many purposes, and discovering what it takes from a hardware perspective, Meta is letting the world do a lot of the heavy lifting.
The advantages are clear for open source AI, and Meta is leveraging the free labor and time of others to enhance its core business. This leads to a direct return on AI investment as it improves its core business beyond what’s capable without AI. Between Google, Microsoft, and Meta, it’s almost surprising Meta has the best financial vision for utilizing AI. But it’s not when you realize it took the path less traveled – so far.