Call it lazy. Call it an obsession. Call it a lazy obsession. For the past couple of years, everyone with a keyboard, the X app and half an opinion insisted Apple was “behind” on artificial intelligence (AI). Apparently, the barometers of success include having a frontier model to chest-thump about, a founder or CEO doing the podcast circuit to explain why artificial general intelligence is just five minutes away, hyperscale capex flexing, and dramatic product demo videos. Apple, we were told, had missed the moment.
Microsoft pumped billions into OpenAI. Google scrambled to embed Gemini into every app it has. Nvidia’s market cap went vertical for a while as every Silicon Valley startup fought for H100s like they were the last bottles of water in a desert (it was, as the cool kids call it, FOMO, or the fear of missing out). Through it all, Apple simply sat. It talked about things that weren’t as cool, such as machine learning. The experts on X told the world Tim Cook had missed the greatest shift in computing history. But as we stand here in 2026, with the dust settling, the picture that is emerging looks a whole lot different.
It turns out that Apple didn’t “lose” the AI race. They played a patient waiting game (something most don’t really understand), letting AI companies clear the landmines, build the roads and discover the complexities of costs and regulation along the way. Now, Apple is free to drive in, and it is their choice whether they do that in a tank or a Ferrari Purosangue. That said, there is a need to speed up Apple Intelligence development, something they have been doing consistently; the expanded language support announced last year is one example.
But Apple must play the optics game too, particularly with Gemini having been so neatly integrated into Android for a while now. One could argue the patience stretched on a little longer than ideal, partly because of that Android context. Apple and Google will now build the next generation of Apple Foundation Models on Google’s Gemini models, and these arrive later in the year. This will be a pivotal moment.
The biggest misconception about the AI race was that you simply had to build the biggest model to win. For two years, Nvidia, Microsoft, OpenAI, Oracle, Alphabet, Meta, Amazon, Tesla and dozens of unicorns collectively burned trillions of dollars (and counting) on research and development, data centres, and the eye-watering energy bills required to train Large Language Models (LLMs). Chinese AI company DeepSeek did, however, provide timely context early last year, showing that capable models could be built for a fraction of those costs.
J.P. Morgan analysts’ 2026 Outlook specifically cited that the hyperscalers (Microsoft, Google, Meta, and Amazon) had crossed the $1.3–$1.4 trillion mark in combined R&D and AI infrastructure spending since the ChatGPT boom began. That’s just one element of the big, beautiful bubble.
Apple looked at that bill and politely declined. Instead of trying to out-compute Google or OpenAI, or following Microsoft, which depends entirely on other AI companies to layer intelligence into its Windows OS and services, Apple waited for models to become a commodity. They eventually inked a deal to use Google Gemini for Siri’s “heavy lifting”, for a reported $1 billion a year. Think about that math. While competitors are spending $50 billion a year on capex just to keep their models relevant, Apple is “leasing” the world’s best intelligence for what amounts to a rounding error on their balance sheet.
“After careful evaluation, Apple determined that Google’s AI technology provides the most capable foundation for Apple Foundation Models and is excited about the innovative new experiences it will unlock for Apple users. Apple Intelligence will continue to run on Apple devices and Private Cloud Compute, while maintaining Apple’s industry-leading privacy standards,” read the official statement from earlier this year.
It would seem Apple made a smart decision in not trying to build yet another power plant when so many already existed. Apple realised early that the model race was only one layer of the stack, and perhaps not even the most durable one given the costs and competition. So it simply chose the best model and plugged into it. By skipping the model wars (and I’m certain the urge to do the opposite would have been strong), Apple saved its resources for the one thing that actually matters: the user experience.
What does Apple do now? Apple’s biggest AI advantage is not one model. It is the active user base. It simply flips the switch with a subsequent update that enables Gemini on almost 2.5 billion (and counting) devices across the iPhone, iPad, Mac and other product lines. That will define the user experience. At that point, many will wonder if it makes sense to pay ₹1,900 or so a month to OpenAI or Anthropic for the privilege of an AI chatbot, when the iPhone, iPad or Mac has Gemini for free. This, while rivals fight to acquire users for their AI assistants one subscription, one app install and one enterprise deal at a time.
Think about it: even as OpenAI struggles to make something cogent of all the ‘AI device’ rumours that have been doing the rounds for 12 months now, Apple has shipped generation after generation of products powered by chips that are also among the best for on-device AI compute. Every iPhone 17 and M4/M5 Mac is effectively an AI computer (without the in-your-face, Microsoft-esque branding). Because Apple designs its own silicon, they have spent years baking Neural Engines (the part of a chip that accelerates AI and machine learning tasks) into those chips, since before LLMs were a household name.
Here’s something most casual experts don’t understand. An AI company can have an extraordinary model and still face the tough economics of customer acquisition, cloud bills and a product surface that remains, for many users, a destination they must choose every time. Apple doesn’t have to worry about making that destination a habit; its AI can instead be invisible yet always there, contextually useful and tightly woven into every app and piece of software people use.
And here’s the long-term cost advantage. Every time you ask ChatGPT, Copilot or Claude a question, it costs the company anywhere from a few cents to a dollar in compute on a server somewhere (either rented, or their own, built after spending billions of dollars). When your iPhone’s Apple Intelligence summarises an email thread or generates an image, it costs Apple $0. The compute is paid for by the user when they buy the phone. While everyone else is bleeding cash on server bills, Apple retains a workable margin on the very chips that do the work.
If you remember the Apple statement I referenced earlier, it talked about privacy. Most AI models require you to send your data to the cloud, where it is processed (and often stored). With Apple’s on-device-first approach, they can make as strong a case for personal AI as there has ever been. They even invented Private Cloud Compute, a system where, even if a request does have to go to a server, it is processed on Apple Silicon in a “black box” that even Apple cannot see into.
As we look back, Apple perhaps didn’t plan it all so perfectly when things started unfolding a couple of years ago. I am sure there was internal activity, and conversations around AI, when ChatGPT burst onto the scene in 2022. But that is where Apple’s patient approach again paid off. As it perhaps has with foldable phones.
(Vishal Mathur is the Technology Editor at HT. Tech Tonic is a weekly column that looks at the impact of personal technology on the way we live, and vice versa. The views expressed are personal.)