Apple isn’t being left behind in generative AI; it’s playing a different game. While every other tech company is spending billions on inference compute, Apple is being paid for it.
Since OpenAI released ChatGPT in November 2022, the rest of big tech has been scrambling to catch up. Most major players have released their own models and applications, requiring huge investments in the data centers needed to train and operate generative AI. Many have also invested heavily in private companies to strengthen their position with, or against, OpenAI.
Throughout the breathless coverage of moves by Amazon, Facebook, Google, and Microsoft, there has been an odd narrative developing: Apple is being left behind. The theory goes that since Apple hasn’t released a large language model or a text-to-image generator or aligned with a major AI startup through an investment, it will be left out of the AI revolution.
I think this theory misses the point. Apple isn’t being left behind; it’s playing a different game. While the rest of big tech fights to buy GPU chips and build data centers to support generative AI in the cloud, Apple is focused on edge AI: enabling AI on device. And that may give Apple a significant technical and financial advantage.
Let’s break down how this strategy plays into Apple’s unique strengths.
Since the 1997 acquisition of NeXT, Apple has built a strong track record of leveraging open source technology across its products and tools, including tools that help optimize machine learning models on Apple hardware (specifically coremltools). This history matters in the generative AI world because a vibrant open source community is creating foundation models for text, images, and more. Apple has already taken advantage of one of these projects, accelerating the popular open source image model Stable Diffusion on Apple devices. Apple doesn’t need to create its own large language or text-to-image models or invest billions of dollars in startups; it can join an open source project and invest in accelerating the models on device.
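To make that optimization path concrete, here is a minimal sketch of the coremltools workflow: converting a traced PyTorch model into a Core ML package that the runtime can schedule across the CPU, GPU, and Neural Engine. The `TinyClassifier` model is a hypothetical stand-in for illustration, not Apple’s code or an actual foundation model.

```python
# Minimal sketch: convert a (hypothetical) PyTorch model to Core ML
# so it can run on-device, including on the Neural Engine.
import torch
import coremltools as ct

class TinyClassifier(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(128, 64),
            torch.nn.ReLU(),
            torch.nn.Linear(64, 10),
        )

    def forward(self, x):
        return self.net(x)

# Trace the model so coremltools can capture the computation graph.
example_input = torch.rand(1, 128)
traced = torch.jit.trace(TinyClassifier().eval(), example_input)

# Convert to an ML Program, letting Core ML use all available compute
# units (CPU, GPU, and Neural Engine where the ops are supported).
mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(shape=example_input.shape)],
    compute_units=ct.ComputeUnit.ALL,
    convert_to="mlprogram",
)
mlmodel.save("TinyClassifier.mlpackage")
```

On a device with an A-series or M-series chip, Core ML loads the resulting package and decides at runtime which compute unit each layer runs on; the developer ships a model file rather than operating inference servers.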
Apple has been investing in AI on device since 2017, when it introduced both the Neural Engine in the A11 chip and the Core ML framework. This history is largely ignored, however, in the current discussion about AI leadership. Perhaps that’s because people perceive Siri to be less useful than Alexa or Google Assistant. Or perhaps it’s because Apple is relatively quiet about its AI work. Or perhaps it’s because people think of generative AI as a cloud-based service that needs to run on Nvidia GPUs.
But I think it’s a mistake to dismiss the advantage that Apple has built over the past 6 years. While other AI leaders are battling each other to buy GPU chips from Nvidia to build new data center capabilities, Apple controls its own on-device chip strategy—and has sold more than a billion devices with its AI-enabled chips.
The number of Neural Engine devices in pockets around the world gives Apple an important advantage: a large customer base whose devices can run ML models locally. This means that Apple and its developers do not need to build or operate the compute infrastructure to run models for iPhone users.
Much of the discussion about the cost of generative AI is focused on model training—mostly because there is limited data on the inference cost (aka the cost to respond to user prompts). The major generative AI companies don’t share their costs because they consider it competitive information. And we don’t understand yet how much people will use generative AI. Will you ask a few questions in a day or have multi-hour conversations? Since we haven’t seen what kind of application experiences will be developed, we simply don’t know how many inferences will be made per person and, therefore, how much it will cost to support generative AI applications.
That said, pretty much everyone agrees that running generative AI models is expensive. Some have estimated that OpenAI’s inference costs run $700,000 per day. Others have estimated that responding to a prompt costs 4x as much as serving a search result, or that inference compute is 1.5x the cost of training. OpenAI is predicted to spend $12 billion on Nvidia chips next year; even half of that (plus the power and infrastructure to run the chips) is a heck of a lot of money.
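These are third-party estimates rather than disclosed figures, but a quick back-of-envelope annualization, using only the numbers cited above, shows the scale of spend at stake:

```python
# Back-of-envelope annualization of the inference-cost estimates cited
# above. These are third-party estimates, not disclosed figures.
DAILY_INFERENCE_COST = 700_000           # estimated OpenAI inference cost, $/day
PREDICTED_CHIP_SPEND = 12_000_000_000    # predicted OpenAI Nvidia chip spend, $/year

annual_inference = DAILY_INFERENCE_COST * 365
print(f"Inference at $700k/day: ${annual_inference / 1e6:.0f}M per year")
print(f"Half the predicted chip spend: ${PREDICTED_CHIP_SPEND / 2 / 1e9:.0f}B per year")
```

At $700,000 per day, inference alone would run roughly a quarter of a billion dollars a year, before counting the chips, power, and infrastructure behind it.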
But Apple doesn’t have to spend this. Quite the opposite. Apple’s customers have already paid for the compute required to run generative AI models on their phones. While every other tech company is spending billions on inference compute, Apple is being paid for it.
Let that settle in. Everyone else in generative AI is investing billions in compute (GPU chips, data center infrastructure, operating costs), but Apple is being paid by its customers to ship iPhones and Macs that will run generative AI models on device.
The modern, lightning-speed rush to judgment holds that Apple has already lost because it doesn’t have a language or image application. I see two major issues with this theory.
First, current generative AI applications are far too early to declare anyone a winner. ChatGPT and Midjourney are analogous to the BBEdits and Mosaics of the internet era or to the bulletin boards of the social era. We haven’t gotten to the Internet Explorer or MySpace phase yet. We haven’t begun to see what generative AI has to offer. No one has won or lost yet.
And that plays to Apple’s strengths. Despite Apple’s unmatched history of innovation, it has rarely been first or even early. The iPod launched three years after the Diamond Multimedia Rio because it took Apple that long to be convinced that the technology could support what the user really wanted. While the company has changed in the 12 years since Steve’s death, the patient mindset remains: wait until the technology will really work.
Second, Apple is demonstrating an integrated approach to AI, using transformers for auto-correct, language translation, scene analysis in photos, image captioning, and more. While these implementations may not grab headlines like ChatGPT, they are core to Apple’s services. They also demonstrate that Apple hasn’t missed the generative AI wave; it is simply charting a different path.
Apple isn’t alone in pursuing edge AI; Google also has a neural processing unit, the Edge TPU. But Google’s strategy will always be split since its primary business is in the cloud. Google also lacks Apple’s full vertical integration, which gives Apple an advantage in designing software and hardware together.
Apple may have new edge AI competition nipping at its heels from the Apple diaspora. Jony Ive is reported to be in discussions with OpenAI to create a new AI device. And Humane, run by a group of ex-Apple employees, is launching a screenless “Ai Pin” designed “for the emerging convergence between humans and AI.” I don’t really get the Ai Pin yet, but I’m interested to try it. Who knows what Jony might design next, but I certainly wouldn’t bet against him.
Beyond the hardware designers and developers, however, the key question is who will make the first great generative AI applications. Yes, ChatGPT is wildly successful, but the application itself is quite basic and clunky (no offense meant; it was designed as an experiment). When will we see the first truly great apps? Who will create the generative AI versions of iTunes and iPhoto that opened up the digital media era and created new ways to link computers with external devices? Who will create the next generation of applications at the edge, on computers, phones, and vision products like Apple’s upcoming Vision Pro?