AI Agents, Mathematics, and Making Sense of Chaos
From Artificiality This Week. Our Gathering: Our Artificiality Summit 2025 will be held on October 23-25 in Bend, Oregon.
AIX: The Apple Intelligence Experience, The Future of (Generative AI) Search, The Synergy of Human Creativity and AI, The ARC Prize and What it Means for AGI, and more!
Starting in 2016, we have included a slide like this in our keynote presentations:
Our message has been to anticipate how Big Tech will deploy AI based on their core missions. Google’s mission is to organize the world’s information, resulting in using generative AI to summarize the web through AI Overviews. Microsoft’s mission is to increase user productivity, resulting in Copilot. Amazon’s mission is to be the largest retail company, resulting in AI-written review summaries. While these companies’ businesses are broader than just the examples I’ve shared, this framework has proven to be useful in predicting AI products and services.
This week, Apple announced Apple Intelligence, the brand name for a suite of AI capabilities throughout iOS, macOS, applications, and developer toolkits. Unlike Microsoft Copilot, which brings AI capabilities into each application to enhance productivity, or OpenAI, which requires users to switch to a separate application, ChatGPT, to access its capabilities, Apple is using AI to create a more integrated experience.
I’m calling this AIX as an acronym for the Apple Intelligence Experience (yes, you can read it like the French and just say “x”). To me, the most important value of AIX isn’t as an assistant that pops up in various applications—the real value is that, as Apple described, it moves “in concert with you.” Given its universal access to your life across its devices, Apple has a near-unique capability to understand the context of you and to reason from that context about what your intent might be. It can do that across applications—from Apple or from developers who embrace AIX—creating an integrated and intelligent interaction capability that hasn’t existed before.
In April 2023, we described foundation models (e.g., OpenAI’s GPT-4o, the model underneath ChatGPT) as a new aiOS. Our point was to anticipate that these models would provide the foundations for building new applications and experiences. That has, so far, proven true. Yet, while the world has been obsessed with large, cloud-based foundation models, it has missed what might become the broadest use of models under applications—models which Apple provides to its developers.
Part of the reason Apple’s announcement caught many off guard is because most people thought AI would only follow the path of the early movers. As I wrote in October 2023:
“Throughout the breathless coverage of moves by Amazon, Facebook, Google, and Microsoft, there has been an odd narrative developing: Apple is being left behind. The theory goes that since Apple hasn’t released a large language model or a text-to-image generator or aligned with a major AI startup through an investment, it will be left out of the AI revolution.
I think this theory is missing the plot. Apple isn’t being left behind—it’s playing a different game. While the rest of big tech is fighting to buy GPU chips and build data centers to support generative AI in the cloud, Apple is focused on edge AI—enabling AI on device. And that may give Apple a significant technical and financial advantage.
Apple has been investing in AI on device since the launch of the Neural Engine in 2017 and Core ML in 2018. This history is largely ignored, however, in the current discussion about AI leadership. Perhaps that’s because people perceive Siri to be less useful than Alexa or Google Assistant. Or perhaps that’s because Apple is relatively quiet about its AI work. Or perhaps it’s because people think of generative AI as a cloud-based service that needs to run on Nvidia GPUs.”
Apple’s announcement this week confirmed my theory that its AI strategy would be primarily on device. At the time, I had missed the idea that Apple might use its chips in its data centers (perhaps reflecting the memory of Apple abandoning the Xserve product line that I was involved with). While this is an important and smart strategy to maintain user privacy in the cloud, I think my prediction that Apple’s focus would be on device was correct. There simply isn’t a more secure way to run an AI model on user data than to run it on device.
Deploying AIX at the edge isn’t just about privacy, however. Again from October 2023:
“But I think it’s a mistake to dismiss the advantage that Apple has built over the past 6 years. While other AI leaders are battling each other to buy GPU chips from Nvidia to build new data center capabilities, Apple controls its own on-device chip strategy—and has sold more than a billion devices with its AI-enabled chips.
The number of neural engine devices in pockets around the world gives Apple an important advantage: a large customer base that can run ML models on their devices. This means that Apple and its developers do not need to build or operate the compute infrastructure to run models for iPhone users.
Apple’s customers have already paid for the compute required to run generative AI models on their phones. While every other tech company is spending billions on inference compute—Apple is being paid for it.”
I still think this is a profound and under-recognized advantage for Apple—and its developers. AI inference is expensive when operating in the cloud. But it is free on device. Today, our phones can’t run the most advanced models. But, history tells us that software will become more efficient and our phones will be able to handle more. Apple can take advantage of this free compute itself, and it can extend this advantage to its developers by integrating AIX into developer toolkits.
Overall, AIX makes me the most excited I’ve been about AI since the initial launch of large language models. The integrated and universal capabilities of AIX are a step towards our dream of a Mind for our Minds. And it is great to see that not all of Big Tech are lemmings following in OpenAI’s path.
Finally, from October 2023:
“Apple is demonstrating an integrated approach for AI, using transformers for auto-correct, language translation, scene analysis in photos, image captioning, and more. While these implementations may not grab the headlines like ChatGPT, they are core to Apple’s services. They also demonstrate that Apple hasn’t missed the generative AI wave—it is simply charting a different path.”
Then I Am Myself the World: What Consciousness Is and How to Expand It, by Christof Koch
Koch is a legend in neuroscience for his work on consciousness and its relationship to information processing, most notably his championing of Giulio Tononi's integrated information theory (IIT) of consciousness. Koch also famously lost a long-running bet with David Chalmers: he wagered Chalmers 25 years ago that researchers would learn how the brain achieves consciousness by now, but had to hand over some very nice wine because we still do not know.
Koch's writing is fun to read, personal and engaging. His chapter on the ins and outs of IIT is a good summary if you're unfamiliar with the ideas and don't want to tackle the math that underlies the idea.
But I don't think the ideas about IIT or panpsychism are the reason to read this book. The reason to read it is for its humanism—if you want to read about how a famed scientist of consciousness has experienced profound changes to his own mind. Psychedelics and a near-death experience both feature.
The other reason to read it is as an example of a recent shift in thinking around the role of consciousness and human experience. There is an emerging group of philosophers and scientists, including Adam Frank, Marcelo Gleiser, and Evan Thompson, who question the place of consciousness in science. In their book The Blind Spot (which I'll talk about in the coming weeks), they argue that human experience needs to be central to scientific inquiry. Koch's ideas run parallel: he sees consciousness as having causal power over itself—that is, consciousness is a change agent in its own right and so cannot be "separated" from the practice of studying it.
Nowadays, it seems that any talk of consciousness is incomplete without a discussion of consciousness in machines. Koch does a good job of explaining current ideas around broader instantiations of consciousness—separating function from structure. He debunks some of the weirder Silicon Valley ideas of whole-brain simulations with his IIT view that consciousness is not solely computation but is far more causally complex, and unfolds accordingly.
Consciousness is not a clever algorithm. Causal power is not something intangible, ethereal, but something physical—the extent to which the system's recent past specifies its present state (cause power) and the extent to which this current state specifies its immediate future (effect power). And here's the rub: causal power, the ability to influence oneself, cannot be simulated. Not now or in the future. It must be built into the system, part of the physics of the system.
In other words, if you want to build a conscious machine, it has to be built for it.
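Koch's notion of cause power and effect power can be made concrete with a toy calculation. The sketch below is my own illustrative example, not IIT's actual Φ measure: for a hypothetical two-node boolean network, it measures cause power as how many bits of uncertainty about the past the current state removes, and effect power as how many bits it removes about the future.

```python
from itertools import product
import math

# Hypothetical toy system: two binary nodes (A, B).
# Update rule (my invention for illustration): A' = A OR B, B' = A AND B.
def step(state):
    a, b = state
    return (a | b, a & b)

states = list(product([0, 1], repeat=2))  # 4 states -> 2 bits of prior uncertainty

def entropy(dist):
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def cause_power(current):
    # How much does the current state constrain its possible pasts,
    # relative to a uniform prior over all 4 states (2 bits)?
    pasts = [s for s in states if step(s) == current]
    if not pasts:
        return None  # state is unreachable under this dynamics
    posterior = {s: 1 / len(pasts) for s in pasts}
    return 2.0 - entropy(posterior)  # bits of uncertainty removed

def effect_power(current):
    # With deterministic dynamics, the current state fully specifies
    # its successor, removing all 2 bits of uncertainty about the future.
    future = {step(current): 1.0}
    return 2.0 - entropy(future)

print(cause_power((1, 0)))  # two pasts map here, so 1 bit of constraint
print(effect_power((1, 0)))  # deterministic future: 2 bits of constraint
```

The point of the toy is only that "specifying the past" and "specifying the future" are quantifiable physical properties of a system's transition structure—something a simulation of the system reproduces functionally but does not itself possess, which is Koch's argument in a nutshell.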
IIT may or may not prove to be one of the winning ideas in consciousness science, but I appreciated reading about Koch's experiences and life story while being educated in his perspective.
The Artificiality Weekend Briefing: About AI, Not Written by AI