AIX: The Apple Intelligence Experience

AIX: The Apple Intelligence Experience, The Future of (Generative AI) Search, The Synergy of Human Creativity and AI, The ARC Prize and What it Means for AGI, and more!

An abstract image of an Apple iMac

The Imagining Summit

  • We are starting to take registrations for The Imagining Summit. Given space constraints, this will be an invite-only event. Please check out the details here and fill in the form, or simply reply to this email to indicate your interest. We look forward to bringing together a creative, diverse group of imaginative thinkers and innovators who share our hope for the future and are crazy enough to think we can collectively change things.

AIX: The Apple Intelligence Experience

Starting in 2016, we have included a slide like this in our keynote presentations:

Our message has been to anticipate how Big Tech will deploy AI based on their core missions. Google’s mission is to organize the world’s information, resulting in using generative AI to summarize the web through AI Overviews. Microsoft’s mission is to increase user productivity, resulting in Copilot. Amazon’s mission is to be the largest retail company, resulting in AI-written review summaries. While these companies’ businesses are broader than just the examples I’ve shared, this framework has proven to be useful in predicting AI products and services.

This week, Apple announced Apple Intelligence, the brand name for a suite of AI capabilities woven throughout iOS, macOS, applications, and developer toolkits. Unlike Microsoft Copilot, which brings AI capabilities into each application to enhance productivity, or OpenAI, which requires users to switch to a separate application, ChatGPT, to access its capabilities, Apple is using AI to create a more integrated experience.

I’m calling this AIX as an acronym for the Apple Intelligence Experience (yes, you can read it like the French and just say “x”). To me, the most important value of AIX isn’t as an assistant that pops up in various applications—the real value is that, as Apple described, it moves “in concert with you.” Given Apple’s universal access to your life across its devices, Apple has a near unique capability to understand the context of you and reason from that context what your intent might be. It can do that across applications—from Apple or from developers who embrace AIX—creating an integrated and intelligent interaction capability that hasn’t existed before.

In April 2023, we described foundation models (e.g., OpenAI's GPT-4o, the model underneath ChatGPT) as a new aiOS. Our point was to anticipate that these models would provide foundations for building new applications and experiences. That has, so far, proven true. Yet, while the world has been obsessed with large, cloud-based foundation models, it has missed what might become the broadest use of models under applications: the models Apple provides to its developers.

Part of the reason Apple’s announcement caught many off guard is because most people thought AI would only follow the path of the early movers. As I wrote in October 2023:

“Throughout the breathless coverage of moves by Amazon, Facebook, Google, and Microsoft, there has been an odd narrative developing: Apple is being left behind. The theory goes that since Apple hasn’t released a large language model or a text-to-image generator or aligned with a major AI startup through an investment, it will be left out of the AI revolution.
I think this theory is missing the plot. Apple isn’t being left behind—it’s playing a different game. While the rest of big tech is fighting to buy GPU chips and build data centers to support generative AI in the cloud, Apple is focused on edge AI—enabling AI on device. And that may give Apple a significant technical and financial advantage.
Apple has been investing in AI on device since the launch of the Neural Engine in 2017 and Core ML in 2018. This history is largely ignored, however, in the current discussion about AI leadership. Perhaps that’s because people perceive Siri to be less useful than Alexa or Google Assistant. Or perhaps that’s because Apple is relatively quiet about its AI work. Or perhaps it’s because people think of generative AI as a cloud-based service that needs to run on Nvidia GPUs.”

Apple’s announcement this week confirmed my theory that its AI strategy would be primarily on device. At the time, I had missed the idea that Apple might use its chips in its data centers (perhaps reflecting the memory of Apple abandoning the Xserve product line that I was involved with). While this is an important and smart strategy to maintain user privacy in the cloud, I think my prediction that Apple’s focus would be on device was correct. There simply isn’t a more secure way to run an AI model on user data than to run it on device.

Deploying AIX at the edge isn’t just about privacy, however. Again from October 2023:

“But I think it’s a mistake to dismiss the advantage that Apple has built over the past 6 years. While other AI leaders are battling each other to buy GPU chips from Nvidia to build new data center capabilities, Apple controls its own on-device chip strategy—and has sold more than a billion devices with its AI-enabled chips.
The number of neural engine devices in pockets around the world gives Apple an important advantage: a large customer base that can run ML models on their devices. This means that Apple and its developers do not need to build or operate the compute infrastructure to run models for iPhone users.
Apple’s customers have already paid for the compute required to run generative AI models on their phones. While every other tech company is spending billions on inference compute—Apple is being paid for it.”

I still think this is a profound and under-recognized advantage for Apple—and its developers. AI inference is expensive when operating in the cloud. But it is free on device. Today, our phones can’t run the most advanced models. But, history tells us that software will become more efficient and our phones will be able to handle more. Apple can take advantage of this free compute itself, and it can extend this advantage to its developers by integrating AIX into developer toolkits.
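To make the economics of this argument concrete, here is a rough back-of-envelope sketch in Python. The device count, per-device usage, and cloud pricing are illustrative assumptions for the sake of the arithmetic, not figures from this article or from Apple:

```python
# Back-of-envelope: the cloud inference bill that is avoided when models
# run on device. All inputs are illustrative assumptions, not reported figures.

def daily_cloud_cost(active_devices: int,
                     tokens_per_device_per_day: int,
                     cloud_price_per_million_tokens: float) -> float:
    """Estimated daily cloud bill if every on-device request instead ran in the cloud."""
    total_tokens = active_devices * tokens_per_device_per_day
    return total_tokens / 1_000_000 * cloud_price_per_million_tokens

# Hypothetical inputs: 1B AI-capable devices, 2,000 tokens per device per day,
# and $1 per million tokens of cloud inference.
cost = daily_cloud_cost(1_000_000_000, 2_000, 1.0)
print(f"${cost:,.0f} per day")  # $2,000,000 per day
```

The point of the sketch is the direction, not the numbers: whatever the true per-token price, a cloud provider bears that cost at its full user scale every day, while on-device inference shifts it onto hardware that customers have already purchased.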

Overall, AIX makes me the most excited I’ve been about AI since the initial launch of large language models. The integrated and universal capabilities of AIX are a step towards our dream of a Mind for our Minds. And it is great to see that not all of Big Tech are lemmings following in OpenAI’s path.

Finally, from October 2023:

“Apple is demonstrating an integrated approach for AI, using transformers for auto-correct, language translation, scene analysis in photos, image captioning, and more. While these implementations may not grab the headlines like ChatGPT, they are core to Apple’s services. They also demonstrate that Apple hasn’t missed the generative AI wave—it is simply charting a different path.”

This Week from Artificiality

  • Our Research: The Future of (Generative AI) Search. Explore the future of search with generative AI. Discover how Apple Intelligence, context-based understanding, intent-driven interactions, and integrated workflows are transforming search. Learn about the trust challenges and the critical balance needed for reliable, AI-powered search experiences.
  • Our Ideas: The Synergy of Human Creativity and AI. Discover how human creativity and AI collaborate in the face of advancements. Explore the unique qualities of human flexibility, diverse responses, and AI's ability to overcome creative blocks. Learn how architecture, analogical reasoning, and rap benefit from AI's divergent thinking capabilities.
  • The Science: The ARC Prize and What it Means for AGI. Explore the debate on achieving AGI: scaling laws vs new approaches. Learn about the ARC prize, a $1M competition challenging the current consensus and proposing a benchmark focused on skill acquisition. Discover why benchmarks matter in shaping AI's future and driving industry perceptions.

Bits & Bytes from Elsewhere

  • Time published an article by Michelle Peng about how AI can be used to help individuals improve their job skills and performance, based on a report by Charter. The article highlights five promising use cases for AI-based training: curriculum development, skills practice, AI tutor and content enhancement, mentorship and connection, and skills assessment. The article quotes Helen and the report quotes us both—check them out!
  • Related to our generative AI search research, The Verge reports that Google's AI Overviews are still recommending adding 2 tablespoons of glue to pizza sauce to keep cheese from sliding off. The craziest part is that the AI Overview cited a Business Insider article that was about the error. How meta (but not Meta, in this case).

Helen's Book of the Week

Then I Am Myself the World: What Consciousness Is and How to Expand It, by Christof Koch

Koch is a legend in neuroscience for his work on consciousness and its relationship to information processing, particularly his championing of Tononi's integrated information theory (IIT) of consciousness. Koch famously lost a long-running bet with David Chalmers: 25 years ago he wagered that researchers would learn how the brain achieves consciousness by now, and he had to hand over some very nice wine because we still do not know.

Koch's writing is fun to read: personal and engaging. His chapter on the ins and outs of IIT is a good summary if you're unfamiliar with the ideas and don't want to tackle the math that underlies them.

But I don't think the ideas about IIT or panpsychism are the reason to read this book. The reason to read it is its humanism: the account of how a famed scientist of consciousness has experienced profound changes to his own mind. Psychedelics and a near-death experience both make an appearance.

The other reason to read it is as an example of a recent shift in thinking around the role of consciousness and human experience. There is an emerging group of philosophers and scientists, including Adam Frank, Marcelo Gleiser, and Evan Thompson, who question the place of consciousness in science. In their book The Blind Spot (which I'll talk about in the coming weeks), they argue that human experience needs to be central to scientific inquiry. Koch's ideas are parallel in that he sees consciousness as having causal power over itself, that is, consciousness is a change agent in itself so cannot be "separated" from the practice of studying it.

Nowadays, it seems that any talk of consciousness is incomplete without a discussion of consciousness in machines. Koch does a good job of explaining current ideas around broader instantiations of consciousness, separating function from structure. He debunks some of the weirder Silicon Valley ideas of whole-brain simulations with his IIT view that consciousness is not solely computation; it is far more causally complex and unfolds accordingly.

Consciousness is not a clever algorithm. Causal power is not something intangible or ethereal, but something physical: the extent to which the system's recent past specifies its present state (cause power) and the extent to which its current state specifies its immediate future (effect power). And here's the rub: causal power, the ability to influence oneself, cannot be simulated. Not now or in the future. It must be built into the system, part of the physics of the system.

In other words, if you want to build a conscious machine, it has to be built for it.

IIT may or may not be one of the winning ideas in consciousness, but I do appreciate reading about Koch's experiences and life story while being educated in his perspective.


Facts & Figures about AI & Complex Change

  • 51%: Percentage of young people (ages 14-22) who have used generative AI at some point. (Common Sense Media)
  • 4%: Percentage of young people (ages 14-22) who use generative AI daily. (Common Sense Media)
  • 10%: Percentage of white young people (ages 14-22) who use generative AI at least weekly. (Common Sense Media)
  • 22%: Percentage of Black young people (ages 14-22) who use generative AI at least weekly. (Common Sense Media)
  • 41%: Percentage of young people (ages 14-22) who believe that generative AI is likely to have both a positive and negative impact on their lives in the next 10 years. (Common Sense Media)
  • 17%: Percentage of cisgender young people (ages 14-22) who believe generative AI will have a mostly negative impact on their lives in the next 10 years. (Common Sense Media)
  • 28%: Percentage of LGBTQ+ young people (ages 14-22) who believe generative AI will have a mostly negative impact on their lives in the next 10 years. (Common Sense Media)
  • 53%: Percentage of young people (ages 14-22) who use generative AI to get information, of those who have used generative AI at all. (Common Sense Media)
  • 51%: Percentage of young people (ages 14-22) who use generative AI to brainstorm ideas, of those who have used generative AI at all. (Common Sense Media)
  • 46%: Percentage of young people (ages 14-22) who use generative AI for help with homework, of those who have used generative AI at all. (Common Sense Media)
  • 31%: Percentage of young people (ages 14-22) who use generative AI to make pictures or images, of those who have used generative AI at all. (Common Sense Media)
  • 70%: Percentage of companies using LLMs that choose open source models. (Databricks)
  • 53%: Percentage of executives who are concerned people will make decisions using unreliable information from AI. (Asana)
  • 39%: Percentage of monthly generative AI users who report increased productivity. (Asana)
  • 73%: Percentage of weekly generative AI users who report increased productivity. (Asana)
  • 89%: Percentage of daily generative AI users who report increased productivity. (Asana)
