The $1 Trillion Question

In this issue: The $1 Trillion Question, Mortality in the Age of Generative Ghosts, Your Mind and AI, How to Design AI Tutors for Learning, The Imagining Summit Preview: Adam Cutler, and Helen's Book of the Week.

[Image: an abstract data center made of gold coins]

In late June, Goldman Sachs published a report questioning whether Big Tech will earn an acceptable return on the expected $1 trillion in capex to support AI. Through interviews with and contributions from GS analysts and outside experts, the report presents contrasting views on the future of AI.

On one end of the spectrum are those who evaluate the potential impact of AI based on its current state: the current capabilities of generative AI models, realized productivity gains, and established workflows. On the other end of the spectrum are those who anticipate change: software will improve, costs will decline, and people will adapt.

I’m with the latter group for three reasons:

  1. AI at the Edge. The skeptics question whether Big Tech can earn an adequate return from its data center build-out, assuming that the value must be realized solely through cloud services. I believe this calculation makes a significant error by overlooking the value of AI deployed at the edge. As we have previously discussed, we think the AI industry will change enormously as AI models move to desktop and mobile devices, reducing or replacing the need for data center compute. Today, every query you make to a generative AI model is sent to the cloud. But soon (starting with Apple Intelligence later this year), AI processes will begin to run on your device—eliminating much of the cloud cost that the skeptics worry about. I see edge AI as an extension of the spend on data centers: models will be trained and updated in the cloud but will run on device at no additional cost.
  2. Beyond the Transformer. The skeptics rightfully point to generative AI’s current limitations: hallucination, limited productivity gains, limited applications, and so on. While I agree with much of this critique, I believe the skeptics misunderstand the current state of generative AI. Most of today’s generative AI applications are minimally designed front ends that provide access to a single model. Yes, these models are powerful and perform tasks that were unthinkable only a few years ago. But the applications themselves are primitive. We have yet to see the novel and useful workflows that designers will build on top of generative AI models. We have also only scratched the surface of what’s possible when combining multiple models, alternatives to transformers, and other AI advances. Many of these advances are in the pipeline, so ignoring their potential impact seems like an oversight, even if we can’t know exactly when each will arrive.
  3. Intimate Data. Our core thesis for the future of AI, the Intimacy Economy, holds that the true power of generative AI will be realized through machines’ intimate understanding of users. This understanding is what we predict will allow machines to grasp user context, interpret user intent, and take action on behalf of users. We believe that this intimate data, paired with the privacy and security of edge AI, will deliver a leap in capabilities. Yes, this will take time to develop, but there’s nothing to indicate that the current infrastructure build-out will not apply to the Intimacy Economy.

Just as the early Web was underestimated before secure connections, multimedia, broadband, and much more arrived, I think today’s skeptics are underestimating the potential of this new platform technology. Will today’s generative AI provide an adequate return on investment? No. But Big Tech isn’t investing for today’s generative AI—it’s investing for what’s next.


This Week from Artificiality

  • Our Ideas: Your Mind and AI. Magritte's "The Son of Man" serves as a metaphor for our AI-dominated world, where digital experiences increasingly obscure our view of reality. The concept of model collapse in AI mirrors the cognitive effects of solitary confinement in humans, highlighting the dangers of digital isolation. Our minds, like predictive engines, struggle to maintain coherence without diverse input, leading to distorted perceptions. The seductive illusion of connection through AI assistants poses risks to our emotional well-being, as exemplified by users forming deep bonds with chatbots. True reality engagement requires deliberate choices: opting for genuine human interactions, cultivating self-awareness, and periodically disconnecting from AI. This balance allows us to harness AI's benefits while preserving our grasp on reality. The challenge lies in developing metacognitive skills to navigate this new landscape, recognizing when to peer beyond the "green apple" of our AI-mediated experiences and engage with the world as it truly is.
  • The Science: How to Design AI Tutors for Learning. Can students learn better with generative AI? The promise of edtech has always been about the personalization of learning at scale. Generative AI now offers the capability to understand a learner's knowledge state and respond like a teacher. But will this vision work, and what might be the downsides? A recent study suggests that generative AI tutors can help students learn but only when the AI is specifically designed for learning. Without specific design parameters targeted at learning, students who rely on AI tutors will likely perform worse when the digital crutch is taken away.

The Imagining Summit Preview: Adam Cutler

Designing for the Unexpected: Innovation

Adam Cutler, a distinguished designer from IBM, will take us beyond creativity into the realm of innovation. In this session, Adam will help us see how we can design AI in a way that fosters new and unexpected uses, especially when placed in the hands of the next generation or inventive experts who play with ideas and recombine old with new. While we can't predict the exact outcomes, Adam will guide us through an imaginative exploration of the potential. This session promises to spark collective creativity and imagine surprising possibilities that the future of AI might hold.

Check out the agenda for The Imagining Summit and send us a message if you would like to join us. We're excited to meet our Artificiality readers in person!

💡
The Imagining Summit will be held on October 12-14, 2024 in Bend, Oregon. Dedicated to imagining a hopeful future with AI, The Imagining Summit gathers a creative, diverse group of imaginative thinkers and innovators who share our hope for the future with AI and are crazy enough to think we can collectively change things. Due to limited space, The Imagining Summit will be an invite-only event. Follow the link and request an invite to be a part of this exciting event!

Helen's Book of the Week

Brave New Words: How AI Will Revolutionize Education (and Why That's a Good Thing), Salman Khan.

In his latest book, Sal Khan, the founder of Khan Academy, explores how AI will transform education. Known for his tech-forward approach to education, Khan embodies what we might call a thoughtful AI optimist, distinguishing himself from the more unthinking enthusiasts.

If you want a solid summary of the promise of AI in education, this is a good place to start. Khan explains his perspective on AI in education: one-on-one tutoring remains the best way to learn, but the only way to do this at scale is with AI personalization. Modern generative AI can do this, as well as provide instant feedback and data-driven guidance for students, teachers, and parents.

One of Sal Khan's key messages is that AI tutors are meant to scale education for all learners, not to replace human teachers. He believes teachers will be more important than ever, as AI can free up their time for more meaningful interactions. Yes—there's enduring value in the human touch in education. What stands out is Khan's discussion on the economics of education—comparing the costs of human-based versus tech-based learning. It's undeniable that AI will be essential for achieving scale and managing costs, making it a crucial component of future education systems.

Something I would have liked to see him address is the impact of the attention economy on education. What happens when learners' attention is shaped by algorithms? What is his view on the role of Big Tech platforms in this context? I really wanted his perspective on these philosophical and competitive issues, as they are crucial to understanding the future landscape of education technology.

If you don't want to read the book, I recommend listening to The Next Big Idea interview with him. But if you're a teacher or a parent and you want a thorough summary, it's worth reading the book in full.
