Helen's Books of the Week

As meta-researchers, we consume ideas and research from a variety of sources. Books, in particular, are an important source. And Helen reads a lot of them. Each week she profiles one book in our newsletter—and this is the full list.


The Line: AI and the Future of Personhood, by James Boyle

In The Line, Jamie argues that the lines defining personhood and humanity are more fluid than traditionally believed, challenging our assumptions by exploring diverse contexts like animal rights, corporate personhood, and AI development. Hypothetical scenarios, such as a conscious AI demanding freedom or a genetically engineered being with enhanced intelligence, question whether designed entities could ever be considered "persons."

Jamie demonstrates with many examples how debates around personhood are not confined to a single discipline but permeate philosophy, law, art, and morality. And there are surprises here—namely, you might assume that the legal concept of corporations as persons was carefully thought out, but you’d be wrong.

Instead, the idea that corporations are “persons” under the law wasn’t a carefully planned decision, but rather a messy accident—a result of legal shortcuts and historical quirks. The Citizens United decision, though widely condemned as a radical expansion of rights, is just another chapter in a 135-year saga where corporate rights have been inconsistently justified and repeatedly contested. We don’t have a coherent philosophy or solid legal grounding for why corporations have ended up with these rights. This should make us wary when thinking about extending personhood to AI. If corporate personhood happened almost by mistake, the stakes are even higher when deciding if and how artificial beings fit into our constitutional framework. The message is stark—we can’t afford to be naive about how our current standards came to be shaped by history.

This book is fun to read—Jamie has a distinctive style and humor that makes for many laughs along the way. He's got a talent for crafting catchy phrases—take "sentences don't imply sentience" when talking about LLMs. I catch myself saying it as if it were my own!

The idea that resonated most with me was realizing how our definitions of who qualifies as "human" hinge on an unstable balance between empathy and pragmatism. This suggests that our classifications will always be influenced by our subjective experiences—whether something feels repulsive or evokes empathy, or simply because it’s more convenient to categorize it based on legal or practical needs, like whether we want to be able to sue it.

We believe this book is both ahead of its time and right on time. How? Because it sharpens our understanding of concepts that are difficult to grasp—that the boundaries between the organic and synthetic are blurring, and this shift will create profound existential challenges that we need to start preparing for now.

I'll leave you with my favorite quote from The Line. It captures the enormity of the transformation we might face when confronting synthetic beings, framing it as a profound existential moment, a challenge that is both urgent and transformative:

"Grappling with the question of synthetic others may bring about a reexamination of the nature of human identity and consciousness. I want to stress the potential magnitude of that reexamination. This process may offer challenges to our self conception unparalleled since secular philosophers declared that we would have to learn to live with a god shaped hole at the center of the universe." 

September 29, 2024


The AI Mirror, by Shannon Vallor

In The AI Mirror, Shannon Vallor suggests that, rather than propelling us into the future, AI more often reflects the limitations of our past. Vallor, a philosopher specializing in technology and ethics, frames AI as a shallow mimic of human cognition, reflecting our triumphs and failures alike, and constraining our reality to something less than we are.

Vallor's metaphor is clever because it works on different levels:

  • Reflection, not reality: Just as a mirror produces a reflection of your body rather than a real body, AI systems trained on human thought and behavior are not actual minds, but reflections of human intelligence
  • Backward-looking: AI mirrors point backward, showing only where data says we have already been, not where we might go in the future
  • Flawed and distorted: Like mirrors that can distort images, AI systems are "immensely powerful but flawed mirrors" that reflect human errors, biases, and failures of wisdom
  • Lack of understanding: Similar to how a mirror image lacks real understanding or consciousness, AI systems can mimic outputs of an understanding mind without possessing actual comprehension
  • Risk of narcissism: Vallor warns against becoming entranced by our own reflection in AI, similar to the myth of Narcissus who became captivated by his reflection. Don't we all know someone like that...

AI's backward-looking quality, she argues, makes AI ill-equipped to solve today’s complex problems. By simply reproducing old patterns, these systems risk trapping us in outdated paradigms.

There are solutions here. Ultimately, Vallor challenges the deterministic narrative that AI will inevitably surpass or undermine human agency. In many ways the mirror metaphor emphasizes the gap between AI's reflective capabilities and genuine human intelligence, and there's a lot to see.

I think this is one of the best books on AI for a general audience that has been published this year. Vallor’s mirror metaphor does more than just critique AI—it reassures. By casting AI as a reflection rather than an independent force, she validates a crucial distinction: AI may be an impressive tool, but it’s still just that—a mirror of our past. Humanity, Vallor suggests, remains something separate, capable of innovation and growth beyond the confines of what these systems can reflect. This insight offers a refreshing confidence amidst the usual AI anxieties: the real power, and responsibility, remains with us.

September 14, 2024


The Skill Code: How to Save Human Ability in an Age of Intelligent Machines, by Matt Beane

AI is changing the traditional apprenticeship model, altering how we learn and develop skills across industries. This is creating a tension between short-term performance gains and long-term skill development.

Dr. Matt Beane, an assistant professor at UC Santa Barbara and author of "The Skill Code," has studied this change. His research shows that while AI can significantly boost productivity, it may be undermining critical aspects of skill development. Much of Beane's work has involved observing the relationship between junior and senior surgeons in the operating theater. "In robotic surgery, I was seeing that the way technology was being handled in the operating room was assassinating this relationship," he told us.

"In robotic surgery, the junior surgeon now sets up a robot, attaches it to a patient then heads to a control console to sit there for four hours and watch the senior surgeon operate." This scenario, repeated in hospitals worldwide, epitomizes a broader trend where AI and advanced technologies are reshaping how we transfer skills from experts to novices. See one, do one, teach one, is becoming See one, and if-you're-lucky do one, and not-on-your-life teach one, Beane writes.

Beane argues that three key elements are essential for developing expertise: challenge, complexity, and connection. "Everyone intuitively knows when you really learned something in your life. It was not exactly a pleasant experience, right?" Struggle matters in the learning process. Struggle doesn't just build skills; it builds trust and respect with others, which is a critical aspect of how the entire system of human expertise works.

August 31, 2024


The AI-Savvy Leader, 9 Ways to Take Back Control and Make AI Work, by David De Cremer

De Cremer is the dean of the D'Amore-McKim School of Business at Northeastern University and has spent years involved in all things business and AI, including guiding the development of EY's AI Lab. So I was interested to read this short-and-to-the-point book about what it means to lead in this AI age.

Net-net, De Cremer has words for leaders: you aren't doing your job. 87% of AI initiatives fail. Why? "Leaders are not leading," he says. AI adoption and deployment is complex, and so is navigating the narratives around it. Leaders have to act, yet that is hard to do when you don't understand what AI even is.

If you've been involved in AI for a while, much of this book may feel familiar. However, if you are new to leading AI initiatives in business, you’ll find it quite valuable, especially if you are open to developing the right mindset. 

What is the "right mindset"? Here, I wholeheartedly agree with De Cremer, whose insights are particularly resonant. I'd like to share a section directly: on the two fundamentally opposing perspectives around how to use AI—do you side with those who are in the endgame of making AI more and more like the human brain, or are you looking for something else?

As a leader, you have a decision to make. Which of these two perspectives do you adopt when bringing AI into your organization?
Perspective 1: AI is an increasingly cheap way to replace people and achieve new levels of productivity and efficiency.
Perspective 2: It's a powerful tool to augment—but not replace—human intelligence and unlock more innovation and creativity in workers

Everything tees off from here. As a leader, do you understand:

  • how to frame AI and recognize that your technologists don't?
  • that your employees are your first stakeholders in AI?
  • that there is no efficiency without time for reflection and learning (aka down time)?
  • that real augmentation is hard work but arguably better than over-indexing on automation, which results in de-skilling and diminishes the power of human intelligence?

This book isn't just a clarion call to leaders; it's also for those responsible for fostering human flourishing in organizations, such as learning and development groups. It provides a foundation to build your business case and to articulate the importance of integrating human creativity with machines, emphasizing the vital role of human intelligence in the workplace.

June 30, 2024


The Importance of Being Educable, A New Theory of Human Uniqueness, by Leslie Valiant

The latest book by Turing Award winner Leslie Valiant builds on his previous work on the "Theory of the Learnable," otherwise known as Probably Approximately Correct or PAC learning. In PAC learning, an algorithm uses past experience to form a hypothesis that can guide future decisions with a controlled probability of error. Today, this sounds simple, but when Valiant first proposed it in his 1984 paper, it was groundbreaking for combining artificial and natural learning into one framework, ultimately giving rise to the research area we now know as computational learning theory.
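
For readers who want the formal version (this is the standard textbook statement, not a quotation from the book): a learner is probably approximately correct if, for any accuracy ε and confidence δ you choose, after seeing enough independent examples it outputs a hypothesis h satisfying Pr[error(h) ≤ ε] ≥ 1 − δ, with the number of examples it needs growing only polynomially in 1/ε and 1/δ.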

Valiant's latest book is important for this reason: it lays out the fundamentals of what makes humans special—our ability to absorb, apply, and share knowledge, which he calls "educability". Our brains have unique abilities to learn based on particular ways in which we process information, unequaled by any other species on the planet and, for now, by AI.

What's special about this book is how effectively the author explains abstract concepts around learning algorithms and then applies them to human learning. As a reader, you constantly move between abstractions and relatable, grounded examples. This approach builds a sense that: 1) learning is computable, and 2) AI must eventually reach this level but is nowhere close today.

As human learning and machine learning continue their arms race, I found this book incredibly helpful for understanding the underlying fundamentals of 'educability' algorithms and their implications for advancing human knowledge. The book advocates for greater investment in learning and education, emphasizing what makes us unique.

Check out our interview with Leslie on this week's podcast.

June 23, 2024


Then I Am Myself the World, What Consciousness Is and How to Expand It, by Christof Koch

Koch is a legend in neuroscience for his work on consciousness and its relationship to information processing, particularly his championing of Tononi's integrated information theory (IIT) of consciousness. Koch famously lost a long-running bet with David Chalmers. He wagered Chalmers 25 years ago that researchers would learn how the brain achieves consciousness by now, but had to hand over some very nice wine because we still do not know.

Koch's writing is fun to read, personal and engaging. His chapter on the ins and outs of IIT is a good summary if you're unfamiliar with the theory and don't want to tackle the math that underlies it.

But I don't think the ideas about IIT or panpsychism are the reason to read this book. The reason to read it is for its humanism—if you want to read about how a famed scientist of consciousness has experienced profound changes to his own mind. Psychedelics and a near-death experience are here.

The other reason to read it is as an example of a recent shift in thinking around the role of consciousness and human experience. There is an emerging group of philosophers and scientists, including Adam Frank, Marcelo Gleiser, and Evan Thompson, who question the place of consciousness in science. In their book The Blind Spot (which I'll talk about in the coming weeks), they argue that human experience needs to be central to scientific inquiry. Koch's ideas are parallel in that he sees consciousness as having causal power over itself; that is, consciousness is a change agent in its own right and so cannot be "separated" from the practice of studying it.

Nowadays, it seems that any talk of consciousness is incomplete without a discussion of consciousness in machines. Koch does a good job at explaining current ideas around broader instantiations of consciousness—separating function from structure. He debunks some of the weirder Silicon Valley ideas of whole-brain simulations with his IIT view that consciousness is not solely computation but something far more causally complex that unfolds accordingly.

Consciousness is not a clever algorithm. Causal power is not something intangible, ethereal, but something physical—the extent to which the system's recent past specifies its present state (cause power) and the extent to which this current state specifies its immediate future (effect power). And here's the rub: causal power, the ability to influence oneself, cannot be simulated. Not now or in the future. It must be built into the system, part of the physics of the system.

In other words, if you want to build a conscious machine, it has to be built for it.

IIT may or may not be one of the winning ideas in consciousness science, but I do appreciate reading about Koch's experiences and life story while being educated in his perspective.

June 16, 2024


The Age of Magical Overthinking, Notes on Modern Irrationality, by Amanda Montell

Montell, the author of Cultish and Wordslut, asks: How do we escape the relentless cycle of confusion, obsession, second-guessing, and information overload? She explores the dilemmas of the 21st century, noting how "magical thinking"—the belief that our internal thoughts and feelings can influence unrelated external events—is unraveling our society.

The book reads more like a series of essays than a cohesive narrative, but Montell highlights the mismatch between our brain's functioning and a world dominated by the internet, media saturation, and AI systems.

Montell's take on cognitive biases is modern, cheeky, and centered on social media and celebrity culture. The book is accessible and true to the scholarship behind the scenes, including some of our favorite scholars: Steven Sloman and Barbara Tversky both feature at points.

I couldn't find much on overthinking itself, despite searching for it. Maybe overthinking is just something we all do in our 30s. I know I did.

The book can give you a different kind of insight into cognitive biases. But it doesn't help you head them off. For that, I'd recommend pairing this book with ours—Make Better Decisions—where we take the scholarship (including Sloman's and Tversky's) to help you develop a practice in this zone.

June 9, 2024


The Battle for Your Brain, Defending the Right to Think Freely in the Age of Neurotechnology, by Nita A. Farahany

I read this book a year ago when it first came out. If you haven't noticed the rapid advances in brain-computer interfaces and neuro-tech used for marketing and job surveillance, now is the time to pay attention. This book is concerned with the future of our cognitive liberties in a world where our thoughts and mental states can be monitored, manipulated, and commodified.

Farahany is an expert in the ethics of neuroscience. She describes examples of brain-tracking technologies that are being used today as well as what's coming. A world where your brain's activity can reveal your political beliefs, your thoughts can be used as evidence in court, and your emotions can be manipulated by external forces is no longer science fiction. Farahany lays out the urgent need for safeguards to protect fundamental human rights.

Farahany's argument includes an exploration of "cognitive liberty"—the right to self-determination of our thoughts and mental experiences. It's not too far in the future when AirPods or other wearables will be able to translate our thoughts. Staying informed about these issues now is a smart move.

June 9, 2024


The AI Mirror, How to Reclaim Our Humanity in an Age of Machine Thinking, by Shannon Vallor

Vallor's book on AI and humans is the best of its kind I've read in years. Her writing style is a perfect fit for how I absorb information. Crystal clear and jargon-free, her descriptions of the tech itself are spot-on yet totally accessible.

Vallor is the Baillie Gifford Chair of the Ethics of Data and AI at the University of Edinburgh and has worked with AI researchers and developers for many years. She describes herself as a virtue ethicist and evaluates AI's reflection of us by considering how it may alter our perspectives on virtues versus vices. She embraces, elaborates on, and wrings every drop out of the metaphor of AI as a mirror.

This starts with how AI is built on a foundation of historical data, which means that humanity can't afford to rely on it. If we do, we risk dooming ourselves to being trapped in the past. "The conservative nature of AI mirrors not only pushes our past failures into our present and future; it makes these tools brittle and prone to rare but potentially spectacular failures," she writes. Touché.

Many otherwise familiar ideas gain new depth through her interpretations and scholarship. I discovered numerous concepts she brings to light from the history of the philosophy of technology. For instance, the notion that AI mirrors make us more like machines (instead of making machines more like humans) was termed "reverse adaptation" by the philosopher of technology Langdon Winner in 1977. Today, we see this with workplace surveillance transforming workers into metric-monitored automatons of efficiency.

Perhaps what I appreciated most in this book was her scathing appraisal of AGI, much of which I completely agree with. There are so many brilliant sentences! One that particularly stands out encapsulates the emerging anxiety about AGI (despite Sam Altman's enthusiasm for the idea) by drawing a parallel to the factories of the nineteenth century: "Visions of AGI overlords that cruelly turn the tables on their former masters mirror the same zero-sum games of petty dominance and retribution that today drain our own lives of joy."

But it's not all doom and gloom. Vallor highlights instances of AI being used as a tool for cultural recovery within Indigenous cultures, serving as a mechanism for reclamation. Another example is "reverse bias," where AI helps doctors become more aware of the historical under-treatment of Black people's pain. These are small but significant glimmers of hope. They highlight one of the values of AI: by revealing and measuring such issues, we can learn to reason about them differently.

The AI Mirror is worth your time if you're looking for a realistically skeptical view of tech with glimmers of hope for a more virtuous future with AI.

June 2, 2024


AI: Unexplainable, Unpredictable, Uncontrollable by Roman V. Yampolskiy

AI safety is an important topic, and this book aims to lay out in detail the theoretical and philosophical arguments for AI as something humans will not be able to control. It is a highly analytical book but remains quite readable, even enjoyable (if you overlook the fact that we are all going to die if he's correct).

Making AI safe is not trivial. Yampolskiy takes the reader through the ugly truth hidden in the math of AI, the fractal nature of safety fixes, the asymmetries of vulnerability, and many more factors that add up to the disaster for humanity that would be machine superintelligence. We would have no way to compete—especially if such an AI decided it wanted to compete with us. As the saying goes, there aren't any examples of a lower intelligence doing well when a higher intelligence comes along. 

Not everything in this book worked for me. The chapter on consciousness was bizarre and seems to fly in the face of the current state of consciousness science. For instance, Yampolskiy claims that "computers can experience illusions, and so are conscious." 

I just don't buy this, and I favor Sam Harris's perspective on consciousness and illusions as a counterpoint. Harris argues that "consciousness is the one thing in this universe that cannot be an illusion." His reasoning is grounded in the idea that the very experience of an illusion presupposes the existence of a conscious observer who is being deceived. In other words, consciousness is a prerequisite for experiencing an illusion, not something an illusion can prove. A computer that misclassifies an input isn't "experiencing an illusion" in any sense that demonstrates consciousness; describing it that way assumes the very thing it is supposed to show.

Perhaps the immediate takeaway from this book is that anyone considering the future of agentic AI should be thinking upfront about controllability. Controllability should be first and foremost in designers' minds and in no way left as an afterthought, something bolted on ex post. This perspective is even more important with the increasing capabilities of agentic AI.

May 26, 2024


How We Live Is How We Die by Pema Chödrön

If you are after a complete antidote to AI, this book might just fit the bill. When I was diagnosed with cancer in 2018, I took up meditation as a way to deal with the waves of panic that accompany such a diagnosis. Along the way I began to realize that meditation is not about stress relief or anxiety reduction or even relaxation—it's something altogether different. I have listened to Buddhist teachings in my meditation app (which contains a satisfying mix of Eastern philosophy and modern neuroscience as well as various meditation practices), but I had not read any teachings until I picked this book up in a local store.

Pema Chödrön was born Deirdre Blomfield-Brown in New York City in 1936, became a novice nun in 1974, and is now the director of Gampo Abbey in rural Cape Breton, Nova Scotia. In this compact and informative book, she shows how, by reframing your perspective on life and cultivating a new relationship with your neuroses and the emotions they elicit, you can develop skills that make the process of dying less terrifying.

Like I say, a beautiful antidote to the machines and all that their masters have had to say this week. 

How we work with our thoughts and emotions now is what we'll take with us when we die. We can't put it off until the end; by then it will be too late. So now is the time. How we live is how we die.

May 19, 2024


Creativity in Large-Scale Contexts: Guiding Creative Engagement and Exploration by Jonathan S. Feinstein 

Creativity is an endlessly fascinating concept. I've never heard anyone say they want to be less creative. We value our creativity enormously, and worries that machines might intrude or take over feel existential. The idea that network analysis and machine learning might help us better understand creativity might seem ironic to some, but networks are how we are beginning to think about creativity.

In a new book called Creativity in Large-Scale Contexts, Yale School of Management Professor Jonathan Feinstein explains his framework for understanding creativity as a network that spans the full context of someone's experience. He demonstrates—through visualizations and analysis of the lives of creatives across art, writing, science, and technology—how networks can capture the structure of creative environments by detailing both the elements and their interconnections. Elements are the building blocks or "raw materials" for creative thinking, while relationships allow the flow of ideas and the generation of novel connections. 

Central to Feinstein's approach are the concepts of guiding conceptions and guiding principles. Guiding conceptions, which are individualistic, intuition-centric, and highly creative, help identify promising directions for exploration and generate seed projects. Guiding principles, on the other hand, are more widely accepted within a field and serve to filter out flawed seed projects while guiding the discovery of key elements that can transform a seed into a high-potential project.

This book offers new insights for understanding creativity deeply, not only theoretically but also as practical concepts for innovation and discovery. We found it fascinating as it links the worlds of innovation and human creative pursuits with complexity and machine learning by applying network theory and analysis. It made us think differently about creativity in an AI age and we think it will do the same for you.

Our podcast interview with Feinstein will be released in a couple of weeks, giving you the perfect opportunity to read his book before hearing him elaborate on his ideas in more depth.

May 12, 2024


When Science Meets Power: What Scientists Need to Know about Government and What Governments Need to Know about Science by Geoff Mulgan

Geoff Mulgan’s When Science Meets Power gave me a different perspective on the relationship between scientific innovation and political authority. 

Mulgan, a seasoned expert in public policy and former CEO of Nesta, describes the complex dynamics that arise when the realms of science and government collide. His analysis is particularly relevant in the context of AI, where advancements have many implications for governance, public policy, and democratic processes. 

This is the third book by Geoff Mulgan that I've read, following Big Mind, which explores collective intelligence, and Prophets at a Tangent, which examines how art fosters social imagination. It seems to represent the culmination of his exploration into society as a complex, collective system. Mulgan has a knack for distilling complex ideas into memorable sound bites. For instance, he discusses the challenge of reconciling scientific "fact" with public acceptance of these facts, stating: "Although science can map and measure, it has no tools for calibrating." This phrase resonates with me as it succinctly captures the idea that the broader collective—whether in society, an organization, or a family—ultimately determines the placement and weight of scientific knowledge within its cultural context.

The COVID-19 pandemic illustrated this dynamic vividly, showing how different countries interpreted and acted upon the same scientific facts in varied ways. While science provided data on excess deaths and insights into the effects of isolation and disruptions to children's education, it fell to politics to navigate the associated trade-offs and risks. This serves as a reminder of the "muddled and muddied" relationship between science and politics.

My favorite section of the book is in the concluding chapters, where Mulgan discusses science, synthesis, and metacognition. He emphasizes that all complex issues fundamentally require synthesis, illustrates the difficulty of this process, and highlights a common epistemological mistake: misinterpreting the relationship between knowledge and action. Mulgan argues that possessing knowledge does not directly translate to specific actions. To show this, he identifies 16 types of knowledge that could influence a decision-making process, including statistical, policy, scientific, economic, implementation, futures, and lived experience. Next time you're trying to synthesize something, try compiling such a comprehensive list. I'd be surprised if it didn't sharpen your perspective.

As someone who often leans towards the "follow the science" approach, one takeaway from Mulgan's book for me was a reminder of the need for humility in science about its own state of knowledge. He reminds us that science alone cannot answer all of our significant questions because humans inherently seek meaning. Often this philosophical perspective is at odds with scientific perspectives that emphasize the cosmic irrelevance of humans, challenging the notion that science can be the sole arbiter of truth in our quest for understanding and significance.

I find myself eager to pose a question to Mulgan: As machines develop knowledge from complex, high-dimensional correlations that extend beyond our human capacity to conceptualize, what role will scientists play in attributing significance and meaning to these findings? This question gets to a critical issue that remains largely unaddressed in the evolving landscape of AI—a future where the integration of machine intelligence in our discovery processes challenges the traditional roles of scientists.

May 5, 2024


Why We Remember: Unlocking Memory's Power to Hold on to What Matters by Charan Ranganath, PhD

"Why We Remember" by neuroscientist and psychologist Charan Ranganath, details how our brains record the past and utilize that information to shape our present and future. It's a very accessible book and has been widely reviewed to popular acclaim. 

But here's why it particularly interested me: the book illustrates, almost like a sleight of hand, how memory's crucial role in shaping our sense of self, our navigation of the world, and our creative and innovative capacities makes human intelligence fundamentally different from AI's.

Ranganath isn't explicit about this; these are more my gleanings. There are many of these contrasts implicit in the book, but a few that stand out to me are context, meaning, and expertise.

Human memory relies on context and meaning, recursively shaping both in the process. The 'where' and 'when' of our memories are processed separately from the 'what.' This contextual encoding allows us to form rich, multidimensional memories that deeply connect to our sense of self and our understanding of the world. Memories are retrieved flexibly and associatively, reflecting our individual identities. In contrast, AI 'memory' is often designed to strip away context, reducing data to decontextualized information. This raises the question: can AI truly understand 'meaning'? It seems that expecting AI to find meaning might be a fundamental category error.

Expertise in a particular domain is not just about the ability to see patterns, but also about the way we find them. This suggests that expertise involves a deep understanding of the context and meaning of the patterns we observe, rather than just accumulating a vast store of information in ever more detail. This implies that AI's kind of memory—for facts and associations—won't result in the same kind of expertise that a human develops. AI's memory might be more "factual," but a human will retain the valuable skill of insight.

As biological, evolved beings, we remember so that we can make better predictions and decisions about the future. This book successfully liberates the concept of memory from the outdated metaphor of a storehouse. Instead we see memory as an active, dynamic process that is closely tied to our goals, values, and sense of purpose. This connection is something AI cannot replicate in the same manner as humans.

April 28, 2024


AI Needs You, How We Can Change AI's Future and Save Our Own by Verity Harding

Harding has a strong pedigree in AI and public policy. She was the head of policy for Google DeepMind, and is now director of the AI & Geopolitics Project at the Bennett Institute for Public Policy at the University of Cambridge. Many books about AI's place in society and AI alignment are written from the perspective of the uniqueness of AI. This book, by contrast, is grounded in real-world coordination problems—nuclear weapons in orbit, fertility technologies, internet governance. Harding gives the reader a deep and contextually rich history for each case study. In this way, the analogies are rich and practical. The high-level lessons perhaps aren't surprising—will, leadership, coordination, time—but the details matter if we want enduring institutions designed for AI.

Harding discusses the risks and concerns thoroughly but without the hand-wringing that often accompanies the impact of AI on society. Difficult lessons from the past, social and scientific tensions, and challenges of coordination and complexity are informative and feel very human. Which makes them feel possible. 

An important lesson from this book is one of cultural context. After reading it, I'm more convinced than ever of how critically important it is to recognize the cultural context of the day. And today, that context is a world where trust in institutions has been shattered, right as "intelligence" is being institutionalized inside the big companies developing AI. This factor sets apart how we have to deal with AI compared with technologies of the past. We can't ignore the cultural backdrop of eroding public trust as we grapple with the rise of powerful AI systems being built within the walls of large corporations. It's a defining feature of the AI landscape today and something we absolutely have to keep front and center.

This is a great book if you want to deepen your contextual understanding of the options available to us to shape a world where AI is a fundamental power in its own right. It's a tangible perspective on the reality of the choices we need to make about AI.

April 21, 2024


Co-Intelligence, Living and Working With AI by Ethan Mollick

Ethan Mollick has carved out a phenomenal role for himself as the guru of generative AI, so it's not surprising that his book is both broad and comprehensive. It's a great summary and introduction for anyone who hasn't yet come to grips with modern AI, as well as a handy reference for those who are already across the topic.

Mollick's central idea is that AI is an alien intelligence that we need to learn to live and work with in ways that make humans better and faster. The book offers his practical tips for doing this as well as a solid description of how AI works, what we understand, and what we don't understand. Mollick's perspective skews optimistic, as he doesn't delve deeply into the potential downsides, counterintuitive results, or unintended consequences that may arise from the early adoption cases he cites. However, he does balance this optimism by acknowledging the complexity of the social and cultural adoption of generative AI. 

Mollick's book issues a clear call to action that resonates with us, particularly regarding how managers assess their employees. He highlights a crucial shift in the way we evaluate effort, care, and expertise in the age of generative AI. Traditionally, managers relied on proxies such as the number and quality of words produced by an individual to gauge their performance. However, with the advent of generative AI, these measures have become obsolete. Mollick argues that managers must now adapt their assessment strategies to account for the transformative impact of this technology on the nature of work.

It's clear that many of Mollick's ideas will evolve over time as the technology advances and barriers to adoption are overcome through better design. Mollick's work serves as a bridge to this new world, offering valuable insights at our current transition point. One idea that stuck with us is Mollick's perspective on the unique opportunity presented by the current state of AI. He argues that the immaturity of AI interfaces and their implications for human-AI interaction encourage us to engage in reflection. In other words, the absence of refined design prompts us to deliberate more deeply on the technology's impact and potential.

Mollick's book is fundamentally about empowering individuals to approach the future of AI with practical knowledge and personal agency. What sets his work apart is his focus on the current state of evidence and his balanced perspective on how humans actually interact with AI and the underlying reasons for their behavior. By providing readers with a grounded understanding of the present landscape, Mollick equips them to navigate the co-intelligent future he envisions, where humans and machines work together seamlessly. His book delivers on its promise to guide readers towards a future of co-intelligence, offering actionable insights and strategies for individuals to harness the potential of AI while maintaining their autonomy and decision-making power.

April 14, 2024
