AI Agents, Mathematics, and Making Sense of Chaos
From Artificiality This Week
* Our Gathering: Our Artificiality Summit 2025 will be held October 23-25 in Bend, Oregon.
A personal assistant for everyone has been a goal in AI since, well, forever. When we first went deep into this topic in 2016, we came to a few conclusions.
Today, the issues we identified then are more visible than ever. The critical intersection in that complex Venn diagram, balancing a minimum viable business model against privacy, profit, and trust, remains unresolved. Researchers have documented collusion among instances of generative AI interacting online. And it's still uncertain how much people will trust and follow advice from an AI, even when they recognize that it might be in their best interest.
Still, I now think AI assistants will be essential tools, and here's why: the contemporary information sphere is simply overwhelming for humans. We are inundated with information that is excessive in volume, often poor in quality, and ill-suited for effective decision-making. A substantial portion of it is generated by machines, primarily to trigger responses from other machines.
Just as our bodies evolved during times of caloric scarcity, our minds developed in an era of information scarcity. Now we find ourselves overindulging in, or being force-fed, a diet of data-driven content engineered to hit the "bliss point." This trend is unsustainable, especially with estimates suggesting that 90% of the internet's content will soon be machine-generated. That content is not inherently harmful, but much of it exists merely as signals for algorithms, adding noise rather than substance.
What role does your personal AI assistant play in this scenario? It acts as your intermediary, communicating with these machines on your behalf and distilling the overwhelming cacophony of AI-generated information into clarity and insight.
Integrating advanced AI assistants into our daily lives raises critical questions about our agency, our autonomy, anthropomorphism, and our overall perception of machines. These questions are not merely theoretical; they have practical implications for how we interact with technology and with each other. As these technologies advance, they promise to extend human capabilities and improve efficiency, but they also risk fundamentally altering human interactions, privacy, and personal independence.
A new paper from Google DeepMind, The Ethics of Advanced AI Assistants, is a comprehensive guide to this technology's ethics. It's a monster at 273 pages. While the paper is centered on ethics, it also offers interesting perspectives on several key issues, which we highlight below.
Advanced AI assistants are engineered to perform a wide array of tasks, from mundane daily chores to complex decisions that traditionally required significant human judgment. Used well, these assistants will enhance our agency and decision-making skills by offloading routine tasks and by helping us grapple with more complex decisions.
However, the counterintuitive concern here is the potential erosion of autonomy. As AI assistants become capable of making decisions on behalf of users, there is a tangible risk that individuals may become passive bystanders in their lives and careers.
Over-reliance on AI could lead to a degradation of personal decision-making skills. When machines make choices for us, our own ability to analyze situations and make informed decisions could diminish, akin to a muscle that atrophies from lack of use. Thus, while AI promises to enhance our capabilities, it paradoxically threatens to weaken our mental faculties and decision-making prowess.
The design of AI assistants often involves making them appear or behave in a human-like manner to facilitate easier interaction and acceptance by users. This anthropomorphism can lead to users attributing human qualities to non-human agents, an effect that can significantly skew the perceived capabilities and roles of AI. Surprisingly, while this makes interactions more natural, it also sets up potential emotional dependencies that could impact mental health.
Users might begin to develop emotional attachments to these AI entities, viewing them as companions or confidants. This is particularly true as AI becomes better at mimicking human emotional responses. The novel risk here is the potential for emotional manipulation, whether intentional or accidental, by AI systems designed to interact in deeply personal ways.
As AI assistants handle more tasks with increasing competence, there's a growing concern about users becoming overly dependent on technology. This dependency isn't just about convenience; it's about the potential loss of ability to perform tasks without technological help. What’s counterintuitive is the possibility that instead of becoming more liberated, people might become more incapacitated.
Dependency could extend beyond practical tasks to cognitive functions, such as memory, navigation, and problem-solving, which can be outsourced to AI. This raises crucial questions about the long-term effects of this dependency on human cognitive development and societal norms concerning the value of learning and personal effort.
As AI systems become more embedded in everyday life, the fundamental human perception of machines is shifting. Traditionally viewed as tools or aids, AI systems with advanced capabilities and autonomy are beginning to occupy roles more akin to partners or collaborators. This shift could lead to a redefinition of work, collaboration, and creativity, as machines take on more active roles in these domains.
Counterintuitively, while this might suggest a potential for increased productivity and innovation, it also raises significant concerns about the displacement of human roles and the implications for job markets and individual self-worth. The perception of machines as partners rather than tools introduces complex dynamics in workplace hierarchies and professional relationships.
The fundamental takeaway from this paper is that designing true personal AI assistants requires a foundational shift toward responsible design principles from the outset, with new externalities and a broader systemic scope treated as primary considerations. Uniquely, AI assistants demand a departure from conventional design processes that center on the individual user; these technologies must instead start from a systemic perspective that encompasses the collective network of users. This shift represents a significant challenge for designers, compelling them to integrate broader societal impacts into the earliest stages of design.
This is a significant resource. Below is a guide to who should read it and why:
1. AI Ethicists and Researchers
2. Technology Developers and AI Engineers
3. Policy Makers and Regulatory Bodies
4. Business Leaders and Technology Strategists
5. General Public and AI Users
6. Educators and Academic Institutions
Link to the paper: The Ethics of Advanced AI Assistants
YouTube presentation and discussion led by the lead author, Iason Gabriel.
The Artificiality Weekend Briefing: About AI, Not Written by AI