AI Agents, Mathematics, and Making Sense of Chaos
From Artificiality This Week * Our Gathering: Our Artificiality Summit 2025 will be held on October 23-25 in Bend, Oregon.
We are dedicated to unraveling the profound impact of AI on our society, communities, workplaces, and personal lives. To truly grasp this transformation, our approach is rooted in critical thinking, logical analysis, and the scrutiny of underlying assumptions, principles essential to philosophical inquiry.
While considering the potential risks and implications of OpenAI releasing an AI voice that appears designed to draw people in, I found an interesting Reddit thread on r/artificialinteligence in which people share their preference for talking with AI over other humans.
I believe that accountability becomes more pronounced in the era of AI. While the accountability dynamics between humans and machines may differ, humans will invariably assert that if someone had access to such advanced AI capabilities, they had the means to make more informed decisions.
In the era of generative AI, characterized by seamless co-creation and the economics of intimacy, worries about AI encroaching on agency have more existential elements. They go to the essence of humanity, as machines now actively participate in significant aspects of original thought.
On a surface level, AI might seem like the perfect solution to the challenges of social learning. But is this really true? Is it actually what we want?
In science, traditional human search strategies are like wandering through a wilderness with limited visibility, relying on intuition and serendipity. AI, in contrast, can take in the whole landscape, quickly and effectively exploring the vast range of possible combinations.
Context is everything—whom you're with, where you're going, and why. Machines currently lack the ability to understand this context, but generative AI, especially modern large language models, holds the promise of changing this limitation.
The future of the extended mind is not a fixed destination but an ongoing journey that requires our active participation and reflection as we redefine what it means to be human in an age of artificial intelligence.
Complex change recognizes that change is non-linear, emergent, and deeply interconnected with the system in which it occurs. This is even more important as we adapt to the complexity of generative AI.
AI's potential is vast, but many applications remain incremental, simply grafting AI onto outdated frameworks. This is why we embrace complexity. We think about designing incentives, understanding self-organization and distributed control, and planning for emergence and adaptation.
I agree with the government that antitrust is important. That said, I would hate to see data privacy and security diminished just as we are trying to take advantage of AI’s power to be a personalized digital partner.
ChatGPT and similar tools can significantly alter workflows by changing how we match tasks with skills. Think of a two-by-two matrix: on one axis, you have the skill needed for a task; on the other, the worker's proficiency level.
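The two-by-two matrix above can be sketched in code. This is a minimal illustration, not from the original essay: the quadrant labels, score scale, and threshold are all assumptions chosen for clarity.

```python
# Hypothetical sketch of the task-skill / worker-proficiency matrix.
# Scores run 1-10; the threshold and quadrant labels are illustrative
# assumptions, not taken from the source text.

def quadrant(task_skill: int, worker_proficiency: int, threshold: int = 5) -> str:
    """Place a (task, worker) pair in one of the four quadrants."""
    demanding_task = task_skill > threshold
    proficient_worker = worker_proficiency > threshold
    if demanding_task and proficient_worker:
        return "expert work: AI assists at the margins"
    if demanding_task and not proficient_worker:
        return "stretch work: AI can lift the worker toward the task"
    if not demanding_task and proficient_worker:
        return "routine work: candidate for delegation"
    return "basic work: AI can largely handle it"

# A demanding task paired with a less-proficient worker lands in the
# quadrant where tools like ChatGPT change the match most dramatically.
print(quadrant(8, 3))
```

The interesting cell is the off-diagonal one: a high-skill task and a lower-proficiency worker, where generative AI can close the gap rather than merely speed up work the person could already do.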
The Artificiality Weekend Briefing: About AI, Not Written by AI