AI Agents, Mathematics, and Making Sense of Chaos
From Artificiality This Week: Our Artificiality Summit 2025 will be held on October 23-25 in Bend, Oregon.
In this issue: AI Regulatory Capture, LLMs Thinking about Space and Time, and Generative AI Agents.
For most of the ten years we have worked in AI, few people paid much attention. This week, everything changed as the drama at OpenAI was the top story in major newspapers day after day. Perhaps the narrative of money vs. safety tapped public concern over AI risk. Perhaps the sudden firing of a prominent and likable founder tapped the public’s obsession with founders as celebrities. Or perhaps the drama simply unfolded like Game of Thrones, with everyone rooting for a favorite character.
While the he-said/she-said drama captures the public’s attention and the news cycle, our interest is in the profound impact these events may have on AI and society. OpenAI was founded as a non-profit so that profit motives wouldn’t overwhelm concern for society. That noble objective slowly dissipated as the company created a for-profit subsidiary to raise billions of dollars, and as prominent employees left over safety concerns to found Anthropic. Throughout it all, CEO Sam Altman lauded his board’s ability to fire him if he didn’t follow the company’s mission. They did just that a week ago. And within a few days, he was back and the board was gone. So much for mission-oriented governance.
We don’t yet know what OpenAI will look like after the dust settles. But here are our main takeaways at the moment:
Hopefully, OpenAI’s new governance and mission will be clearer soon. For now, we encourage you to take a skeptical view and be wary of the Altman disciples who profess that he and his 700 employees are humanity’s only hope.
We are getting closer to an important stage in AI—generative AI agents. This is something on our watchlist because agentic AI is a marker of more general "real world" AI capability. This development matters because of the technical progress around memory, a necessary breakthrough for AGI. (For a discussion of natural general intelligence, listen to our interview with Chris Summerfield.)
The Stanford research unveiled generative agents that exhibit humanlike behaviors, grounded in a sophisticated memory system. Unlike traditional non-player characters in games, these agents use a large language model to remember interactions, build relationships, and plan events. This technical progress, particularly around memory, marks a critical step towards more lifelike and autonomous AI.
Here’s what’s exciting: significant technical progress on memory (think: ChatGPT remembers everything about you and can do things for you without detailed instructions). The advance goes beyond simply generating human-like responses to queries “by thinking through what it means for individual agents to generate believable humanlike behavior independently of human interaction, and converting that into a simple yet workable computational architecture.”
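To make the memory idea concrete, here is a minimal sketch of how an agent might retrieve memories by combining recency, importance, and relevance, in the spirit of the Stanford architecture. The decay rate, the equal weighting of the three terms, and the word-overlap stand-in for embedding similarity are our own simplifying assumptions, not the paper’s actual implementation:

```python
class Memory:
    """One entry in an agent's memory stream."""
    def __init__(self, text, importance, created):
        self.text = text
        self.importance = importance  # salience score, e.g. 0-10
        self.created = created        # timestamp in seconds

def _tokens(text):
    return set(text.lower().split())

def relevance(memory, query):
    # Toy Jaccard word overlap; a real system would use embedding similarity.
    a, b = _tokens(memory.text), _tokens(query)
    return len(a & b) / max(len(a | b), 1)

def recency(memory, now, decay=0.995):
    # Exponential decay per hour since the memory was created.
    hours = (now - memory.created) / 3600.0
    return decay ** hours

def retrieve(memories, query, now, k=3):
    # Score each memory as a sum of normalized recency, importance, relevance,
    # and return the top-k highest-scoring entries.
    def score(m):
        return recency(m, now) + m.importance / 10.0 + relevance(m, query)
    return sorted(memories, key=score, reverse=True)[:k]
```

A recent, important, on-topic memory (“planned a valentines party”) outranks a recent but trivial one (“ate breakfast”) for a party-related query, which is the behavior that lets these agents remember interactions and plan events rather than respond statelessly.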
Here’s the worry: these agents are smart but still make dumb AI-style errors. “They spoke very formally even to close family members, used the same dorm lavatory simultaneously, and went to lunch at the local bar rather than the cafe, as though they’d developed a day-drinking problem.”
Opportunities:
- Can populate virtual spaces, communities, and interactive experiences with believable human behavior and social dynamics.
- Can simulate complex interpersonal situations to allow people to safely practice and prepare for difficult conversations.
- Open up new directions to explore foundational HCI questions around cognitive models, prototyping tools, etc.
Threats:
- Risk of users forming inappropriate parasocial relationships with agents if they overly anthropomorphize them.
- Errors in agent reasoning could annoy users or cause outright harm if propagated into real-world systems.
- Could exacerbate risks like deepfakes, misinformation, and persuasion.
🔍 Read the full article on Stanford HAI and read the paper. This breakthrough not only enhances gaming experiences but also opens new avenues in social science research.
Writing and Conversations About AI (Not Written by AI)