AI Agents, Mathematics, and Making Sense of Chaos
From Artificiality This Week

* Our Gathering: Our Artificiality Summit 2025 will be held on October 23-25 in Bend, Oregon.
* The Artificiality co-founders, Helen and Dave Edwards, gave a presentation on AI and Marketing for the Global AI Advertising Summit at The New York Times.
Scroll down to read the case study below.
As AI diffuses through the global economy, companies are looking to make their workforces “AI-ready.” Digital transformation, big data, analytics, and the cloud all enable new services and faster innovation. For those who deal in data all day, every day—the “data-natives”—this transformation is a way of life. But for others—the “data-curious”—it can be daunting to keep up. Still others—the “data-deniers”—find it a challenge to see where they fit in.
Our answer is that everyone has a role in an AI-first company. The real challenge is getting everyone to a common baseline: a shared understanding of the power, reach, promise, and perils of modern AI, and the ability to contribute to innovation and operational practices with an eye to the AI workflow.
When The New York Times approached us to host a session on AI-readiness for the ad team, we knew we were going into an organization that already had sophisticated AI at work. The ad innovation team at The Times had spent two years developing audience models that could be offered to advertisers as contextual targeting tools. This included using panel-based data to construct an algorithm that scores all New York Times articles against 18 different emotions, such as “curious” or “optimistic.” The technology also predicts how likely an article is to motivate a reader to take a particular action, such as making a charitable donation, embarking on a dietary change, or spending a significant amount of money.
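The Times has not published how its model works, and it is trained on panel-based survey data rather than anything this simple. Purely as a hypothetical sketch of what multi-label emotion scoring looks like in principle, here is a toy keyword-based scorer (the emotion names come from the article; the keywords and function are illustrative inventions):

```python
# Illustrative sketch only: NOT The Times's actual model.
# Each emotion is scored as the fraction of its (hypothetical)
# keywords that appear in the article text.

EMOTION_KEYWORDS = {
    "curious": {"why", "mystery", "discover", "question"},
    "optimistic": {"hope", "progress", "improve", "breakthrough"},
}

def score_emotions(text):
    """Return a score per emotion: fraction of that emotion's keywords present."""
    words = set(text.lower().split())
    return {
        emotion: len(words & keywords) / len(keywords)
        for emotion, keywords in EMOTION_KEYWORDS.items()
    }

scores = score_emotions("scientists discover why this breakthrough gives hope")
print(scores)  # {'curious': 0.5, 'optimistic': 0.5}
```

A production system would instead learn these associations from labeled reader-panel data, but the shape of the output, one score per emotion per article, is the same.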
But building an AI-ready workforce involves much more than having strong data science teams, abundant data, and an AI-ready technology platform. True AI-readiness means having employees at all levels and in all types of roles who understand how machines learn. It means having employees who can spot opportunities to craft new workflows, products, and services that use the best of humans and machines, including being able to intervene when things go wrong.
We love asking people what comes to mind when they hear “AI” because the answers instantly reveal their perceptions. The Terminator. Autonomous, weaponized drones. Amazon’s recommendation algorithm. Robot pets. Chatbots. Google’s search algorithm. Apple Maps. AI is used for both good and ill; it’s ubiquitous, incredibly useful, and not always right. It’s an everyday thing.
People were well informed about AI risks; clearly they follow the headlines! Amazon’s hiring algorithm, abandoned due to bias against women. COMPAS’s recidivism algorithm, under fire for bias against Black defendants. Facebook’s discriminatory housing ads. The list goes on.
In most organizations, fixing machine bias is left to the technologists, if it’s tackled at all. Our approach at Sonder Studio is different: the best fix for AI bias is a holistic one.
It takes a diverse team operating a robust process that includes both technical and non-technical fixes. That means tackling design and operational issues such as key aspects of UX design (say, adding prompts that help users understand correlations between so-called “neutral variables” and protected classes), important tradeoffs (such as the tradeoff between fairness and accuracy when different user groups have different base rates), and appropriate remedies and controls for when things go wrong (who is the “human-in-the-loop”?).
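The base-rate tradeoff mentioned above can be made concrete with a small sketch. All numbers below are synthetic and illustrative: when two groups have different score distributions, a single “neutral” cutoff applied to everyone produces very different positive-prediction rates across groups, the demographic-parity gap that fairness teams then have to weigh against accuracy:

```python
# Illustrative sketch of the fairness/accuracy tradeoff: one shared
# threshold over groups with different base rates yields unequal
# positive-prediction rates. Scores below are synthetic.

def positive_rate(scores, threshold):
    """Fraction of individuals predicted positive at the given threshold."""
    return sum(s >= threshold for s in scores) / len(scores)

# Synthetic risk scores: group A clusters higher than group B.
group_a = [0.2, 0.4, 0.55, 0.6, 0.7, 0.8, 0.85, 0.9]
group_b = [0.1, 0.15, 0.2, 0.3, 0.35, 0.4, 0.5, 0.6]

threshold = 0.5  # one "neutral" cutoff applied to everyone

rate_a = positive_rate(group_a, threshold)
rate_b = positive_rate(group_b, threshold)
print(f"Group A positive rate: {rate_a:.2f}")  # 0.75
print(f"Group B positive rate: {rate_b:.2f}")  # 0.25
print(f"Demographic-parity gap: {abs(rate_a - rate_b):.2f}")  # 0.50
```

Closing that gap (for example, with per-group thresholds or a recalibrated model) typically changes error rates somewhere, which is exactly why these decisions need a diverse team and explicit remedies, not just a technical patch.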
At The Times, we found a thirst for understanding these issues on behalf of readers, advertisers, and staff alike. But we also saw something deeper: a level of individual responsibility to take on the challenge of understanding machine bias.
Machine bias, as with human bias, can distort the truth and interfere with our progress toward a more just society. Those who communicate with society now need to have a working knowledge of AI bias, in addition to the confidence and authority to tackle it.
Nothing could be more true to The New York Times brand.
Writing and Conversations About AI (Not Written by AI)