AI Agents, Mathematics, and Making Sense of Chaos
AI ethics is a hot topic. Google Trends hints that peak “AI ethics” may be on its way. Surprisingly (or not), interest is highest in Washington, DC.
Meanwhile in Silicon Valley, AI Ethicist is now a real job. Which is pretty interesting because it seems that tech giants are stumbling from one ethical dilemma to another on a more or less daily basis. And here’s what’s most curious — according to Tristan Harris, all that’s required for Silicon Valley to fundamentally change its impact on the world is for about 1000 people to change their ideas about technology. Which means it should be a finite, albeit difficult, problem.
So how are AI ethicists in Silicon Valley actually doing their jobs? And does their presence make a difference?
Researchers at Data & Society recently gathered “informants” from various Silicon Valley tech companies: people employed to “do ethics” for big tech. These “ethics owners” were interviewed for their views on the practical progress of ethics in Silicon Valley.
They found a central dilemma: ethics owners have to resolve complex social questions, which are posed, framed, and challenged from outside the company, inside the logic of Silicon Valley. One of the most prevalent of these logics is that the solution to a “bad” technology outcome is more technology: technological solutionism. Because AI is used in social applications, this philosophy is being applied to more and more social problems.
Another reinforcing factor is the idea that Silicon Valley is a meritocracy: individual ability determines who gets to work on the most important problems. Which means the smart people who created the smart technology are assumed to be the right people to work on the new problems it creates.
This dynamic is a kind of “doom loop,” whereby an unshakable belief in technology and technologists means that “human” concerns are subject to a primarily technical approach which is almost totally controlled by the people who created the problem in the first place. The researchers see this dynamic as both self-perpetuating and self-defeating:
Given the increasing power and centrality of AI and automated decision-making tools in everyday life, there is an urgent need for a coherent approach to addressing ethics, values and moral consequences. Attempts to institutionalize ethics within entities structured by core logics of corporate power point towards a series of structural, conceptual and procedural pitfalls that may ultimately stymie these efforts.
What makes this a unique challenge for ethicists is that these companies can get really big before they are mature. “Ethics owners” become “ethics coordinators,” whose job is to create practices that help everyone get back to “business as usual” as soon as possible. But in an organization that has yet to grapple with many issues (regulation, unforeseen risks, unintended consequences), the tendency is to solve the problem with the same mechanisms that caused it, rather than make an honest attempt at defining the true nature of the problem before coming up with a solution. Think of how Facebook’s initial stance on the fake news problem was that AI would fix it.
Engineers who make the frontline decisions can position themselves as qualified to rely on their personal judgment, and this is one of the primary ways moral judgment gets instantiated inside AI. Trusted to work out ethics as a technical process, or handed a set of AI-management tools, engineers are charged with both discerning and evaluating the ethical stakes of their products.
Then what happens? If engineers are the ones seen as best positioned to evaluate a hypothetical harm (perhaps by sitting in a room and thinking about other people’s lives), they also have the power to dismiss the concern as not realistic, not relevant, or not worth bothering about given the probabilities. Too bad if you’re in a minority: your probabilities are already low, because a harm that falls mostly on a small group barely moves the aggregate numbers. This problem is also identified in an academic paper, The Ethics of AI Ethics. Engineers lack knowledge of the long-term or broader societal consequences of their work, which leads to a lack of any sense of individual accountability. Essentially, there’s no connection between the technologist’s mind and the moral significance of their work.
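To make that concrete, here is a minimal sketch, with entirely hypothetical numbers (not drawn from the article or the paper), of how an aggregate expected-harm calculation buries a harm that is concentrated in a small group:

```python
# Hypothetical numbers: a failure mode that is severe for a small group
# but rare for everyone else.
population = 1_000_000
minority_share = 0.02        # 2% of users are in the affected group
p_harm_minority = 0.50       # half of that group is harmed
p_harm_majority = 0.001      # almost no one else is

minority = int(population * minority_share)
majority = population - minority

# Expected harms across the whole user base.
expected_harms = minority * p_harm_minority + majority * p_harm_majority
overall_rate = expected_harms / population

print(f"Overall harm rate: {overall_rate:.2%}")            # ~1.10%, easy to dismiss
print(f"Rate within the minority: {p_harm_minority:.0%}")  # 50%, catastrophic
```

Judged on the overall rate, the concern looks negligible; judged from inside the affected group, it is anything but.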
What counts as AI ethics has become correlated with the ease of a technical fix. So instead of being about fundamental rights and human dignity, AI ethics is about accountability, privacy, anti-discrimination, and explainability. This is handy, especially if the goal is to avoid more difficult subjects such as manipulating behavior through nudging, reducing social cohesion through AI ranking and filtering, the political abuse of AI, the lack of diversity in the AI community, and choosing between human and machine autonomy.
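By way of contrast, this is roughly what an “easy” technical fix looks like, sketched with made-up data and a hypothetical selection_rate helper: anti-discrimination reduced to a single demographic-parity number that an engineer can measure and optimize.

```python
# Made-up decisions (1 = approved, 0 = denied) for two groups.
def selection_rate(decisions):
    """Fraction of positive decisions in a group."""
    return sum(decisions) / len(decisions)

group_a = [1, 1, 0, 1, 0, 1, 1, 0]
group_b = [0, 1, 0, 0, 1, 0, 0, 0]

# Demographic parity gap: the difference in approval rates between groups.
gap = selection_rate(group_a) - selection_rate(group_b)
print(f"Demographic parity gap: {gap:.2f}")   # 0.38, one number to optimize
```

The metric is genuinely useful; its computability is exactly what makes it the kind of ethics work that crowds out the harder questions above.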
We need a different way to frame AI ethics. That doesn’t mean discarding the good technical work that has been done: debiasing tools, explainability and transparency tools, design tools for human-in-the-loop oversight and accountability structures. Nor should we draw hard lines that disempower technical experts from working on ethical-technical fixes, because ultimately we will need AI to monitor and manage AI.
But we can’t lose sight of what ethics are really about, and we have to find ways to do more. In particular, we need to short-circuit the combined dynamic of technological solutionism and meritocracy, because it makes it far too easy for those on the side of the technology to dismiss criticism, oversight, and regulation from outsiders.
Ethics are not a technical problem in need of a technical solution. Ethics are “a tension between the everydayness of the present and the possibility of a different better everydayness.” This implies that, for AI ethics to have a meaningful impact, these companies must also have some beliefs or values about what “better” is and be transparent in how they intend to make machines that create “betterness.”
For now though, when tomorrow brings the next ethical disaster, we are stuck with an immature model - AI ethicists attempting to reduce risk and algorithmic harm while technologists are tasked with coding the next fix.