Newsletter
A Lawsuit for a Human Future
Elon Musk's lawsuit against OpenAI and Sam Altman is the one time I would like to be on jury duty.
This Week:
Elon Musk filed a lawsuit against OpenAI and Sam Altman this week, claiming the company has abandoned its founding principle of benefiting humanity. Elon has made his displeasure clear since he left the board in 2018, but a lawsuit formalizes his criticism and potentially sets up the tech lawsuit of the decade.
You might dismiss the lawsuit as a personal feud, especially when Elon is involved. You might also see it as a competitive move, since Elon has started his own OpenAI competitor. Or you might see it simply as a way for Elon to recapture the money he donated to help start OpenAI. Any or all of those things might be true. But I see it as a more profound conflict that may affect us all.
OpenAI was founded as a non-profit with the directive to benefit humanity because its founders were concerned about how a profit motive might amplify AI’s dangers and corrupt its future. Its initial structure was bold and embraced the complexity of attempting to benefit everyone with the money of a few. And, although I have a mixed history with and view of Elon, I was impressed with his involvement because I agreed with him: AI’s immense potential is only matched by its peril.
Today, OpenAI has dueling goals: benefiting humanity and creating AI that surpasses humans in most economically valuable tasks. I’m obsessed with the logical fallacy that these two goals can coexist within the same profit-driven organization.
An intelligence that surpasses humans only needs capital to fuel its ability to capture the value of human labor—a dynamic we call “capital capture.” Given the cost arbitrage between human employees and machine employees, what might be the limit to OpenAI’s growth if it succeeds? OpenAI appears to have no limit to the capital it can raise—which implies it would also have no limit to the human labor it could capture to provide a good investment return.
In what way might this also be good for humanity? Sam spins a story about freeing humans to pursue our dreams, supported by a universal basic income. This clearly makes no sense, as there is no rational way for machines to replace all human value while maintaining a functioning economy and society. The only logical outcome is complete dystopia, run by a few trillionaires—or quadrillionaires.
I won’t claim that keeping OpenAI as a non-profit would prevent this outcome. Societal-scale risks related to AI will remain even if OpenAI isn’t motivated to provide profits. But I do hope that a lawsuit between two of Silicon Valley’s titans might bring the issue into the public sphere so we can collectively debate and decide on the future we want before it’s too late to prevent them from crafting our future for us.
This is one time I would like to be on jury duty...
This Week from Artificiality:
A review of research by Phil Tetlock and other experts investigating whether human forecasting can be improved through the use of a large language model, and what that suggests about crafting better prompts.
This research opens up vast possibilities for AI's role in solving complex problems but also underscores the importance of understanding this emergent behavior, especially as we head towards a world of multimodal models and agentic AI.
An interview with Angel Acosta, founder of the Acosta Institute.
ChatGPT and similar tools can significantly alter workflows by changing how we match tasks with skills. Think of a two-by-two matrix: on one axis, you have the skill needed for a task; on the other, the worker's proficiency level.