Announcing the Artificiality Summit 2025! Don't miss the super early bird special at the end of the email...
OpenAI Fires CEO Sam Altman, AI Vision Controlling Your Phone, and Are We as Close to AGI as Sam Altman Says?
Welcome to the first Artificiality Weekly Briefing. In this newsletter, we will highlight key stories, research, ideas, and productions from the week.
And what a week to start with!
On Friday afternoon, OpenAI announced that its board had fired CEO Sam Altman because he “was not consistently candid in his communications with the board.” The board also removed President Greg Brockman as its chair; Brockman subsequently quit, along with several other senior technical leaders.
Silicon Valley is consumed with rumors about what led to Altman’s dismissal. Only the six members of the board know for sure. That said, let’s walk through some of the most-discussed possibilities.
A Friday afternoon firing often indicates an explosive personal issue. Companies prefer to announce these kinds of issues at the end of the week so the slow weekend news cycle can help them manage the fallout. In this case, however, a personal issue doesn’t make sense, given that Brockman was pushed from the board but not fired as well. Nor does it make sense that Brockman and others would quit if Altman had been fired “for cause” over a personal indiscretion.
Some have proposed that Altman created conflicts of interest through outside pursuits like his blockchain projects and new venture fund. These kinds of conflicts can certainly be a problem. But would that be such an explosive issue as to prompt a Friday afternoon firing? Usually, a board would respond with a transition plan, not a firing squad.
Many are sharing rumors that Altman must have hidden important product safety issues from the board. But that board includes Ilya Sutskever, OpenAI’s Chief Scientist. What could Altman have known and hidden from his chief scientist? Again, this rumor doesn’t seem to add up.
Despite that, a conflict between Altman and Sutskever seems to be the most logical explanation, given that Sutskever appears to have organized Altman’s firing. And, in our minds, the most logical source of that conflict lies in the core tension of OpenAI: is it a non-profit creating AI for the benefit of humanity, or is it a for-profit company building the next generation of tech giant? Is it a non-profit funded by donations to help save humanity, or is it a for-profit that will use immense quantities of capital to capture the value of replacing labor with AI?
OpenAI was initially created as a non-profit to separate the company’s mission from profit motives. The thinking was that AI is so important, and potentially so dangerous, that its development shouldn’t be entangled with the pursuit of profit. This argument makes a lot of sense. Perhaps it isn’t a great idea to develop a labor-replacing technology at a company motivated to maximize profit.
OpenAI’s mission and structure were muddied when the company created a for-profit subsidiary that raised billions, primarily from Microsoft. Suddenly, a wealth creation opportunity, and expectation, appeared, creating a core tension for those who wanted to maintain the initial mission. On one side of the argument have been Altman and Brockman, who saw capital as essential to success given how much it costs to develop and deploy AI. Altman and Brockman aggressively pushed out new product features, in part to attract investor interest and capital. On the other side has been Sutskever, who wanted more caution, care, and consideration of potential effects. It appears that the outside board members sided with Sutskever, and that Altman, along with his aggressive capital strategy, has been shown the door.
OpenAI’s identity crisis fits within the broader question of the potential harm from mixing capitalistic goals with AI’s power. OpenAI’s initial solution was to house AI within a non-profit. Governments around the world are pursuing regulatory limits. Leading figures have endorsed pausing new developments. To date, none of these solutions has slowed OpenAI’s pursuit of Altman’s goal: being the first to create artificial general intelligence (AGI).
Altman can be both credited with and faulted for starting the current AI race. In his pursuit to be first to AGI, he sparked a competitive wave from Google, Facebook, Amazon, Microsoft, and others, all seeking to catch up with OpenAI. That race has, within just a year, created a new world of generative AI tools with immense power and potential. But the speed of that race, and the accompanying cheerleading, has also blinded many to its perils.
Perhaps a new, more collaborative AI development path will emerge from OpenAI’s leadership change. Hopefully, the company can also develop more effective governance practices. Whether or not dismissing Altman was the right course for the company, the board’s handling of the matter was sloppy. Given the society-changing potential of its technology, we would feel a lot better if OpenAI’s board didn’t seem to be learning on the job.
What to watch for: After the dust settles, watch for two things.
There’s some buzz about research showing that GPT-4V can navigate a phone interface and take actions like buying a product, albeit accurately only about 75% of the time. What could go wrong with a 25% error rate, OpenAI’s security and privacy holes, and availability to anyone?
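To put that error rate in perspective, here’s a quick back-of-the-envelope sketch. It is our own illustration: the step counts are hypothetical and the 75% figure is assumed to apply independently to each action, which the research doesn’t claim.

```python
# Hypothetical illustration: how a 75% per-action success rate compounds
# across multi-step phone tasks, assuming each action succeeds independently.
for steps in (1, 3, 5, 10):
    success = 0.75 ** steps
    print(f"{steps:>2} chained actions -> ~{success:.0%} chance of completing all of them")
```

Under those assumptions, even a five-step purchase flow would complete correctly only about a quarter of the time.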
It’s important not to infer commercial applications from academic research. This paper shows what GPT-4V can do, not what it is good at or what we should use it for.
In contrast, we’re more interested in the overlooked research from Apple showing similar functionality, from a company we trust not to allow hackers to take over our phones. Apple’s research appears more focused on testing apps, but it could certainly be applied to Siri to take action in apps if the company found it useful and safe. In particular, we wonder whether Apple might use this functionality to advance its accessibility goals.
On November 6, OpenAI made a splash at its first developer-focused event, announcing a wide range of new capabilities, features, and services. For a first birthday party, it was quite impressive but also awkward. OpenAI’s then-CEO Sam Altman presented with the excitement and clumsiness of a one-year-old’s parent: celebrating its advances with a new parent’s lack of surety.
In some ways, everyone in the OpenAI universe has something to celebrate. But everyone also has something to be wary about, and Altman missed the mark in explaining some logical inconsistencies in OpenAI’s product announcements and plans.
In this article, we discuss four parts of the announcement with a skeptic's mindset:
1. Smaller is better. And easier to evade regulators.
2. Cannibal ChatGPT. Eating anything useful.
3. GPTease. What are they really for?
4. Copyright Shield. Copywrong.
Of course, everything about these announcements is now up in the air, since it appears Altman was ousted in part due to his aggressive product release strategy. Will the new leadership stick with these products, or will it change tack?
So, are we as close to AGI as Sam Altman says? A new paper from DeepMind suggests maybe not. It shows that Transformer models, the core technology behind chatbots like ChatGPT, struggle to generalize beyond their training data. The researchers found that while Transformers can learn new tasks from just a few examples if those tasks are similar to the training data, they fail on anything even slightly different.
This reliance on pre-training coverage, rather than innate generalization ability, suggests that today’s AI still lacks the flexible learning of human intelligence. AI pioneer Sam Altman recently claimed that the basic algorithms for AGI may already exist in models like GPT-3.
But this paper indicates that major gaps remain around out-of-distribution generalization. Current models cannot easily learn truly new concepts without substantial retraining.
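To make that distinction concrete, here is a minimal toy sketch, entirely our own illustration rather than the paper’s transformer experiments: a small neural network fit on one input range does well on held-out data from that range, yet falls apart on a range it never saw, even though the underlying pattern is identical.

```python
# Toy illustration of out-of-distribution failure (our own sketch, not the
# DeepMind paper's setup): fit a small model on inputs from one range,
# then evaluate it on a range it never saw during training.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Training data: x in [0, 1], target is sin(2*pi*x)
x_train = rng.uniform(0.0, 1.0, size=(2000, 1))
y_train = np.sin(2 * np.pi * x_train).ravel()

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(x_train, y_train)

# In-distribution test: same range as training
x_in = rng.uniform(0.0, 1.0, size=(500, 1))
# Out-of-distribution test: a range the model never saw
x_out = rng.uniform(2.0, 3.0, size=(500, 1))

mse_in = np.mean((model.predict(x_in) - np.sin(2 * np.pi * x_in).ravel()) ** 2)
mse_out = np.mean((model.predict(x_out) - np.sin(2 * np.pi * x_out).ravel()) ** 2)

print(f"In-distribution MSE:     {mse_in:.3f}")   # small: the model interpolates well
print(f"Out-of-distribution MSE: {mse_out:.3f}")  # much larger: the same pattern outside the training range isn't recovered
```

The DeepMind result concerns in-context learning in pretrained Transformers, a far richer setting, but the underlying point is similar: performance depends on how well the test distribution is covered by what the model saw in training.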
So while the raw computational power of models like GPT-4 is impressive, they may still be missing core ingredients for human-like adaptability and transfer learning.
The path to Artificial General Intelligence likely requires breakthroughs beyond sheer scale and data. We need AI that learns more flexibly across tasks, like humans.
This paper suggests we aren't quite as close to AGI as some may hope. Current models still specialize on their training distribution rather than learning broadly.
Extraordinary progress, but maybe there’s more to “intelligence” than we often conceptualize.
The Artificiality Weekend Briefing: About AI, Not Written by AI