The Artificiality Imagining Summit 2024 gathered a (sold-out!) group of creatives and innovators to imagine a hopeful future with AI. Tickets for our 2025 Summit will be on sale soon!
This week we dive into learning in the intimacy economy as well as the future of personhood with Jamie Boyle. Plus: read about Steve Sloman's upcoming presentation at the Imagining Summit and Helen's Book of the Week.
Explore the shift from the attention economy to the intimacy economy, where AI personalizes learning experiences based on deeper human connections and trust.
Prior to generative AI, it was reasonable to expect software to behave predictably. Developers wrote code to follow specifications, and QA engineers tested the result. If the software didn’t perform as specified, the mistake was labeled a “bug” and sent back to engineering to fix.
Generative AI, however, is unpredictable by design. That’s one of the reasons it is so powerful: it can find patterns across countless dimensions in the data cosmos and weave concepts together into a novel creation. It’s not following rules; it’s creating them.
The problem is that we don’t yet have a way to define the boundary between desirable and undesirable unpredictability. And, this week, we saw the most advanced generative AI tools lurch into undesirable unpredictability.
On February 20, OpenAI announced it was “investigating reports of unexpected responses from ChatGPT”—or what the rest of the world simply described as “weird.” It turns out that OpenAI caused the problem itself, via an error in how the model translates words into vectors to predict the next best word. The result: ChatGPT produced gibberish responses.
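To see why an error at that step is so destructive, here is a minimal sketch in Python, with an invented four-word vocabulary and made-up probabilities (not OpenAI’s actual implementation), of how a model samples its next token and what happens when the numbers behind that choice get scrambled:

```python
import random

# Toy vocabulary with made-up next-token probabilities for the prompt
# "The cat sat on the". A real model scores ~100,000 tokens at each step.
vocab_probs = {"mat": 0.55, "floor": 0.25, "roof": 0.15, "synergy": 0.05}

def sample_next_token(probs):
    """Sample one token in proportion to its probability."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

def corrupt(probs):
    """Simulate a numerical bug: the scores that map to tokens get
    scrambled, so unlikely tokens can suddenly dominate the distribution."""
    tokens = list(probs)
    shuffled = random.sample(list(probs.values()), len(tokens))
    return dict(zip(tokens, shuffled))

print("healthy:", [sample_next_token(vocab_probs) for _ in range(5)])
print("buggy:  ", [sample_next_token(corrupt(vocab_probs)) for _ in range(5)])
```

Once the numbers that map to tokens are wrong, every downstream word choice inherits the damage, and the output degrades into gibberish.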
On February 21, Google announced that Gemini was creating inaccurate images and subsequently paused Gemini’s image-creation capabilities. Google built Gemini’s image generation on an AI model called Imagen 2, which is supposed to avoid some of the biased results of previous image generators. However, the goal of representing greater diversity went too far, generating, for instance, images of America’s Founding Fathers that included dark-skinned women.
We’re early in the development of generative AI, so it’s not surprising to find issues. But how will we know when we can rely on these tools? It’s fine for early adopters to roll with the punches when a tool goes down for a day or two. But what about those who are trying to incorporate these tools into their workflows? Will you be able to tell your boss or your customer: sorry, my generative AI was misbehaving today?
A Bit More on Weirdness:
"Weirdness" is a real challenge reliance and trust in LLMs. Here are three types of error to be aware of aside from what is considered to be a "normal" amount of hallucination: LLM Drift, Prompt Drift & Cascading.
LLM Drift refers to significant changes in a model's responses over a short period, caused by fundamental alterations in the model's functioning.
Prompt Drift describes how the same input can yield different outputs over time due to changes in the model, the data it's fed, or even the model's migration to a newer version.
Cascading compounds these challenges by amplifying deviations through a sequence of processes, each step potentially veering further from the intended outcome (see the toy simulation below).
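To make Cascading concrete, here is a toy Python simulation, with an invented 5% per-step error rate, of how small deviations accumulate across a chain of processes:

```python
import random

def stage(value, error_rate=0.05):
    """One step in a multi-step chain. Each step roughly preserves its
    input but adds a small random deviation (simulating drift)."""
    return value * (1 + random.uniform(-error_rate, error_rate))

def mean_deviation(steps, trials=10_000):
    """Average absolute deviation from the intended result (1.0)
    after chaining `steps` stages together."""
    total = 0.0
    for _ in range(trials):
        value = 1.0
        for _ in range(steps):
            value = stage(value)
        total += abs(value - 1.0)
    return total / trials

for steps in (1, 5, 10, 20):
    print(f"{steps:2d} chained steps -> mean deviation {mean_deviation(steps):.3f}")
```

Even when each individual step is nearly correct, the average deviation of the chain's final output grows with every step you add.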
Sensemaking is going to change. AI will allow us to find story-less, a-narrative, yet meaningful correlations. Our minds will have to be open to a new kind of awe: that a machine can make sense of what we cannot.
This research shows how flexible these models are: meta-prompting aids in decomposing complex tasks, engages distinct expertise, adopts a computational bias by running code in real time (which further enhances performance), and then seamlessly integrates the varied outputs.
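As a rough sketch of that pattern, here is our own simplification of the meta-prompting loop in Python. `call_llm` is a hypothetical stand-in for whatever chat-completion client you use, and the persona prompts are illustrative rather than taken from the paper:

```python
def call_llm(system_prompt: str, user_prompt: str) -> str:
    """Hypothetical stand-in for a chat-completion API call.
    Wire this to your LLM provider before running."""
    raise NotImplementedError

def meta_prompt(task: str) -> str:
    # 1. A "conductor" pass decomposes the task into expert-sized subtasks.
    plan = call_llm(
        "You are a conductor. Break the task into subtasks, one per line, "
        "each prefixed with the expert best suited to it, e.g. 'Mathematician: ...'.",
        task,
    )
    # 2. Each subtask goes to a fresh call primed with an expert persona.
    expert_outputs = []
    for line in plan.splitlines():
        if ":" not in line:
            continue
        expert, subtask = line.split(":", 1)
        answer = call_llm(f"You are an expert {expert.strip()}.", subtask.strip())
        expert_outputs.append(f"{expert.strip()}: {answer}")
    # 3. A final conductor pass integrates the experts' answers.
    return call_llm(
        "Integrate these expert contributions into one coherent answer.",
        task + "\n\n" + "\n".join(expert_outputs),
    )
```

The research version also lets an expert write and execute code in real time; that step is omitted here for brevity.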
Now, as more AI collaboration is designed inside of applications, do you still need to learn how to prompt? If you want to get the most out of AI, we would say yes.
Developing the skill to craft effective prompts is a critical aspect of working with generative AI. It's about understanding what you want, knowing how to articulate that desire in a way the machine understands, and strategically using the AI's strengths to your advantage.
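One way to practice that articulation, sketched below with a toy template of our own devising rather than any standard format, is to make the role, task, constraints, and desired output explicit in every prompt:

```python
# A toy prompt template (our own convention, not a standard) that makes
# those elements explicit: role, task, constraints, and a sample of the
# desired output.
PROMPT_TEMPLATE = """\
Role: {role}
Task: {task}
Constraints: {constraints}
Example of desired output: {example}
"""

prompt = PROMPT_TEMPLATE.format(
    role="an experienced technical editor",
    task="Summarize the attached meeting notes for an executive audience.",
    constraints="Under 150 words; plain language; lead with decisions made.",
    example="Decisions: ... Open questions: ... Next steps: ...",
)
print(prompt)
```

The specifics matter less than the habit: every element you leave implicit is a dimension along which the model is free to guess.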
Dave Edwards is a Co-Founder of Artificiality. He previously co-founded Intelligentsia.ai (acquired by Atlantic Media) and worked at Apple, CRV, Macromedia, Morgan Stanley, Quartz, and ThinkEquity.