The Artificiality Imagining Summit 2024 gathered an (oversold!) group of creatives and innovators to imagine a hopeful future with AI. Tickets for our 2025 Summit will be on sale soon!
This week we dive into learning in the intimacy economy as well as the future of personhood with Jamie Boyle. Plus: read about Steve Sloman's upcoming presentation at the Imagining Summit and Helen's Book of the Week.
Explore the shift from the attention economy to the intimacy economy, where AI personalizes learning experiences based on deeper human connections and trust.
Improve Your Prompts with Many-Shot In-Context Learning
Explore Many-Shot In-Context Learning to enhance AI interactions. Learn five strategies from Google DeepMind's research to optimize your prompts for language models like ChatGPT. Improve your AI prompt engineering skills and maximize AI capabilities.
The concept of "Many-Shot In-Context Learning" (Many-Shot ICL) significantly enhances the capabilities of large language models. So far, models have operated under a "few-shot" learning framework, where they make predictions based on a limited set of examples. A new study from Google Deepmind goes further by showing that LLMs can learn from hundreds or even thousands of examples, dramatically improving their accuracy and flexibility across a wide array of tasks.
The main takeaway is that LLMs are becoming more adept at understanding and executing complex tasks simply by being given more examples of how to perform them.
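To make that concrete, here is a minimal sketch of what "more examples" looks like in a prompt. The build_many_shot_prompt helper, the sentiment data, and the call_llm placeholder are illustrative assumptions, not code from the DeepMind paper; swap in your own task and whichever API you use.

```python
# Minimal sketch of many-shot prompting: pack far more worked examples into
# the prompt than the usual handful. All names and data here are hypothetical.

def build_many_shot_prompt(examples, query, instruction):
    """Assemble a prompt containing many input/output demonstrations."""
    parts = [instruction, ""]
    for inp, out in examples:
        parts.append(f"Input: {inp}")
        parts.append(f"Output: {out}")
        parts.append("")  # blank line between demonstrations
    parts.append(f"Input: {query}")
    parts.append("Output:")
    return "\n".join(parts)

# Hypothetical labeled examples -- in the study's setting there may be
# hundreds or thousands of these, limited mainly by the context window.
examples = [
    ("The refund took three weeks to arrive.", "negative"),
    ("Support resolved my issue in minutes.", "positive"),
    # ... many more (input, output) pairs ...
]

prompt = build_many_shot_prompt(
    examples,
    query="The app crashes every time I open it.",
    instruction="Classify the sentiment of each customer comment as positive or negative.",
)

# call_llm() is a placeholder for your provider's chat/completions call:
# response = call_llm(prompt)
```

The same pattern scales from a handful of demonstrations to thousands; the research suggests the main constraint is the model's context window, not the prompt format.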
You can leverage these insights yourself by adjusting your prompts in ways the researchers found effective at scale; I've included five strategies at the end of this article. Some caveats. First, the conditions under which these models are tested in research, with hundreds to thousands of carefully curated examples per task, are not typical of everyday use. Most users interact with ChatGPT using far fewer examples, and often without the specific structuring that research conditions afford. This discrepancy means that while models may demonstrate high capabilities under research conditions, performance in standard user interactions may not reflect the same level of advancement.
Second, the benefits of Many-Shot ICL are most pronounced in complex, well-defined tasks where the model can leverage large volumes of structured data. Everyday queries to ChatGPT range widely in nature and complexity and might not always fall into categories where many-shot learning provides a significant benefit. So while you can expect gradual improvements in AI performance, these enhancements may not be immediately noticeable in all types of interactions.
Increase the Number of Examples: The key finding from this research is that the more examples you provide to an LLM, the better it learns. When interacting with an AI system, providing it with multiple examples of what you need can lead to more accurate outcomes.
Utilize Domain-Specific Inputs: For tasks that require specialized knowledge, such as legal or medical inquiries, incorporating domain-specific terminology and structured examples can significantly enhance the AI's performance.
Incorporate Model-Generated Rationales: Instead of relying solely on human-generated explanations or examples, allowing the AI to generate its own explanations (and learn from them) can improve its reasoning capabilities. This is especially useful for complex problem-solving tasks (see the sketch after this list).
Simplify Prompts by Removing Rationales: In some cases, especially when the task is well-defined or straightforward, you can streamline the AI's learning process by removing detailed explanations from the prompts. This forces the model to focus more on the input and output, potentially speeding up the response without compromising accuracy.
Sequence the Examples Thoughtfully: The order in which examples are presented can affect learning outcomes. Thoughtfully structuring the sequence of examples, perhaps by complexity or by grouping similar types together, can aid the AI in developing a better understanding of the task (also illustrated in the sketch below).
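To show how a couple of these strategies fit together, here is a minimal sketch of strategies 3 and 5: the model writes its own rationales, only rationales that reach the known correct answer are kept, and the surviving demonstrations are ordered before being packed into a many-shot prompt. The call_llm stub, helper names, and data format are assumptions for illustration, not code from the study.

```python
# Sketch of strategies 3 and 5: let the model write its own rationales, keep
# only those whose final answer matches known ground truth, then order the
# surviving demonstrations before assembling a many-shot prompt.
# call_llm() and the problem data are hypothetical placeholders.

def call_llm(prompt: str) -> str:
    """Placeholder: swap in whichever chat/completions API you actually use."""
    raise NotImplementedError

def extract_final_answer(rationale: str) -> str:
    """Naive placeholder: assume the model puts its answer on the last line."""
    return rationale.strip().splitlines()[-1].strip()

def build_rationale_demonstrations(problems):
    """Generate, verify, and order model-written rationales for many-shot use."""
    demonstrations = []
    for p in problems:
        prompt = (f"{p['question']}\n"
                  "Think step by step, then give only the final answer on the last line.")
        rationale = call_llm(prompt)
        # Keep the rationale only if it reaches the known correct answer.
        if extract_final_answer(rationale) == p["answer"]:
            demonstrations.append((p["question"], rationale))
    # Sequence thoughtfully: shorter questions first, as a rough proxy for difficulty.
    demonstrations.sort(key=lambda d: len(d[0]))
    return demonstrations

def assemble_prompt(demonstrations, new_question):
    """Pack the verified demonstrations plus the new question into one prompt."""
    blocks = [f"Q: {q}\n{r}\n" for q, r in demonstrations]
    blocks.append(f"Q: {new_question}\nThink step by step, then give the final answer.")
    return "\n".join(blocks)

# Example usage with hypothetical data:
# problems = [{"question": "What is 17 + 26?", "answer": "43"}, ...]
# demos = build_rationale_demonstrations(problems)
# print(assemble_prompt(demos, "A cyclist rides 24 km in 90 minutes. What is her speed in km/h?"))
```

Sorting by question length is only a stand-in for "order by complexity"; with a real dataset you might order by a difficulty label or by how often the model gets each problem right.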
Try using techniques from Many-Shot ICL to enhance your interactions with GenAI.
Helen Edwards is a Co-Founder of Artificiality. She previously co-founded Intelligentsia.ai (acquired by Atlantic Media) and worked at Meridian Energy, Pacific Gas & Electric, Quartz, and Transpower.