AI Agents, Mathematics, and Making Sense of Chaos
From Artificiality This Week: Our Artificiality Summit 2025 will be held on October 23-25 in Bend, Oregon.
A guide to organizational transformation for generative AI.
Five Steps for Adopting Generative AI:
Generative AI is a cultural technology—how it is adopted depends on the culture in which it is placed. This means that leaders have to actively create a culture where humans and AI work productively together. This starts with vision and choices about human agency.
What is your vision for the role and value of human workers?
We like to ask people about their excitement-to-fear ratio when it comes to AI. The more senior people are, the more likely they are to be excited about the opportunities that AI offers. But within the broader culture, many people are worried about AI. Pew Research finds that 52% of Americans are more concerned than excited about AI in daily life, while only 10% are more excited than concerned (36% are equally excited and concerned). A survey by the American Psychological Association shows that people who are worried about AI tend to feel more negative about their workplace overall: 56% of those worried about AI feel micromanaged at work (vs. 33% of those who are not worried), and 41% believe they do not matter to their employer (vs. 23%).
Allowing employees to express their anxieties about AI and feel part of the conversation is crucial. Empowering them to adopt and use the technology themselves is equally important. Without these steps, employees will adopt technology according to their own cultural standards, which can lead to two extremes: either rejecting AI outright due to fear or over-relying on it due to a belief in its infallibility. Neither of these extremes is desirable for your business. Instead, the goal is to create a middle ground where employees adopt AI appropriately, leveraging its benefits while understanding its limitations. There's a sweet spot you're aiming for here—where the balance is will depend on many factors within your organization.
Organizations are complex adaptive systems, composed of numerous interacting agents (individuals, teams, departments, software systems) that give rise to emergent behaviors and properties. Generative AI introduces new capabilities and new ways of working, which makes for complex change. Interactions become even more interconnected and dynamic, and outcomes from both people and AI can propagate in unpredictable and non-obvious ways.
This means that leaders have to embrace the non-linear nature of complex systems. Problem solving, innovation, and routine tasks all become even more decentralized and self-organizing than they already are. Individuals will adopt AI in highly idiosyncratic ways, which makes it vital to provide clear guidelines and incentives that encourage local adaptation and experimentation within the nested hierarchies of organizations. This creates the conditions for emergence, where novel solutions and practices can bubble up from the bottom.
Complexity science also emphasizes the importance of feedback loops and adaptation. As teams experiment with AI tools and share their experiences, the organization as a whole can learn and evolve. This requires creating channels for communication and knowledge-sharing across the organization, as well as mechanisms for capturing and integrating lessons learned.
The bigger picture is the concept of phase transitions, where small changes in system parameters can lead to sudden and dramatic shifts in system behavior. Organizations have been designed the same way for 150 years because top-down control works. New ways of working, such as agile, have successfully tweaked the basic formula. But this is different: as AI gets increasingly powerful and diffuses everywhere, organizations may reach a tipping point where the impact of AI on work processes and organizational structures becomes transformative. We just don't know, but there is an evidence base in complexity science that tells us this is possible. Leaders will need to be ready to adapt accordingly.
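The tipping-point dynamic is easy to see in a toy simulation. Below is a minimal sketch of a Granovetter-style threshold model, with invented numbers purely for illustration: a small fraction of workers adopt AI immediately, and everyone else adopts only once the share of adopting colleagues exceeds a personal threshold. A small change in the initial seed share produces an abrupt, system-wide shift rather than a gradual one.

```python
import random

def simulate_adoption(n=2000, seed_share=0.05, steps=60, rng_seed=7):
    """Threshold model of AI adoption (all parameters hypothetical).

    A fraction `seed_share` of workers adopt immediately; everyone else
    adopts once the adopting share exceeds a personal threshold drawn
    uniformly from [0.1, 0.9]. Returns the final adoption share."""
    rng = random.Random(rng_seed)
    n_seeds = int(n * seed_share)
    thresholds = [0.0] * n_seeds + [rng.uniform(0.1, 0.9) for _ in range(n - n_seeds)]
    adopted = [t == 0.0 for t in thresholds]
    for _ in range(steps):
        share = sum(adopted) / n
        adopted = [a or t <= share for a, t in zip(adopted, thresholds)]
    return sum(adopted) / n

# Sweep the seed share: adoption stays stuck at the seed level, then
# a slightly larger seed tips the whole system into near-full adoption.
for p in (0.05, 0.09, 0.12, 0.20):
    print(f"seed share {p:.2f} -> final adoption {simulate_adoption(seed_share=p):.2f}")
```

The point of the sketch is not the specific numbers but the shape of the curve: below a critical seed level nothing spreads, and just above it the cascade is total.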
It is crucial to establish clear, acceptable use standards for AI within the organization. As employees increasingly experiment with these powerful tools, ground rules must be set to safeguard proprietary data and ensure transparency in how LLMs are used. This may involve prohibiting the uploading of sensitive information to third-party LLMs and requiring disclosure when AI-generated content is shared.
A significant portion of employees (40%) who use AI do not inform their supervisors. This trend is particularly prominent among younger and more highly educated workers, who are likely responsible for performing critical tasks that generate core information for key decision-makers within the organization. Given the importance of these tasks and the potential impact of AI on the information flow, it is crucial to foster an environment where employees feel comfortable being transparent about their AI usage.
As AI diffuses into workplace tools by default, decisions about how and when to use those tools shouldn't also be made by default. Consider, for example, sentiment analysis in virtual team meetings. On the positive side, such tools can give people insightful metrics on participant engagement and sentiment, potentially identifying key moments where attendees seem most receptive or, conversely, disengaged. That information can refine the effectiveness of communication and presentation. Imagine, for instance, pinpointing when your team felt most inspired during a product launch discussion, allowing you to harness those successful approaches in future meetings.
However, it's essential to balance this with privacy concerns and the discomfort it may introduce among employees who feel under surveillance. Over-reliance on algorithm-driven feedback could also inadvertently discount the nuanced, human aspects of communication that AI may not fully interpret. A transparent dialogue with teams about how the AI analyzer will be used, what data it will gather, and what measures will protect privacy can help strike a balance between leveraging AI's analytical advantages and maintaining a trust-based workplace culture.
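To make the mechanics concrete, here is a deliberately toy sketch of the aggregation idea behind meeting sentiment tools. The lexicon and transcript lines are invented; production tools use trained models rather than word lists, but the per-segment scoring and "find the peak moment" step are the same shape.

```python
# Toy lexicon-based sentiment pass over meeting transcript segments.
# Both word lists and the transcript are hypothetical illustrations.
POSITIVE = {"great", "love", "excited", "agree", "inspired"}
NEGATIVE = {"concerned", "worried", "confused", "disagree", "stuck"}

def segment_score(text: str) -> int:
    """Count positive words minus negative words in one segment."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

transcript = [
    ("00:05", "I love this direction and feel excited to start"),
    ("00:12", "I am a bit worried about the timeline honestly"),
    ("00:20", "Great point and I agree we should pilot it first"),
]
scored = [(stamp, segment_score(text)) for stamp, text in transcript]
best = max(scored, key=lambda pair: pair[1])
print("per-segment sentiment:", scored)
print("most positive moment:", best[0])
```

Notice how little the tool actually "understands": the same aggregation that surfaces an inspired moment will happily misread sarcasm or cultural nuance, which is exactly why the transparency and privacy dialogue above matters.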
While the adoption of Generative AI is largely self-organizing and bottom-up, there are several key considerations that a central organizational unit should address. Some of these considerations are obvious, such as ensuring compliance with acceptable use standards and making technical decisions related to vendor and partner selection. However, there are also less apparent benefits that will become increasingly important as generative AI becomes more prevalent.
One such benefit is the ability to conduct cost-benefit analyses and calculate ROI in a world where productivity benefits are emergent and not always predictable. Another critical aspect is managing data quality, which is essential for one of the highest-value enterprise use cases identified so far: knowledge management.
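When benefits are emergent, a point-estimate ROI is less useful than a distribution. One way a central unit can frame this is a simple Monte Carlo sketch like the one below; every figure in it (headcount, hourly cost, program cost, hours saved) is a hypothetical placeholder to be replaced with the organization's own data.

```python
import random

def roi_simulation(trials=10_000, rng_seed=1):
    """Monte Carlo ROI when productivity gains are uncertain.

    All inputs are hypothetical: a fixed annual program cost, and hours
    saved per employee per week drawn from a wide triangular distribution
    to reflect uneven, emergent benefits. Returns (p10, median, p90) ROI."""
    rng = random.Random(rng_seed)
    employees, hourly_cost = 200, 60.0     # assumed workforce and loaded rate
    annual_cost = 250_000.0                # assumed licenses + training
    rois = []
    for _ in range(trials):
        hours_saved = rng.triangular(0.0, 5.0, 1.5)  # per employee, per week
        benefit = hours_saved * 48 * employees * hourly_cost
        rois.append((benefit - annual_cost) / annual_cost)
    rois.sort()
    return rois[len(rois) // 10], rois[len(rois) // 2], rois[9 * len(rois) // 10]

p10, p50, p90 = roi_simulation()
print(f"ROI multiple: p10={p10:.1f}  median={p50:.1f}  p90={p90:.1f}")
```

Reporting a p10/median/p90 spread rather than a single number keeps the emergent, unpredictable character of the benefits visible to decision-makers instead of hiding it in a false point estimate.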
A central unit can develop capabilities and guidelines for managing the variability and unpredictability of generative AI outputs. It is remarkably easy to generate a wide range of outputs for similar queries, raising the question of who decides which output is the most appropriate. Similarly, determining which use cases are suitable for "good enough" answers provided by generative AI is another important consideration.
By consolidating expertise and resources, a central office can drive economies of scale and break down data silos that hinder effective analysis. Establishing a task force with representatives from IT, legal, and key user groups is an essential first step in developing the necessary rules and practices.
Investing in employee training is paramount to unlocking the potential of AI tools while mitigating their downsides. Everyone who will interact with or rely on LLM-generated content should receive basic training on the quirks and limitations of these tools, particularly their propensity to confabulate, introduce bias, erode team creativity, and generate inconsistent answers.
More advanced training in prompting and refinement will enhance the value employees derive from Generative AI. Executives must also clearly communicate the organization's standards for what constitutes "good enough" output before using AI-generated content. By empowering employees with the knowledge and skills to effectively leverage AI, organizations can harness its power while maintaining the necessary safeguards.
Leaders in Generative AI adoption recognize the importance of reskilling their workforce and are proactively scaling up their efforts in this area. The majority of leading companies acknowledge that GenAI will create new roles within their organizations and anticipate that, on average, nearly half of their employees will require reskilling in GenAI over the next three years.
Companies that are at the forefront of this trend have already gained a significant advantage by adopting the necessary technologies. BCG's 2023 DAI study revealed that leading firms typically have three times as many full-time employees upskilled on AI compared to their counterparts: 21% of organizations planning to invest upwards of $50 million in AI and GenAI in 2024 have already trained more than a quarter of their workforce on the relevant tools. In contrast, only 6% of companies overall have achieved this level of training.
However, most executives report that currently, only 1% to 10% of their employees are trained on GenAI tools. This highlights the need for extensive training across all levels of the organization, including the executive team. In fact, 59% of leaders surveyed admit to having limited or no confidence in their executive team's proficiency in GenAI. It is crucial for companies to prioritize reskilling efforts and ensure that their entire workforce, from entry-level employees to top executives, is well-versed in the use of GenAI tools and technologies.
While the long-term impact of Generative AI remains uncertain, executives who proactively create the right culture for adoption, adapt to complexity, and invest in training will be better positioned to navigate the challenges and opportunities that lie ahead. By taking these critical steps, organizations can lay the foundation for effective use, setting themselves up for future success in an increasingly AI-driven world.
The Artificiality Weekend Briefing: About AI, Not Written by AI