What’s happening in the world of generative AI and what you can do about it. Our report covers current confusion, status, anxiety, and AI in the wild. And we include a self-assessment for organizational benchmarking.
In the months since the public release of ChatGPT, generative AI has captured the world's imagination. Yet as businesses rush to capitalize on AI's potential, they find themselves navigating uncharted territory, where over-hyped expectations collide with technical limitations, ethical quandaries, and human anxieties. This report offers a look at the state of generative AI adoption in Q2 2024, revealing a complex landscape of promise and peril.
Generative AI has powerful potential for individuals, organizations, and society as a whole. However, as our research reveals, the path to successful AI integration is far from smooth. Drawing on various research reports, we uncover a complex picture of the challenges and opportunities in adapting to a world with generative AI.
To help organizations assess their own generative AI readiness, we start this report with a self-assessment guide. This guide, based on insights from our research, enables companies to evaluate their progress across five key dimensions: strategic leadership, data management, training, implementation, and employee sentiment and inclusion. By scoring their performance against industry benchmarks, organizations can identify areas of strength and weakness, and develop targeted strategies for improvement. Whether an organization is just beginning to explore generative AI or is already a leader in its adoption, this self-assessment guide provides a valuable framework for continuous growth and innovation.
The remainder of the report focuses on four aspects of the current generative AI landscape: confusion, status, anxiety, and real-world applications. By examining each of these areas, we aim to provide an understanding of the current state of AI adoption and its implications for businesses and society.
In the first section, "Confusion," we explore the conflicting data surrounding generative AI adoption, which has led to widespread uncertainty about its true impact. Through an analysis of surveys from McKinsey and the US Census, we highlight the importance of context and methodology in interpreting adoption statistics.
The second section, "Status," offers a detailed look at who is using generative AI, for what purposes, and what companies expect from this technology. We examine usage patterns across age groups, job roles, and industries, revealing a complex picture of experimentation and integration. Additionally, we dig into the challenges faced by organizations in achieving value from generative AI, from data quality to workflow integration.
In the third section, "Anxiety," we analyze the rising concerns about AI's impact on jobs, life, and society as a whole. Our analysis uncovers a generational divide in attitudes towards AI, with younger workers expressing higher levels of concern about job displacement. We also explore the intersections of race, education, and AI anxiety, highlighting the need for inclusive strategies that address the unique concerns of different groups.
The final section, "AI in the Wild," presents a few real-world examples that illustrate the potential failures of AI applications. From chatbots providing inaccurate legal advice to AI systems generating inappropriate images, these case studies serve as cautionary tales, emphasizing the importance of robust guidelines, content moderation, and continuous monitoring in the deployment of generative AI.
Self-Assessment Guide
This checklist is designed to help you measure your organization’s progress against industry benchmarks as revealed by research as of April 2024. Each item includes a scoring opportunity, leveraging data that follows in this report.
Strategic Leadership
Scoring Criterion:
1-2 points: Less than 20% of your leadership is engaged in strategic planning around GenAI.
3-4 points: Your organization is experimenting with GenAI, as roughly 55% of companies report doing.
5 points: Over 75% of your leadership actively integrates GenAI in strategic initiatives.
Did You Know? As of April 2024, 45% of leaders have limited confidence in their teams' proficiency with GenAI, highlighting a major skills gap at the top.
Action: This is the time to create leadership training programs that focus specifically on building GenAI understanding and strategic integration capabilities. Most critically, learn how to learn: put in place ways to stay up to date with adoption trends using best-in-class research.
Key Considerations:
Vision Alignment: Ensure your AI strategy aligns with your organization's overall mission, values, and goals. Clearly articulate how AI will enhance, not detract from, your core purpose.
Stakeholder Engagement: Involve diverse stakeholders, including employees, customers, and partners, in shaping your AI strategy. Seek their input to identify priorities, concerns, and opportunities.
Ethical Framework: Establish a clear ethical framework to guide your AI initiatives. Define your principles for responsible AI use, including transparency, fairness, accountability, and privacy protection.
Adaptive Planning: Embrace a flexible, iterative approach to AI strategy. Regularly reassess your plans based on new insights, technologies, and market dynamics.
Data Management
Scoring Criterion:
1-2 points: Basic data governance policies in place without specific provisions for GenAI.
3-4 points: Data quality initiatives are underway.
5 points: Advanced data management systems tailored for GenAI.
Did You Know? As of April 2024, 46% of Chief Data Officers say that data quality is one of the biggest challenges to adopting GenAI.
Action: Get specific on your data priorities, reviewing and refreshing everything in light of GenAI. GenAI’s potential is based on using a broader spectrum of data than you’ve likely used in previous data projects, and that means you need new policies and procedures. For instance, generative AI can learn from and use unstructured data in new ways. Do you have data governance policies and procedures to allow employees to access GenAI trained on your unstructured data?
Key Considerations:
Data Governance: Implement robust data governance policies and processes to ensure data quality, security, privacy, and ethical use. Regularly audit and update these policies.
Data Diversity: Ensure your AI training data represents the diversity of your customers and stakeholders. Actively mitigate bias and promote fairness in your data practices.
Data Literacy: Foster a culture of data literacy across your organization. Provide training and resources to help all employees understand and effectively use data in the age of AI.
Data Collaboration: Break down data silos and promote responsible data sharing across teams and departments. Encourage collaborative data projects that drive innovation and efficiency.
Training
Scoring Criterion:
1-2 points: Basic GenAI awareness programs with limited reach within the company.
3-4 points: Structured GenAI training programs covering more than 25% of employees.
5 points: More than 50% of your workforce has undergone extensive GenAI training.
Did You Know? Only 6% of companies have trained more than 25% of their workforce on GenAI, indicating a significant opportunity for competitive advantage through better training.
Action: Expand GenAI training programs to include practical applications and ethical considerations, ensuring widespread adoption and proficiency within your organization. If you don’t move now to train everyone in the basics of GenAI, you’ll be left behind.
Key Considerations:
Comprehensive Curriculum: Develop a holistic AI training curriculum that covers technical skills, business applications, ethical considerations, and soft skills like critical thinking and emotional intelligence.
Personalized Learning: Offer personalized learning paths based on employees' roles, skills, and career aspirations. Use AI-powered tools to recommend tailored training content and track progress.
Hands-On Practice: Provide ample opportunities for employees to apply their AI learning through practical projects, hackathons, and cross-functional collaborations.
Continuous Learning: Embrace a culture of continuous learning. Regularly update your training programs to keep pace with AI advancements and encourage employees to pursue ongoing self-directed learning.
Implementation
Scoring Criterion:
1-2 points: Initial exploratory GenAI projects without significant operational impact.
3-4 points: GenAI projects are integrated into regular operations, tracking for effectiveness and scalability.
5 points: GenAI is fully integrated into operational strategy, driving major improvements in efficiency and effectiveness.
Did You Know? 28% of surveyed environments have moved beyond GenAI experimentation to more substantive implementations.
Action: Go through your experiments with a fine-toothed comb. What really matters? What is working? What can you scale up confidently? Identify and scale up GenAI applications that have shown success in pilot phases, aiming to embed them deeply into operational processes for transformative outcomes.
Key Considerations:
Incremental Approach: Start with small, focused AI projects that deliver tangible value. Use these successes to build momentum and support for larger initiatives.
Human-Centered Design: Put human needs and experiences at the center of your AI implementations. Regularly gather user feedback and iterate based on their insights.
Cross-Functional Collaboration: Foster close collaboration between AI teams and business units. Ensure AI solutions are deeply integrated with business processes and workflows.
Governance and Accountability: Establish clear governance structures and accountability measures for AI implementations. Define roles, responsibilities, and performance metrics for AI projects.
Employee Sentiment and Inclusion
Scoring Criterion:
1-2 points: Limited awareness of varying AI concerns among different employee demographics.
3-4 points: Actively assessing AI sentiment across age, race, education, and job level groups.
5 points: Implementing targeted strategies to address the unique AI concerns of each group.
Did You Know? Younger workers, people of color, and those with less education express significantly higher levels of anxiety about AI's potential impact on their jobs compared to their counterparts.
Action: Create a process for ongoing research to understand how AI sentiment varies across your workforce. Use these insights to develop inclusive AI adoption strategies that address the specific concerns of each group. Consider targeted communications, training programs, and support initiatives to ensure all employees feel heard, valued, and prepared for the AI transition.
Key Considerations:
Generational Divide: Recognize that younger workers may be more apprehensive about AI's long-term impact on their career prospects. Engage them in shaping your AI strategy and provide clear growth pathways.
Racial Equity: Acknowledge that historical and current workplace inequities may heighten AI anxiety among people of color. Ensure your AI initiatives include strong fairness and anti-bias measures, and actively promote diversity in AI roles.
Educational Inclusivity: Provide upskilling and reskilling opportunities for workers of all educational backgrounds to build AI proficiency and adaptability. Emphasize that AI is a tool to enhance, not replace, their contributions.
Leadership Alignment: Bridge the optimism gap between leaders and frontline workers through transparent communication, participatory planning, and empathy. Ensure leaders at all levels are attuned to and actively addressing workforce AI concerns.
Scoring Guide
5-10 points: Your organization is beginning to explore GenAI, but much work is needed to align with industry leaders.
11-20 points: You are on par with many in the industry, with solid foundations in place but room for further integration and leadership in GenAI.
21-25 points: Your organization is a leader in GenAI adoption, setting standards and pushing boundaries in strategic, operational, and technological realms.
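For readers who prefer to tally results in a script or spreadsheet, here is a minimal sketch of the arithmetic behind the guide: five dimensions, each scored 1 to 5, summed and mapped to the tiers above. The dimension keys and example scores are illustrative placeholders, not survey data.

```python
# Minimal sketch: tally the five-dimension GenAI self-assessment (1-5 points each)
# and map the total to the tiers in the Scoring Guide above.

DIMENSIONS = (
    "strategic_leadership",
    "data_management",
    "training",
    "implementation",
    "employee_sentiment_and_inclusion",
)

def assess(scores: dict[str, int]) -> tuple[int, str]:
    """Return the total score (5-25) and the corresponding readiness tier."""
    missing = set(DIMENSIONS) - scores.keys()
    if missing:
        raise ValueError(f"missing dimensions: {sorted(missing)}")
    if any(not 1 <= scores[d] <= 5 for d in DIMENSIONS):
        raise ValueError("each dimension must be scored from 1 to 5")

    total = sum(scores[d] for d in DIMENSIONS)
    if total <= 10:
        tier = "Beginning to explore GenAI; much work needed to align with industry leaders"
    elif total <= 20:
        tier = "On par with the industry; solid foundations, room for further integration"
    else:
        tier = "Leader in GenAI adoption"
    return total, tier

# Example: an organization that is experimenting but has trained few employees.
example = {
    "strategic_leadership": 3,
    "data_management": 2,
    "training": 2,
    "implementation": 3,
    "employee_sentiment_and_inclusion": 2,
}
print(assess(example))  # (12, "On par with the industry; ...")
```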
💡
Now that you’ve assessed your organization’s AI progress, take the next step towards success. Contact us to learn more about our AI strategy and complex change management services.
Confusion
How Many Companies are Using Generative AI?
Diverging Data: McKinsey reports 55% of companies using generative AI, while the US Census states only 5% are adopting it, highlighting significant discrepancies in adoption figures.
Survey Differences: The US Census's survey covers all companies across the US economy, while McKinsey's focuses on larger companies they work with, potentially explaining the disparity in results.
Contextual Factors: The US Census also frames AI use in terms of production of goods and services, contributing to the differences in adoption rates between the two surveys.
Understanding the Delta: Both results can coexist; understanding which part of the population each survey samples is important to interpreting the data accurately.
What Effect Do Leaders Expect?
Cost Savings Expectations: About half of leaders anticipate AI will deliver cost savings in 2024, and half of those expect the savings to exceed 10%.
Lack of Training: Only 6% of companies have trained more than 25% of their employees in generative AI, indicating limited preparedness to achieve cost savings through AI.
Guidance and Confidence: 45% of leaders lack guidance or restrictions on AI use, and a similar percentage express limited or no confidence in their executive team's proficiency with generative AI.
Experimentation vs. Implementation: 90% of companies are either waiting for AI to move beyond the hype or experimenting in small ways, conflicting with expectations for significant cost savings.
Confusion in Expectations: The disparity between high expectations and limited action creates confusion, highlighting the need for clear strategies and comprehensive training to realize AI's potential.
What are Chief Data Officers Doing?
Crucial for Value: 98% of Chief Data Officers (CDOs) believe a data strategy is essential to unlocking generative AI's value.
Lack of Implementation: 57% of CDOs haven't made necessary changes to their data strategies, revealing a gap between recognizing importance and taking action.
Disconnect: This disparity indicates the difficulty in aligning strategic priorities with implementation, limiting the potential benefits of generative AI.
Need for Alignment: The need for both data strategy and implementation to progress together highlights the importance of comprehensive, actionable plans.
Success Hurdles: To maximize generative AI's potential, organizations must bridge the gap between strategy acknowledgment and execution, ensuring successful adoption.
How Valuable is Generative AI?
Productivity Gains: Generative AI offers significant improvements in task proficiency and efficiency, particularly in areas like writing and coding, demonstrating its potential for productivity gains.
Value at the Organizational Level: Despite task-level gains, many organizations are hesitant to invest in tools like Microsoft Copilot or GitHub Copilot, indicating a gap between individual task gains and overall organizational impact.
Workflows and Integration: The disparity between task-level productivity and organizational adoption highlights the need to understand how AI can integrate into workflows, influencing overall productivity.
Aggregating Impact: Bridging the gap between individual tasks and organizational productivity requires a comprehensive approach to ensure task-level gains aggregate into meaningful business outcomes.
Research and Contrast: This contrast between generative AI's potential and its adoption emphasizes the importance of understanding workflows and how they contribute to overall value.
How Do Enterprises Measure ROI on LLM Spending?
Lack of Precise Measurement: A survey by a16z reveals that more than half of companies investing in LLMs believe in a positive ROI, yet they're not measuring it precisely.
Trust vs. Results: This lack of measurement coexists with companies that aren't seeing results: organizations either hold back on spending or trust in future returns without immediate evidence.
Common Trends: The disparity between companies investing without measurement and those abstaining due to lack of results highlights differing approaches to ROI evaluation.
Contextual Bias: The survey's findings reflect the views of organizations interested in talking to a tech VC, potentially skewing the results toward those with greater trust in AI's future benefits.
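For context, the measurement that is mostly not happening is not complicated. A minimal sketch of back-of-the-envelope ROI arithmetic follows; every figure is invented for illustration and is not drawn from the surveys cited above.

```python
# Minimal sketch of the kind of ROI arithmetic most surveyed companies are not yet doing.
# All figures are illustrative placeholders, not survey data.

llm_spend = 250_000            # annual spend on LLM licenses, hosting, and integration
hours_saved_per_week = 400     # estimated time savings across all users
loaded_hourly_cost = 65        # fully loaded cost per employee hour
weeks_per_year = 48

annual_value = hours_saved_per_week * loaded_hourly_cost * weeks_per_year
roi = (annual_value - llm_spend) / llm_spend

print(f"estimated annual value: ${annual_value:,.0f}")
print(f"ROI: {roi:.0%}")  # positive only if the estimated value exceeds the spend
```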
💡
Don’t let confusion stall your AI adoption. We can help you make sense of the landscape, benchmark your progress, and chart a clear path forward. Contact us to learn more about our in-person and digital offerings that will help you find the moment of clarity required for confident action.
Status
How Many People are Using ChatGPT?
Significant Growth: Pew Research shows that ChatGPT usage has grown significantly across all age brackets between July 2023 and February 2024, with usage among all adults increasing from 18% to 23%.
Disproportionate Adoption: The highest adoption rate is among younger workers aged 18 to 29, while usage gradually decreases with each older age bracket.
Consistent Increases: Despite disparities in adoption rates between age groups, all categories show steady and notable growth in ChatGPT usage.
Generational Dynamics: The data reflects generational differences in technology adoption, with younger workers leading in embracing ChatGPT, though its adoption is becoming more widespread across all age groups.
What People are Using ChatGPT For
Steady Increase: Pew Research shows consistent growth in ChatGPT usage across various purposes between March 2023 and February 2024, with notable increases across three key categories.
Tasks at Work: The most significant growth is in tasks at work, with the share of users applying ChatGPT to professional tasks increasing from 8% to 20%, indicating its growing role in workplace productivity.
Learning and Entertainment: ChatGPT is also increasingly used for learning something new and for entertainment, reflecting its versatility in both personal and professional contexts.
Balanced Growth: All three categories started in a similar range (8-11%), demonstrating ChatGPT's balanced growth across different areas of life.
Expanding Applications: The data highlights ChatGPT's expanding integration into everyday activities, particularly for professional tasks, emphasizing its potential as a multifaceted tool.
Uses of AI: Past and Expected
Steady Increase: US Census data shows consistent growth in AI usage across various categories, with significant increases expected over the next six months compared to the past six months.
Marketing Automation: 28% of organizations reported using AI for marketing automation in the past six months, with this expected to increase to 37% in the next six months, highlighting significant growth in this area.
Data Analytics: AI usage in data analytics is set to grow from 16% to 30%, indicating a growing reliance on AI for insights and decision-making.
Usage Patterns: The growing adoption of AI across various organizational functions emphasizes its increasing role in driving productivity and automation in multiple sectors.
Changes Made by Firms to Use AI
Differing Growth: Data shows growth in AI usage is significantly influenced by company size, with smaller companies expecting substantial growth in training programs and workflows compared to larger companies.
Training and Workflows: About 20-40% of companies expect to increase AI training and workflows over the next six months, reflecting varied strategies in AI adoption across different company sizes.
Small Business Surge: The decline in the employee-weighted figures suggests that while smaller companies expect significant growth in AI usage, larger companies are not growing at the same rate, shifting the overall adoption pattern (the sketch after this list illustrates how employee weighting changes the picture).
Expanding AI Adoption: The varied growth rates between smaller and larger companies indicate a need for strategies that cater to different organizational needs, highlighting the expanding reach of AI in diverse business contexts.
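The firm-count and employee-weighted figures used in this section and the next answer different questions: the first counts companies, the second weights each company by how many people it employs. A minimal sketch with invented numbers shows how a single large firm can pull the two rates far apart; none of these figures are the Census data cited in this report.

```python
# Minimal sketch: firm-count vs. employee-weighted AI adoption rates.
# The firms and their sizes are made up for illustration only.

firms = [
    # (employee_count, plans_to_adopt_ai)
    (12, True),      # small firm planning to adopt
    (8, True),       # small firm planning to adopt
    (25, False),
    (40, False),
    (15_000, True),  # one large firm planning to adopt
]

firm_count_rate = sum(adopting for _, adopting in firms) / len(firms)

total_employees = sum(size for size, _ in firms)
employee_weighted_rate = (
    sum(size for size, adopting in firms if adopting) / total_employees
)

print(f"firm-count adoption rate:        {firm_count_rate:.0%}")        # 60%
print(f"employee-weighted adoption rate: {employee_weighted_rate:.1%}")  # ~99.6%

# If the large firm were NOT planning to adopt, the employee-weighted rate would
# fall to roughly 0.1% even though 40% of firms still planned to adopt.
```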
Current & Expected AI Use: Retail & Food Service
Sectoral Anomalies: Data shows that retail and food service sectors stand out in their expected AI adoption patterns compared to other sectors, presenting a different trajectory of current and expected AI usage.
Employee-Weighted Growth: While the number of companies expecting to adopt AI is growing moderately, the employee-weighted figure has surged from 3-4% to 20%, indicating substantial growth in AI integration among larger companies in these sectors.
Larger Companies: The data suggests that large retail and food service companies, which employ many workers, are planning significant AI usage increases in the next six months.
Practical Applications: This trend could lead to more announcements like Wendy's integrating generative AI into drive-through operations, signaling broader AI usage in customer-facing services.
Companies are Largely Experimenting
Experimentation Prevalent: Data shows that 70% of cloud environments have cloud-based managed AI services, but only 28% of companies appear to be doing more than experimenting, indicating limited adoption.
Discussed vs. Implemented: The discrepancy between success stories from big tech and consulting firms and the relatively low level of active AI usage highlights a gap between AI discussions and practical implementation.
Real-World Implications: This reinforces the perception that, despite the discussions and success stories, companies are yet to fully leverage AI's capabilities, emphasizing the need for practical implementation strategies.
Biggest Challenges to Generative AI Adoption
Top Challenges: AWS survey data shows that Chief Data Officers (CDOs) identify the biggest challenges to AI adoption as data quality, identifying the right use cases, creating guardrails for responsible AI use, and security and privacy concerns.
Data Quality: Ensuring quality data is crucial for successful AI integration, yet remains a major hurdle for many organizations, impacting AI's effectiveness.
Guardrails for Responsible Use: The need for guardrails around generative AI emphasizes the importance of responsible use, highlighting the ethical considerations involved in AI integration.
Security and Privacy: Security and privacy concerns are significant barriers to AI adoption, particularly for generative AI, necessitating robust measures to protect sensitive data.
Current Use of Generative AI
Prevalent Experimentation: AWS survey data shows that 26% of companies report employee-level experimentation, 21% allow experimentation with guidelines, and 19% have experimentation at a group level.
Limited Production: Only 6% of organizations have generative AI use cases in production, indicating a substantial gap between experimentation and full-scale integration.
Reinforcing Perceptions: This data affirms the perception that while generative AI is generating excitement, its adoption remains largely experimental, particularly among the organizations surveyed.
Prioritized Future Use Cases
Key Categories: AWS survey data indicates that Chief Data Officers (CDOs) prioritize future AI use cases in customer operations, overall personal productivity, software engineering, and marketing and sales.
Customer Operations and Support: AI is expected to play a significant role in customer operations and support chatbots, reflecting its potential to enhance customer experiences and streamline service.
Productivity Gains: Organizations see AI's potential to boost both overall and personal productivity, highlighting its role in streamlining workflows and automating tasks.
Software and Marketing: AI's integration into software engineering and marketing indicates its expanding role in diverse business functions, enhancing development processes and marketing strategies.
Who is Using Generative AI?
Broad Adoption: McKinsey survey data reveals 88% of generative AI usage is from non-technical employees, indicating its integration has moved beyond the IT group and into broader organizational roles.
Technical vs. Non-Technical: Only 12% of generative AI usage comes from technical roles, with 10% from technical employees and 2% from AI-adjacent roles, reflecting limited adoption in the IT group.
Outside IT: The data indicates experimentation with generative AI is primarily happening outside of IT, demonstrating its adoption by a variety of roles across organizations.
Non-Isolated Adoption: This trend highlights that generative AI usage is not isolated to programmers, developers, or data scientists, but is spreading throughout the organization.
Implications: The shift to broader organizational roles suggests that generative AI's experimentation and adoption are across diverse functions, highlighting its potential to drive cross-functional benefits.
The Challenge to Adoption is in Integration
Integration Hurdles: While LLMs have the potential to revolutionize work, integrating them presents challenges that require significant human involvement for oversight and judgment, complicating their implementation.
Knowledge Management: LLMs offer promising opportunities for knowledge management and decision support by querying multiple datasets in natural language. However, only 11% of data scientists report success in fine-tuning LLMs, indicating difficulties in capturing and curating accurate organizational knowledge.
Output Verification: AI-generated code presents issues of technical debt and the need for verification and debugging, offsetting initial productivity gains and transforming the nature of work in unexpected ways (a small example of gating AI-drafted code with tests follows this list).
Automation and Job Transformation: Generative AI's impact on jobs is complex, with many tasks remaining dynamic and variable. While LLMs can complete many tasks, the remaining "last mile" tasks often require human intervention, indicating a nuanced relationship between automation and job transformation.
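To make the verification point concrete, one lightweight pattern is to treat AI-generated code as an untrusted draft and gate it with tests before it enters the codebase. A minimal sketch, in which parse_price is a hypothetical AI-drafted helper invented purely for illustration:

```python
# Minimal sketch: treat AI-generated code as an untrusted draft and gate it with tests.
# `parse_price` stands in for a hypothetical AI-drafted helper; it is not from any real codebase.

def parse_price(text: str) -> float:
    """AI-drafted helper: extract a dollar amount such as "$1,299.00" from text."""
    cleaned = text.replace("$", "").replace(",", "").strip()
    return float(cleaned)

def test_parse_price():
    # Human-written review tests: the verification step that offsets "free" productivity.
    assert parse_price("$1,299.00") == 1299.0
    assert parse_price("  $15 ") == 15.0

if __name__ == "__main__":
    test_parse_price()
    print("AI-drafted helper passed its review tests")
```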
💡
Ready to move beyond experimentation and realize the full value of AI? We can help you identify and prioritize the best use cases and craft human-machine workflows that will deliver the results you need. Contact us to learn more.
Anxiety
People Are Getting More Concerned
Rising Concerns: Pew Research data shows a significant increase in societal anxiety about AI, with 52% of people more concerned than excited in 2023, up from 38% in 2022.
Drop in Excitement: The proportion of people more excited than concerned about AI dropped from 15% in 2022 to 10% in 2023, indicating growing apprehension.
ChatGPT's Role: The spread of ChatGPT and other generative AI tools has contributed to increased societal awareness and concern.
Leaders Are More Optimistic
Leaders vs. Frontline: BCG survey data reveals that 62% of leaders are optimistic about AI's impact, compared to 42% of frontline workers, highlighting a significant disparity in attitudes.
Managerial Optimism: Managers fall in between, with 54% optimistic, indicating a gradual decline in optimism from leadership to the frontline.
Concerns on the Frontline: The near-even split between optimism and concern among frontline workers highlights their apprehension, despite—or because of—their day-to-day use of AI tools.
Use Case Development: This divide is problematic, as frontline workers, who often generate the best use cases due to daily AI usage, are also more likely to be concerned about AI's impact.
Younger Workers Are More Worried
Worry by Age: Data from the American Psychological Association shows that workers aged 18-25 and 26-43 are most worried about AI making some or all of their job duties obsolete, with concern declining as age increases.
Career Outlook: Younger workers may be more concerned about AI displacement due to having more years ahead in their careers, compared to older workers who are closer to retirement.
Usage Correlation: The higher usage of AI tools among younger workers may also contribute to their heightened concern, as they experience both the power and potential threat of AI first-hand.
Organizational Dynamics: Understanding this age-based divide can help organizations anticipate how different demographic groups might respond to AI adoption, balancing both usage and anxiety.
People of Color Are More Worried
Racial Disparities: Data from the American Psychological Association shows that 50% of Black workers are worried that AI might make some or all of their job duties obsolete, compared to 34% of white workers.
Heightened Concerns: The significantly higher level of concern among people of color reflects disparities in how different demographic groups perceive AI's impact on employment.
Employment Insecurity: These disparities may stem from existing inequalities in the workforce, exacerbating concerns about job displacement and reinforcing fears of AI's impact.
Inclusive Strategies: Addressing these concerns requires strategies that consider the unique perspectives of different demographic groups, ensuring AI's adoption does not disproportionately disadvantage historically marginalized workers.
Workers with Less Education Are More Worried
Educational Divide: American Psychological Association data shows that 44% of workers with a high school degree or less are worried about AI making some or all of their job duties obsolete, compared to 34% of those with a college degree or higher.
Higher Education, Lower Anxiety: Workers with more education tend to be less concerned about AI's impact, reflecting differing perceptions of job security across education levels.
Job Vulnerability: The disparity in concern indicates that workers with less education feel more vulnerable to AI displacement, possibly due to fewer opportunities for upskilling or shifting roles.
Inclusive Retraining: Addressing this educational divide requires inclusive retraining and upskilling initiatives, ensuring workers of all education levels can adapt to AI's impact on the workforce.
Upper Management Likes Monitoring More
Monitoring and AI: American Psychological Association data reveals varied perceptions of monitoring, which can relate to AI's role in surveillance and data analytics.
Management vs. Frontline: Upper management is much more favorable towards monitoring, believing it improves productivity, workplace experience, and safety, compared to individual contributors and frontline workers.
Perceived Benefits: The gap between frontline and upper management views on monitoring highlights differing perceptions of its benefits, with management seeing monitoring as advantageous across productivity, experience, and safety.
Surveillance and AI: While not directly an AI question, the monitoring issue relates to AI's potential for surveillance, reflecting how technology can influence and be perceived within workplace dynamics.
Disruption Expected
New Roles and Change Management: BCG data shows 89% of executives believe generative AI will create new roles, while 74% see a need for significant change management, indicating major shifts in workforce dynamics.
Reskilling Needs: 46% of workers are expected to need reskilling in the next three years, reflecting the disruptive potential of generative AI and the need for training initiatives.
Labor Replacement: The narrative of generative AI as a labor replacement technology contributes to fears of disruption and challenges for workers.
Addressing Disruption: To mitigate this disruption, organizations need comprehensive strategies, including retraining and change management, to ensure generative AI's impact is positive and balanced across the workforce.
Expected Effects of AI on Employment
No Change Expected: BCG survey data shows that 87% of leaders expect no change to employment levels due to generative AI, even as they anticipate new roles and significant reskilling.
Split Opinions: The remaining 13% are split between expecting an increase or decrease in employment, reflecting divergent views on how AI will reshape the workforce.
Dynamic Landscape: Despite predictions of change and new roles, the balance between differing expectations suggests an evolving landscape, likely to shift in the coming years.
Uncertain Outcomes: The disparity between new roles and static employment levels indicates uncertainty in how generative AI will affect jobs, necessitating strategies to navigate this complexity.
Optimism Requires Particularly Heroic Assumptions
Heroic Assumptions: Peter Cappelli of Wharton cautions against assuming that lower-level employees will be empowered by access to LLMs to take on higher-level tasks, highlighting the complexity and challenges of AI adoption.
Complex Dynamics: The variability and unpredictability of LLMs in workflows may, for now, protect existing jobs, reflecting the nuanced relationship between AI adoption and employment dynamics.
Confusing Landscape: The current state of AI adoption reveals a mix of theoretical and empirical influences, resulting in a complex, and sometimes confusing, situation that needs further research and clarity.
💡
Managing the human side of AI adoption is essential to success. Our complex change management programs can help you build an inclusive AI strategy, engage your workforce, and build a culture of continuous learning. Contact us to learn more.
AI in the Wild
Answers from NYC’s Business Chatbot Go Against the Law
Chatbot Inaccuracies: The New York City government's chatbot for rules and regulations gave inaccurate responses, such as allowing landlords to discriminate by income and stores to go cashless, despite city laws to the contrary.
Defined Facts: These inaccuracies, identified by journalists, highlight the discrepancy between well-known information and chatbot responses, emphasizing the need for factual accuracy in AI-generated content.
Sycophancy in Responses: The chatbot's responses exhibit sycophancy, where answers reflect the perceived intent or tone of the prompt, potentially leading to skewed or misleading information.
Washington’s Lottery Generates Topless Image
Dream Visualization: The Washington State Lottery's app, "Test Drive a Win," aimed to visualize dreams, generating photos of users realizing their aspirations if they won the lottery.
Inappropriate Imagery: One instance led to an inappropriate image of a woman on a beach, her face superimposed onto a topless body, revealing a significant issue with the app's content generation.
Guardrails Needed: This incident highlights the need for robust guardrails to prevent AI-generated content from producing inappropriate or offensive material, particularly in consumer-facing applications.
Google’s SGE Promotes Spam
Spam Propagation: Google's Search Generative Experience (SGE) has been shown to propagate spam results, allowing misleading links to appear in its generative search responses.
Spammers Exploiting AI: Spammers have figured out how to insert their links into the generative search responses.
URL Verification: Examination of the URLs reveals these are clearly spam, highlighting the need for robust mechanisms to filter out unwanted or misleading links in AI-generated content.
💡
Proactive and strategic planning is essential for success with generative AI. Partner with Artificiality to navigate the complexities, mitigate the risks, and unlock the full potential of AI. Contact us to get started.
Dave Edwards is a Co-Founder of Artificiality. He previously co-founded Intelligentsia.ai (acquired by Atlantic Media) and worked at Apple, CRV, Macromedia, Morgan Stanley, Quartz, and ThinkEquity.