Predictions for AI in the 2020s

All predictions are wrong but some are useful. Hopefully these are useful!


This is the final Artificiality for the twenty-teens and it’s a bumper issue. We’re now heading into a decade where artificial and human intelligence start to merge in ways that are impossible to forecast. Having said that, it’s always a fun challenge to make some predictions, so here we go with a dozen predictions for 2020 and 2030.

We will demand to know how AI sees us

The next decade will see a fundamental shift in mindset. We will want to protect the sanctity of our inner lives from manipulation and surveillance by AI. We will think less about what we volunteer as inputs to AI and more about its outputs. Instead of controls and permissions, the next decade will be about AI inferences: what do our individual data “voodoo dolls” look like? We will consider these inferences to be sensitive personal data.

2020: A social media celebrity will publicly demand to know how a large tech company sees them. They will want to go beyond ad preferences and understand deeper characteristics such as typical behaviors, unstated preferences, personality and who they are seen to be.

2030: We will be able to adjust inferences made about us, especially behavioral and sensitive inferences. There will be tools and services that help us predict the value of our data “voodoo doll” to different entities and in different contexts.

Algorithmic anxiety will become a thing

More people will be affected by algorithmic exclusion — witness the recent Apple Card kerfuffle, when rich white people experienced inscrutable unfairness. Combined with privacy and surveillance concerns, this will create a new stressor: anger and anxiety from poorly designed AI-enabled decision systems.

2020: The launch of a coaching and advocacy service for succeeding in AI-conducted job interviews.

2030: The DSM will explicitly recognize anxiety derived from repeated dehumanizing experiences with AI.

AI ethics is a top career path

AI ethics has been latent for a long time: technology has always needed ethical input. AI now makes ethics a practice, attracting diverse people and thinkers to technology, reinvigorating the humanities and changing how technologists consider the moral consequences of their work.

2020: A major university announces an AI ethics course that involves multiple faculties and disciplines — sciences, philosophy, gender/race/queer studies, as well as math and comp-sci.

2030: >80% of Fortune 500 companies have an Office of AI Ethics.

AI surveillance will be seen for what it is — uniquely invasive

AI is highly privacy-disruptive because it underpins new surveillance capabilities in the physical world. Video-based surveillance and facial recognition are on a collision course with our sense of freedom. While tech companies market the story that surveillance is the only way to be safe, individuals will not buy this once they feel personally threatened by the technology. China’s increasing use of the technology will heighten the sense that this is not a morally equivalent choice: freedom versus AI.

2020: The year we experience a backlash against neighborhood surveillance. Multiple cities will ban facial recognition in policing. Amazon will cede its status as “most trusted tech company” to Microsoft as a result of its Ring partnerships and its hands-off approach to the uses of its facial recognition product, Rekognition.

2030: A patchwork of local laws and regulations around AI-based surveillance and facial recognition will finally result in a standard set of federal laws which protect individual rights and punish abusers of AI’s technological capabilities.

We will appreciate that bias is a two-way street

Data about the world is biased and, left unmitigated, AI amplifies and propagates this bias. AI exposes human bias and humans expose AI bias. Technical fixes for bias are effective but also expose how difficult it is to fill in gaps in datasets and handle situations where human bias is overwhelming. Bias will become a real-time problem for companies.
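For a sense of what these technical fixes start from, here’s a minimal sketch of a disparity check on model decisions. The data is synthetic and the metric shown (demographic parity difference) is just one of several, often mutually conflicting, fairness measures:

```python
import numpy as np

# Hypothetical binary decisions (1 = approved) from a model, plus a
# binary protected attribute for the same ten people. Synthetic data,
# illustrative only.
decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 1])
group     = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

def selection_rate(decisions, group, g):
    """Fraction of members of group g who received a positive decision."""
    return decisions[group == g].mean()

# Demographic parity difference: the gap in positive-decision rates
# between groups. One common fairness metric among several that can
# conflict with each other.
gap = selection_rate(decisions, group, 0) - selection_rate(decisions, group, 1)
print(f"Selection-rate gap between groups: {gap:+.2f}")  # +0.20 here
```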

2020: Bias will be seen as inherent in AI, and concerns will go beyond gender, race and other protected characteristics. Bias will be treated as a social issue, with non-technical fixes prioritized equally with technical fixes. Data collection will be justified on the basis of filling data gaps.

2030: Technical fixes expose bias in real time, and AI routinely directs both humans and other AIs on how to fill data gaps. There will be professional certification for data scientists in the increasingly specialized field of bias and fairness mitigation.

AI’s presence or absence will have to be scientifically justified

People will become hyper-aware of the scientific justification for using AI, especially for analyzing human behavior and in high-trust or high-stakes situations.

2020: The first medical ethics case arising because AI wasn’t used in a diagnostic process and harm resulted. A large employer will abandon the use of psycho-emotional AI analysis, citing lack of efficacy.

2030: AI exposes gaps in the science of human intelligence and directs research. AI is designed to work sympathetically with human cognitive biases, dampening or amplifying them as appropriate. Trust will depend on human-like explanations and on involving a human when it’s important to establish causality.

How AI discriminates will be a game-changer

Fairness in AI is a complex issue because many standard ways of evaluating fairness conflict with one another. AI can be unfair without being illegal because it finds proxies for protected characteristics rather than using them directly.
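To make the proxy problem concrete, here’s a toy sketch. Even when the protected attribute is excluded from training, a correlated feature (zip code, in this hypothetical) can reconstruct much of it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example: the protected attribute is excluded from training,
# but a correlated feature (call it zip_code) can stand in for it.
n = 1000
protected = rng.integers(0, 2, n)              # never shown to the model
zip_code = protected + rng.normal(0, 0.5, n)   # correlated proxy feature
income = rng.normal(50, 10, n)                 # unrelated, legitimate feature

# The proxy carries most of the protected signal; the legitimate
# feature carries none. A model trained "blind" on zip_code can still
# discriminate by group.
print(f"corr(zip_code, protected): {np.corrcoef(zip_code, protected)[0, 1]:+.2f}")
print(f"corr(income, protected):   {np.corrcoef(income, protected)[0, 1]:+.2f}")
```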

2020: A state AG takes on a case where it’s suspected that AI proxies are the cause of unfairness or discrimination.

2030: A new challenge arises around AI-powered stereotyping. A new sub-specialty of AI ethics emerges specifically to deal with how AI inferences and classifications cause micro-discrimination based on personalities, behaviors or preferences; for example, an individual who is treated disparately in the workplace because they didn’t wear their smartwatch and so didn’t track their activity or location.

Transfer learning hits limitations: the “STI of AI”

Pre-trained models and open datasets supplied by the platforms (especially Google and Facebook) are increasingly effective and efficient for deploying AI at scale. But they are also seen as sources of bias or unexplainable outcomes that can “infect” derivative models, giving rise to untraceable behaviors or data labels that cannot be effectively excavated.
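As a toy illustration of how upstream bias gets inherited, here’s a sketch of a WEAT-style association probe. The word vectors are made up for illustration; in a real pre-trained model, these geometries are baked into every derivative model:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Made-up 4-d "pre-trained" embeddings, purely illustrative. In a real
# pre-trained model, these geometries are inherited by every derivative
# model fine-tuned on top of them.
emb = {
    "engineer": np.array([0.9, 0.1, 0.3, 0.0]),
    "nurse":    np.array([0.1, 0.9, 0.2, 0.1]),
    "he":       np.array([0.8, 0.2, 0.1, 0.1]),
    "she":      np.array([0.2, 0.8, 0.1, 0.1]),
}

# WEAT-style probe: does an occupation sit closer to "he" or to "she"?
# A positive score means a male association, negative means female.
for word in ("engineer", "nurse"):
    bias = cosine(emb[word], emb["he"]) - cosine(emb[word], emb["she"])
    print(f"{word:8s} he-vs-she association: {bias:+.2f}")
```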

2020: A major harm-causing AI failure occurs due to transfer learning or a data issue that propagates into an unintended context.

2030: New fields of practice will be established in AI: Data Archeology and AI Forensics. Models are certified as “safe” or “clean.” It will be illegal to delete previous versions of socially significant AI, because data labels and model parameters must be available for forensic examination.

Human learning and intelligence will go mainstream in AI research

Leading AI researchers increasingly acknowledge the limits of current deep learning approaches, which means human-analogous AI research will be a growing trend.

2020: The AI research field will explode with terms we have usually thought of as human. System 1 and System 2 thinking, curiosity, attention, intrinsic motivation and cause-and-effect reasoning will become mainstream AI research terms. Leading researchers will match these human characteristics with technical tools: discovering causal factors from out-of-distribution data, and generalizing using sparse factor graphs.

2030: By the end of the decade, there will be a significant breakthrough in the ability of AI to generalize and discover causal factors in data, which go well beyond the statistical associations of today.

We will better understand the learning cycle between humans and machines

Over the next decade the trend towards automating more and more of our lives will accelerate. However, AI will not be evenly applied. Companies that understand how to automate in ways that are beneficial to human skills and enhance human performance — rather than simply replace humans — will see outsize performance from AI. They will know how to access the flywheel of human-machine collaborative learning and knowledge discovery.

2020: AI leaders will shift their focus to designing jobs that maximize human and machine skills together. The first data will come in: companies that design for human-centered AI delegation will see better performance from both human and machine employees.

2030: Companies using AI in manufacturing will have fine-tuned the right balance of AI and human work. >80% of goods are manufactured in places where no human makes a real-time decision. Humans focus on dealing with the unpredictability and complex decisions that machines can’t yet make.

Machine employees will speak for themselves

Today, machines at work have no voice or conscience, nor any intent of their own; their intent is deemed the same as that of the humans who deploy them. However, as people become more aware of the unintended consequences of AI and the counter-intuitive effects that can result from it, they will need AI to explain itself. Machine employees will need to be capable of self-regulating for what humans intend rather than what humans asked for. Ultimately this will put more pressure on large-scale, socially significant AI to communicate based on human values, not on a narrow specification set early in the process.

2020: An employee at a tech giant will leak information about how a non-intuitive and inscrutable algorithm caused harm to a group of users because it did what people specified but not what they meant.

2030: AI leaders will publish their machine employees’ “code of conduct” and explain how intent and alignment with human values are specified.

AI Safety will evolve to be a practical field of work

Hazards that arise from AI will become a significant threat. Adversarial attacks and deep fakes highlight how vulnerable humans are to uncontrolled AI.
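To make “adversarial attack” concrete, here’s a minimal sketch of the fast gradient sign method (FGSM) against a toy logistic-regression classifier. The weights and input are hypothetical, and the step size is exaggerated for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# A toy linear classifier with made-up weights, standing in for any
# differentiable model.
w = np.array([1.5, -2.0, 0.5])
b = 0.1

x = np.array([0.4, -0.3, 0.8])   # a clean input the model labels class 1
p_clean = sigmoid(w @ x + b)

# FGSM: step each input dimension by epsilon in the direction that most
# increases the loss, i.e. the sign of the loss gradient w.r.t. the input.
# For log-loss with true label 1, that gradient is (p - 1) * w.
epsilon = 0.5                    # exaggerated step, for illustration
grad_wrt_x = (p_clean - 1.0) * w
x_adv = x + epsilon * np.sign(grad_wrt_x)

print(f"P(class 1) on clean input:       {p_clean:.2f}")                # ~0.85
print(f"P(class 1) on adversarial input: {sigmoid(w @ x_adv + b):.2f}")  # ~0.43
```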

2020: A deep fake causes an international incident. An adversarial attack involving an autonomous drone spurs a backlash and highlights that robots-that-fly make communities uniquely vulnerable. Drones test the limits of the convenience/safety tradeoff. One city passes an ordinance banning delivery drones.

2030: First students graduate from a specialized undergraduate degree in AI Safety.

I’m already looking forward to seeing how these pan out in the next ten years.

Have a great holiday and see you in 2020!


A few must-know links from this week:

  • This landmark piece from NYT on the USA as a location-based surveillance state.
  • PBS show on AI. Nicely done. And following hard on its heels, a new YouTube Originals series called “The Age of AI” premiered on December 18. The first episode features Soul Machines (go kiwi). Trailer is here.
  • Vast new resource released this week by the Oxford Internet Institute’s Project on Computational Propaganda. It’s designed for anyone dealing with misinformation online.
  • 2019 report from Stanford’s Human-Centered AI project. Includes a handy tool for searching arXiv.
