Why is HR so gullible?

This was the question asked this week by a Princeton comp-sci professor.

This week saw a great quote from Princeton Associate Professor of Computer Science, Arvind Narayanan: “why are HR departments apparently so gullible?”

He was referring to the use of AI in pre-recruitment, where AI algorithms pre-screen candidates and match “good employee” attributes with data gathered from an AI-based interview. Candidate suitability and personality are assessed from videos, games and other types of algorithmic systems. Many of these systems claim to work by analyzing body language, speech patterns, mouse movements, eye tracking, tonality, emotional engagement and expressiveness. The sheer quantity of data is astonishing (and alien); hundreds of thousands of data points on people are gathered in a half hour interview or an online game-playing exercise.

The prize for AI-recruitment companies is big. And the stakes for companies and candidates are big too. Narayanan points out, “These companies have collectively raised hundreds of millions of dollars and are going after clients aggressively. The phenomenon of job candidates being screened out by bogus AI is about to get much, much worse.” The top two companies (based on funds raised) are HireVue ($93 million) and pymetrics ($56.6 million).

Companies marketing these services promise a lot, including helping companies increase diversity by reducing the impact of human bias, increasing the quantity and quality of candidates, decreasing the time and cost of recruitment and matching people by “fit” or soft skills or cognitive ability or personal attributes such as patience, resiliency and grit rather than on previous experience. Customers say these systems help them prepare for the “workforce of the future,” where a person’s cognitive and personal style will matter more than technical skills and previous experience.

The goals are laudable. But is AI, especially pre-recruitment screening, going to get us there? As Narayanan says, many of these products are “little more than random number generators.”

There are three broad areas where these systems are problematic:

  • data and machine bias
  • unscientific, unproven claims and a lack of causality
  • dehumanizing and ineffective

Bias

There are numerous ways in which AI may be biased. Human programmers can introduce bias when they choose a goal for the AI. For instance, engineers may design an algorithm to look for “good employees” by correlating “good” with biased signals from the past, such as subjective performance assessments. It doesn’t take much imagination to see how this would work in the real world: a group of self-selected employees, who have plenty of their own biases, attempt to codify highly subjective and context-dependent success factors from the past. The effort required to genuinely, creatively and accurately anticipate objective (yet not easily quantifiable) “soft skills” needed in the future sounds more like an art project than a science.
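
To make this concrete, here’s a minimal sketch in Python using entirely synthetic data (not any vendor’s actual method): when the “good employee” label comes from past ratings that penalized one group, a model trained on those labels learns the penalty, not the skill.

```python
# Minimal sketch with synthetic data: if "good employee" labels come from biased
# past ratings, the model learns the bias, not the underlying skill.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
skill = rng.normal(size=n)                      # true ability (unobserved in practice)
group = rng.integers(0, 2, size=n)              # hypothetical demographic group
# Past "performance ratings" mix real skill with a rater penalty against group 1.
rating = skill - 0.8 * group + rng.normal(scale=0.5, size=n)
label = (rating > 0).astype(int)                # "good employee" = high past rating

# The screener's features carry a proxy for group membership, a common side effect.
X = np.column_stack([skill + rng.normal(scale=0.5, size=n), group])
model = LogisticRegression().fit(X, label)

# Across the same range of skill signals, the model recommends group-1 candidates less often.
probe_skill = np.linspace(-2, 2, 1000)
for g in (0, 1):
    X_eq = np.column_stack([probe_skill, np.full(1000, g)])
    print(f"group {g}: predicted 'hire' rate = {model.predict(X_eq).mean():.2f}")
```

The bias never has to be written down anywhere; it rides in on the labels.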

Training data contains bias. For instance, if the data aren’t representative of racial minorities because they have had less access to technology and haven’t been represented in digital form as much as other groups, then there will be significant bias. An AI simply won’t understand people from these groups in a reliable manner.
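
A toy illustration of why representativeness matters, again with made-up data: when one group makes up only a sliver of the training set, headline accuracy can look fine while accuracy for that group collapses.

```python
# Toy illustration with synthetic data: an underrepresented group can be badly
# served even when overall accuracy looks respectable.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_group(n, shift):
    # Each group expresses the same underlying trait against a different baseline.
    x = rng.normal(loc=shift, size=(n, 3))
    y = (x[:, 0] + 0.5 * x[:, 1] > 1.5 * shift).astype(int)
    return x, y

X_major, y_major = make_group(5000, shift=0.0)   # well-represented group
X_minor, y_minor = make_group(100, shift=2.0)    # barely represented group
X = np.vstack([X_major, X_minor])
y = np.concatenate([y_major, y_minor])

model = LogisticRegression().fit(X, y)
print("overall accuracy:       ", round(model.score(X, y), 2))
print("minority-group accuracy:", round(model.score(X_minor, y_minor), 2))
```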

Bias in data is pervasive and tenacious. It’s particularly problematic for protected classes, and excluding protected-class attributes from the data does not guarantee that an AI is non-discriminatory. Putting these screening systems in the hands of busy, transaction-focused recruiters is a recipe for perpetuating the bias of the past.

There’s also a problem with a lack of rich and specific decision-making factors, which can result in giving outsized meaning to a few data points that are then used to generalize across broad groups of people. For instance, the speed and pattern of mouse clicks measured in a sample of GenZers may be applied to groups spanning all ages and degrees of familiarity with technology.
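
Here’s what that can look like, with invented numbers: a click-speed cutoff tuned on a young sample quietly filters out an older cohort with the same underlying trait.

```python
# Invented numbers, not real benchmarks: a "click speed" cutoff calibrated on one
# cohort penalizes another cohort that simply has a different baseline.
import numpy as np

rng = np.random.default_rng(2)
genz_ms  = rng.normal(loc=250, scale=40, size=2000)   # hypothetical click intervals (ms)
older_ms = rng.normal(loc=340, scale=60, size=2000)   # same trait, slower baseline

cutoff = np.percentile(genz_ms, 75)   # set so ~75% of the GenZ sample "passes"
print("GenZ pass rate: ", round(float((genz_ms  <= cutoff).mean()), 2))   # ~0.75
print("older pass rate:", round(float((older_ms <= cutoff).mean()), 2))   # far lower
```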

A particular issue for screening is that AI can be manipulated behind the scenes by employers who want to push one outcome or another and mask their intentions with the argument of “algorithmic neutrality.” This is where AI can be used for bad just as easily as it can be used for good. We really have no way of knowing without employers reporting on their use of these algorithms in context - what are their broader goals, how rapidly are they progressing towards those goals, and what data support their claims around fairness and inclusion in the talent acquisition process? Essentially, how do they know it’s working?

It also looks like AI discrimination in hiring can be defended legally because an employer can more easily claim “business necessity” - algorithms sort for so many features that basically anything can now be a business necessity.
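
For context, the usual first-pass screen for disparate impact in US hiring practice is the EEOC’s “four-fifths rule”: if one group’s selection rate falls below 80% of the highest group’s rate, that’s generally treated as evidence of adverse impact, and the employer then has to defend the practice - increasingly, as argued above, by claiming business necessity. The arithmetic is simple (the counts below are invented):

```python
# The four-fifths (80%) rule as simple arithmetic; the applicant counts are invented.
def selection_rate(selected, applicants):
    return selected / applicants

rate_a = selection_rate(selected=120, applicants=400)   # group A: 30% selected
rate_b = selection_rate(selected=45,  applicants=250)   # group B: 18% selected

impact_ratio = rate_b / rate_a                          # 0.60
print(f"impact ratio = {impact_ratio:.2f}")
print("adverse impact indicated" if impact_ratio < 0.8 else "within the four-fifths rule")
```

The worry here runs the other way: once almost any correlated feature can be framed as a business necessity, clearing that bar gets easier, not harder.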

Unscientific and unproven claims and lack of causality

In a new paper “Mitigating Bias in Algorithmic Hiring: Evaluating Claims and Practices,” researchers find that one of the biggest problems in AI pre-screening is a lack of transparency and substantive proof about how the AI works. “Models, much less the sensitive employee data used to construct them, are in general kept private.” A key concern is that science just can’t keep up; “academic research has been unable to keep pace with rapidly evolving technology, allowing vendors to push the boundaries of assessments without rigorous independent research.”

I’ve written about this before, specifically emotion AI, where the science linking facial expression to inner emotional state isn’t keeping up with the use of emotion AI in many applications. In this case, unless there is some reliable feedback (such as a dynamic avatar, e.g. Soul Machines) or demonstrable proof of the efficacy of the system, it’s dubious to apply AI to human expression. Applying AI to an interviewee’s expressions and then making predictions about their future job performance sounds, to Narayanan, like “AI snake oil.”

There is certainly evidence for the predictive validity of these assessments. But there are often gaps in the theory. AI finds new correlations but doesn’t necessarily quantify existing relationships. For example, actions in a video game can be highly predictive of personality traits, yet experts don’t understand why. There are other important examples: why is the cadence of a candidate’s voice predictive of higher job performance, and why does reaction time predict employee retention? If vendors can’t clearly explain a causal relationship, should it be used to accept or reject an applicant? We also don’t know how stable these relationships are over time or whether they generalize well across large populations. Certainly, when an algorithm takes “millions of data points” for each candidate (as advertised by pymetrics), it seems like a stretch to be able to justify why each feature is included.
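
Narayanan’s “random number generators” line is easy to reproduce: give a model enough candidate features and a modest sample, and some purely random signal will look predictive by chance. A quick sketch with noise-only data:

```python
# Noise-only sketch: with thousands of features and a few hundred candidates,
# something always "predicts" performance, purely by chance.
import numpy as np

rng = np.random.default_rng(3)
n_candidates, n_features = 200, 10_000
X = rng.normal(size=(n_candidates, n_features))     # random "behavioral" features
performance = rng.normal(size=n_candidates)         # outcome unrelated to any of them

# Pearson correlation of every noise feature with the outcome.
corrs = (X - X.mean(0)).T @ (performance - performance.mean())
corrs /= n_candidates * X.std(0) * performance.std()
print("strongest |correlation| found:", round(float(np.abs(corrs).max()), 2))  # ~0.3
```

Cross-validation and holdout sets help, but without a causal story the risk of rewarding noise - or a proxy for something we’d never accept openly - doesn’t go away.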

Dehumanizing and ineffective

Algorithmic anxiety and anger are real. Some colleges now have websites that help students prepare for these interviews. Emma Rasiel, an economics professor at Duke, told the Washington Post that a growing number of students have been unsettled by AI interviews.

“It’s such a new and untried way of communicating who they are that it adds to their anxiety,” Rasiel said. “We’ve got an anxious generation, and now we’re asking them to talk to a computer screen, answering questions to a camera … with no real guidelines on how to make themselves look better or worse.”

People aren’t afraid to take to social media to express their frustration, often at the perceived inaccuracies of the tests.

“The HR dept insist their system is highly accurate and they will not pass on any applications from people who don't pass the test. So now the entire department, along with a few senior members of other departments are all taking the test, and coming up with some very poor scores. It's basically reading tea leaves or the entrails of a sacrificed goat. People's careers are being derailed by HR astrology. It's clearly not ideal for the orgs that need good people either.” - @TheWrongNoel

So why are HR managers apparently so gullible? These systems are adopted for a lot of reasons. First, the internet has made applying for a job utterly frictionless. So HR departments are flooded with resumes that they somehow have to triage cost effectively. Second, companies are faced with very real incentives to improve recruitment and to find ways to predict employee performance and retention. Fast food restaurants are losing up to 100% of workers each year. That’s a turnover crisis. Third, AI is highly promissory - AI progress has been HUGE. Games, search, recommendation engines, computer vision - all massive progress.

But humans are a much tougher nut to crack and we should all be skeptical of how AI is being used, especially in emotional and social settings where the stakes are high. As the Cornell researchers state, “Hiring decisions are among the most consequential that individuals face, determining key aspects of their lives, including where they live and how much they earn.”

These systems are not going away and they are pretty much unavoidable - do you want a shot at the job or not? There are also signs that they will become the default system because of the way employment law works. It’s a complex aspect of the law but boils down to this: employers are liable if they fail to adopt an alternative practice that could have minimized any business-justified disparity created by their selection procedure. The ready availability of these AI-driven systems, where vendors decide how to de-bias and comply with various anti-discrimination thresholds, may create a legal imperative to use them. Correlative, unexplainable, privately-held AI can redefine how important legal standards are tested and upheld.

The more AI becomes unavoidable, the more an AI replaces a human, and the more a fellow human can’t explain why an AI made a decision, the more people will feel algorithmically oppressed. It’s ironic that the first experience an - ultimately successful - candidate has with an employer that wants high-performing, engaged “human” employees will increasingly be with a machine employee, not a human one.

I’ll leave the last word to Twitter:

My former employer used a similar test, which seemed to select only inept sociopaths. - @laclabra

Likewise, if you’re wondering why you can’t find the right people for that crucial position, it might be because Gwyneth in HR is excluding people using repurposed love quizzes from 1980s magazines. - @TheWrongNoel

Let’s start by not allowing companies to call these HR processes if no real H is involved in the process. - @srescio

This week, a few interesting things, worthy of your time.

  • Three interesting ideas in Apple's research on Siri, and voice assistants in general. They are all incremental steps on the path to the holy grail of voice - an assistant that understands the user's intent.
  • ICYMI, Sacha Baron Cohen’s scathing attack on Facebook. Cohen speaks at Never Is Now, the Anti-Defamation League’s summit on antisemitism and hate in New York. Facebook is culpable for a surge in “murderous attacks on religious and ethnic minorities.” A must-watch if you haven’t already.
  • Interview - Azeem Azhar’s Exponential View podcast with Meredith Whittaker on Google, AI Now and algorithmic bias.
  • For more deets on the ethics of AI and the EHR, the testimony from the AI Now Institute to the NYC council is worth a read. “Google is not the only cloud provider partnering with hospital systems to help migrate patient data and other health information technology infrastructure to cloud servers owned and managed by large tech firms. Amazon Web Services now provides the ability to subscribe to third party data, enabling healthcare professionals to aggregate data from clinical trials. Microsoft recently announced a partnership with Humana that would provide cloud and AI resources, as it is also helping power Epic Systems’ predictive analytics tools for EHRs.” AI Now raises the concern that privacy is not protected in the new world of cloud and health, particularly in regard to clinical trial information. In reference to a partnership between the University of Chicago Medical Center and Google, “Google is uniquely able to determine the identity of almost every medical record the university released due to its expertise and resources in AI development.”
