AI Often Adds To Bias In Recruiting—But There’s A New Approach That Could Change The Game


Most people aren’t trying to be biased, but bias is inherent—it influences how we view any situation, often unconsciously.  

When you think of bias, characteristics like race, gender, and religion likely come to mind. But bias actually encompasses a much broader range of tendencies.

Bias comes in many forms. For example, the halo effect occurs when we let a first impression of someone stand in for their whole character. It can lead us to believe, without evidence, that someone who is warm and likable in a first meeting is also intelligent and capable.

Similarity bias is our implicit affinity toward those similar to us. In our flawed minds, relatable traits are positive traits—even when they really aren’t. Is someone who grew up 15 minutes from you, or someone who is also a soccer fan, really more likely to be a better team member?  

These types of biases present a big problem in recruiting and hiring. And not just in human recruiters. When you consider that recruiting software—both AI and traditional—mirrors human tendencies, you realize that bias affects every part of the recruiting process, from in-person interviews to resume-scanning software. 

Bias shapes the way human recruiters evaluate candidates.

Because recruiters often see hundreds or even thousands of resumes cross their desks every day, they spend an average of just seven seconds reviewing each one. And when making quick judgment calls to weed out candidates, a person’s name, education, and previous jobs can all unfairly or inappropriately influence a recruiter’s decision to consider them for the job. 

We’re all guilty of bias—myself included. For example, during a recent meeting, we were discussing how a particular candidate was a talented golfer. My biased knee-jerk reaction was to assume the candidate was a man. But, as it turned out, this candidate was a woman.  

But with technology, we have the opportunity to reduce the influence of bias in recruiting and hiring—in theory.  

However, recruiting software is imperfect for a number of reasons. 

A key goal of recruiting software is to surface the best candidates for the job from a pool of applicants. To achieve this, the majority of today’s technology is based on keyword search, most commonly driven by Boolean queries.

But this technology fails to account for three major realities about recruiting. First, it relies on the recruiter to execute strong Boolean queries, which is a rare skill these days. Second, it can skip over great candidates who don’t include the right keywords. A lot of people simply aren’t good at creating a resume. That doesn’t mean they wouldn’t be fantastic at the job. Third, some people (who may not be great at their jobs) know how the system works and hack it by dumping keywords throughout their resumes.
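To make those failure modes concrete, here is a minimal sketch, in Python, of how a Boolean keyword filter behaves. The resumes and query terms are entirely hypothetical; this is an illustration of the approach, not any vendor’s actual implementation:

```python
# Minimal sketch of a Boolean keyword filter, the style of search most
# traditional recruiting software relies on. All resumes and query
# terms below are hypothetical.

RESUMES = {
    "candidate_a": "Journeyman pipefitter; certified welder; OSHA trained.",
    # A strong candidate whose resume never uses the exact keywords:
    "candidate_b": "Maintained high-pressure piping systems on submarines.",
    # A weak candidate who stuffs the resume with keywords:
    "candidate_c": "pipefitter pipefitter welder welding pipefitter OSHA",
}

def matches(resume: str, all_of: list[str], any_of: list[str]) -> bool:
    """Boolean AND/OR match: every 'all_of' term plus at least one 'any_of' term."""
    text = resume.lower()
    return all(t in text for t in all_of) and any(t in text for t in any_of)

for name, resume in RESUMES.items():
    print(name, matches(resume, all_of=["pipefitter"], any_of=["welder", "welding"]))

# candidate_b is rejected despite highly relevant experience, while the
# keyword-stuffed candidate_c sails through -- exactly the failures above.
```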

Eventually, search providers caught on to this hack and built workarounds using synonym keywords and other semantic strategies. Still, these changes didn’t produce a technology that delivered the full picture and showed a recruiter all of the best candidates. The right information still had to be on the candidate’s resume for the software to surface them.
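A rough sketch of that workaround, again with a hypothetical synonym table, shows why it only goes so far:

```python
# Sketch of the synonym workaround: expand each query term with related
# terms before matching. The synonym table is hypothetical.

SYNONYMS = {"pipefitter": {"pipefitter", "pipe fitter", "pipefitting"}}

def expanded_match(resume: str, term: str) -> bool:
    """Match if the term or any of its known synonyms appears in the resume."""
    text = resume.lower()
    return any(s in text for s in SYNONYMS.get(term, {term}))

print(expanded_match("Experienced in pipefitting and welding.", "pipefitter"))  # True
# Relevant experience described in different words is still missed:
print(expanded_match("Maintained piping systems on submarines.", "pipefitter"))  # False
```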

And AI recruiting software isn’t without its drawbacks, either. In fact, the way AI surfaces job candidates can actually add bias rather than reduce it.

This is because AI and machine learning essentially take human behavior and just dramatically speed it up. So a machine can sort through a mass of resumes or applications and, based on a number of factors, determine the top five or 10 percent. The system weeds out a majority of applicants and says, “Here are the people you should interview.” 

But these systems simply learn what you teach them. So, for example, if your first few hires for a particular role are all Midwestern white guys who went to a particular college, the AI will conclude that that profile defines the best candidate. And, as a result, that kind of person will keep surfacing. That’s bias.
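A toy model makes this feedback loop visible. The sketch below (hypothetical features, not any production system) “learns” nothing but the frequency of traits among past hires, then scores new candidates by similarity to that profile:

```python
# Minimal sketch of how a model "learns what you teach it."
# The past hires below are hypothetical; a real system would use far
# richer features, but the feedback loop is the same.

from collections import Counter

past_hires = [
    {"region": "midwest", "college": "state_u"},
    {"region": "midwest", "college": "state_u"},
    {"region": "midwest", "college": "state_u"},
]

def learned_profile(hires: list[dict]) -> dict:
    """Score each (field, value) pair by how often it appears among past hires."""
    counts = Counter((k, v) for h in hires for k, v in h.items())
    return {kv: n / len(hires) for kv, n in counts.items()}

def score(candidate: dict, profile: dict) -> float:
    """Higher score = more similar to past hires, regardless of merit."""
    return sum(profile.get((k, v), 0.0) for k, v in candidate.items())

profile = learned_profile(past_hires)
print(score({"region": "midwest", "college": "state_u"}, profile))  # 2.0
print(score({"region": "south", "college": "tech_inst"}, profile))  # 0.0

# The second candidate may be better qualified, but the model has no way
# to know that: it only learned to recognize the profile it was shown.
```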

Amazon spent years developing recruiting software, only to learn it was biased against women. They ended up scrapping it completely. There is a silver lining to realizing your AI is biased, though: the patterns the AI turns up can teach you a lot about your own biases.

Here’s how ThisWay Global is tackling the problem of bias in recruiting by developing its AI recruiting software thoughtfully:

The team built and tore down their entire algorithm 13 times in the effort to eliminate bias entirely.

Through trial and error, they found the best way to reduce bias is to simply add more factors to the equation. Their algorithm goes beyond experience and education to consider a person’s passions, interests, traits, and skills. It also masks characteristics that are irrelevant to the job and could invite bias, such as gender, name, and age.
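Masking is the simplest of these ideas to illustrate. Here is a minimal sketch, with hypothetical field names, of stripping bias-prone attributes from a candidate record before any scoring happens:

```python
# Minimal sketch of masking job-irrelevant fields before scoring.
# Field names are hypothetical; the point is that the scoring step
# never sees attributes that could invite bias.

IRRELEVANT_FIELDS = {"name", "gender", "age", "photo_url"}

def mask(candidate: dict) -> dict:
    """Return a copy of the candidate record with bias-prone fields removed."""
    return {k: v for k, v in candidate.items() if k not in IRRELEVANT_FIELDS}

candidate = {
    "name": "Jordan Smith",
    "gender": "F",
    "age": 34,
    "skills": ["pipefitting", "welding"],
    "interests": ["robotics"],
    "experience_years": 9,
}

print(mask(candidate))
# {'skills': ['pipefitting', 'welding'], 'interests': ['robotics'], 'experience_years': 9}
```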

And when designing AI recruiting software, they considered that finding the right fit is a two-way street: The company should be good for the candidate, too. For example, a company can have a great reputation on paper but be a horrible fit for a particular employee.

The approach is ontological. Tom Gruber, an AI specialist at Stanford, defines ontology as “the specification of conceptualizations, used to help programs and humans share knowledge.”  

An ontological approach can match the right candidate with the right role in a way keywords just can’t. Here’s a real example: A pharmaceutical manufacturing plant had a job opening for a technician/pipefitter. They needed someone who not only had the technical ability but who also could be trusted with a high security clearance and was willing to work closely with a certain potentially hazardous chemical. So there were a lot of unique, specific requirements for this particular job. 

This is exactly the type of hiring situation where ontological technology is incredibly useful, because it enables you to find precisely the right candidate: in this case, someone who has worked with the right chemicals in a nearby region and has the right experience and education.

It accounts for all factors, even those that would be nearly impossible for a human recruiter to piece together, and finds candidates who probably would never have been considered otherwise. For example, the pharmaceutical technician role was filled by an engineer who had been working on submarines at a nearby military base. Who would have recognized that this person was right for the job? Who would have had this type of information at their disposal at the click of a button? Not the average recruiter, that’s for sure.

But the AI-backed software did, because it operates truly holistically and comprehensively. It goes beyond keywords to understand how the data is connected to deliver the best candidate—without bias.  
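For the curious, here is a toy sketch of what ontological matching can look like, loosely modeled on the pipefitter example above. The concept graph is entirely hypothetical and real ontologies are vastly larger; the point is that the system follows relationships between concepts rather than matching literal keywords:

```python
# Toy sketch of ontological matching: instead of exact keywords, the
# system follows "is related to" links between concepts. The graph and
# candidate data below are hypothetical.

RELATED = {
    "submarine_piping": {"high_pressure_piping", "security_clearance"},
    "high_pressure_piping": {"pipefitting"},
    "pipefitting": {"plumbing"},
}

def concepts_reachable(start: set[str]) -> set[str]:
    """Expand a candidate's concepts through the relation graph (transitive closure)."""
    seen = set(start)
    frontier = list(start)
    while frontier:
        for neighbor in RELATED.get(frontier.pop(), ()):
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append(neighbor)
    return seen

job_requirements = {"pipefitting", "security_clearance"}
candidate_concepts = {"submarine_piping"}  # the submarine engineer's background

reachable = concepts_reachable(candidate_concepts)
print(job_requirements <= reachable)  # True: a match a keyword search would miss
```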

Brandon Metcalf is the CEO and Founder of Place Technology and a partner at Blueprint Advisory. He has extensive experience creating, scaling and leading global companies, with a deep understanding of building successful SaaS and Salesforce products.
