While AI-driven tools can be helpful, there are potential risks and concerns

It seems like everyone is talking about artificial intelligence and all of the ways it will change the way we live and work in the not-too-distant future. One of the ways that businesses are already using AI is in the recruitment and hiring process.

Employers have found ways to use AI to streamline and simplify the challenge of finding the right people to fill job openings. While the use of AI in hiring offers some exciting possibilities, it also presents risks and concerns for employers considering this option, chief among them the unintentional discrimination that can result from AI-driven recruitment efforts.

Employers can use AI at every stage of the hiring and onboarding process, from sourcing candidates to screening resumes and conducting interviews. AI can even be useful in completing new-hire paperwork and orientation activities. One common use of AI in the hiring process is screening the resumes of potential employees.

Machine-learning algorithms can analyze resumes to identify candidates who may be good matches based on keywords, education or particular experience, and the system can screen out those who are unlikely to be a good fit. The result is a smaller, higher-quality pool of candidates for human resources personnel to review. When it comes to interviewing, AI chatbots can pose pre-programmed questions and use machine learning to analyze the candidates’ responses. These AI-conducted preliminary interviews can help narrow the field before subsequent live interviews.
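To make the keyword-screening idea concrete, here is a minimal sketch in Python of how such a filter might work at its simplest. The keywords, scoring threshold and sample resumes are hypothetical, and real vendor tools are far more sophisticated; the point is only to illustrate the kind of matching logic involved.

    # A minimal illustration of keyword-based resume screening, not any
    # vendor's actual algorithm. Keywords, cutoff and resumes are hypothetical.

    REQUIRED_KEYWORDS = {"python", "sql", "project management"}
    MIN_MATCHES = 2  # hypothetical cutoff for advancing a resume to human review


    def keyword_score(resume_text: str) -> int:
        """Count how many of the required keywords appear in the resume."""
        text = resume_text.lower()
        return sum(1 for kw in REQUIRED_KEYWORDS if kw in text)


    def screen(resumes: dict[str, str]) -> list[str]:
        """Return candidate names whose resumes meet the keyword threshold."""
        return [name for name, text in resumes.items()
                if keyword_score(text) >= MIN_MATCHES]


    if __name__ == "__main__":
        sample = {
            "Candidate A": "Led project management for a SQL reporting team.",
            "Candidate B": "Retail experience; strong customer service skills.",
        }
        print(screen(sample))  # only Candidate A clears the hypothetical cutoff

Even this toy version shows why the choice of keywords matters so much: whatever terms the employer (or the vendor) builds in will determine who ever reaches a human reviewer.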

Somewhat more controversial is the use of AI tools to conduct skills or personality tests, or to comb the web to find and analyze social media and other online data about candidates.

The Equal Employment Opportunity Commission has taken note of the speed with which AI recruitment tools are being adopted and has identified some significant risks that they present in terms of possible discrimination in hiring.

To help employers navigate these uncharted waters, the EEOC recently issued guidance for those considering the use of AI hiring tools. One of the key recommendations is that employers learn about the AI tools they are using and understand how they work. Employers should know what data inputs the tools rely on and ask questions about how the algorithms make decisions. The EEOC suggests that employers test AI systems before using them to see whether they produce unintentionally biased results. Employers should also have a good understanding of the kinds of questions the AI systems ask, to ensure that chatbots are not asking potential employees impermissible questions about disabilities or health conditions that could lead to the illegal screening out of candidates in protected classes.

Similarly, the EEOC says that employers should ensure that any AI systems they are considering are fully accessible so that all candidates have an equal opportunity to participate in the hiring process.

The EEOC also recommends that employers be transparent about their use of AI in recruitment and let candidates know that portions of the hiring process are conducted with the assistance of AI. Candidates should also know what data is being collected and how it is being analyzed by the algorithms.

As AI continues to play a bigger part in the way that companies conduct business, there are some best practices that companies can consider when thinking about incorporating AI into the hiring process.

The first is to think of AI as one of several tools to assist in recruiting and hiring, not as a total replacement for existing systems. No matter how sophisticated the technology gets, there will always be a need for the human element, especially at key parts of the process like final interviews and making the ultimate hiring decision.

Second, employers should recognize that, while AI has the potential to reduce bias by taking emotion and unconscious decision-making out of the process, there is still a risk that AI systems can inject unintended bias of their own. It is important to understand what data is being used and how it is being analyzed, and to continuously review and test the results.
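One concrete way to review and test the results is to compare the selection rates a tool produces for different groups of applicants. The short Python sketch below applies the four-fifths rule of thumb that the EEOC has long used as an initial indicator of adverse impact; the group names and numbers are hypothetical, and falling below the threshold is a flag for further review, not a legal conclusion.

    # A minimal sketch of comparing selection rates across applicant groups
    # using the four-fifths rule of thumb. All figures below are hypothetical.

    def selection_rate(selected: int, applicants: int) -> float:
        """Fraction of applicants in a group that the tool advanced."""
        return selected / applicants


    def impact_ratio(group_rate: float, highest_rate: float) -> float:
        """Ratio of a group's selection rate to the highest group's rate."""
        return group_rate / highest_rate


    if __name__ == "__main__":
        # Hypothetical screening outcomes: (advanced, total applicants) by group.
        outcomes = {"Group 1": (48, 80), "Group 2": (12, 40)}
        rates = {g: selection_rate(s, a) for g, (s, a) in outcomes.items()}
        top = max(rates.values())
        for group, rate in rates.items():
            ratio = impact_ratio(rate, top)
            flag = "review further" if ratio < 0.8 else "within rule of thumb"
            print(f"{group}: rate={rate:.0%}, ratio={ratio:.2f} -> {flag}")

In this hypothetical, Group 2 is advanced at half the rate of Group 1, which would prompt a closer look at how the tool is making its decisions.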

Another best practice is to take the EEOC’s advice about being transparent about the use of AI in the process. Doing so will help build trust with the people you are hoping to welcome as new colleagues.

Finally, if the use of AI tools is not resulting in better candidates and new hires, then perhaps the company should pursue different AI tools or return to more “traditional” hiring methods.

Without a doubt, AI has the potential to revolutionize the recruitment and hiring process, as some employers have already discovered. But, as with anything new, employers should carefully consider both the benefits and the risks before adopting AI as part of the company’s hiring practices.

Adam Hamel, a director in McLane Middleton’s Litigation Department, is chair of the firm’s Employment Law Practice Group. He can be reached at adam.hamel@mclane.com.
