Using Artificial Intelligence in Hiring Raises Bias Concerns

Posted on March 21, 2023

Employers have come to rely on artificial intelligence (AI) in many aspects of dealing with job applicants and employees.

It may seem that removing the human element from these interactions would protect employers from charges of discrimination. But the US Equal Employment Opportunity Commission (EEOC) recently indicated that it will keep a careful watch on the use of AI because it may result in what has been termed “algorithmic discrimination.”

An employer may not use AI as a shield against discrimination claims when its use of AI results in discriminatory practices. Even if the employer has no discriminatory intent, it remains liable if the technology has a disparate impact on a protected class.

Algorithms have been found to assign job candidates different scores based on factors such as wearing eyeglasses. Other algorithms have been shown to penalize applicants for having a Black-sounding name or for mentioning a women’s college on their resume. Still others have adversely affected people with physical disabilities that limit their ability to use a keyboard.

Last year the EEOC released guidance focused on preventing bias against applicants and employees with disabilities in violation of the Americans with Disabilities Act (ADA).

One example of discrimination from the guidance is an AI tool that screens out applicants with a gap in their employment history, which could be attributable to medical treatment. Another example is pre-employment testing software that does not provide accommodations for applicants with disabilities who have difficulty performing the tests. While this guidance is limited to compliance with the ADA, the EEOC has also pursued enforcement when the use of technology violates other laws the agency enforces.

In 2022, the EEOC’s New York office sued three related companies whose online recruiting software automatically rejected older candidates. The software was programmed to reject female applicants over the age of 55 and male applicants aged 60 and over. The EEOC is seeking back pay and liquidated damages for 200 qualified applicants who were rejected, as well as injunctive relief to prevent future age discrimination. It is unclear how intentional the screening was on the employer’s part, but the EEOC has charged the companies (not the maker of the software) with discrimination in violation of the Age Discrimination in Employment Act.

The EEOC recently held a public hearing titled “Navigating Employment Discrimination in AI and Automated Systems: A New Civil Rights Frontier.” The panel, made up of higher education professors, non-profit organization representatives, attorneys, and a workforce consultant, discussed the civil rights implications of these technologies and suggested ways to eliminate bias in employers’ use of AI and other automated tools.

Testimony included concerns about how to evaluate AI and other automated tools, and whether auditing for bias should be a requirement for their use. Panelists overwhelmingly agreed that some form of auditing would be necessary to eliminate bias, and several also proposed increased transparency so that applicants and employees know when AI is part of the process.
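The panel did not endorse any single audit method, but one long-standing benchmark is the “four-fifths rule” from the EEOC’s Uniform Guidelines on Employee Selection Procedures: a selection rate for any group that is less than 80 percent of the rate for the highest-selected group is generally regarded as evidence of adverse impact. The sketch below is a minimal illustration of that check only; the data and function names are hypothetical, and a real audit would use actual applicant outcomes and appropriate statistical testing.

```python
# Minimal sketch of a four-fifths (80%) rule check on selection rates.
# Hypothetical data and names; not a substitute for a formal bias audit.

from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: list of (group, was_selected) pairs -> selection rate per group."""
    applied = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in outcomes:
        applied[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / applied[g] for g in applied}

def four_fifths_check(rates, threshold=0.8):
    """Flag groups whose selection rate is below 80% of the highest group's rate."""
    highest = max(rates.values())
    return {g: rate / highest < threshold for g, rate in rates.items()}

if __name__ == "__main__":
    # 100 applicants per group: group_a selected at 48%, group_b at 30%.
    outcomes = [("group_a", True)] * 48 + [("group_a", False)] * 52 \
             + [("group_b", True)] * 30 + [("group_b", False)] * 70
    rates = selection_rates(outcomes)
    print(rates)                     # {'group_a': 0.48, 'group_b': 0.3}
    print(four_fifths_check(rates))  # {'group_a': False, 'group_b': True}
```

A check like this is only a starting point; the guidelines and the panel testimony contemplate deeper validation and documentation of the tools themselves.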

Many panelists urged the EEOC to continue its enforcement efforts regarding AI in employment and requested further guidance. Expect additional guidance from the EEOC similar to its guidance on disabilities. Meanwhile, employers who use automated systems and AI in their employment practices (an estimated 30-40% of employers) should become familiar with how those tools work to ensure they comply with the law.

AIM members with questions about the use of technology in their hiring process or any other employment practice may contact the AIM Employer Hotline at 800-470-6277.