
Is AI A Lawsuit Waiting to Happen?

Posted on May 30, 2023

The use of artificial intelligence (AI) for human-resource functions is on the rise, and it can lead to liability for employers.

As we reported in a March HR Edge article, the Equal Employment Opportunity Commission (EEOC) issued guidance on the use of AI (algorithmic tools) in decision-making to avoid disability discrimination, and advised employers to monitor their use of these tools to avoid discrimination of all kinds in the employment process.

The EEOC recently followed up with additional technical assistance on the use of AI, making clear that reliance on AI does not absolve an employer of liability under Title VII. Title VII, the primary federal anti-discrimination law, prohibits discrimination based on race, color, religion, sex, and national origin.

The new technical assistance advises employers to ensure that any AI tool they use to select or categorize employees has been tested for bias, and to conduct self-analyses of their own use of AI. The EEOC is not breaking new ground by assigning responsibility to employers, but it is instructing employers not to rely on the developers and vendors of AI software to identify and remove bias.

The focus of the EEOC guidance is on “disparate impact” discrimination resulting from the use of AI. Disparate impact discrimination arises not from intentional discrimination, but from facially neutral policies or practices that have an adverse impact on persons in a protected group. For example, a minimum height requirement for hiring might appear facially neutral, but it would likely exclude more women than men from a job because women tend to be shorter than men. To defeat a claim of disparate impact discrimination, the employer would have to show that the height requirement is job-related and consistent with business necessity.

If algorithmic decision-making leads to a significantly different selection rate for one group than for others, there may be disparate impact discrimination.

The EEOC technical assistance recommends that employers use the “four-fifths rule” to determine whether the selection rate differs significantly among groups of people categorized by race, color, religion, sex, or national origin. The four-fifths rule was established by the EEOC in 1978 to measure adverse impact. If one group is selected at a rate of less than four-fifths (80%) of the rate for the group with the highest selection rate, the EEOC will generally regard that as evidence of disparate impact. The EEOC cautions that the four-fifths rule is not always appropriate for assessing disparate impact, and that other statistical measures may be necessary to prevent discrimination.
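To make the arithmetic concrete, here is a minimal sketch of the four-fifths comparison applied to two hypothetical applicant pools. The group names, applicant counts, and the selection_rate helper are invented for illustration only; none of them come from the EEOC materials.

```python
# Hypothetical four-fifths rule check. All numbers are invented
# for illustration and are not drawn from the EEOC guidance.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

# Made-up outcomes from a hypothetical AI screening tool:
rates = {
    "Group A": selection_rate(48, 80),  # 60% selected
    "Group B": selection_rate(12, 40),  # 30% selected
}

highest = max(rates.values())  # rate of the most-selected group

for group, rate in rates.items():
    ratio = rate / highest
    # Under the four-fifths rule, a ratio below 0.80 is generally
    # treated by the EEOC as evidence of disparate impact.
    verdict = "possible disparate impact" if ratio < 0.80 else "within four-fifths"
    print(f"{group}: rate {rate:.0%}, ratio {ratio:.2f} -> {verdict}")
```

In this made-up example, Group B’s selection rate (30%) is only half of Group A’s (60%), well below the four-fifths (80%) threshold, so the tool’s outcomes would warrant a closer look.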

AI Litigation

Discrimination claims are only one form of legal challenge to the use of AI in employment. Recently, a rejected job applicant filed suit in Suffolk Superior Court against an employer, claiming that its “AI-enhanced” interview process violated the Massachusetts law prohibiting the use of lie detectors as a condition of employment (M.G.L. ch. 149, § 19B).

The Massachusetts law, adopted in 1959 and most recently amended in 1985, prohibits the use of “a polygraph or any other device, mechanism, instrument or written examination, . . . to assist in or enable the detection of deception, the verification of truthfulness, or the rendering of a diagnostic opinion regarding the honesty of an individual.” For the applicant’s 2021 job interview, the company used technology that analyzes facial expressions, eye contact, tone of voice and inflection to assess the candidate’s “integrity and honor.” The case is at an early stage, but it is a good indication that the use of AI in employment is being closely watched on many fronts.

The most important message of the EEOC technical assistance is that employers have a responsibility to ensure that AI used in the employment context has been evaluated for discriminatory impact. Employers should question vendors about whether and how this analysis was performed. While the Massachusetts lie detector case is pending, employers should also determine whether their use of AI assesses the honesty of employees or applicants, as that use could lead to a violation of the law.

Members with questions about the use of AI or any other human resources matter may call the AIM Employer Hotline at 800-477-6277.