Enforcement Edge
June 1, 2023

Employers Pay Heed: EEOC Follows Warning of AI-Discrimination Crackdown with Compliance Advice for Companies

Automated systems, including those that rely on artificial intelligence (AI), have become increasingly common in the employment decision-making process — from vetting potential hires, to monitoring performance, to making promotion and dismissal decisions for existing employees. Employers that use these systems themselves, or whose vendors do, must take care not to run afoul of laws like Title VII of the Civil Rights Act of 1964, which prohibits employment discrimination based on race, color, religion, national origin, or sex (including pregnancy, sexual orientation, and gender identity).

With AI all over the news, regulators are taking aim at the harms it can cause. Last month, for instance, the chair of the U.S. Equal Employment Opportunity Commission (EEOC) joined leaders of other federal enforcement agencies to announce their intent to crack down on discrimination stemming from the use of AI and other automated systems (see our analysis). Earlier this year, the EEOC published a draft Strategic Enforcement Plan for public comment that, among other things, addressed the use of automated systems in hiring, including how those systems may be used to “intentionally exclude or adversely impact protected groups” (see our prior coverage). More recently, Bloomberg reported that the EEOC is training its staff to enforce employment discrimination laws against unlawful algorithmic bias.

To keep the EEOC’s enforcers at bay, employers should heed its growing body of compliance guidance. Last year, in conjunction with the Department of Justice, the EEOC advised employers on avoiding disability discrimination (see report here). In its latest installment, the EEOC has published a technical assistance document that aims to help employers use automated decision tools without discriminating unlawfully against employees and job seekers. The document focuses on circumstances in which an employer’s automated decision-making procedures — whether supported by AI or otherwise — may have a disparate impact (also known as an “adverse impact”) on a basis that is prohibited by Title VII. The disparate impact concept refers to the risk of discrimination resulting from facially neutral procedures that may nevertheless have a disproportionate negative effect on a protected group of employees.

The EEOC’s guidance serves as a primer on how key aspects of Title VII apply to an employer’s use of AI tools and procedures. For instance, the agency explains that, under longstanding agency guidelines, a selection rate for one group is considered “substantially” different from the selection rate for another group when the ratio between the two is less than four-fifths (80%). Therefore, when considering whether to rely on a vendor to develop or administer an automated decision-making tool, an employer “may want to ask the vendor specifically whether it relied on the four-fifths rule of thumb when determining whether use of the tool might have an adverse impact on the basis of a characteristic protected by Title VII.” Adherence to the four-fifths rule, while important, does not provide a safe harbor for employers who use discriminatory AI tools (nor does a selection rate falling outside the four-fifths ratio necessarily indicate a statistically significant difference). For that reason, in addition to inquiring about the four-fifths rule, employers may also wish to vet tools for statistically significant differences in protected groups’ selection rates.
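To make the arithmetic concrete, the four-fifths comparison can be sketched in a few lines of Python. This is a minimal illustration of the rule of thumb described above, using entirely hypothetical selection numbers; it is not a compliance tool, and (as the guidance notes) passing the check is not a safe harbor.

```python
def four_fifths_check(rate_a: float, rate_b: float) -> bool:
    """Return True if the lower selection rate is at least 80% of the higher.

    This mirrors the four-fifths rule of thumb: a selection rate for one
    group is considered "substantially" different from another's when
    their ratio falls below 0.8. Passing is NOT a legal safe harbor.
    """
    if rate_a == 0 and rate_b == 0:
        return True  # no one selected from either group; ratio undefined
    ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
    return ratio >= 0.8


# Hypothetical example: 48 of 80 applicants from group A are selected
# (a 60% rate), versus 12 of 30 from group B (a 40% rate).
rate_a = 48 / 80  # 0.60
rate_b = 12 / 30  # 0.40
print(four_fifths_check(rate_a, rate_b))  # 0.40 / 0.60 ≈ 0.67 < 0.8 → False
```

A ratio near the 0.8 threshold, or one computed from small applicant pools, says little on its own, which is why the guidance-driven follow-up of testing for statistically significant differences in selection rates remains advisable.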

In short, the recent EEOC guidance urges employers to analyze their automated systems in the same careful manner that they would apply to traditional selection procedures. An employer that fails to do so could be held accountable for violating Title VII — even if its tools were designed by a third party, such as a software vendor. Because the effects of algorithms can shift over time, the EEOC also “encourages employers to conduct self-analyses on an ongoing basis,” to ensure that tools remain compliant with antidiscrimination obligations.

In a press release, EEOC Chair Charlotte A. Burrows stated that the technical assistance document “will aid employers and tech developers as they design and adopt new technologies.” She also reminded employers of the core purpose of Title VII itself: “As employers increasingly turn to AI and other automated systems, they must ensure that the use of these technologies aligns with the civil rights laws and our national values of fairness, justice and equality.”

More broadly, Title VII is just one of many existing laws regulating the use of algorithmic systems. Companies using these systems — especially for decisions that, like those in the employment context, have significant effects on people — should consider establishing comprehensive AI risk-management programs now, before the enforcers come knocking.

For questions about this post or managing AI’s regulatory and other risks, please contact the authors or other members of Arnold & Porter’s multidisciplinary Artificial Intelligence team.

© Arnold & Porter Kaye Scholer LLP 2023 All Rights Reserved. This blog post is intended to be a general summary of the law and does not constitute legal advice. You should consult with counsel to determine applicable legal requirements in a specific fact situation.