Enforcement Edge
September 25, 2023

EEOC’s Strategic Enforcement Plan Prioritizes Technology-Related Employment Discrimination

In January, we reported on the U.S. Equal Employment Opportunity Commission’s (EEOC) draft Strategic Enforcement Plan (SEP) for fiscal years 2023-2027. After the confirmation of a third Democratic commissioner ended the partisan deadlock at the agency, the EEOC released the final SEP (now covering fiscal years 2024-2028) on September 21. The final SEP largely tracks the draft and emphasizes enforcement against:

  • “the use of technology, including artificial intelligence and machine learning, to target job advertisements, recruit applicants, or make or assist in hiring decisions where such systems intentionally exclude or adversely impact protected groups;”
  • “reliance on restrictive application processes or systems, including online systems that are difficult for individuals with disabilities or other protected groups to access;” and
  • “the use of screening tools or requirements that disproportionately impact workers on a protected basis, including those facilitated by artificial intelligence or other automated systems, pre-employment tests, and background checks,” among other priorities.

The SEP is consistent with the April pledge of the EEOC chair — joined by other federal enforcement agency leaders — to crack down on discrimination stemming from the use of artificial intelligence (“AI”) and other automated systems.

To keep the EEOC’s enforcers at bay, employers should heed its growing body of compliance guidance. Last year, in conjunction with the Department of Justice, the EEOC advised employers on avoiding disability discrimination. In May, the EEOC published a technical assistance document that aims to help employers use automated decision tools without discriminating against employees and job seekers in violation of Title VII of the Civil Rights Act of 1964.

Employers’ use of AI and other automated systems may be subject to other laws as well. For example, New York City employers must comply with municipal bias-audit and notice requirements when they use an “automated employment decision tool” as the sole or most important factor in hiring or promotion decisions.

More broadly, many longstanding laws regulate the use of algorithmic systems. Companies using these systems — especially for decisions that have significant effects on people, like those in the employment context — should consider establishing comprehensive AI risk-management programs now, before the enforcers come knocking.

For questions about this post or managing AI’s regulatory and other risks, please contact the authors or other members of Arnold & Porter’s multidisciplinary Artificial Intelligence team.

© Arnold & Porter Kaye Scholer LLP 2023 All Rights Reserved. This blog post is intended to be a general summary of the law and does not constitute legal advice. You should consult with counsel to determine applicable legal requirements in a specific fact situation.