Enforcement Edge: Shining Light on Government Enforcement
January 20, 2023

EEOC’s Draft Enforcement Plan Prioritizes Technology-Related Employment Discrimination

Bias—especially pernicious bias against historically disadvantaged groups—is among the major risks posed by the widespread adoption of artificial intelligence (AI). Indeed, algorithmic discrimination has attracted substantial attention from US federal, state, and local policymakers, and regulators are beginning to crack down.

On January 10, the US Equal Employment Opportunity Commission (EEOC) published for public comment a draft Strategic Enforcement Plan (SEP) for 2023–2027. The SEP aims to “focus and coordinate the agency’s work” over a multi-year period to produce “a sustained impact in advancing equal employment opportunity.” The draft SEP is the EEOC’s first to address the use of automated systems, such as artificial intelligence and machine learning, in hiring, and proposes to focus on how those systems may be used to “intentionally exclude or adversely impact protected groups.”

As we explained in Avoiding ADA Violations When Using AI Employment Technology, many companies use AI-powered technologies in hiring, promotion, and other employment decisions. Examples include tools that screen applications or resumes; video-interviewing software that evaluates facial expressions and speech patterns; and software that scores “job fit” based on personalities, aptitudes, or skills. The SEP recognizes that these tools and technologies can have adverse effects on members of protected groups, as well as on “particularly vulnerable workers” outside of traditionally protected classes who may be unaware of their rights or reluctant to enforce them. Examples of “particularly vulnerable workers” identified in the SEP include immigrants and migrant workers, individuals with intellectual and developmental disabilities, individuals with arrest or conviction records, LGBTQI+ individuals, older workers, and persons with limited literacy or English proficiency.

In addition to many non-technology-related priorities, the EEOC seeks to eliminate technological barriers that can lead to disparate impacts in recruitment, hiring, and promotion, such as:

  • the use of automated systems, including artificial intelligence or machine learning, to target job advertisements, recruit applicants, or make or assist in hiring decisions where such systems intentionally exclude or adversely impact protected groups;
  • restrictive application processes or systems, including online systems that are difficult for individuals with disabilities or other protected groups to access; and
  • screening or performance-evaluation tools that disproportionately impact workers based on their protected status, including those facilitated by artificial intelligence or other automated systems.

The draft SEP incorporates feedback from three listening sessions held in 2022. The public is encouraged to submit comments on the draft SEP through the Federal eRulemaking Portal by February 9, 2023, after which the EEOC will adopt a final version.

However the SEP may be revised, employers and vendors of employment technology should expect much greater scrutiny of automated employment decision technology and tools in the years to come. Auditing current systems (and new ones before deployment) will be increasingly important to keep both regulators and the plaintiffs’ bar at bay. For those eager to learn more, the EEOC will host a public hearing on January 31 at 10:00 AM EST on “Navigating Employment Discrimination in AI and Automated Systems: A New Civil Rights Frontier.”
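
For readers wondering what a first-pass audit might look like in practice, one common starting point is a selection-rate comparison in the spirit of the “four-fifths rule” of thumb from the EEOC’s Uniform Guidelines on Employee Selection Procedures. The sketch below is only a minimal illustration: the group labels, counts, and threshold handling are hypothetical, and a real audit would require proper statistical testing, documentation, and legal review rather than a single ratio.

```python
# Minimal, illustrative sketch of a selection-rate (adverse impact) check,
# loosely based on the "four-fifths rule" of thumb from the EEOC's Uniform
# Guidelines. All data and group labels below are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants if applicants else 0.0

def adverse_impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Compare each group's selection rate to the highest group's rate.

    `outcomes` maps a group label to (selected, applicants). A ratio below
    0.8 is the conventional flag for potential adverse impact.
    """
    rates = {group: selection_rate(s, n) for group, (s, n) in outcomes.items()}
    top_rate = max(rates.values())
    return {group: (rate / top_rate if top_rate else 0.0) for group, rate in rates.items()}

if __name__ == "__main__":
    # Hypothetical screening-tool results: group label -> (selected, applicants)
    outcomes = {
        "group_a": (48, 100),
        "group_b": (30, 100),
    }
    for group, ratio in adverse_impact_ratios(outcomes).items():
        flag = "potential adverse impact" if ratio < 0.8 else "above four-fifths threshold"
        print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

In this hypothetical example, group_b’s selection rate is 30/100 against group_a’s 48/100, an impact ratio of about 0.63, which falls below the four-fifths threshold and would warrant closer review of the tool and its inputs.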

© Arnold & Porter Kaye Scholer LLP 2023 All Rights Reserved. This blog post is intended to be a general summary of the law and does not constitute legal advice. You should consult with counsel to determine applicable legal requirements in a specific fact situation.