November 9, 2023

EEOC and DOL Take Aim at AI Bias in Employment: Key Focus Areas and Implications for Employers

Advisory

As we have written about previously (here, here, here, and here), bias in employment — especially against historically disadvantaged groups — is among the major risks posed by the widespread adoption of artificial intelligence (AI), and this risk has attracted substantial attention from U.S. policymakers and regulators. Recently, the Equal Employment Opportunity Commission (EEOC) announced its intent to crack down on discrimination stemming from the use of AI and other automated systems, particularly when it comes to hiring tools such as those that screen job applications or resumes, interpret and grade video interviews, or score “job fit” based on personalities, aptitudes, or skills. Employers, including federal contractors, should be aware that President Biden’s AI executive order (EO) attempts to highlight and address these employment-related risks in two key ways.

First, the EO directs the Secretary of Labor to develop and publish principles and best practices for employers to try to “mitigate AI’s potential harms to employees’ well-being and maximize its potential benefits.” The Secretary of Labor is to publish these principles and best practices within 180 days of the EO, in consultation with other agencies and outside entities, including specifically “labor unions, and workers.” Notably, there is no requirement to consult with employers or employer groups or trade associations. The EO lays out a number of minimum requirements that must be covered by the principles and best practices, including: (1) job-displacement risks and career opportunities related to AI, including effects on job skills and evaluation of applicants and workers (which we interpret as how AI could result in job losses or displacement and how AI could contribute to discriminatory hiring or promotion practices); (2) issues related to the equity, protected-activity, compensation, health, and safety implications of AI in the workplace; and (3) data privacy for workers. Once the Secretary of Labor has published the principles and best practices, the EO asks agency heads — in consultation with the Secretary of Labor — to consider adoption of these guidelines in their programs. Consequently, these principles and best practices may evolve into legal obligations for at least some private employers.

Second, the EO directs the Secretary of Labor to publish guidance for federal contractors, within 365 days of the EO, “regarding nondiscrimination in hiring involving AI and other technology-based systems.” This directive comes on the heels of the Labor Department’s Office of Federal Contract Compliance Programs (OFCCP) — which oversees federal contractor programs, including affirmative action requirements — expanding its typical list of materials and information that contractors must submit in an OFCCP audit to include AI-related systems and technologies used in recruiting, screening, and hiring employees.

As federal contractors already must take proactive “affirmative action” to recruit and advance qualified minorities, women, individuals with disabilities, and protected veterans, the forthcoming Department of Labor guidance should, in theory, provide federal contractors with some concrete tools to navigate the use of AI within the contours of an OFCCP-compliant affirmative action program. In addition, all employers should already be carefully reviewing their use of AI and other automated systems in connection with employment decisions to ensure that they comply with anti-discrimination laws such as Title VII and the Americans with Disabilities Act.

© Arnold & Porter Kaye Scholer LLP 2023 All Rights Reserved. This Advisory is intended to be a general summary of the law and does not constitute legal advice. You should consult with counsel to determine applicable legal requirements in a specific fact situation.