In recent years, employers have begun to use artificial intelligence, machine learning, algorithms, and other automated systems or technologies (“AI Technologies”) to recruit and hire employees and make other employment decisions. At the same time, federal regulatory agencies and a number of localities have enacted new rules targeting how employers may use AI Technologies. As further discussed below, employers should expect increased enforcement efforts by federal, state, and local agencies (as well as private litigation) regarding their use of AI Technologies.

Most recently, on January 10, 2023, the Equal Employment Opportunity Commission (“EEOC”) issued a Draft Strategic Enforcement Plan (“SEP”) that places employment discrimination in the use of AI Technologies at the top of its strategic priorities list. Specifically, the SEP indicates the EEOC will make a concerted effort to eliminate employment discrimination in “the use of automated systems, including artificial intelligence or machine learning, to target job advertisements, recruit applicants, or make or assist in hiring decisions where such systems intentionally exclude or adversely impact protected groups.”

The SEP indicates that the EEOC plans to take a “targeted approach” in enforcing employment discrimination laws through directed investigations and litigation to “positively influenc[e] employer practices and promot[e] legal compliance.” The EEOC specifically called attention to the “lack of diversity” in the construction and “high tech” industries, “growth industries,” and industries benefitting from substantial federal investments as “areas of particular concern.”

The EEOC has already increased investigation and litigation activity related to the use of AI Technologies. For example, in May 2022, the EEOC commenced its first lawsuit related to allegedly discriminatory use of AI Technologies by an employer. EEOC v. iTutorGroup, Inc., et al., Case No. 1:22-cv-02565 (E.D.N.Y.). The age discrimination lawsuit was filed in the United States District Court for the Eastern District of New York against three integrated companies providing English-language tutoring services for allegedly programming their tutor application software to automatically reject older applicants. In the lawsuit, the EEOC is seeking back pay and liquidated damages for more than 200 applicants who were allegedly denied jobs by the defendants’ application software.

The final version of the SEP (which will be issued after a public meeting on January 31, 2023, and a public comment deadline of February 9, 2023) will provide further guidance regarding the EEOC’s approach to enforcing employment discrimination laws in the context of AI Technologies going forward.

While the EEOC’s draft SEP has affirmed its commitment to enforcing all employment discrimination laws implicated by the use of AI Technologies, disability discrimination laws remain a point of emphasis and pose unique obstacles to employers’ use of AI Technologies. In May 2022, the EEOC released guidance specifically addressing how the Americans with Disabilities Act (“ADA”) applies to the use of AI Technologies in recruiting applicants and making employment decisions. In that guidance, the EEOC specifically identified potential ADA violations where: (1) an employer using AI Technologies does not provide legally required reasonable accommodations to applicants and employees with disabilities to ensure fair assessment, (2) the AI Technologies used by the employer—intentionally or unintentionally—screen out individuals with disabilities who could perform the essential functions of a job with a reasonable accommodation, and (3) the AI Technologies used by the employer violate ADA restrictions on disability-related inquiries and medical examinations. The EEOC contends that employers are generally responsible for ADA violations caused by the use of AI Technologies even when such technologies are developed and administered by third-party vendors.

To avoid potential ADA violations arising out of the use of AI Technologies, employers should consider taking steps such as: (1) providing advance notices advising applicants and employees of the use of AI Technologies (including information about what traits or characteristics are being measured by the technology and the methods used to do so); (2) obtaining consents from applicants and employees regarding such use of AI Technologies; (3) advising applicants and employees how to contact the employer if reasonable accommodations are needed; and (4) providing adequate reasonable accommodations to applicants and employees with disabilities. Because of the wide range of physical and mental disabilities that exist, reasonable accommodations must be tailored to each individual, and there is no one-size-fits-all approach to the use of AI Technologies that ensures compliance with disability discrimination laws.

In addition to federal laws and guidance regarding the use of AI Technologies in employment decisions, employers must be cognizant of developments in state and local laws on this topic. Most prominently, New York City passed a local law effective January 1, 2023, that prohibits employers from using “automated employment decision tools” to screen candidates or employees for employment decisions unless, among other things, the tool has been the subject of a “bias audit” within one year prior to its use. The law also requires employers using automated employment decision tools to provide certain notices to candidates and employees who reside in the city regarding the use of such tools. There are a number of open questions regarding the meaning and application of this law, which the New York City Department of Consumer and Worker Protection (“DCWP”) is currently attempting to clarify through a revised set of proposed rules. The DCWP held a second public hearing regarding its proposed rules on January 23, 2023, and is delaying enforcement of the law until April 15, 2023.

Other states and local governments have implemented (or may in the future implement) their own laws and regulations regarding the use of AI Technologies in employment matters, creating complex and varied requirements for employers seeking to ensure their use of AI Technologies is legally compliant in every jurisdiction in which they operate.

In 2023, employers should work with legal counsel to: (1) assess whether their current or prospective use of AI Technologies in employment matters complies with current legal requirements and guidance, (2) create or update legally compliant policies and procedures related to the use of AI Technologies in making employment decisions, (3) respond to and defend enforcement actions and private litigation related to the use of AI Technologies in employment matters, and (4) closely monitor further legal developments that are likely to come—federally and in state and local jurisdictions—related to the use of AI Technologies in employment matters.