Using AI in Human Resources: The Legal Landscape

In Part I of this three-part series on artificial intelligence (AI) in the workplace, we explored some of AI’s uses in the workplace and potential legal complications with the technology. In this Part II, we explore legal conflicts that have already arisen, statutes companies should stay apprised of, and how to mitigate the legal risks of using AI in human resources.

Federal Agency Attention and Planned Enforcement

In April 2023, officials from the Justice Department’s Civil Rights Division, Consumer Financial Protection Bureau, Equal Employment Opportunity Commission, and Federal Trade Commission jointly committed to enforcing laws and regulations applicable to their agencies as they relate to AI. As AI becomes increasingly prevalent in day-to-day life, the agencies pledged to “vigorously use [their] collective authorities to protect individuals’ rights regardless of whether legal violations occur through traditional means or advanced technologies.”

AI’s Biases

AI bias is one of the most common and recurring issues with company use of AI. AI learns partly through its inputs and training data sets; when those data sets are biased, the AI may produce biased results. For instance, one tech company found bias toward men in a machine learning model intended to analyze resumes for software developer jobs. Because the model was trained predominantly on resumes from men, it favored words more likely to appear on men’s resumes and downgraded resumes that referenced women-only colleges. After auditing the AI’s results and recognizing the discrimination, the company stopped using the model.

To mitigate the risk of AI bias, companies should ensure that diverse and thoughtful teams of engineers develop these tools and curate their training data. Even when inputs are facially neutral, results can drift toward bias if the AI learns that a majority of candidates, including successful candidates, belong to particular protected classes or share traits characteristic of a protected class, and begins treating those traits as predictive of success. Companies should therefore regularly audit AI results for bias (which is already required in some jurisdictions, such as New York City; see below).

The Equal Employment Opportunity Commission (EEOC) has also weighed in on steps employers can take to prevent AI bias, stating in its May 18, 2023 guidance on AI:

Generally, if an employer is in the process of developing a selection tool and discovers that use of the tool would have an adverse impact on individuals of a particular sex, race, or other group protected by Title VII, it can take steps to reduce the impact or select a different tool in order to avoid engaging in a practice that violates Title VII. One advantage of algorithmic decision-making tools is that the process of developing the tool may itself produce a variety of comparably effective alternative algorithms. Failure to adopt a less discriminatory algorithm that was considered during the development process therefore may give rise to liability.

The EEOC encourages employers to conduct self-analyses on an ongoing basis to determine whether their employment practices have a disproportionately large negative effect on a basis prohibited under Title VII or treat protected groups differently. Generally, employers can proactively change the practice going forward.

When companies use AI to make employment decisions, they must monitor the AI’s results for bias and disparate impact.
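To illustrate the kind of self-analysis the EEOC describes, the sketch below applies the “four-fifths rule” from the EEOC’s Uniform Guidelines on Employee Selection Procedures, under which a selection rate for one group that is less than 80% of the highest group’s rate is generally treated as evidence of adverse impact. This is a simplified illustration rather than legal or statistical advice; the function name, group labels, and hiring counts are hypothetical.

```python
# Sketch of a "four-fifths rule" self-check in the spirit of the EEOC's
# Uniform Guidelines (29 C.F.R. § 1607.4(D)).
# Group labels and counts are hypothetical.

def four_fifths_check(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Return each group's selection rate as a fraction of the highest group's rate.

    Ratios below 0.8 are generally treated as evidence of adverse impact
    and warrant closer review of the selection practice.
    """
    rates = {group: selected / total for group, (selected, total) in outcomes.items()}
    highest_rate = max(rates.values())
    return {group: rate / highest_rate for group, rate in rates.items()}

# Hypothetical hiring outcomes: group -> (applicants selected, total applicants)
outcomes = {
    "Group A": (48, 120),  # 40% selection rate
    "Group B": (30, 100),  # 30% selection rate
}

for group, ratio in four_fifths_check(outcomes).items():
    status = "review for adverse impact" if ratio < 0.8 else "no flag"
    print(f"{group}: {ratio:.2f} ({status})")
```

Note that the EEOC’s May 2023 guidance treats the four-fifths rule as a rule of thumb rather than a safe harbor; courts and the agency may also look to other measures, such as tests of statistical significance.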

Laws on the Books

Illinois

In Illinois, the Artificial Intelligence Video Interview Act aims to protect job applicants’ privacy when their video interviews are analyzed by AI. Under the statute, a company that asks applicants to record video interviews and uses AI to analyze the videos must first:

  1. Notify each applicant before the interview that AI may be used to analyze the video interview;
  2. Explain to the applicant before the interview how the AI works and what it evaluates; and
  3. Obtain consent from the applicant to be evaluated by the AI before the interview.

Companies cannot use AI in this manner without prior consent. Companies also may not share applicant videos, except with persons whose expertise or technology is necessary to evaluate an applicant’s fitness for a position. The Act further requires that, within 30 days of an applicant’s request, companies delete the applicant’s video interview and instruct any person who received a copy to do the same, including deleting all electronically generated backup copies.

Maryland

In Maryland, a law referred to as H.B. 1202 prohibits companies from using facial recognition technology in pre-employment job interviews unless the applicant signs a consent and waiver that states:

  1. The applicant’s name;
  2. The date of the interview;
  3. That the applicant consents to the use of facial recognition technology during the interview; and
  4. That the applicant has read the waiver.

New York City

New York City’s Automated Employment Decision Tools Law requires companies that use an automated employment decision tool to subject the tool to a bias audit conducted no more than one year before its use, make information about the bias audit publicly available, and notify employees and job candidates that the company is using such a tool to evaluate them. Enforcement of the law will begin on July 5, 2023.

Under the statute, an automated employment decision tool is “any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation” used to “substantially assist or replace discretionary decision making for making employment decisions that impact natural persons.”

The bias audit must calculate the impact ratio of each equal employment opportunity protected class, including sex, race, ethnicity, and intersectional classes. The impact ratio, which may exclude a protected class comprising less than 2% of the data set, is calculated in one of two ways: (1) the selection rate for a class divided by the selection rate of the most selected class, or (2) the scoring rate for a class divided by the scoring rate for the highest scoring class.
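To make the arithmetic concrete, the hypothetical sketch below computes impact ratios under method (1), the selection-rate calculation, and applies the 2% exclusion described above; method (2) follows the same pattern with scoring rates in place of selection rates. The category labels and counts are invented for illustration.

```python
# Minimal sketch of the NYC impact-ratio arithmetic (selection-rate method).
# Category labels and counts are hypothetical.

def impact_ratios(
    counts: dict[str, tuple[int, int]],  # category -> (selected, total applicants)
    min_share: float = 0.02,             # classes under 2% of the data set may be excluded
) -> dict[str, float]:
    """Selection rate for each category divided by the most selected category's rate."""
    overall_total = sum(total for _, total in counts.values())
    rates = {
        category: selected / total
        for category, (selected, total) in counts.items()
        if total / overall_total >= min_share  # apply the 2% exclusion
    }
    most_selected_rate = max(rates.values())
    return {category: rate / most_selected_rate for category, rate in rates.items()}

# Hypothetical audit data, including intersectional categories:
audit_data = {
    "White / Male":   (60, 150),  # 40% selection rate
    "White / Female": (40, 130),  # ~31% selection rate
    "Black / Female": (18, 80),   # ~23% selection rate
    "Asian / Male":   (1, 5),     # ~1.4% of the data set; excluded
}

for category, ratio in impact_ratios(audit_data).items():
    print(f"{category}: impact ratio {ratio:.2f}")
```

An impact ratio well below 1.0 does not by itself establish a violation, but it flags a category for closer review, and the law requires a summary of the audit’s results, including these ratios, to be made publicly available.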

Beyond Equal Employment Opportunity

In addition to equal employment opportunity and bias considerations, companies should be aware that AI tools may conflict with labor and privacy laws. For instance, the National Labor Relations Act (NLRA) protects employees’ right to engage in collective bargaining and other concerted activities for the purpose of collective bargaining or mutual aid or protection. As AI is increasingly used to monitor and manage employee behavior, companies should ensure that these systems do not interfere with employees’ rights under the NLRA.

A number of states, including Illinois, have enacted biometric privacy laws, and pending lawsuits concern whether AI and machine-learning systems that analyze employee behavior also collect biometrics in violation of those statutes. Companies should understand how any such AI tools function and seek legal counsel in states with biometric privacy statutes to ensure compliance.

Takeaways

Companies are beginning to learn that their AI programs can unintentionally discriminate against applicants belonging to protected classes in violation of employment statutes, rendering such programs a significant legal risk. Companies that use AI should, on a regular basis, monitor their AI programs for bias and compliance with state and federal law. This best practice is particularly important as more states and municipalities seek to regulate AI in the workplace and in various industries. To mitigate legal risks, it is always a good idea to speak to counsel prior to and during the implementation of AI programs.

In Part III of this series, we will explore how employers and employees are using ChatGPT and other generative AI in the workplace, the benefits and potential pitfalls of the technology, and what may be to come as AI transforms work and HR.

If you have questions or would like more information about the topics raised in this post, please contact a member of Gould & Ratner’s Human Resources and Employment Law Practice.