This post is the third in a three-part series examining the use of artificial intelligence (AI) in Human Resources. In Part I, we explored some of AI’s uses in the workplace and potential legal complications with the technology.
In Part II, we explored legal conflicts that have arisen, statutes of which companies should stay apprised, and how to mitigate the legal risks of using AI in human resources. In this conclusion to the series, we explore generative AI and its use in the workplace.
What Is Generative AI?
Since late 2022, ChatGPT and other forms of generative AI have captivated users, galvanized companies and occasionally unnerved observers. Generative AI describes algorithmic platforms used to converse with and aid users, offer substantive guidance and create content such as emails, ads, articles, presentations, music and art.
ChatGPT, one of the most popular generative AI platforms, collects information from the internet and utilizes machine learning from user interaction and feedback to generate content and perform assigned tasks. As a result, generative AI gets sharper over time and with each use.
Companies are already using generative AI for customer support and engagement. For instance, travel companies have implemented generative AI allowing customers to converse with the technology to plan their vacations by giving the AI details about desired locations and activities and requesting that it generate their travel plans. Online course providers have used generative AI to provide individualized guidance and feedback for students.
What Are the Benefits of Generative AI to Companies?
Companies want efficiency and cost-effective performance. Generative AI provides these benefits by allowing employees to focus on tasks that generally require human touch, like creating ads, providing advanced troubleshooting, analyzing complex data and drafting specific and niche policies, agendas and messages, while AI handles more basic tasks with human oversight or performs the more complex tasks with human monitoring.
What Are the Risks of Generative AI to Companies?
- Inaccurate information: While ChatGPT can be a useful platform for brainstorming and drafting templates, it is often inaccurate and unreliable when asked for facts or data. Recently, attorneys in New York were sanctioned for submitting a brief drafted with ChatGPT that incorporated false citations supplied by the AI. Generative AI has been known to “hallucinate” by fabricating answers or parts of answers.
- Breach of Confidentiality: Because generative AI works by learning from information given to it by users, if it’s fed confidential or proprietary information, it may disclose such information to unauthorized individuals.
- Bias: As discussed in prior posts, AI can be highly biased. Because AI learns from its creators and data inputs, which may be biased, and its own experience conversing with users and learning new data, which may also be biased, AI’s outputs can reflect such biases. For instance, if AI is asked to generate a generic picture of people at a party, and the data set it reviews mostly includes white people, the picture it creates is likely to include mostly white people. Even if one specifically requests that the AI generate a diverse picture, if the AI’s data set mostly includes stereotypes of more diverse settings, its output is likely to include stereotypes as well.
What Impact Is Generative AI Already Having on Workers?
Generative AI is not going away, and companies implementing it should train their workers on effective use and best practices to mitigate risk. Because generative AI can replicate many tasks traditionally performed by humans, some workers worry about their job security. Indeed, one of the key provisions the Writers Guild of America seeks in its collective bargaining agreement through its current strike is a ban on such technology writing television and film scripts and on AI being trained on the writers’ work.
Platforms like ChatGPT cannot currently write “good” television shows and movies, but there is legitimate fear that, with time and training, generative AI could produce quality material, limiting the need for writers. Still, most observers believe that AI is best used as a tool to assist human work, not a replacement for it. The impact of this technology on salaries and wages is as yet unknown.
Have Companies Already Banned Generative AI?
Yes – Apple, Samsung, Verizon, many Wall Street banks and other companies of various sizes – including law firms – have already banned employee use of ChatGPT to prevent disclosure of confidential information and avoid inaccurate answers.
While blanket prohibitions may be the safest route, companies that wish to utilize these platforms may prefer more tailored approaches to employee use.
Narrow and Tailored Company Policies on Using Generative AI
Companies that permit or mandate employee use of generative AI should implement policies to mitigate legal risks and disclosure of confidential information. Companies may want to include the following in their policies:
- Pledge to use AI legally, ethically and responsibly and avoid using it in a way that could cause harm
- Transparency in how the company uses AI
- Accountability for employees using AI in their substantive work
- Adequate training on effective and responsible AI use
- Explicit disciplinary policy and procedure for violations of AI policy
- Periodic reviews of AI for bias and ethical use
- Employing an AI Officer who oversees implementation and use of the technology and ensures legal and ethical compliance
Companies may also want to prohibit the following:

- Including trade secrets or other confidential information in conversations with generative AI
- Using generative AI in public, non-secure locations and/or without the use of a virtual private network
- Using generative AI by employees with access to trade secrets or other confidential information
- Using AI’s substantive answers without first corroborating the information
- Using AI-generated content and solutions without first checking it for relevant biases
In addition, in addressing confidentiality in internal AI policies, companies may want to update their confidentiality agreements to prohibit employees and former employees from entering or referring to confidential, proprietary or trade secret information in generative AI platforms.
Finally, companies should consult with counsel on compliance questions and in drafting policies.
The Future of AI in the Workplace
The use of AI in human resources and the overall workplace, just like its underlying technology, will be constantly – and rapidly – evolving and improving. Correspondingly, the roles of employers and their HR professionals will also need to evolve to ensure best practices and protections are in place to maximize the benefits of AI and minimize its risks.
If you have questions or would like more information about the topics raised in this series of posts on AI in Human Resources, please contact a member of Gould & Ratner’s Human Resources and Employment Law Practice.