Artificial Intelligence in the Workplace: Coming to a Courthouse Near You

By LCA Senior Judicial Fellow Chief Magistrate Judge Helen C. Adams

“Artificial intelligence is about replacing human decision making with more sophisticated technologies.”[1] Whether you agree or disagree with this quote, artificial intelligence applications can be found throughout the business world.  Artificial intelligence, or “AI,” encompasses technologies often described as predictive analytics, machine learning, deep learning, intelligent retrieval, and image recognition.  AI is the use of algorithms that mimic human intelligence to perform cognitive functions and solve problems through interaction, visual perception, learning, reasoning, natural language processing, and planning.

The purpose of this article is to highlight ways that employers are using AI to assist in making employment decisions.  The article also raises possible legal issues that lawyers and judges may face in litigation involving the use of AI by employers.

As employers grapple with rising costs and a potential labor shortage, they are looking for innovative ways to incorporate AI into the hiring, evaluation, and retention processes.  Let’s look at several ways that AI is being used by employers.

In the hiring process, employers are using AI to scan resumes and propose the best candidates for further review.  According to a 2016 article in the Harvard Business Review, AI is being used by businesses to screen out up to 70% of job applicants without any human interaction.[2] Employers also use AI to scrape online job boards and social media applications to proactively identify and target possible candidates for open positions within the company. AI also can conduct online interviews and evaluate applicants on a variety of factors, including word choice and facial expression, before recommending the “best candidates.” AI advocates tout benefits such as reduced costs, the ability to reach more candidates (potentially increasing the diversity of the applicant pool), the removal of human bias (both conscious and subconscious) from the interview process, and the ability to track large amounts of data to analyze how well the company’s interview questions and processes are working in finding quality candidates.

Employers also are implementing AI to improve employee training, both during the onboarding process and throughout an employee’s tenure.[3]  AI applications allow for “lessons” to be created that capture the work experience and knowledge to be passed down from a departing employee to a new hire.[4] New hires use the AI-created lessons to learn their new role without disrupting the workflow of other employees. AI also delivers electronic training during an employee’s tenure, which can be personalized for the job and the employee.  As the employee works through the training module, AI can identify the areas in which the employee is excelling or struggling so that further training can be focused on the weaker areas.[5]

Companies have implemented AI in the form of chatbots to enhance human resources departments.  A chatbot is an AI program that simulates interactive human conversation by using key pre-calculated user phrases and auditory or text-based signals.[6]  A chatbot also is known as an artificial conversation entity (ACE), chat robot, talk bot, chatterbot or chatterbox.[7] For example, employers have utilized chatbots to handle pre-interview screening, provide employees with information about company policies or other training topics, assist employees in applying for benefits, and obtain initial information from an employee about a complaint or grievance.
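At its simplest, the keyword-matching approach described above can be sketched in a few lines. The policy answers and keywords below are invented for illustration, and commercial chatbots use far more sophisticated natural language processing than this:

```python
# A minimal keyword-matching HR chatbot sketch. All responses and
# keywords are hypothetical; real products are far more sophisticated.
RESPONSES = {
    "vacation": "Full-time employees accrue 15 days of paid vacation per year.",
    "benefits": "Benefits enrollment opens each November; see the HR portal.",
    "complaint": "I can log your complaint. Please describe what happened.",
}

FALLBACK = "I'm not sure. Let me connect you with an HR representative."

def reply(message: str) -> str:
    """Return the first canned answer whose keyword appears in the message."""
    text = message.lower()
    for keyword, answer in RESPONSES.items():
        if keyword in text:
            return answer
    return FALLBACK

print(reply("How much vacation do I get?"))
```

Even this toy version shows the design trade-off: the bot answers routine questions instantly, but anything outside its keyword list must be escalated to a human.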

Employers also are using AI to help them evaluate employees for promotion opportunities as well as for termination selection. AI can be used to monitor employees’ work habits, productivity, efficiency, and errors on a real-time basis.  For example, certain AI systems can track employees and identify those who are unproductive.  The algorithm can generate warnings and even suggest possible termination based on an employee’s productivity levels.[8] The algorithm also can highlight the employees with the highest productivity and suggest them for raises or promotions, allowing employers to make more data-driven employment decisions.
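The core of such a monitoring system is a scoring rule against thresholds. The sketch below uses invented names, metrics, and cutoffs; real systems draw on live workplace data and more nuanced models:

```python
# Hypothetical productivity-monitoring sketch (invented data and
# thresholds, for illustration only).
weekly_units = {"alice": 120, "bob": 45, "carol": 150}

WARN_BELOW = 60     # units/week that triggers a warning
PRAISE_ABOVE = 140  # units/week that flags raise/promotion review

for name, units in weekly_units.items():
    if units < WARN_BELOW:
        print(f"{name}: warning generated ({units} units/week)")
    elif units > PRAISE_ABOVE:
        print(f"{name}: flagged for raise/promotion review")
```

Note that everything legally significant happens in the choice of metric and threshold, which is precisely where the litigation risk discussed below arises.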

While AI advocates have identified benefits that AI can provide employers, there also are potential negative consequences that raise serious legal issues with which courts will grapple. AI technologies may create algorithmic bias, or “indirect human bias.” In many cases, the employer must train the AI to look for certain attributes or skills in applicants or employees.  One way to do this is to train the AI on the skills or attributes of successful incumbents.  Consider the following example.  Company A is looking for salespeople, so it trains the AI system to look for applicants who have the skills and attributes of the company’s most successful salespeople.  Company A has never had a female salesperson in its history.  The AI system may not select female candidates because they may not fit the profile that has been generated for a successful salesperson.[9] Another example would be an AI system that has been taught to favor certain zip codes because of their proximity to the office, on the theory that people who live closer to the office may have lower absenteeism rates.  Use of that system could constitute disparate impact racial discrimination if the selected zip codes are predominately white neighborhoods.[10]
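Disparate-impact claims of the kind raised by the zip-code example are often screened using the EEOC’s “four-fifths rule” from the Uniform Guidelines on Employee Selection Procedures: a selection rate for any group that is less than 80% of the rate for the highest-selected group is generally regarded as evidence of adverse impact. A minimal sketch, with hypothetical applicant numbers:

```python
# Adverse-impact check using the EEOC "four-fifths" rule of thumb.
# Applicant and selection counts are hypothetical.
applicants = {"group_a": 100, "group_b": 100}
selected = {"group_a": 48, "group_b": 30}

rates = {g: selected[g] / applicants[g] for g in applicants}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest  # compare to the highest group's rate
    flag = "possible adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f}, ratio={ratio:.2f} -> {flag}")
```

Here group_b’s ratio is 0.30/0.48, about 0.63, well under the four-fifths threshold. The rule is a rough evidentiary screen, not a legal conclusion, but it shows how a facially neutral AI filter can be audited numerically.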

In 2017, Amazon identified an algorithmic bias issue.  Amazon had been using an algorithm to compare the resumes of applicants to those of successful employees over the preceding ten-year period. Most of those successful employees were white males.[11] The algorithm unintentionally was taught to favor men over women. It learned to prioritize words most commonly used by or about males and began penalizing resumes containing the word “women’s” and candidates who graduated from all-female colleges.[12]  Amazon caught the issue fairly quickly, but it is a real-life example of how an algorithm that appears to be neutrally coded can still exhibit bias.
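The mechanism behind this kind of bias can be shown with a toy sketch. The resumes below are entirely invented, and the scoring rule is far simpler than any real model, but the failure mode is the same: when the positive training examples skew toward one group, words correlated with the other group absorb a negative weight.

```python
from collections import Counter

# Toy training data (invented): the "hired" resumes skew male-coded.
hired = [
    "captain chess club software engineer",
    "software engineer chess club",
    "software engineer systems",
]
rejected = [
    "women's chess club software engineer",
    "software engineer women's coding society",
]

def word_counts(docs):
    counts = Counter()
    for doc in docs:
        counts.update(doc.split())
    return counts

pos, neg = word_counts(hired), word_counts(rejected)
vocab = set(pos) | set(neg)

# Naive association score: occurrences among hired minus occurrences
# among rejected. A real model is more complex, but the same dynamic
# applies: words correlated with past outcomes carry the past's bias.
scores = {w: pos[w] - neg[w] for w in vocab}
print(scores["women's"])  # negative: the word is penalized
```

Nothing in the code mentions gender, yet the word “women’s” ends up with a negative score, purely because of who was hired in the past.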

A final example involves AI analysis of video interviews. Certain AI technologies claim to analyze an applicant’s facial expressions, tone, gestures, and word choice to evaluate the applicant’s honesty, attitude, positivity, overall sentiment, and language competence. Based on these aspects of the responses, the technology produces an “employability score,” which employers use to decide who may advance in the application process.[13] This could cause a serious issue for applicants with disabilities and even for applicants of different races. “Characteristics such as typical enunciation and speaking at a specific pace are qualities that might correlate with effective [employees].”[14] Therefore, if the facial attributes or mannerisms of applicants with disabilities differ from the norm, the AI is unlikely to highlight their strengths, even when their credentials are well suited to the job, and instead gives those applicants a low score.[15] Although AI developers could try to remove that bias by altering certain variables, people with disabilities are not a “homogeneous group”; every individual may have unique characteristics, so inequalities could still exist.[16] This type of software also may have a discriminatory impact on African Americans, as facial recognition software can analyze emotions differently based on race. In fact, “black faces are read as angrier than white faces, even after controlling for the degree of smile.”[17] Even when a black applicant has an ambiguous facial expression, some software programs read the expression as annoyed or disapproving, which ultimately lowers the applicant’s employability score.[18]

States are considering regulations governing the use of artificial intelligence in employment. A few states, such as New Jersey and Washington, have introduced legislation, but only Illinois has passed a statute.[19] Illinois enacted the Artificial Intelligence Video Interview Act to regulate AI used to evaluate video interviews.[20] The Illinois statute requires employers to notify applicants in advance that AI will be used in the interview.[21] Employers must be transparent and explain how the AI works, and they must receive the interviewee’s consent in advance of the video interview.[22] The video and data gathered from the interview may be shared only with those “whose expertise or technology” is necessary to evaluate the candidate.[23] Additionally, at the applicant’s request, the video and any copies must be destroyed within 30 days.[24]

The statute leaves open questions that lawmakers or judges may need to answer in the future.[25] An important aspect of the statute is the transparency requirement; however, the statute does not spell out how transparent the employer must be with the applicant.[26] Do employers need only to explain the AI’s use in analyzing the applicant’s expressions, or must they explain how the employer uses the data to determine an applicant’s attitude, honesty, and language competence?[27] Because the transparency requirement addresses only how the AI works, employers likely will not be required to explain how they use the data to make hiring decisions.[28] The consent provision ensures that candidates are aware of and consent to the use of AI, but the statute does not specify what happens if an applicant refuses to consent.[29] At that point, may the employer refuse to consider the applicant?[30] Another key question is what responsibility the employer and the AI vendor bear for data extracted from the video interviews.[31] The applicant can have the video destroyed, but the law does not address the employer’s responsibility for data extracted from the video. Is an employer required to destroy the extracted data at an applicant’s request, or is it free to keep the data in its records?[32] Does the destruction provision run afoul of any EEOC requirements to maintain applicant data?  Finally, there is the question of whether vendors are liable for any defects in the software, or whether it is the employer’s responsibility to ensure compliance or seek indemnification.[33]

On the national level, there have been some proposals governing the use of artificial intelligence in employment decisions. In May 2018, the White House announced an initiative prioritizing funding for artificial intelligence and the development of additional uses for the technology.[34] An artificial intelligence bill, the Algorithmic Accountability Act, was introduced in Congress in April 2019.[35] The bill proposed granting the Federal Trade Commission (FTC) authority to create regulations requiring that AI algorithms be checked for “accuracy, fairness, bias, discrimination, privacy, and security.”[36] The bill was referred to the Committee on Energy and Commerce and then to the Subcommittee on Consumer Protection and Commerce.[37]

AI in the employment context also has garnered the attention of the Equal Employment Opportunity Commission, which is looking into two cases of bias involving artificial intelligence.[38] In addition, the Electronic Privacy Information Center (EPIC), a non-profit focused on civil liberties and First Amendment issues involving technology, filed a complaint with the FTC against HireVue, an online video interviewing software company.[39] In the complaint, EPIC accused HireVue of producing biased results because it uses data from people who previously performed well, and the technology “could discriminate against candidates with neurological differences” such as autism.[40]

With the increase in the use of AI, groups such as the UNI Global Union are advocating for policies on the use of AI in employment.[41] Among the group’s key principles is that workers have a right to the data collected about them, so that the data can be corrected or destroyed and, for gig-economy workers such as Uber drivers, made portable.[42] The group also calls for a right to an explanation of the AI: workers should know what data is collected and how it is used in employment decisions.[43] Finally, the group suggests an ethical approach to AI to ensure that it is unbiased, supports fundamental freedoms, and follows a human-in-command approach.[44] These principles could become a guide for government regulators, courts, and employers regarding issues that could arise with the use of AI in employment.

As you can see, AI is a complex technology that has the potential to revolutionize the employment arena.  It also has the potential to form the basis for significant legal issues that will need to be resolved through legislation, regulation, and litigation.  I hope this article has piqued your interest in this fascinating area of the law.


Helen C. Adams has served as a U.S. Magistrate Judge for the Southern District of Iowa since February 13, 2014.  She became Chief U.S. Magistrate Judge in 2017.

Judge Adams received her undergraduate degree in Sociology from the University of Iowa in 1985, and her law degree from the University of Iowa in 1988, with high distinction and Order of the Coif.  She began her legal career as a law clerk to U.S. District Judge Harold D. Vietor in the Southern District of Iowa from 1988 to 1990.

From 1990 to 2009 she practiced with the firm of Dickinson, Mackaman, Tyler & Hagen, where she twice served as President of the firm.  In 2009 she joined the corporate legal department of Pioneer Hi-Bred International (now DuPont Pioneer), an international agricultural company, as associate general counsel.


[1] Quote by Falguni Desai, currently Global Head of Strategy & Transformation, Credit Suisse.  The quote appeared in The Age of Artificial Intelligence in Fin Tech, FORBES, June 30, 2016.

[2] David J. Garraux, From the Jetsons to Reality or Almost, NAT’L L. REV., Sept. 2019.

[3] Bernard Marr, Artificial Intelligence in the Workplace, FORBES, May 2019.

[4] See id.

[5] Amit Gautam, How Artificial Intelligence Can Transform Employee Training in 2020 and Beyond, ELEARNING INDUSTRY (Nov. 29, 2019),


[7] Id.

[8] Joseph O’Keefe et al., AI Regulations and Risks to Employers, BLOOMBERG LAW,

[9] SCIENCEDAILY, A New Artificial Intelligence (AI) tool for detecting Unfair Discrimination (July 10, 2019),

[10] Alexandra George, Thwarting Bias in AI systems, CMU Engineering (December 11, 2018),

[11] Joseph O’Keefe et al., AI Regulations and Risks to Employers, BLOOMBERG LAW,

[12] Id.

[13] Alex Engler, For Some Employment Algorithms, Disability Discrimination by Default, BROOKINGS TECHTANK (Oct. 31, 2019),

[14] Id.

[15] Id.

[16] Id.

[17] In Re HireVue, EPIC Complaint to the F.T.C. (Filed Nov. 6, 2019), (citing Lauren Rhue, Racial Influence on Automated Perceptions of Emotions, U. MD. – ROBERT H. SMITH SCHOOL OF BUSINESS, Nov. 9, 2018)

[18] Id. (Lauren Rhue’s research studied an available data set of professional basketball players’ pictures as analyzed by facial recognition services)

[19] Joseph O’Keefe, Danielle Moss & Tony Martinez, AI Regulation and Risks to Employers, BLOOMBERG LAW, Dec. 2019, reproduced by Proskauer Rose,

[20] Artificial Intelligence Video Interview Act, 2019 Ill. Legis. Serv. P.A. 101-260 (H.B. 2557) (WEST).

[21] Id. at §5(1).

[22] Id. at §5 (2)-(3).

[23] Id. at §10.

[24] Id. at §15.

[25] Matthew Jedreski, Jeffrey S. Bosley, & K.C. Halm, Illinois becomes first state to regulate employers' use of artificial intelligence to evaluate video interviews, Practitioner Insights Commentaries, 2019 PRINDBRF 0206 (2019).

[26] Id.

[27] Id.

[28] Id.

[29] Id.

[30] Id.

[31] Id.

[32] Id.

[33] Id.

[34] The White House Office of Communications, Artificial Intelligence for the American People, 2018 WL 2148534 (May 10, 2018).

[35] Algorithmic Accountability Act of 2019, H.R. 2231, 116th Cong. (2019).

[36] Id. at (2).

[37] Id.

[38] Rebecca Heilweil, Artificial Intelligence Will Help Determine if you get your Next Job, VOX (Dec. 12, 2019, 8:00 A.M.),

[39] In Re HireVue, EPIC Complaint to the F.T.C. (Filed Nov. 6, 2019),

[40] Id. at 7.

[41] Pamela Wolf, Labor News—Global Union Offers Principles for Employee Data Collection and Ethical Artificial Intelligence, 2017 WL 6350252 (Dec 13, 2017).

[42] Id.

[43] Id.

[44] Id.