Real Property Law
The Exclusionary Impact of Artificial Intelligence (‘AI’) in the Workplace
July 2023
By Katherine C. Tower, JD, MBA and Rinat B. Klier-Erlich, JD, MA
For the past few years, there has been much discussion of automating certain tasks, whether out of necessity (to remain competitive and conserve human resources) or to achieve higher goals. When it comes to bias, however, training AI systems on past hiring decisions will merely automate the racial and cultural biases that have previously disadvantaged older workers, people of color, women, people with disabilities, and other groups. In other words, if the data inputs used in design and development are not diverse, the model's output will likely be biased.
AI, the configuration of computer systems to perform tasks that generally require human intelligence, is one of the most exciting technological advances, impacting all sectors of society including but not limited to employment, healthcare, legal services, education, finance, national security, criminal justice, and transportation.
Today, the majority of businesses, including state and local government employers, heavily utilize artificial intelligence, machine learning, algorithms, and other automated systems (“AI Technologies”) to help them streamline various stages of the employment process: hiring the most qualified employees more quickly and efficiently; making workers more productive by monitoring their performance; determining pay or promotions; terminating poor performers; and establishing the terms and conditions of employment. Resume scanners use certain key words to prioritize job applications. Video interviewing software evaluates applicants on the basis of their facial expressions and speech patterns. Testing software assesses prospective candidates' personalities, aptitudes, cognitive skills, or perceived ethnicity based on their scores on a traditional test.
ChatGPT (“Generative Pre-trained Transformer”) is the most recent mind-boggling innovation, launched on November 30, 2022. ChatGPT is a type of artificial intelligence (AI) technology that allows users to communicate naturally with machines. It is designed to mimic human conversation based on chat-style inputs from users. While still in its infancy, ChatGPT has the potential to revolutionize the recruitment industry by providing a more efficient, personalized, and diverse hiring process (unfortunately, it can also be used by applicants to create resumes, writing samples, articles for publication, and the like, based on search criteria and without substantive input from the person performing the search).
But like all new AI products, it carries the same old biases. Algorithmic decision-making tools, such as chatbots, could “screen out” an individual because of a disability. For instance, a chatbot might be programmed with a simple algorithm that rejects all job applicants who, during their communications with the chatbot, indicate that they have significant gaps in their employment history. If a particular applicant's gap in employment was due to a disability, such as time spent undergoing treatment, the chatbot may function to screen out that person because of the disability. Other examples include questions about working hours (which an employer might easily be able to accommodate, if asked) or about job functions that are not crucial to performing the job. Still further, job descriptions are typically based on existing employer-employee relationships, which does not mean that other forms of working relationships could not succeed. Bias can also be introduced in how the algorithm is designed and in how people interpret its outputs. For example, data elements such as income may be associated with certain groups, and protected classes for which the data are insufficient may be prevented from making the cut. Even how job announcements are posted can create bias, as some sites may cater more to certain groups than others. Certain words may also encourage or discourage different groups from applying.
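To make the mechanism concrete, the following is a minimal sketch, in Python, of the kind of screening rule described above. Everything here is hypothetical and invented for illustration; no real vendor's tool is depicted.

```python
from datetime import date

# Hypothetical, simplified screening rule of the kind a chatbot might
# apply: reject any applicant whose employment history contains a gap
# longer than a fixed threshold (an arbitrary cutoff, for illustration).
GAP_THRESHOLD_DAYS = 180

def has_long_gap(employment_periods: list[tuple[date, date]]) -> bool:
    """Return True if any gap between consecutive jobs exceeds the threshold."""
    periods = sorted(employment_periods)
    for (_, prev_end), (next_start, _) in zip(periods, periods[1:]):
        if (next_start - prev_end).days > GAP_THRESHOLD_DAYS:
            return True
    return False

def screen(applicant: dict) -> str:
    # The rule never asks *why* the gap exists, so an applicant whose gap
    # reflects disability-related treatment is rejected on the same basis
    # as anyone else: the disability is screened out before any human
    # ever sees the application.
    return "reject" if has_long_gap(applicant["employment_periods"]) else "advance"

applicant = {
    "employment_periods": [
        (date(2015, 1, 5), date(2018, 6, 30)),
        (date(2019, 9, 1), date(2023, 1, 15)),  # 14-month gap: medical leave
    ],
}
print(screen(applicant))  # "reject" -- the reason for the gap is invisible
```

The point of the sketch is that the discriminatory effect requires no discriminatory intent; it is a mechanical consequence of a facially neutral rule applied without the context a human reviewer could consider.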
In another example, an algorithm could be developed based on resumes from past successful candidates. The algorithm could then be trained to learn word patterns in those resumes to identify a job applicant's suitability for a company. Theoretically, the algorithm would simplify a company's hiring process by identifying individuals whose scanned resumes have attributes comparable to the benchmark resumes, indicating that these top-choice candidates would likely be successful in the company. But the risk of reproducing past discriminatory effects arises when the benchmark resumes used to train the AI are derived from candidates of a predominant gender, age, national origin, race, or other group, and thus might exclude words that are commonly found in resumes of a minority group. Take, for example, word choices typical of an older generation or of an applicant with a different national origin.
Similarly, men are more likely to use assertive words like “leader,” “competitive,” and “dominant.” In contrast, women are more apt to use words like “support,” “understand,” and “interpersonal.” By replicating the gendered ways in which hiring managers judge applicants, the AI may conclude that men are more qualified than their female counterparts based on the active language in their resumes. Women also tend to downplay their skills on resumes, while men frequently exaggerate theirs and include phrases tailored to the position, making their resumes stand out to an algorithm.
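A similarly minimal sketch shows how a model trained on past hires can inherit these word patterns. The word lists and "training" method below are deliberately toy examples, not any vendor's actual approach:

```python
from collections import Counter

# Hypothetical training set: resumes of past "successful" hires at a firm
# whose prior hiring skewed male. The word frequencies the model learns
# therefore reflect that skew, not job-related merit.
past_hire_resumes = [
    "competitive leader drove dominant market growth",
    "dominant performer led competitive sales team",
    "leader of competitive product launches",
]

# "Training": weight each word by how often it appears among past hires.
learned_weights = Counter(
    word for resume in past_hire_resumes for word in resume.split()
)

def score(resume_text: str) -> int:
    """Sum the learned weights of the words appearing in a new resume."""
    return sum(learned_weights[word] for word in resume_text.split())

# Two equally qualified candidates describing the same work differently:
assertive = "competitive leader of a dominant sales team"
relational = "supported and understood clients through interpersonal work"

print(score(assertive))   # high score: vocabulary matches past (male) hires
print(score(relational))  # zero: "support"/"interpersonal" never appeared
```

Real systems use far more sophisticated models, but the failure mode is the same: whatever correlates with membership in the historically hired group is rewarded, and whatever correlates with the historically excluded group is penalized.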
In these cases, the diversity of the applicant pool can be affected even before the employer has a chance to evaluate job candidates. Irrespective of the surge in unconscious bias training and diversity initiatives, this dynamic remains largely unchanged, because machines are only as good as the inputs provided to them.
Reliance on these fast-evolving algorithmic decision-making technologies may consciously or unconsciously lead to unlawful discrimination against older workers, people of color, women, people with disabilities, and other groups of applicants, ultimately harming society as a whole. While we continue to educate employers and decision makers, the problem with algorithms is that they have the appearance of neutrality, since they do not use human judgment. What must be realized, however, is that the historical data fed into algorithms already embodies extensive human judgment. The danger is therefore greater, because the bias is disguised by the neutrality of a machine.
It was nearly six decades ago that Title VII of the Civil Rights Act of 1964 prohibited discrimination on the basis of race, sex, religion, and national origin. A similar provision in the Age Discrimination in Employment Act (ADEA) prohibits ads indicating a preference based on age. Moreover, the Americans with Disabilities Act (ADA) prohibits employers from using tests or selection criteria “that screen out or tend to screen out” individuals with disabilities unless the test or criterion is job-related and consistent with business necessity. Individuals with disabilities who are covered by the ADA have the right to request accommodations during the hiring process—rights that are enforced by the Equal Employment Opportunity Commission (EEOC).
However, unregulated algorithmic screening tools cannot always comply with these mandates. Given the risks of discriminatory outcomes, the growing use of AI tools in the workplace raises a number of legal concerns, and federal, state, and local governments are racing to develop standards to address AI's proliferation in the workplace. In the meantime, employers should take action. Amazon, for example, pioneered the use of AI to improve its hiring process, yet despite its best efforts to keep the tool neutral toward protected groups, it terminated the program in 2018 after discovering bias in the AI's hiring recommendations.
On the federal level, in February 2022 Senator Ron Wyden re-introduced the Algorithmic Accountability Act, which would direct the U.S. Federal Trade Commission to require companies to conduct “impact assessments of automated decision systems and augmented critical decision processes, and for other purposes.” In May 2022, the EEOC and the Department of Justice (DOJ) Civil Rights Division released guidance warning employers that the use of algorithmic screening tools could violate the ADA. The EEOC is stepping up its enforcement efforts regarding AI and machine-learning-driven hiring tools to ensure compliance with federal civil rights laws. In fact, the EEOC filed its first age discrimination lawsuit involving the use of AI Technologies against three integrated companies providing English-language tutoring services to students in China, alleging that they encoded their online recruitment software to automatically reject more than 200 qualified applicants based in the United States: female applicants age 55 or older and male applicants age 60 or older were excluded from potential job opportunities. (EEOC v. iTutorGroup, Inc., et al., Case No. 1:22-cv-02565 (E.D.N.Y.)).
Additionally, a number of state and local legislators have in recent years introduced or passed legislation regulating AI, or established task forces to evaluate the use of AI, in an effort to combat the discriminatory impact of these tools in the workplace.
At the end of 2021, D.C. Attorney General Karl Racine announced an OTI-endorsed bill banning algorithmic bias. New York Representative Yvette Clarke joined U.S. Senators Ron Wyden of Oregon and Cory Booker of New Jersey to introduce the Algorithmic Accountability Act of 2022. The proposal would require businesses using AI to make fundamental decisions related to employment, loans, and even housing applications to conduct “impact assessments” scanning those systems for bias, effectiveness, and other characteristics. It would also require the formation of a public repository at the Federal Trade Commission to track and monitor these systems.
Illinois has passed bills that include substantive limitations on the use of AI. In August 2019, Illinois led the way with one of the country's first AI workplace laws by enacting the Artificial Intelligence Video Interview Act. The Act, which took effect in January 2020, requires that employers using AI video interview technology during the hiring process make certain disclosures as to how the AI works and what types of general characteristics it uses to evaluate applicants, and obtain consent from applicants. The law, as amended in 2022, further requires that employers relying solely on AI to make certain interview decisions maintain records of demographic data, including applicants' race and ethnicity. Employers must submit that data annually to the state, which must analyze it to determine whether there was racial bias in the use of the AI. Employers also may not share applicant videos unnecessarily, and they must delete an applicant's interview within 30 days of the applicant's request. Maryland followed with a similar law in 2020, restricting employers' use of facial recognition services during pre-employment interviews unless the applicant consents. Of note, research conducted in 2020 found that facial-analysis technology performed better on lighter-skinned subjects and on men.
Following Colorado, Illinois enacted the Illinois Future of Work Act in August 2021, creating the Illinois Future of Work Task Force to identify and assess the new and emerging technologies, including artificial intelligence, that affect employment, wages, and skill requirements.
Most notably, New York City passed a local law, effective January 1, 2023, that specifically focuses on regulating AI associated with typical human resources technology. The law prohibits employers from using “automated employment decision tools” to screen candidates or employees for employment decisions unless the tool has undergone a “bias audit” no more than a year prior to its use. Before such a tool is used to screen a candidate or employee for an employment decision, the employer must first notify the individual that the tool will be used, identify the job qualifications and characteristics that the tool will assess, and make publicly available on its website a summary of the bias audit and the distribution date of the tool. The candidate also has the right to request an alternative selection process or accommodation upon notification of use of the tool.
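At the core of such a bias audit is a comparison of selection rates across demographic categories. The following is a minimal sketch of that arithmetic, with invented applicant counts, using the impact ratio together with the EEOC's long-standing "four-fifths rule" as a familiar benchmark (the law prescribes its own audit requirements; this illustrates only the basic calculation):

```python
# Illustrative impact-ratio arithmetic of the kind a bias audit reports:
# each category's selection rate, divided by the highest category's rate.
# All counts below are invented for illustration.
outcomes = {
    "men":   {"selected": 48, "applicants": 120},  # 40% selection rate
    "women": {"selected": 30, "applicants": 110},  # ~27% selection rate
}

rates = {group: d["selected"] / d["applicants"] for group, d in outcomes.items()}
highest = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest
    # Under the EEOC's four-fifths rule, a ratio below 0.8 is treated as
    # preliminary evidence of adverse impact warranting closer scrutiny.
    flag = "potential adverse impact" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f} ({flag})")
```

Here the women's impact ratio is roughly 0.68, well under the 0.8 benchmark, which is exactly the kind of disparity an audit is meant to surface before the tool is deployed.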
While the Illinois and New York City laws focus on notice of the use of AI technology and examination of its impact on hiring, California's draft regulations would go further. They specify that companies or third-party agencies using or selling services with AI, supervised machine learning, or an automated decision system (ADS) that screens, evaluates, categorizes, recommends, or otherwise makes a decision or facilitates human decision making affecting potential job applicants could face liability under state anti-discrimination laws, regardless of discriminatory intent, unless the “selection criteria” used “are shown to be job-related for the position in question and are consistent with business necessity.”
The draft regulations would establish specific restrictions on hiring practices such as pre-employment inquiries, applications, interviews, selection devices, and background checks. Further, the proposed regulations would expand employers' recordkeeping requirements by requiring them to include machine-learning data as part of their records and, for employers or agencies using an ADS, to retain records of the assessment criteria used by the ADS. At publication, the regulations are in the pre-rulemaking phase.
Pending legislation in the District of Columbia, the Stop Discrimination by Algorithms Act, goes a step further than California by permitting a private right of action for individual plaintiffs, including potential punitive damages and attorney's fees. If enacted, this legislation would bar covered entities from making an algorithmic eligibility determination on the basis of an individual's or class of individuals' actual or perceived race, color, religion, national origin, sex, gender identity or expression, sexual orientation, familial status, source of income, or disability in a manner that segregates, discriminates against, or otherwise makes important employment opportunities unavailable to an individual or class of individuals.
To avoid inadvertently encoding past intentional or unintentional human biases, it will often be necessary for the program designers who build AI systems, and the businesses that use them, to take action to counter discriminatory effects that might otherwise occur, thereby creating AI systems that embrace the full spectrum of inclusion. The key is to deliberately include a broader sampling of women, minorities, and other diverse individuals in the design, development, deployment, and governance of AI. Of note, discrimination in the workplace is unlawful and carries legal consequences for the employer, even when technology automates the discrimination.
Takeaways
- Employers should exercise caution when implementing hiring practices involving AI technologies by taking steps to evaluate and mitigate any potential discriminatory impact of these tools, including by investigating whether the technology can pass a “bias audit” conducted by an independent auditor.
- For compliance purposes, employers should closely monitor and stay abreast of developments in federal laws and guidance, as well as state and local laws that may in the future impact the legality of AI technological tools in the workplace.
- For transparency, employers should inform candidates in readily understood terms what the evaluation entails by explaining the knowledge, skill, ability, education, experience, quality, or trait that will be measured with the AI tool. Employers should similarly describe how testing will be conducted and what it will require, such as verbally answering questions, interacting with a chatbot, and the like.
- Employers should invite job applicants to request accommodations ahead of time if they feel a disability accommodation is needed.
Katherine C. Tower is Deputy General Counsel-Litigation for the Illinois State Lottery Department. Previously, she was a professional liability attorney for over 20 years. She has been a member of DRI for over ten years, a member of the ABA, member of the ISBA, a former member of the Illinois State Bar Association Judicial Committee, former member of CLM and a former board member of the Minority In-House Counsel Association.
Rinat B. Klier Erlich is a founding partner and heads the Los Angeles office of Zelms Erlich & Mack. For over two decades, Ms. Erlich has concentrated her practice in the area of professional liability litigation, defending various professionals through trials, arbitrations, and appeals. Ms. Erlich is a former board member of the California Lawyers Association, an Advisor to the Real Property Executive Committee, and a delegate to the American Bar Association House of Delegates. She is also a California Association of Realtors Legal Forum member and an active member of various industry organizations, including the Professional Liability Underwriters Society, Defense Research Institute, Claim Litigation Management Alliance, and ClaimsXchange.