Employers across the country have become increasingly reliant on artificial intelligence (AI) to manage many aspects of employment, including hiring. It is not uncommon for larger companies to rely on AI throughout the hiring process — from tools such as resume scanners, virtual assistants, and testing software that measures aptitude, cognitive skills, and personality, to systems that select which candidate to hire.
In a quest for enhanced efficiency in employment decisions, employers have embraced AI. Recently, however, evidence has grown that AI can replicate human biases, putting companies at risk for discrimination lawsuits.
Algorithms can absorb the biases underlying past employment practices or the social conventions embedded in their training data and code, which can result in discrimination based on race, sex, age, national origin, color, disability, and other classes protected by law.
In fact, in 2023, a groundbreaking class action lawsuit was filed alleging employment discrimination based on an employer's use of AI hiring tools. The lead plaintiff was unable to land a job through an employment agency despite submitting numerous — 80 to 100 — applications listing his advanced education and solid work history, leading him to conclude that the company must be discriminating by offering employers applicant-screening tools powered by biased AI algorithms. The case underscores the potential for employment discrimination when companies use AI to streamline the hiring process.
The use of AI in hiring has gained attention at the local, state, and federal levels. Some states have proposed prohibiting the use of certain automated employment decision tools unless employers take affirmative steps to screen the technology, including running a bias audit. Recently, the Equal Employment Opportunity Commission (EEOC) issued guidance on how employers' use of AI can comply with the Americans with Disabilities Act (ADA) and Title VII.
In 2023, the EEOC issued guidance to assist employers in "determining if their tests and selection procedures are lawful for purposes of Title VII disparate impact analysis." Disparate impact discrimination occurs when a facially neutral policy or practice has the effect of disproportionately excluding persons based on a protected status. It is important to note that even if an outside vendor develops the tests or procedures, the employer may be responsible under Title VII if discrimination occurs.
With regard to the ADA, the EEOC concluded that an employer who administers a pre-employment test may be responsible for ADA discrimination if the test discriminates against individuals with disabilities, even if the test was developed by an outside vendor. Employers are advised to take necessary steps to eliminate discrimination against candidates with disabilities when using an AI hiring tool, such as offering alternative formats or tests and providing pertinent information to applicants with disabilities.
Employers increasingly rely on artificial intelligence (AI) to manage many facets of the employment cycle, such as hiring, performance evaluations, and termination. Unfortunately, AI as a tool for employment decisions is not free of biases that can lead to employment discrimination. If you believe that you have been discriminated against when seeking employment due to AI screening, or have suffered an adverse employment action, including termination, based on discrimination related to an AI algorithm, it is important to seek help. Contact the employment law office of Alan C Olson & Associates for immediate assistance at 262-785-9606.