Prevent your AI from causing unintentional disability discrimination
AI discrimination is still a real concern HR leaders shouldn’t ignore. One specific type of discrimination AI can create is disability discrimination. Learn how this can happen and how to prevent it.
Published: August 31, 2022 | by Robert S. Teachout, Legal Editor at Brightmine
Artificial intelligence (AI) is transforming HR and the workplace by improving organizations’ ability to make data-based human resource decisions. These technologies can positively impact an organization’s work, workplace and workforce: they can streamline talent acquisition with smart forms, shape curated and personalized onboarding experiences, facilitate collaboration and even make the workplace more democratic and accessible.
When implemented with foresight and care, AI also has the potential to improve compliance and risk management by mitigating discrimination in hiring and promotions. But without proper preparation and precautions, an employer runs the risk of its AI unintentionally screening out qualified candidates for discriminatory reasons, including applicants with disabilities.
The problems in an AI system may arise from inadequate testing, programming bias or the AI tool learning bias from the data of past employer decisions provided for analysis. To help employers recognize the risk of bias and take steps to mitigate it, the Equal Employment Opportunity Commission (EEOC) released new guidance on the Americans with Disabilities Act (ADA) and the use of AI.
Algorithmic tools are often designed to predict whether an applicant can do a job under typical working conditions, or to look for a profile that fits current, successful employees, explained Sharon Rennert, a senior attorney advisor in the EEOC’s ADA/GINA Division, during a Brightmine (formerly XpertHR) webinar.
“Typical working conditions do not include the use of reasonable accommodation,” Rennert reminded attendees, “and a typical profile does not reflect the differences between a successful non-disabled employee and a person with a disability who can be successful if provided with a reasonable accommodation.”
An unlawful “screen-out” may occur when an AI tool prevents a job applicant or employee from meeting (or lowers their performance on) a selection criterion, resulting in that individual losing a job opportunity. Take, for instance, a situation in which a chatbot program automatically screens out anyone who has a significant employment gap. Rennert asked, “What if the applicant had a six-month gap due to a mental health condition 20 years earlier, but has since had over 19-and-a-half years of uninterrupted service? In reality, this employee with a disability is very qualified.” Although the criterion is neutral on its face, in practice the AI has caused an unlawful discriminatory hiring decision.
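To make the screen-out concrete, here is a minimal sketch of such a facially neutral gap filter. Everything in it (the rule, the threshold and the applicant data) is hypothetical and invented for illustration, not drawn from any actual vendor’s product:

```python
# Hypothetical "employment gap" screen, similar in spirit to the chatbot
# rule described above. The rule and threshold are assumptions for
# illustration only.
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    longest_gap_months: int   # longest employment gap on the resume
    years_since_gap: float    # how long ago that gap ended

SIGNIFICANT_GAP_MONTHS = 6    # assumed cutoff

def passes_gap_screen(a: Applicant) -> bool:
    # Facially neutral: the same rule applies to every applicant,
    # with no regard to why the gap occurred or how long ago.
    return a.longest_gap_months < SIGNIFICANT_GAP_MONTHS

# Rennert's hypothetical: a six-month gap caused by a mental health
# condition 20 years ago, followed by 19.5 years of uninterrupted work.
candidate = Applicant("qualified applicant", longest_gap_months=6,
                      years_since_gap=19.5)
print(passes_gap_screen(candidate))  # False: screened out anyway
```

Because the rule never asks why the gap occurred or how long ago it ended, it cannot distinguish this highly qualified applicant from one whose gap is recent and job-related, which is exactly the screen-out risk the EEOC guidance describes.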
There is also the risk of AI learning bias from patterns in the data provided to train the system, patterns that can unintentionally disadvantage individuals on the basis of many characteristics. For example, Amazon developed an automated talent search program to review resumes and vet applicants. Screening was based on patterns the AI learned from the resumes of successful candidates submitted to the company over a 10-year period. Amazon had to quickly halt the program after it became apparent that the system’s hiring recommendations were biased against women. It turned out that the sample of resumes used to train the system was mostly from men, reflecting the male dominance of the tech industry.
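The dynamic behind the Amazon example can be shown with a toy scorer. This sketch is purely illustrative (the resumes are invented and this is not Amazon’s actual system): it weights each resume term by how often that term appeared among past hires, so any skew in the hiring history flows straight into the scores:

```python
# Toy resume scorer: weights terms by frequency among past hires.
# A skewed history produces skewed scores. All data is invented.
from collections import Counter

past_hires = [  # mostly male-coded resumes, mirroring the skewed sample
    "software engineer men's rugby club python",
    "backend developer python linux",
    "systems engineer c++ men's chess team",
    "software developer java python",
]

term_weights = Counter(term for resume in past_hires
                       for term in resume.split())

def score(resume: str) -> int:
    # Terms absent from the (skewed) history contribute nothing.
    return sum(term_weights[term] for term in resume.split())

# Two equally qualified resumes; only the club affiliation differs.
print(score("software engineer python women's chess club"))  # 9
print(score("software engineer python men's chess club"))    # 11
```

No one programmed the scorer to prefer men; it simply learned that male-coded terms co-occur with past hires. The same mechanism can quietly penalize terms associated with disability, such as a gap, an accommodation or an affiliation, if those terms were rare in the training data.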
To minimize disadvantaging persons with disabilities when using an AI decision-making tool, Rennert recommended:
- Using tools that have been pre-tested on individuals with a wide range of disabilities (not just one or two), including those with mental health disabilities.
- Ensuring that the decision-making tool only measures abilities and skills that are truly necessary for the specific position.
- Measuring the necessary abilities and skills directly, rather than indirectly by way of characteristics that merely correlate with successful performance.
When AI screens out a person with a disability
If a decision-making tool screens out a person based on a disability, an employer must justify the need to apply it to this individual:
- Is the application of this tool to this person job-related and consistent with business necessity? (29 C.F.R. § 1630.10(a).)
- Does the requirement being measured relate to an essential function of the position? If not, the employer can’t apply the disqualifying measurement to this person.
- Does the requirement directly measure the ability to perform the essential function, or does it measure performance indirectly?
Employers also should minimize the risk of unintentional disability discrimination by taking proactive steps to provide reasonable accommodations. An employer should clearly and prominently inform all individuals being rated that reasonable accommodations are available for individuals with disabilities, and provide clear and accessible instructions on how to request one. Consider including such a notice in the job profiles posted to internal and external job boards.
In addition, Rennert said, an employer should notify applicants early in the recruiting process that an AI tool will be used in application screening and hiring decisions. Rennert advocates placing the notice near the beginning of the job application form, in large or bold letters, not buried in small print near the end.
The information should describe in plain language (and accessible formats):
- Traits the algorithm is designed to assess.
- The assessment method.
- Variables or factors that may affect the rating.
- If known, disabilities that might potentially cause a lower rating.
“Providing this information may help individuals with disabilities determine whether they need to request a reasonable accommodation,” Rennert said.
Finally, Rennert explained, it is important for employers to remember that they are accountable for the hiring decisions they make using AI tools, even when they use a third-party provider. When purchasing and implementing a new AI system, HR professionals are responsible for questioning vendors about the algorithms (how they actually work and what they measure) and for continually monitoring and evaluating the results.
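As part of that monitoring, an employer can periodically compare the tool’s selection rates across groups. The sketch below applies the four-fifths guideline from the EEOC’s Uniform Guidelines on Employee Selection Procedures as a coarse warning signal; the counts are hypothetical, and because the ADA screen-out analysis is individualized, an aggregate check like this supplements, never replaces, case-by-case review:

```python
# Coarse monitoring sketch: compare the AI tool's selection rates across
# groups using the four-fifths guideline. All counts are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants if applicants else 0.0

def impact_ratios(groups: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Map each group's selection rate to a fraction of the highest rate."""
    rates = {g: selection_rate(s, a) for g, (s, a) in groups.items()}
    top = max(rates.values(), default=0.0)
    return {g: (r / top if top else 0.0) for g, r in rates.items()}

results = impact_ratios({
    "disclosed disability": (6, 40),       # 15% selected
    "no disclosed disability": (45, 150),  # 30% selected
})
for group, ratio in results.items():
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

Here the group that disclosed a disability is selected at half the rate of the highest group, which falls below the four-fifths (0.8) threshold and should trigger a closer look at what the tool is actually measuring.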
AI excels at recognizing patterns in data that can help an organization better understand what job candidates have to offer. By making diligent preparations and taking the necessary precautions in its use of AI tools, an organization can make less biased and more effective recruiting and hiring decisions.
About the author
Robert S. Teachout, SHRM-SCP
Legal Editor, Brightmine
Robert Teachout has more than 30 years’ experience in legal publishing covering employment laws on the state and federal level. At Brightmine, he covers labor relations, performance appraisals and promotions, succession and workforce planning, HR professional development and employment contracts. He often writes on the intersection of compliance with HR strategy and practice.
Before joining Brightmine, Robert was a senior HR editor at Thompson Information Services, covering FMLA, ADA, EEO issues and federal and state leave laws. Prior to that he was the primary editor of Bloomberg BNA’s State Labor Laws binders and was the principal writer and editor of the State Wage Assignment and Garnishment Handbook. Robert also served as a union unit leader and shop steward in the Washington-Baltimore Newspaper Guild of the Communications Workers of America. Actively involved in the HR profession, Robert is a member of SHRM at both the national and local levels, and gives back to the profession by serving as the communications vice president on the board of his local chapter.