Avoiding compliance pitfalls in the evolving AI legal landscape
AI compliance risks in HR are on the rise as more states pass laws regulating this evolving technology. Learn how to reduce these risks and adopt AI responsibly.
Published: June 4, 2024 | by Emily Scace, Senior Legal Editor at Brightmine
A new Colorado law taking effect in 2026 will require employers that use artificial intelligence (AI) to make “consequential decisions” to take steps to prevent and mitigate known and reasonably foreseeable risks of algorithmic discrimination. But what is algorithmic discrimination, and how can organizations guard against it when implementing AI systems to aid in recruiting, hiring and other employment decisions?
As defined in the Colorado law, algorithmic discrimination means any condition in which the use of an AI system disfavors an individual or group on the basis of their actual or perceived protected characteristics. For example, if an AI system evaluating job applications assigns a lower score to candidates over age 40 — or to those who attended Historically Black Colleges and Universities (HBCUs) — the system is very likely to be engaging in algorithmic discrimination on the basis of age and race, respectively.
While Colorado is the first state to enact a comprehensive law addressing discrimination risks within AI systems, New York City implemented a law in 2023 addressing many of the same issues. New York City employers must ensure that any “automated employment decision tool” they use has undergone a bias audit and communicate certain information about the tool to candidates who will be subject to its screening.
And more laws are likely on the horizon. While no federal law directly addresses the issue yet, guidance from the Department of Labor (DOL), Equal Employment Opportunity Commission (EEOC) and other agencies provides insight into potential enforcement trends and future priorities. At the state level, proposed legislation regulating employment-related uses of AI is pending in California, New Jersey, Maryland, New York State and a number of other jurisdictions. Because the proposals vary in scope and approach, the proliferation of these laws will add substantial complexity for employers using AI tools, particularly multistate employers.
Mitigating the risk
In many ways, algorithmic discrimination is just a new flavor of an old problem: an employer making a decision regarding an applicant or employee on the basis of a protected characteristic (e.g., race, religion, sex, national origin, age) rather than for a nondiscriminatory business reason. Algorithmic discrimination builds these biases into an AI or machine learning decision-making model, often unintentionally, but with the same result: a disadvantage for some individuals for an unlawful reason.
While employers may be tempted to sidestep the issue by swearing off AI tools altogether, that may not be the best course of action. The risk of algorithmic discrimination is real, but AI tools can provide many benefits, and some can even help mitigate discrimination risk. Human judgment itself is a frequent source of bias that can lead to unlawful discrimination, and using AI to augment or supplement human decision-making can result in fairer, more objective decisions.
Key steps for responsibly choosing and implementing AI tools include the following:
Ask questions
Because the algorithms and models underlying AI tools are often proprietary, users may be unable to examine them directly. However, it is a mistake to accept a tool provider’s marketing claims without performing due diligence. Employers that engage in discriminatory conduct are responsible for their actions, and a defense of “the AI made me do it” is unlikely to hold up in court. Questions to ask include whether the provider of an AI tool has conducted bias audits and what the results were, how the provider safeguards personal data and other sensitive information, and what criteria the tool uses to perform its functions.
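To make a vendor's bias-audit results concrete, the sketch below shows, using hypothetical numbers, how an impact ratio is commonly computed: each group's selection rate divided by the most-selected group's rate, with the EEOC's four-fifths rule of thumb as a rough screening threshold. The group names and rates are illustrative assumptions, not figures from any real audit.

```python
# Minimal sketch (hypothetical data): reading a vendor's reported
# selection rates and computing impact ratios, in the style of a
# bias audit. Group names and rates are invented for illustration.

selection_rates = {
    "Group A": 0.42,   # share of Group A applicants the tool advanced
    "Group B": 0.31,
    "Group C": 0.24,
}

# Impact ratio: each group's selection rate divided by the highest
# group's selection rate (1.0 means parity with the top group).
top_rate = max(selection_rates.values())
for group, rate in selection_rates.items():
    ratio = rate / top_rate
    flag = "  <-- review" if ratio < 0.8 else ""  # four-fifths rule of thumb
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f}{flag}")
```

Keep in mind that the four-fifths threshold is an enforcement rule of thumb, not a legal safe harbor: a ratio above 0.8 does not guarantee compliance, and one below it does not automatically establish discrimination.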
Track key metrics before and after implementing an AI tool
For example, if AI-based software evaluates resumes for how closely they match the desired qualifications for a role, compare the demographics (obtained through voluntary self-disclosure) of candidates selected for phone screens or interviews before and after implementing the software. Are more women or people of color being invited to interview after implementing the tool, or is the opposite occurring? One data point is not enough to establish causation, but any patterns that appear to fall along demographic lines should prompt further examination.
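As a rough illustration of this kind of before-and-after comparison, here is a minimal sketch using hypothetical applicant counts; the group labels and numbers are invented for the example.

```python
# Minimal sketch (hypothetical data): comparing interview-invitation
# rates by self-reported demographic group before and after adopting
# an AI resume-screening tool.

before = {"women": (120, 30), "men": (200, 60)}   # (applicants, invited)
after  = {"women": (130, 22), "men": (210, 70)}

def rates(counts):
    """Invitation rate per group: invited / applicants."""
    return {g: invited / applicants for g, (applicants, invited) in counts.items()}

r_before, r_after = rates(before), rates(after)
for group in before:
    print(f"{group}: {r_before[group]:.0%} invited before, "
          f"{r_after[group]:.0%} after")

# A gap that widens along demographic lines after rollout is not proof
# of discrimination, but it should trigger a closer look at the tool.
```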
Use extra caution with video technology and facial recognition
An Illinois law in place since 2020 requires employers using AI to evaluate applicant-submitted video interviews to take certain compliance steps to guard against discrimination and ensure candidate privacy. And a Maryland law prohibits the use of facial recognition during interviews without the applicant’s consent. With the growing concern over generative AI and deepfake videos, employers should take extra precautions with these types of tools.
Use AI alongside human judgment, not as a substitute for it
Both humans and algorithms can be influenced by bias, but combining human intelligence with artificial intelligence offers a better chance to rein in and counteract the blind spots of each. Examples include using an AI tool to check a human-written job posting for biased language or to flag potentially problematic patterns in performance evaluations.
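As a simplified stand-in for such a tool, the sketch below flags a few commonly cited problem phrases in a job posting using a plain wordlist. A real AI-based checker would be far more sophisticated; the term list here is an illustrative assumption, not a vetted standard.

```python
import re

# Minimal sketch: a crude wordlist check for potentially biased language
# in a job posting. The terms and notes below are hypothetical examples.

FLAGGED_TERMS = {
    "rockstar": "may read as masculine-coded",
    "ninja": "may read as masculine-coded",
    "digital native": "may imply an age preference",
    "recent graduate": "may imply an age preference",
}

def review_posting(text: str) -> list[str]:
    """Return a note for each flagged phrase found in the posting."""
    findings = []
    for term, note in FLAGGED_TERMS.items():
        if re.search(rf"\b{re.escape(term)}\b", text, re.IGNORECASE):
            findings.append(f"'{term}': {note}")
    return findings

posting = "We need a rockstar developer, ideally a recent graduate."
for finding in review_posting(posting):
    print(finding)  # a human still decides whether and how to rewrite
```

The point of the design is the division of labor: the tool surfaces candidates for review, and a person makes the final call on the language.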
Monitor legal developments
Developments in AI are moving at a rapid pace, and the law is racing to catch up. Keep an eye on federal, state and local developments that either place limits on the use of AI for employment purposes or require compliance steps like impact assessments, bias audits and disclosure to employees and applicants.
About the author
Emily Scace, JD
Senior Legal Editor, Brightmine
Emily Scace has more than a decade of experience in legal publishing. As a member of the Brightmine editorial team, she covers topics including employment discrimination and harassment, pay equity, pay transparency and recruiting and hiring.
Emily holds a Juris Doctor from the University of Connecticut School of Law and a Bachelor of Arts in English and psychology from Northwestern University. Prior to joining Brightmine, she was a senior content specialist at Simplify Compliance. In that role, she covered a variety of workplace health and safety topics, was the editor of the OSHA Compliance Advisor newsletter, and frequently delivered webinars on key issues in workplace safety.