Feeding your AI with employee data? Here’s food for thought
Many organizations are training AI with employee data. If you're one of them, you must consider employee data privacy. This resource lays out important data privacy considerations for HR leaders.
Published: March 12, 2024 | by Helena Oroz, Senior Legal Editor at Brightmine
There’s no such thing as a free lunch, right?
Organizations with the means to train their own artificial intelligence (AI) models with employee data might disagree. AI models are insatiable data consumers, and employee data is plentiful, renewable — and free.
Some employers have been quick to warn their employees against feeding publicly available generative AI applications with confidential company information. But employers using employee data to train AI models might be ignoring the privacy implications of their own activities.
In this resource:
- Big appetites
- Few rules
- Employee data, employee concerns
- Real smarts about AI
- Conclusion
Big appetites
In basic terms, an artificial intelligence (AI) model is a program, trained on a set of data, that can autonomously make decisions or predictions. Without getting technical, the quality of the training data set matters enormously, and training an AI model requires a lot of data.
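To make that concrete, here is a minimal sketch of what "training a model on a set of data" can look like in code. It assumes scikit-learn, and the tiny attrition dataset and feature names are invented for illustration, not drawn from this resource:

```python
# A minimal sketch of "training a model on a set of data."
# Assumes scikit-learn; the tiny dataset below is invented for illustration.
from sklearn.linear_model import LogisticRegression

# Each row is one hypothetical employee: [tenure in years, overtime hours per month]
X = [
    [1, 30],
    [2, 25],
    [8, 5],
    [10, 2],
    [3, 20],
    [7, 8],
]
# The label the model learns to predict: 1 = left the company, 0 = stayed
y = [1, 1, 0, 0, 1, 0]

model = LogisticRegression()
model.fit(X, y)  # "training": the model learns patterns from the labeled data

# Once trained, the model can make predictions about employees it has never seen
print(model.predict([[2, 28]]))  # e.g., [1] -> predicted likely to leave
```

Real-world models are vastly larger and more complex, but the shape is the same: the quality and quantity of the training data drive the quality of the predictions – which is exactly why employee data is such an attractive training source.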
There are many takes, hot and otherwise, on whether data is “the new oil” (just do a quick Google search on that phrase). As the Federal Trade Commission’s Technology Blog recently put it, the comparison comes from looking at data as a raw material mined in massive quantities but useful only when refined; and “there is perhaps no data refinery as large-capacity and as data-hungry as AI.”
Few rules
Employee data privacy protection in the United States is limited. A national comprehensive data protection law does not exist, and most experts say such a law is not realistically on the horizon.
Some federal laws provide limited protection for certain employee information, including:
Americans with Disabilities Act
The Americans with Disabilities Act (ADA), which requires a covered employer to restrict access to employee medical information and maintain it in a confidential manner.
Health Insurance Portability and Accountability Act
The Health Insurance Portability and Accountability Act (HIPAA), which restricts how a “covered entity” may use and disclose protected health information. An employer must be acting as a covered entity (i.e., a health plan or covered health care provider) for HIPAA to apply, though. When an employer reviews medical information in its capacity as an employer, the medical information is considered an employment record under HIPAA and excluded from coverage.
Genetic Information Nondiscrimination Act
The Genetic Information Nondiscrimination Act (GINA), which restricts an employer from acquiring or sharing employee genetic information and family medical history.
Individual states have tried to fill the federal privacy void in various ways, resulting in a web of protections that includes, but is definitely not limited to:
Biometrics laws
A handful of states (California, Illinois, Maryland, New York, and Oregon among them) have passed laws that address employer use of finger scans, retinal scans, or other biological/physical characteristics to identify employees.
Electronic surveillance
Several states have laws on the books regarding workplace monitoring. These laws generally require an employer to provide employees with notice regarding monitoring activities on employer-provided equipment and networks.
Consumer data protection
As of this writing, fourteen states have comprehensive data protection laws on the books (not all of which are effective yet). These laws give consumers in those states a certain amount of notice and control with respect to their personal data – but only the California Consumer Privacy Act covers employment-related information.
Even the strongest state laws of any variety do not, at this time, outright prohibit an employer from training an AI model (its own or a “model-as-a-service”) with employee data – so long as it is otherwise operating within the bounds of applicable laws in its jurisdiction(s). But therein lies the rub: there are some rules. As just about every federal agency that has anything to do with workplace-related AI issues has stated quite plainly: there is no AI exemption from existing law.
And the laws and regs will keep coming. Many AI-related bills are currently under consideration in state legislatures across the country, several of which propose restrictions on employer collection and disclosure of employee personal information. Employers rushing to get in on the AI feeding frenzy before considering their current privacy obligations, and without keeping an eye on their potential future ones, are betting on unknown odds in an unfamiliar game.
Employee data, employee concerns
Many businesses, even those not known as behemoths, have tons of data at the ready. Employers with any sizable workforce have at least two buckets of information at their disposal: information that can be gleaned from the daily work that their employees perform, and personal data about those very same employees.
Why employee data? It’s plentiful, it’s free, and employers already own it. And considering the dearth of regulation in this particular area, it’s not surprising that employers like employee data as an AI training source.
But there are other reasons that employers are feeding AI with employee data:
- Employers want to reap the full benefits of employee work. But they may struggle to “maximize efficiencies” – i.e., create processes to avoid constant wheel reinvention. AI tools can help with these challenges, and in fact, much of the AI-in-the-workplace narrative has focused on such time and cost savings to employers.
- Employers want AI to act like a collective brain. Employees leave, projects end, communications are deleted or archived, and the human intelligence behind it all is lost or unusable to the organization. AI can theoretically solve that problem.
- Employers want tailored AI solutions. Publicly available AI models are trained in large part on publicly-available information, which can’t give employers what they want in terms of solutions based on internal, proprietary information.
The flip side of employer AI excitement is employee AI anxiety. Employees are already anxious about AI technology replacing them. But feeling that their employer is actively using their hard work to train the tech that will replace them takes that anxiety to the next level.
Employees are also concerned about their privacy, although most don’t understand existing rules (or the lack thereof). According to recent studies from the Pew Research Center:
- A majority of US adults oppose using AI to track workers' movements on the job or record exactly what people are doing on their work computers.
- Only twenty-three percent of US adults know that the US lacks a national privacy law that sets common standards for companies that collect data through their products and services.
- Seventy-two percent of US adults said they have little to no understanding about the laws and regulations that are currently in place to protect their data privacy.
Employers who do not want uninformed employees to create their own narratives should be up front about why and how they are using AI technology, what role employee data plays in that process, and how they are complying with applicable law. Employees may still have fears, but at least they will have the facts, too. Employers who do not embrace AI transparency may feel the familiar consequences of poor people decisions: decreased morale, low engagement, and ultimately turnover. If an employer can’t keep its employees around, it can’t keep training its AI either.
Real smarts about AI
Employers scraping employee data for AI training purposes should proceed in a thoughtful, measured and, most importantly, informed manner. So, before feeding your AI, consider the following food for thought:
An employer must have a baseline understanding of the differences between AI service offerings. An open, publicly available AI application (think OpenAI's ChatGPT) might seem like a great option and a suitable tool, but sharing sensitive employee or company information with it could have serious privacy consequences, as those applications may use the information that is shared with them to train their models.
Sharing any sensitive information with any third party should be carefully considered from all angles. An employer should know and understand the terms under which any third party will use its data and never assume that any particular service offering is secure, compliant or trustworthy. Employers must do their homework and read all the fine print to ensure that their data remains secure.
This is not a DIY situation. An employer interested in AI applications should seek expert consultation on how best to train AI with company data.
In addition to existing employment and privacy laws, employers should not forget about workplace surveillance laws when monitoring employee work for AI training purposes. And even where such laws do not exist, best practice is to always obtain employees' informed consent before engaging in any workplace monitoring.
Conclusion
Employers currently have a tremendous amount of freedom in terms of how they can use most employee information. But smart employers understand that the law on AI in the workplace won’t always be outpaced by the technology – and they aren’t willing to pay the price for uninformed decisions that negatively impact their employees.
About the author
Helena Oroz, JD
Senior Legal Editor, Brightmine
Helena Oroz is an attorney with 19 years of employment law counseling and litigation experience. She joined Brightmine in 2022 as a legal editor covering various leave topics such as paid sick leave and FMLA, as well as disability, multistate employer issues and other emerging HR trends.
Helena earned a Bachelor of Arts in communication and English from Denison University and a Juris Doctor from Cleveland-Marshall College of Law, Cleveland State University.
Before joining Brightmine, Helena worked as a law firm associate counseling and defending employers with respect to a wide variety of employment law issues, including leave management, restrictive covenants, harassment and discrimination, the Fair Credit Reporting Act, and COVID-related compliance issues. She also wrote and edited legal content concerning critical employment law issues. Helena also previously worked as in-house Employment Counsel for a global consumer goods company.
Connect with Helena on LinkedIn.