If there is one common thread throughout recent research about AI at work, it’s that there is no definitive take on how people are using the technology and how they feel about the imperative to do so. Large language models can be used to draft policies, generative AI can be used for image creation, and machine learning can be used for predictive analytics, Ines Bahr, a senior Capterra analyst who specializes in HR industry trends, told HR Dive via email.
Still, there’s a lack of clarity around which tools should be used and when, because of the broad range of applications on the market, Bahr said. Organizations have implemented these tools, but “usage policies are often confusing to employees,” she added, which leads to unsanctioned, though not always malicious, use of certain tech tools.
The result can be unethical, or even potentially illegal, conduct: AI use can create data privacy concerns, run afoul of state and local laws and give rise to claims of identity-based discrimination.
Compliance and culture go hand in hand
While AI ethics policies largely address compliance, culture can be an equally important component. If employers can explain the reasoning behind AI rules, “employees feel empowered by AI rather than threatened,” Bahr said.
“By guaranteeing human oversight and communicating that AI is a tool to assist workers, not replace, a company creates an environment where employees not only use AI compliantly but also responsibly,” Bahr added.
Kevin Frechette, CEO of AI software company Fairmarkit, emphasized similar themes in his advice for HR professionals building an AI ethics policy.
The best policies answer two questions, he said: “How will AI help our teams do their best work, and how will we make sure it never erodes trust?”
“If you can’t answer how your AI will make someone’s day better, you’re probably not ready to write the policy,” Frechette said over email.
Many policy conversations, he said, are backward, prioritizing the technology instead of the workers themselves: “An AI ethics policy shouldn’t start with the model; it should start with the people it impacts.”
Consider industry-specific issues
Industries involved in creating AI tools have additional layers to consider: Bahr pointed to Capterra research showing that software vulnerabilities were the top cause of data breaches in the U.S. last year.
“AI-generated code or vibe coding can present a security risk, especially if the AI model is trained on public code and inadvertently replicates existing vulnerabilities into new code,” Bahr explained.
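To illustrate the kind of risk Bahr describes, consider a minimal, hypothetical sketch (the function names and `users` table here are invented for this example): an AI assistant trained on older public code might suggest building a SQL query through string concatenation, a pattern that internal review guidelines would flag in favor of a parameterized query.

```python
import sqlite3

# Hypothetical illustration: a pattern an AI assistant might replicate from
# older public code -- building SQL by string concatenation, which allows
# SQL injection if user_input is attacker-controlled.
def find_user_unsafe(conn: sqlite3.Connection, user_input: str):
    query = "SELECT id, name FROM users WHERE name = '" + user_input + "'"
    return conn.execute(query).fetchall()  # vulnerable to injection

# The reviewed version uses a parameterized query, so the database driver
# treats user_input strictly as data rather than as executable SQL.
def find_user_safe(conn: sqlite3.Connection, user_input: str):
    query = "SELECT id, name FROM users WHERE name = ?"
    return conn.execute(query, (user_input,)).fetchall()
```

The point of the review guidelines Bahr recommends is to catch the first pattern before it ships, regardless of whether a person or a model wrote it.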
An AI disclosure policy should address security risks, create internal review guidelines for AI-generated code, and provide training to promote secure coding practices, Bahr said.
For companies involved in content creation, an AI disclosure may be required, and the policy should spell out that workers remain responsible for the final product or outcome, Bahr said.
“This policy not only signals to the general public that human input has been involved in published content, but also establishes responsibilities for employees to comply with necessary disclosures,” Bahr said.
“Beyond fact-checking, the policy needs to address the use of intellectual property in public AI tools,” she said. “For example, an entertainment company should be clear about using an actor’s voice to create new lines of dialogue without their permission.”
Likewise, a software sales representative should be able to explain to clients how AI is used in the company’s products. How customer data is used, for example, can also be part of a disclosure policy.
The policy’s in place. What now?
Because AI technology is constantly evolving, employers must remain flexible, experts say.
“A static AI policy will be outdated before the ink dries,” Frechette told HR Dive via email. “Treat it like a living playbook that evolves with the tech, the regulations, and the needs of your workforce.”
HR also should continue to test the AI policies and update them regularly, according to Frechette. “It’s not about getting it perfect on Day One,” he said. “It’s about making sure it’s still relevant and effective six months later.”