A key part of President Donald Trump’s AI Action Plan, released in July 2025, is removing “red tape.”
As an initial step, Trump rescinded former President Joe Biden’s executive order on the “safe, secure, and trustworthy development” of artificial intelligence. The plan goes on to outline steps to enable AI adoption, “invest in AI interpretability,” accelerate the technology’s use in government, build data centers and make American AI a competitive export.
“AI is far too important to smother in bureaucracy at this early stage,” at either the state or federal level, the plan posited. But this enterprise-focused agenda isn’t without risk — especially for Black people, Portia Allen-Kyle, interim executive director of advocacy group Color of Change, told HR Dive.
How AI can harm Black worker well-being
First, Allen-Kyle pointed to the environmental harm caused by increased AI usage.
Researchers, including those with the Office of the United Nations High Commissioner for Human Rights, have connected the dots between climate harm and racial justice. Not only are there inequities between the Global North and Global South, but climate change can also exacerbate racial justice issues in the United States, researchers have found.
Specific to the workplace, Allen-Kyle highlighted the potential for AI to harm Black workers through the ongoing problem of AI-related hiring discrimination.
“Bottom line: This is yet another blatant attempt to put profit over people,” Allen-Kyle said, pointing to the action plan’s calls for broad deregulation and the administration’s effort “to preempt the ability of states to regulate.”
Allen-Kyle pointed to the Workday lawsuit as an example of potentially discriminatory outcomes, noting that the firm’s software serves as the “background infrastructure of a lot of companies that hire and seek” workers online. The lawsuit, which a judge allowed to proceed as a collective action earlier this year, alleges that the platform’s AI-enabled hiring technology discriminates against job seekers aged 40 and older.
Recently, a judge ordered Workday to supply an exhaustive list of employers using the AI-enabled hiring technology; the company must submit a plan for identifying those customers by Aug. 20.
“I know the case is about discrimination against older workers,” Allen-Kyle said, but “we also know that the biases that exist within programmers get embedded into the algorithms themselves.”
What HR can do to minimize AI harm
“This may be a little bit controversial, but we’re still waiting for the use case that says AI is good for everyone,” Allen-Kyle said. It isn’t clear that AI in hiring is beneficial, she said, and it may be discriminatory.
“We just haven’t seen the scenario where this is, in and of itself, just excellent for Black people. What we have seen is companies making very blatant shifts away from human labor into AI, under the guise of efficiency,” Allen-Kyle said. “In many ways, lower-wage Black workers and workers of color are … canaries in the coal mine.”
Moreover, Allen-Kyle said, “the lack of oversight, the lack of protections, [and] the lack of standards for the development of AI” will become normalized as society continues to deploy and rely on AI.
“That’s exactly why we can’t depend on charity and goodwill for the protection of our rights and the well-being of our existence,” Allen-Kyle said.
Beyond thinking twice about AI in hiring, HR should develop an AI ethics policy, experts have suggested. “Even those ethics policies should have a racial justice and equity lens,” Allen-Kyle added.
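One concrete practice an ethics policy with that lens could require is a routine adverse-impact review of any AI screening tool’s outcomes. The sketch below is illustrative only: it assumes HR can export group-level decision data from the tool, uses made-up sample records, and applies the four-fifths guideline from the EEOC’s Uniform Guidelines on Employee Selection Procedures; the function names and data are hypothetical, not part of any vendor’s product or the lawsuit described above.

```python
"""Illustrative adverse-impact ("four-fifths rule") check on AI screening outcomes.

Assumes HR can export (group_label, was_selected) pairs from the tool's
decision log. Sample data and function names are hypothetical.
"""

from collections import Counter


def selection_rates(records: list[tuple[str, bool]]) -> dict[str, float]:
    """Share of applicants selected within each self-reported group."""
    applicants: Counter[str] = Counter()
    selected: Counter[str] = Counter()
    for group, was_selected in records:
        applicants[group] += 1
        if was_selected:
            selected[group] += 1
    return {group: selected[group] / applicants[group] for group in applicants}


def adverse_impact_flags(rates: dict[str, float], threshold: float = 0.8) -> dict[str, float]:
    """Flag groups whose selection rate is below `threshold` (four-fifths)
    of the highest group's rate -- the EEOC's rule-of-thumb for adverse impact."""
    top_rate = max(rates.values())
    ratios = {group: rate / top_rate for group, rate in rates.items()}
    return {group: ratio for group, ratio in ratios.items() if ratio < threshold}


if __name__ == "__main__":
    # Hypothetical decision log: (self-reported group, passed AI screen?)
    log = [
        ("Group A", True), ("Group A", True), ("Group A", False), ("Group A", True),
        ("Group B", False), ("Group B", True), ("Group B", False), ("Group B", False),
    ]
    rates = selection_rates(log)
    print("Selection rates:", rates)
    print("Groups below the four-fifths ratio:", adverse_impact_flags(rates))
```

In this hypothetical log, Group B’s selection rate is one-third of Group A’s, well under the four-fifths benchmark, which is the kind of result an audit would escalate for human review.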
“AI shouldn’t be making decisions for humans. It should just be augmenting human decisions,” Allen-Kyle said. “There are just some things that AI should not do. Full stop.”