As AI leaps forward, concerns rise that innovation is leaving safety behind
When the United States military captured former Venezuelan President Nicolás Maduro in January, it used an AI tool developed by a private U.S. company. It’s unclear exactly what the tool did, but the company’s policy says its products can’t be used for violence or to develop weapons.
Now, the Pentagon is considering cutting ties with that company, Anthropic, because of its insistence on limits for how the military uses its technology, according to Axios.
The tensions between AI safeguards and national security aren’t new. But multiple events in the last month have brought the issue of AI safety – in contexts ranging from weapons development to ethical advertising – into the spotlight.
Why We Wrote This
Artificial intelligence is developing so rapidly that some industry insiders fear safety concerns aren’t getting enough attention. That’s sparking conversation about how to balance innovation, competition, and safeguards.
“A lot of the people who’ve been involved in the field of AI have been thinking about safety in various forms for a long time,” says Miranda Bogen, the founding director of the Center for Democracy and Technology’s AI Governance Lab. “But now those conversations are happening on a much more visible stage.”
This month, researchers resigned from two major U.S. AI companies, citing inadequacies in the companies’ safeguards around things like consumer data collection. In an essay Feb. 9 titled “Something Big is Happening,” investor Matt Shumer warned not only that AI will soon threaten Americans’ jobs en masse, but also that it could start to behave in ways its creators “can’t predict or control.” The essay went viral on social media.
While urging action on very real risks, many AI safety experts caution against overplaying fears about hypothetical scenarios.
“These moments of public attention are valuable because they create openings for the kind of public debate about AI that is essential,” Dr. Alondra Nelson, a former member of the United Nations High-level Advisory Body on Artificial Intelligence, wrote the Monitor in an email while attending a global AI summit in India. “But they are no substitute for democratic deliberation, regulation, and real public accountability.”
Pressure to compete
In December, President Donald Trump issued an executive order blocking “onerous” state laws regulating AI. For example, his order singled out Colorado’s law that bans “algorithmic discrimination” in areas like hiring and education. The president’s order was supported by Republicans who said forcing AI companies to comply with excessive regulations could leave the U.S. at a competitive disadvantage with China.
That sense of competition appears to be central to Anthropic’s move away from the Pentagon. Anthropic wants to ensure its technology is not used to conduct domestic surveillance or develop weapons that fire without human input.
But the Department of Defense, which stated earlier this year that the U.S. military “must build on its lead over our adversaries in integrating [AI],” wants to deploy AI technology without regard to companies’ individual policies, according to reporting by Axios and Reuters.
“We constantly face pressures to set aside what matters most,” wrote Mrinank Sharma, an AI safety researcher, in a publicly posted resignation letter from Anthropic last week. He did not cite a specific event that led him to resign, but warned that “our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences.”
Dr. Bogen says policies designed to compel AI providers to subject their models to certain tests or to invest in safety are often diluted into disclosure requirements or nonbinding recommendations.
“The incentives are so strongly in favor of moving forward quickly, even when there’s a desire to put up guardrails,” she says.
Is the world “in peril”?
Those warning of AI’s dangers have sometimes used existential language.
Zoë Hitzig, a former researcher at OpenAI, cited “deep reservations” about the company’s strategy in an opinion essay for The New York Times last week, writing that its decision to start testing ads on ChatGPT “creates a potential for manipulating users in ways we don’t have the tools to understand, let alone prevent.”
Mr. Sharma’s resignation letter from Anthropic warned that “the world is in peril.”
Some experts say such language is counterproductive.
“I find the framing of that ‘point of no return’ to be very disempowering,” says Dr. Bogen.
She does worry that as people choose to turn over more of their decision-making to AI and learn to use the technology in their jobs, they’re creating dependencies that will be increasingly difficult to untangle.
But she says people are ultimately responsible for their choices and actions.
“I don’t think we’ll ever get to the point where it’s truly impossible to … make decisions about how to treat this new technology,” she says.
Katherine Elkins, an AI safety investigator for the National Institute of Standards and Technology, says she hopes she’s wrong about some of the risks she sees, like an AI chatbot potentially using someone’s data to manipulate them. But until she’s sure, she wants safety to remain an urgent priority.
“Personally, I have felt it’s better to err on the cautious side and devote my time to thinking about the risks of AI” than to assume the technology won’t get better, she says.