Anthropic changes safety policy amid intense AI competition

February 25, 2026 at 10:42 PM
By Rebecca Ruiz, Mashable
Anthropic will take these steps when developing a potentially dangerous model.


The company behind Claude acknowledged that its original safety policy faced too many obstacles.

Rebecca Ruiz is a Senior Reporter at Mashable. She frequently covers mental health, digital culture, and technology. Her areas of expertise include suicide prevention, screen use and mental health, parenting, youth well-being, and meditation and mindfulness. Before Mashable, she worked as a staff writer, reporter, and editor at NBC News Digital and as a staff writer at Forbes. She has a B.A. from Sarah Lawrence College and a master's degree from U.C. Berkeley's Graduate School of Journalism.

[Image: Anthropic's safety practices are changing. Credit: NurPhoto / Contributor via Getty Images]

When Anthropic launched years ago, the company wanted to spark an industry-wide "race to the top" in artificial intelligence, instead of a race to the bottom in which the pursuit of customers and market dominance would inadvertently lead to catastrophic safety risks. So Anthropic adopted safety principles and policies that it hoped its competitors would also implement. In some instances, companies including Google and OpenAI did, according to Anthropic.

Still, those hopes didn't "pan out," the company said in a blog post published Tuesday. The post announced that Anthropic, the maker of the AI chatbot Claude, is altering key safety practices to meet what it views as present-day challenges.
Specifically, Anthropic will no longer automatically pause model development when a model could be considered dangerous; instead, it will weigh its competitors' actions and whether they release models with similar capabilities. Previously, Anthropic committed to safeguards that would reduce its models' absolute risk, regardless of whether other AI developers did the same.

"The policy environment has shifted toward prioritizing AI competitiveness and economic growth, while safety-oriented discussions have yet to gain meaningful traction at the federal level," the company wrote. "We remain convinced that effective government engagement on AI safety is both necessary and achievable, and we aim to continue advancing a conversation grounded in evidence, national security interests, economic competitiveness, and public trust. But this is proving to be a long-term project—not something that is happening organically as AI becomes more capable or crosses certain thresholds."

Though Anthropic said it aims to continue leading on safety, its latest decision reflects the breakneck speed at which competitors are releasing new models. Anthropic has also been under intense pressure this week from the U.S. Defense Department, which is pressing the company to allow the military to use its AI tools for any purpose, including mass surveillance or the deployment of autonomous weapons without human oversight.
Anthropic has yet to relent on those points in contract negotiations with the Defense Department, reportedly stirring the ire of Defense Secretary Pete Hegseth, who threatened to sever the company's relationship with the military, Axios reports.

Anthropic has participated in an AI pilot program for military-related imagery analysis, along with Google, OpenAI, and xAI, according to the New York Times. Though Claude has been the only chatbot working on the government's classified systems, a Pentagon official said Anthropic could be replaced by another firm.

Disclosure: Ziff Davis, Mashable's parent company, in April 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.