News

Malicious use is one thing, but there's also increased potential for Anthropic's new models to go rogue. In the alignment section of Claude 4's system card, Anthropic reported a sinister discovery ...
Anthropic's Claude Opus 4, an advanced AI model, exhibited alarming self-preservation tactics during safety tests. It ...
In a landmark move underscoring the escalating power and potential risks of modern AI, Anthropic has elevated its flagship ...
The testing found the AI was capable of "extreme actions" if it thought its "self-preservation" was threatened.
The company said it was taking the measures as a precaution and that the team had not yet determined if its newest model has ...
Claude 4 Sonnet is a leaner model, with improvements built on Anthropic's Claude 3.7 Sonnet model. The 3.7 model often had ...
Anthropic's new AI model has raised alarm among researchers and technology experts with its capacity to blackmail engineers working on it. The recently released Claude Opus 4, an ...
Anthropic’s first developer conference kicked off in San Francisco on Thursday, and while the rest of the industry races ...
Anthropic has released a new report about its latest model, Claude Opus 4, highlighting a concerning issue found during ...
Anthropic’s Chief Scientist Jared Kaplan said this makes Claude 4 Opus more likely than previous models to be able to advise ...
Anthropic launches its Claude 4 series, featuring Opus 4 and Sonnet 4, setting new AI benchmarks in coding, advanced ...
Anthropic's new Claude 4 Opus AI can autonomously refactor code for hours using "extended thinking" and advanced agentic skills.