News
Feedback watches with raised eyebrows as Anthropic's AI Claude is given the job of running the company vending machine, and ...
Bloomberg was allowed, and the New York Times wasn't. Anthropic said it had no knowledge of the list and that its contractor, ...
Chain-of-thought monitorability could improve generative AI safety by assessing how models come to their conclusions and ...
Anthropic claims that the US will require "at least 50 gigawatts of electric capacity for AI by 2028" to maintain its AI ...
Former Anthropic executive raises $15M to launch AI insurance startup, helping enterprises safely deploy artificial intelligence agents through standards and liability coverage.
Anthropic research reveals AI models perform worse with extended reasoning time, challenging industry assumptions about test-time compute scaling in enterprise deployments.
US lags behind China in AI race due to energy constraints, says report by AI company Anthropic. China added 400 GW of power ...
The new pre-print research paper, out Tuesday, is a joint project between Truthful AI, an AI safety research group in ...
As AI agents start taking on real-world tasks, one startup is offering a new kind of safety net: audits, standards—and ...
Anthropic has released one of the most unsettling findings I have seen so far: AI models can learn things they were never ...
Anthropic has verified in an experiment that several generative AI models are capable of threatening a person ...
As large language models like Claude 4 express uncertainty about whether they are conscious, researchers race to decode their inner workings, raising profound questions about machine awareness, ethics ...