Daily Briefing

Anthropic, OpenAI Seek Weapons Experts Amid Iran War AI Concerns

Anthropic and OpenAI are hiring chemical weapons and explosives experts with at least five years of experience to design guardrails that prevent their AI models from providing instructions for chemical, radiological, or explosive weapons. For these policy and research roles, posted earlier this month, Anthropic is offering up to $285,000 and OpenAI up to $455,000.

Why it matters

These hires will shape how major AI systems respond to hazardous prompts about weapons, potentially reducing the risk of catastrophic misuse as AI tools become embedded in national security operations. The moves come amid an escalating legal battle between Anthropic and the Pentagon over military AI use: the Defense Department has designated Anthropic a supply chain risk and demanded removal of its systems within six months.

Where do you stand?

Should AI companies be allowed to refuse military partnerships even when national security may be at stake?

Is self-regulation by AI companies sufficient to prevent weapons-related misuse, or do we need government-imposed restrictions?

Should the development of increasingly capable AI systems be slowed down until stronger safety measures are in place?
