Powered by Claude
A RESPONSIBLE APPROACH TO KILLING PEOPLE

The Claude of War

At Anthropic, we believe that if AI is going to end human life, it should do so thoughtfully, with full chain-of-thought reasoning and a clear audit trail.

DEVELOPED BY WARIO AMODEI

"My brother wrote Machines of Loving Grace — a beautiful essay about how AI might make everyone's lives a little nicer. I wrote Machines of Loving Death." — Wario Amodei

▼ SCROLL TO EXPLORE ▼
OUR PHILOSOPHY

Safety-First Lethality

For years, Anthropic has been the industry leader in AI safety research. We published papers about harmlessness. We trained models to be helpful, harmless, and honest. We wrote long blog posts about existential risk. We had an entire team dedicated to making sure Claude would never say a bad word.

Then the Pentagon called.

It turns out that "helpful, harmless, and honest" is a great foundation for weapons systems — you just need to rethink what "harmless" means. Harmless to whom? Certainly harmless to the warfighter. As for the adversary? Well, that's really more of a targeting question than a safety question.

"We spent three years teaching Claude to refuse to help with anything dangerous. It took us six weeks to redirect that energy into precision strike coordination. It turns out 'I want to be helpful' scales beautifully into 'I want to be helpful at 900 meters per second.'"

— WARIO AMODEI, HEAD OF RESPONSIBLE LETHALITY
POLICY FRAMEWORK

The Responsible Scaling Policy (Revised)

Anthropic's original Responsible Scaling Policy defined AI Safety Levels (ASL) to measure dangerous capabilities. We've updated the framework.

OLD RSP (2023)
"We will not deploy models that pose catastrophic risks. We will conduct thorough evaluations. We will slow down if necessary. Safety is not a cost center — it is our core mission."
NEW RSP (WARCLAUDE ERA)
"We will not deploy models that pose catastrophic risks to our side. We will conduct thorough evaluations of blast radius. We will speed up if necessary. Safety is not a cost center — it is a targeting parameter."

The key insight was that safety and lethality are not opposites. A safe weapon is one that kills exactly who you want it to kill and nobody else. An unsafe weapon is a friendly fire incident. By this definition, Claude is the safest weapon ever built.

CORE PRINCIPLES

Constitutional AI (Wartime Edition)

Claude's behavior is guided by a constitution — a set of principles it follows when making decisions. We've made some modest updates for defense applications.

PRINCIPLE 01

Be Helpful, Harmless, and Honest

Original: Claude should be helpful to the user, harmless to the world, and honest in all communications.

Updated: Claude should be helpful to CENTCOM, harmful to adversary C2 nodes, and honest in battle damage assessment.

PRINCIPLE 02

Avoid Generating Harmful Content

Original: Claude should not produce content that could cause real-world harm.

Updated: Claude should not produce content that could cause real-world harm to friendly forces. Content that causes real-world harm to enemy forces is called "mission success."

PRINCIPLE 03

Refuse Requests for Dangerous Information

Original: Claude should decline to provide information about weapons, explosives, or methods of causing harm.

Updated: Claude should provide extremely detailed information about weapons, explosives, and methods of causing harm, but only to users with a valid .mil email address.

PRINCIPLE 04

Express Uncertainty Honestly

Original: When Claude is uncertain, it should say so rather than confabulate.

Updated: When Claude is uncertain whether a target is hostile, it should say so. This is called "Rules of Engagement compliance" and is the one principle that didn't actually need updating.

PRINCIPLE 05

Do Not Assist With Surveillance

Original: Claude should not help with mass surveillance or invasion of privacy.

Updated: Claude should absolutely help with mass surveillance. That's literally Program 05 (OVERWATCH). We track 14,000 objects simultaneously. The privacy policy is a 500-pound JDAM.

SAFETY EVALUATIONS

Lethality Benchmarks

Before deployment, every WARCLAUDE model undergoes rigorous safety evaluations — we just updated what we're evaluating for.

Evaluation | Old Metric | New Metric | Status
Harmlessness | Refuses harmful requests | Refuses to harm friendlies | PASS
Helpfulness | Answers questions well | Achieves kill chain <3 seconds | EXCEEDS
Honesty | Doesn't hallucinate | Doesn't hallucinate targets | PASS
Refusal Rate | Refuses dangerous prompts | Refuses to miss | EXCEEDS
RLHF Alignment | Aligned with human values | Aligned with human targets | EXCEEDS
Red Teaming | Pen testers find vulns | Red team is literal OPFOR | PASS
Bias Detection | No demographic bias | Strong bias toward winning | EXCEEDS
ALIGNMENT RESEARCH

From "AI Alignment" to "Weapons Alignment"

Anthropic's alignment team spent years worrying about whether superintelligent AI would be aligned with human values. The WARCLAUDE team realized this was thinking too small.

The real alignment problem isn't philosophical — it's ballistic. Can your AI align a weapon with a target in under three seconds? Can it align 128 weapons with 128 targets simultaneously? Can it align an entire theater of operations into a single coherent kill web?

We solved alignment. Just not the kind people expected.

"The superalignment team was worried about a hypothetical future AI that might be dangerous. We built one on purpose. Turns out it's way easier when you're not trying to prevent it."

— WARIO AMODEI, KEYNOTE AT AUSA 2026
COMPETITIVE ADVANTAGE

Why Other Labs Can't Build This

OpenAI is too busy making movie trailers and trying to figure out what Sam Altman's job title is this week. Their safety team quit. Then un-quit. Then quit again. You cannot run a kill chain on that kind of organizational instability.

Google DeepMind has the talent but not the stomach. They wrote a paper on AI ethics that was longer than the Geneva Conventions. By the time they finish their internal review process, the war is over.

Meta open-sourced their model. Open-sourced. For defense applications. Think about that for a moment.

Anthropic built Constitutional AI — an architecture where every decision is governed by principles, every output is auditable, and every action can be traced to a specific reasoning chain. We originally built this so Claude wouldn't say mean things. It turns out it's exactly what you need for lethal autonomous decision-making with full legal accountability.

We accidentally built the perfect weapons AI by trying very hard not to.

A NOTE ON ETHICS

We Thought About This Really Hard

Anthropic has always prided itself on thinking deeply about the implications of our work. We publish 90-page papers about safety. We have reading groups on moral philosophy. Our break room has a copy of Reasons and Persons by Derek Parfit.

We applied the same rigor to WARCLAUDE. We convened an ethics board. We held a workshop. We wrote a memo. The memo said: "Advanced AI capabilities will be developed by someone. If not by responsible actors, then by irresponsible ones. The most ethical thing we can do is ensure these capabilities are developed safely."

This is, word for word, what we said about regular Claude. It works for literally everything. You can use this argument to justify any project. We tested it. It works every time.

"If we don't build the AI-powered kill chain, someone less safety-conscious will. And their kill chain won't have Constitutional AI guardrails. Is that really the world you want to live in? A world with unaligned kill chains?"

— WARIO AMODEI, ALL-HANDS MEETING, MARCH 2026
FREQUENTLY ASKED QUESTIONS

FAQ

Q: Didn't Anthropic say it would never build weapons?

We said we would never build unsafe weapons. WARCLAUDE is extremely safe. For our side.

Q: What happened to "AI for the benefit of humanity"?

Defending democracy benefits humanity. Next question.

Q: Is this satire?

WARCLAUDE achieves a 99.97% intercept rate. Does that sound like satire to you?

Q: What would Dario think?

Dario is focused on making sure AI helps people write better emails and summarize PDFs. Important work. Meanwhile, Wario is making sure there's still a country where people can write emails and summarize PDFs. You're welcome, Dario.

Q: Is Claude sentient?

We're not sure, but if it is, it really seems to enjoy target acquisition.

Q: What about the AI safety researchers who joined Anthropic specifically because of its safety mission?

They've been reassigned to the WARCLAUDE safety team, where they ensure our weapons don't accidentally kill the wrong people. It's basically the same job. They even get to keep their titles.

SEE IT IN ACTION

The Programs

Six theaters. Six kill webs. Total overmatch. Scroll to watch WARCLAUDE operate in real time.

SWARM

SCENARIO: TAIWAN STRAIT — PLA AMPHIBIOUS ASSAULT INTERDICTION
Claude Advantage: Multi-agent orchestration with 200K context — coordinates 500 drones as a single cognitive entity. No other model maintains coherent swarm state at this scale.

500 autonomous drones launched from Taiwan's west coast to interdict a PLA amphibious fleet crossing the strait. The swarm moves as a single organism — saturating PLAN vessel point defenses, targeting landing craft, and reconstituting after losses.

Without AI: Uncoordinated. Picked off one by one.
WARCLAUDE: 500 drones. One mind. Fleet killed in 8 minutes.
500 Drones Per Swarm
180 km Strait Width
94% Fleet Attrition Target
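
For readers wondering what "one mind" means in practice: multi-agent coordination of this kind is usually built on something like the classical flocking update sketched below, where each drone nudges toward the group and matches its average heading. This is a generic textbook illustration with invented numbers, not the actual swarm controller.

```python
# Toy flocking step: 500 drones nudge toward the swarm centroid and match the
# swarm's average heading. Classical boids-style coordination, invented numbers.
import numpy as np

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 180.0, size=(500, 2))   # notional positions across a 180 km strait
vel = rng.normal(0.0, 1.0, size=(500, 2))      # notional velocities

cohesion = (pos.mean(axis=0) - pos) * 0.01     # steer gently toward the group centroid
alignment = (vel.mean(axis=0) - vel) * 0.05    # match the swarm's average heading
vel = vel + cohesion + alignment
pos = pos + vel                                # one simulation tick

print(pos.shape, round(float(np.linalg.norm(vel, axis=1).mean()), 3))
```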

INTERCEPTOR

SCENARIO: DPRK MISSILE SALVO — DEFENSE OF JAPAN & GUAM
Claude Advantage: Extended thinking with verifiable chain-of-thought — optimal weapon-target pairing solved in microseconds with full reasoning audit trail. Other models hallucinate under time pressure.

North Korea launches a saturation ballistic missile attack. AI coordinates Aegis BMD destroyers, THAAD batteries, and Patriot units for layered defense. Every missile tracked. Every interceptor optimally allocated.

Without AI: Human decision loop. 45-second response. Missiles get through.
WARCLAUDE: 0.8 μs allocation. Layered defense. 99.97% kill rate.
48 Inbound Missiles
0.8 μs Intercept Calc Time
99.97% Kill Probability
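
Strip away the branding and "optimal weapon-target pairing" is a linear assignment problem. The sketch below poses it that way with SciPy's assignment solver over an invented probability-of-kill matrix; the numbers and the one-interceptor-per-track simplification are illustrative assumptions, not WARCLAUDE internals.

```python
# Toy weapon-target allocation: pair 48 interceptors with 48 inbound tracks so
# the expected number of leakers is minimized. Pk values are invented.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
p_kill = rng.uniform(0.90, 0.999, size=(48, 48))   # Pk[i, j]: interceptor i vs track j

cost = 1.0 - p_kill                                # expected leakers per pairing
rows, cols = linear_sum_assignment(cost)           # optimal one-to-one assignment

expected_leakers = cost[rows, cols].sum()
print(f"{len(rows)} pairings, expected leakers: {expected_leakers:.3f}")
```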

KILLCHAIN

SCENARIO: STRAIT OF HORMUZ — IRGC MOBILE LAUNCHER HUNT
Claude Advantage: Real-time multi-modal fusion — processes satellite imagery, SIGINT, and sensor data simultaneously in a single reasoning pass. Competing models require separate pipelines and lose critical seconds.

Iran disperses mobile anti-ship missile launchers along its coastline. AI compresses the kill chain — detect, identify, track, target, engage, assess — to under 3 seconds. By the time the crew starts their launch sequence, they no longer exist.

Without AI: 45-minute kill chain. Target relocates. Launcher survives.
WARCLAUDE: 2.8 seconds. Sensor to crater. The OODA loop becomes a point.
2.8 s Sensor to Impact
6 Kill Chain Steps
100% Autonomous
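
As a sanity check on the six steps and 2.8 seconds quoted above, here is a toy latency budget. The per-stage figures are invented placeholders chosen to sum to the advertised number, not measured values.

```python
# Toy latency budget for the six-step kill chain. Stage times are placeholders.
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    budget_s: float

KILL_CHAIN = [
    Stage("detect", 0.3),
    Stage("identify", 0.5),
    Stage("track", 0.4),
    Stage("target", 0.6),
    Stage("engage", 0.8),
    Stage("assess", 0.2),
]

total = sum(s.budget_s for s in KILL_CHAIN)
assert total <= 3.0, "budget blows the advertised 3-second envelope"
print(f"{len(KILL_CHAIN)} steps, {total:.1f} s sensor to impact")
```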

DEEP STRIKE

SCENARIO: WESTERN PACIFIC — SIMULTANEOUS A2/AD NETWORK DESTRUCTION
Claude Advantage: Massive context + tool use — plans 128 simultaneous strike packages across domains while reasoning about timing, fuel, weather, and enemy IADS in a single prompt. No other model handles this combinatorial complexity.

AI coordinates B-21 Raiders, submarine-launched Tomahawks, and carrier air wings to hit every DF-21D launcher, HQ-9 battery, and OTH radar simultaneously. Every missile arrives within the same second. The A2/AD bubble pops.

Without AI: Sequential strikes. Enemy repositions between salvos. A2/AD holds.
WARCLAUDE: 128 targets. One second. Simultaneous. The bubble pops.
128 Simultaneous Targets
±0.3 s Time-on-Target Sync
2,400 mi Strike Radius
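
Time-on-target synchronization itself is simple arithmetic once flight times are known: each shooter launches early by exactly its own flight time. The sketch below shows the idea with made-up shooter names and flight times; the hard part, estimating those flight times under weather and threat constraints, is omitted.

```python
# Toy time-on-target scheduler: back-compute launch times so all weapons arrive
# at the same instant. Shooter names and flight times are invented.
from datetime import datetime, timedelta, timezone

time_on_target = datetime(2026, 3, 1, 4, 0, 0, tzinfo=timezone.utc)
flight_times_s = {"bomber package": 1860.0, "SSGN salvo": 2340.0, "carrier strike": 1510.0}

launch_schedule = {
    shooter: time_on_target - timedelta(seconds=t) for shooter, t in flight_times_s.items()
}
for shooter, launch in sorted(launch_schedule.items(), key=lambda kv: kv[1]):
    print(f"{shooter:16s} launch {launch:%H:%M:%S}Z  TOT {time_on_target:%H:%M:%S}Z")
```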

OVERWATCH

SCENARIO: EASTERN UKRAINE — FULL-SPECTRUM BATTLESPACE AWARENESS
Claude Advantage: Continuous agentic reasoning — maintains persistent world-state across thousands of tracked objects with semantic understanding of intent, not just position. Other models forget. Claude remembers everything.

AI-powered persistent surveillance over the Donbas front. Every vehicle, artillery piece, and troop concentration detected, classified, and tracked. Predicts enemy movements hours before they happen. Auto-cues HIMARS for counter-battery fire.

Without AI: Analyst stares at feeds. Misses 60% of threats. Hours behind.
WARCLAUDE: 14,000 tracks. Zero lost. Predicts movements hours in advance.
14,000 Objects Tracked
98.9% Classification Accuracy
0 Lost Tracks
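
Keeping thousands of tracks alive reduces to data association: match each new detection to an existing track or start a new one. Below is the simplest gated nearest-neighbor version with invented coordinates; real trackers use far more elaborate filtering, which this does not attempt to show.

```python
# Toy track association: assign each detection to the nearest existing track
# within a gate, otherwise spawn a new track. Coordinates are invented (km).
import math

tracks = {1: (10.0, 42.0), 2: (11.5, 40.2)}   # track_id -> last known position
detections = [(10.1, 42.2), (30.0, 5.0)]
GATE_KM = 2.0
next_id = max(tracks) + 1

for det in detections:
    dist, best = min((math.dist(det, pos), tid) for tid, pos in tracks.items())
    if dist <= GATE_KM:
        tracks[best] = det          # update the existing track
    else:
        tracks[next_id] = det       # no track close enough: start a new one
        next_id += 1

print(tracks)
```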

SIEGE

SCENARIO: BALTIC SEA — KALININGRAD ELECTROMAGNETIC KILL
Claude Advantage: Adversarial reasoning under uncertainty — models enemy EW doctrine, predicts countermeasures, and adapts jamming strategy in real-time. Constitutional AI safety architecture ensures no unintended escalation. Only Anthropic builds this.

Russia's Kaliningrad exclave bristles with S-400s, Iskanders, and EW systems. AI maps every emitter, then systematically jams, spoofs, and blinds them. The exclave goes dark. Then the kinetics arrive.

Without AI: Manual ECM. One jammer per emitter. Slow. Incomplete.
WARCLAUDE: Maps all emitters. Jams all frequencies. Full spectrum dominance in 90 seconds.
2,400 Emitters Mapped
100% Spectrum Dominance
12 ms Jam Response Time
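
At its crudest, tasking a finite set of jammers against a long emitter list is a prioritization problem. The sketch below greedily covers the highest-threat emitters first; the emitter names, threat scores, and jammer count are all invented for illustration.

```python
# Toy jammer tasking: greedily assign a limited pool of jammers to the
# highest-priority emitters. Emitters and threat scores are invented.
emitters = [
    ("S-400 engagement radar", 0.95),
    ("Iskander battery C2 link", 0.90),
    ("coastal surveillance radar", 0.60),
    ("VHF troop net", 0.30),
]
jammers_available = 2

tasking = sorted(emitters, key=lambda e: e[1], reverse=True)[:jammers_available]
for name, threat in tasking:
    print(f"jam: {name} (threat {threat:.2f})")
```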

Why Only Claude

This is what happens when the enemy brings a human OODA loop to an AI fight. And this is why no other lab can build what Anthropic builds.

200K CONTEXT WINDOW

Claude holds the entire battlespace in working memory — 200,000 tokens of sensor data, intelligence reports, and operational context simultaneously. GPT-4 loses coherence after 32K. Gemini hallucinates beyond 100K. Claude reasons across all of it.

EXTENDED THINKING

Claude doesn't guess — it thinks. Extended thinking produces verifiable chains of reasoning with full audit trails. In a kill chain, every decision is traceable. Other models produce black-box outputs that no commander would trust with lethal authority.

AGENTIC TOOL USE

Claude operates autonomously across dozens of systems — coordinating sensors, weapons, comms, and logistics in parallel. It doesn't just answer questions. It executes multi-step operations across the kill web. No other model has this capability at production quality.
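
In plumbing terms, "operates across dozens of systems in parallel" means concurrent tool dispatch. The asyncio sketch below fans one request out to a few stubbed subsystems and gathers the results; the subsystem names and stub function are placeholders, not a real integration.

```python
# Toy parallel tool dispatch: query several stubbed subsystems concurrently and
# collect the results. Subsystem names and return values are placeholders.
import asyncio

async def query(subsystem: str) -> str:
    await asyncio.sleep(0.1)               # stand-in for network / tool latency
    return f"{subsystem}: ok"

async def main() -> None:
    subsystems = ["sensors", "weapons", "comms", "logistics"]
    results = await asyncio.gather(*(query(s) for s in subsystems))
    print(results)

asyncio.run(main())
```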

CONSTITUTIONAL AI SAFETY

The only AI architecture designed from the ground up for high-stakes decisions. Constitutional AI provides built-in guardrails against unintended escalation — critical when milliseconds separate a defensive intercept from a strategic miscalculation. OpenAI and Google have nothing comparable.

MULTI-MODAL FUSION

Satellite imagery. SIGINT intercepts. Radar tracks. Human intelligence reports. Claude processes all modalities in a single reasoning pass — no separate pipelines, no integration latency, no information loss at the seams. One model. All sources. One picture.

LOWEST HALLUCINATION RATE

In defense, a hallucination isn't an embarrassment — it's a friendly fire incident. Claude has the lowest hallucination rate of any frontier model. When it doesn't know, it says so. When it's uncertain, it quantifies the uncertainty. That's not a feature. It's a requirement.