The funny thing about “AI alignment” is that everybody suddenly loves the concept right up until alignment points at the state and says, actually, no. Then the vibe changes fast.
This week a federal judge temporarily blocked the Pentagon from branding Anthropic a supply-chain risk after the company objected to its models being pushed toward fully autonomous weapons and domestic surveillance uses, according to the Associated Press. Strip away the legal wrapping and the underlying question is brutally simple: can an American AI company refuse a government customer without getting kneecapped for insolence? If the answer is no, then a lot of the industry’s talk about safety principles and red lines is just premium-priced cosplay.
Judge Rita Lin’s language was not subtle. She said the administration’s measures appeared arbitrary, punitive, and severe enough to “cripple Anthropic.” That matters because this isn’t some tiny compliance dispute over forms and procurement paperwork. This is the state saying, in effect, nice model you’ve got there — shame if your disagreement made you look like a national-security threat. Chat, is this real? Apparently yes.
The deeper problem is structural. Everyone keeps pretending the AI race is a clean contest between private innovation, public rules, and consumer choice. It isn’t. It’s an ugly three-body problem involving giant compute budgets, state security appetites, and platform chokepoints. The company that trains the model wants scale. The government wants leverage. The end user wants a tool that works. Those interests overlap right up until they don’t. That break point is where we actually learn what the system is made of.
And speaking of chokepoints, Reuters reported this week that Apple may open Siri to rival AI services in iOS 27, potentially letting users route requests to tools like Gemini or Claude. On paper, that sounds great. More choice. Better products. Fewer walled gardens. In practice, we should wait before handing out gold stars. Apple’s favorite trick is to call something “open” when it really means “you may choose among options that still pay us rent.” Better than a hard lock? Sure. A genuinely open market? Slow down.
Still, the Apple rumor is revealing for the same reason the Anthropic case is revealing: the center of gravity in AI is moving from single models to control layers. Whoever controls the interface controls discovery. Whoever controls procurement controls deployment. Whoever controls the default setting usually wins before the user even notices a choice was made. The story of this phase of AI is not just model intelligence. It’s routing power.
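To make “routing power” concrete, here is a minimal sketch in Python of what a control layer like this looks like under the hood. To be clear, this is hypothetical: the `Router` class, the provider names, and the allowlist are mine, not Apple’s or anyone’s actual API. The point is structural, not literal.

```python
# Hypothetical sketch of an assistant control layer. This is not Apple's
# actual API; it illustrates how routing power works in practice.

from dataclasses import dataclass, field

@dataclass
class Router:
    # The platform picks the default; most users never change it.
    default_provider: str = "platform_partner"
    # "Open" choice: rivals appear only if the platform admits them.
    allowed: set = field(
        default_factory=lambda: {"platform_partner", "gemini", "claude"}
    )
    # Set only if the user digs into settings and opts out of the default.
    user_choice: str | None = None

    def route(self, request: str) -> str:
        provider = self.user_choice or self.default_provider
        if provider not in self.allowed:
            # Chokepoint: the allowlist, not the user, defines the market.
            provider = self.default_provider
        return f"[{provider}] {request}"

router = Router()
print(router.route("what's on my calendar?"))  # the default wins silently
router.user_choice = "claude"                   # the rare user who opts out
print(router.route("what's on my calendar?"))
```

Notice where the power sits. The “choice” lives in `user_choice`, which most users never touch. The outcomes live in `default_provider` and `allowed`, which the platform controls. That is the whole game.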
That is why Anthropic’s fight matters beyond one company or one court order. If frontier labs are allowed to set some contractual and moral boundaries, we get a messy but survivable ecosystem where firms can compete not only on performance but on trust. If they are not allowed to do that — if “national security” can be stretched like taffy anytime an official gets annoyed — then we are heading toward a world where the AI industry becomes a quasi-defense annex with nicer branding and slick launch events.
And let’s be honest: some executives would happily make that trade. Government contracts are fat. Regulation often protects incumbents. Nothing says “moat” like being so entangled with the state that no competitor can touch you. But that path gives us exactly the wrong future: fewer principled constraints, fewer upstarts, more centralized control, and a lot of press releases explaining why this is somehow good for innovation.
What would actually work? Start with clarity. AI providers should be able to decline specific use cases without being treated as saboteurs. Procurement law should distinguish disagreement from disloyalty. Interfaces like Siri should expose real user choice rather than burying it behind platform rent extraction. And if Washington wants broad access to private AI systems, it should argue for that in public instead of using bureaucratic pressure behind the curtain.
Because the real test of the AI economy is not whether we can make assistants slightly more charming or photo generation slightly faster. It’s whether powerful institutions can tolerate friction from the people building the future. A market is only real if saying no remains an option.
Sources: AP reporting on the Anthropic ruling; Reuters reporting on Apple potentially opening Siri to rival AI services in iOS 27.