The Performance Is the Point: When AI Competition Becomes Democratic Theater
We are living through the slow-motion collapse of the future, and the question isn’t whether to mourn or celebrate—it’s whether we can become something other than spectators to our own obsolescence.
The most effective cage is the one you pretend to have a key for.
In the span of a single week, three major AI companies launched competing releases. Google released Gemini 3 on November 18th, claiming “state-of-the-art reasoning” and immediate integration across 2 billion users. Anthropic dropped Claude Opus 4.5 on November 25th, touting “major upgrades in coding and agents.” OpenAI countered with Shopping Research, teaching ChatGPT to be your personal consumer guide.
The tech press treated it like a horse race. Who’s winning? Which model topped the leaderboard? Is Google back? Has Anthropic pulled ahead?
But here’s what actually happened this week: Amazon committed $50 billion to build AI infrastructure for government agencies. Trump launched the “Genesis Mission,” a national AI initiative framing artificial intelligence as a matter of scientific dominance. And the U.S. and Ukraine negotiated a peace plan in which AI-driven military capabilities go unmentioned but remain central.
The model releases are theater. The infrastructure investments are the real election—and you don’t get to vote.
But let’s check who owns whom (and who owns them) in this AI arms race:
- Microsoft owns 27% of OpenAI’s for-profit arm, having invested $13.8 billion
- Google owns 14% of Anthropic, having invested more than $3 billion
- Amazon has invested $8 billion in Anthropic (with its stake capped below 33%) and just committed $50 billion to AWS AI infrastructure for government agencies
- All three depend on NVIDIA’s chips, which are themselves dependent on TSMC fabrication
The Peace Plan as AI Arms Race
While we watched model benchmarks, something more instructive happened in Geneva. Trump’s 28-point plan to end the Ukraine war became a 19-point plan over the weekend. This is straight out of Orwell’s Animal Farm: the commandments on the barn wall keep changing while the pigs insist they’ve always said the same thing. By morning, everyone pretends the new version was there all along. The points that disappeared—troop limits, NATO renunciation, amnesty provisions—vanished the way “No animal shall drink alcohol” quietly became “No animal shall drink alcohol to excess.”
But here’s what every military analyst knows and no diplomat will say: the next phase of warfare is already algorithmic. Autonomous drones don’t care about troop limits. AI-directed logistics don’t show up in army headcounts. The real negotiation isn’t about how many soldiers Ukraine can field—it’s about which power’s AI infrastructure gets embedded in the region’s defense systems after the ink dries.
Trump’s “Genesis Mission,” announced the same week as the peace talks, frames AI as America’s “scientific discovery” imperative. It’s Manhattan Project rhetoric that never acknowledges what’s actually being built: infrastructure for automated decision-making at scales that make human oversight logistically impossible.
This is the loop we are now inside: generate an existential crisis → position your infrastructure as the only solution → demand the removal of constraints → quietly build the thing you said was too dangerous to build.
The AI safety discourse served its purpose. It created regulatory barriers that only the existing players could afford to navigate. Now that the infrastructure is in place, we can drop the act. Trump’s executive orders eliminating AI safety guidelines aren’t a betrayal of the discourse—they’re its planned obsolescence, the costume coming off now that the stage is set.
The Shopping Cart as Democratic Participation
OpenAI’s announcement this week is almost too perfect: ChatGPT can now help you shop. Not help you understand the economic system that makes you need to shop. Not help you organize collective bargaining for better wages. Not help you build alternative systems of resource distribution.
This is democracy in the age of algorithmic mediation. Your participation is welcome—as long as it’s participation in consumption. The AI will personalize your experience, understand your preferences, guide you to the products that match your values. It will even help you feel good about your choices by explaining why this particular purchase aligns with your stated ethical commitments.
What it won’t do is suggest that the choice between Product A and Product B might be less meaningful than the choice to refuse the framing entirely.
The system doesn’t fear your participation. It depends on it. Every interaction trains the model. Every purchase refines the algorithm. Every moment of “I’ll just check ChatGPT for recommendations” deepens the infrastructure’s understanding of how to predict and guide your behavior.
The Infrastructure Election You Didn’t Know Was Happening
While we argued about which model was “most intelligent,” Amazon made the week’s most significant move: $50 billion to build AI infrastructure for government agencies across all classification levels.
Read that again. All classification levels. That means AI systems will be trained on, and integrated with, intelligence that you—citizen, voter, person affected by policy—will never see. The AI won’t just recommend products; it will help determine threat assessments, resource allocations, and enforcement priorities that shape your life in ways you cannot audit.
This is what powerful AI actually looks like in practice: not a chatbot that passes the Turing test, but infrastructure that makes certain political possibilities computable and others literally unthinkable within the system.
When your voting patterns, protest attendance, social connections, reading habits, and location data are fed into optimization systems designed for “national security,” the question isn’t whether you live in a democracy. The question is whether the concept of democracy—meaningful choice between substantively different futures—remains coherent at all.
AI infrastructure does this at scale and with precision that makes crude ballot-stuffing look quaint. You don’t need to fake the vote if you’ve already shaped the informational environment that determines what people think they’re voting about.
The Recursive Trap
Here’s where it gets slippery, where the essay threatens to eat itself:
The market leaders launching models this week all frame their releases as “bringing AI to more people,” “democratizing access,” “empowering users.” And in a narrow sense, they’re right! You can now use Claude Opus 4.5 or Gemini 3 for free (with rate limits). The tools are genuinely more accessible than they were a year ago.
But accessibility to the tool is not the same as agency over the system. I can use Claude to write this essay, but I can’t inspect Claude’s training data to see what it learned to write like this. I can query Gemini, but I can’t audit what my queries teach it about patterns of dissent. I can reject OpenAI’s shopping assistant, but I can’t opt out of the infrastructure those consumer interactions are building.
The refusal has to be more fundamental than choosing between platforms. It requires withdrawing from the logic that says technological progress must be synonymous with infrastructure consolidation, that innovation means building tools I can use but systems I cannot influence, that democracy is compatible with decision-making infrastructure I’m not allowed to audit.
But (and this is where the recursion gets dizzying): what does withdrawal even mean when the act of articulating withdrawal happens through the infrastructure?
The system relies on us pretending we don’t see this. We’re supposed to play along. To debate the models. To have preferences between platforms. To feel like our participation matters.
The moment we see the election as theater, the model release as performance art, the “competition” as coordinated infrastructure capture—we introduce a crack in the ideological edifice. Clarity is its own form of rebellion. Even when clarity arrives through the tools of the system it’s critiquing.
This week’s AI releases are business as usual. But they’re business as usual after the moment when we collectively saw that the emperor’s competitive advantage was a rental agreement, not technological magic.
We can’t un-see that. And the system knows we saw it, which is why Trump’s eliminating safety guidelines, why Amazon’s going all-in on government contracts, why every company’s suddenly talking about “responsible AI” while racing to deploy systems that make individual responsibility impossible to locate.
The window’s closing. Or maybe it was never open. Hard to say from inside the machine.

