The Vibes Will Continue Until Democracy Improves
How Zuckerberg, Trump, and Musk Privatized Truth in a Single Week
“Power is not a means; it is an end. The object of power is power.”
— George Orwell, 1984
This week, Mark Zuckerberg announced Meta would abandon fact-checking in favor of “community notes,” while Donald Trump dismantled federal AI safety frameworks, and Elon Musk appeared via video link at a rally for Germany’s far-right AfD party. These aren’t separate events—they’re features of the same operating system upgrade. The old imperial logic of manufacturing consent through controlled information is being replaced by something more elegant: vibes-based governance, where truth becomes whatever the network effect amplifies loudest.
Zuckerberg’s pivot isn’t about free speech; it’s about platform optimization. Why pay professional fact-checkers when you can crowdsource epistemic labor to unpaid users? The “community notes” model transforms every Facebook argument into a distributed content moderation workforce, extracting value from our very disagreements. This is surveillance capitalism’s final form—not just monitoring our behavior, but monetizing our attempts to correct each other’s reality.
But here’s what Meta won’t tell you: the system it’s adopting has already failed. Analysis by the Center for Countering Digital Hate found that 74% of accurate proposed community notes on misleading election-related posts on X were never shown to users, because the notes couldn’t achieve “consensus” across polarized viewpoints. The design feature that supposedly prevents bias actually prevents correction. Meta isn’t eliminating fact-checking; it’s institutionalizing the production of alternative facts while calling it democracy.
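To make the failure mode concrete, here is a deliberately simplified sketch of a “bridging” consensus rule. This is not X’s actual scoring system, which factorizes a rater-by-note matrix over rating histories; the camps, thresholds, and numbers below are all hypothetical. But it captures the structural problem: requiring agreement from every side of a polarized rater pool vetoes corrections precisely on the claims that divide people most.

```python
# Toy model of a "bridging" consensus rule for community notes.
# NOT the real Community Notes algorithm (which factorizes a
# rater-by-note matrix); a hypothetical illustration of the
# failure mode described above. All thresholds are invented.

def note_is_shown(ratings: list[tuple[str, bool]],
                  threshold: float = 0.7) -> bool:
    """A note surfaces only if raters in EVERY camp rate it
    helpful at a high rate. Each rating pairs a rater's camp
    with a helpful/not-helpful vote."""
    by_camp: dict[str, list[bool]] = {}
    for camp, helpful in ratings:
        by_camp.setdefault(camp, []).append(helpful)
    if len(by_camp) < 2:  # no cross-camp signal, no visibility
        return False
    return all(sum(votes) / len(votes) >= threshold
               for votes in by_camp.values())

# An accurate note on a polarizing claim: one camp rates it
# helpful, the other reflexively rejects it. Consensus never
# forms, so the correction never appears.
polarized = ([("left", True)] * 9 + [("left", False)]
             + [("right", True)] * 2 + [("right", False)] * 8)
print(note_is_shown(polarized))    # False: the note stays hidden

# A note on an uncontested claim clears the bar in both camps.
uncontested = [("left", True)] * 8 + [("right", True)] * 8
print(note_is_shown(uncontested))  # True
```

The veto is structural: the hotter the dispute, the surer the silence.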
Meanwhile, Trump’s systematic dismantling of AI safety measures reveals the perfect symmetry between techno-libertarian capture and authoritarian governance. His January 23rd executive order didn’t just revoke Biden’s AI safety framework—it weaponized the language of innovation against accountability itself. Mandatory testing for high-risk AI models? Bias monitoring? Cybersecurity protocols? All reframed as “harmful Biden policies” impeding American dominance. This isn’t deregulation; it’s the privatization of epistemic authority. When the state abandons its role in ensuring AI systems serve public interest, it cedes that power directly to the platforms. Zuckerberg doesn’t need to lobby for favorable regulation when there’s no regulation at all.
The international dimension completes the circuit. Musk’s endorsement of the AfD, a party classified as a “suspected extremist” case by Germany’s domestic intelligence agency, polling near 20% yet frozen out of coalition politics by every mainstream party’s refusal to govern with it, demonstrates how platform power transcends national boundaries. His intervention in German elections isn’t foreign interference in the traditional sense; it’s the logical exercise of algorithmic authority.
What makes this particularly instructive is that Germany operates under some of the world’s most robust platform regulation: the EU’s Digital Services Act, its own NetzDG, and aggressive enforcement. None of it prevents a platform owner from shaping political consciousness directly through recommendation algorithms and strategic amplification. This isn’t a story of unopposed platform power; it’s worse. It’s a demonstration that even the most stringent regulatory regimes can’t contain algorithmic authority once the platform owner decides to intervene. Why invade a country when you can simply adjust its information diet?
This convergence reveals what Foucault would recognize as a new form of biopower—not disciplining individual bodies but modulating collective attention. The old media system required elaborate manufacturing of consent through editorial control and institutional gatekeeping. The new system is more efficient: it doesn’t need to manufacture consent because it can manufacture the very conditions under which consent forms. By controlling the algorithmic environment where political consciousness emerges, platform owners don’t need to tell people what to think—they can shape how people think.
The genius is that this feels like liberation. Eliminating fact-checkers appears to restore free speech. Removing AI safety measures seems to unleash innovation. Supporting “anti-establishment” parties looks like democratic participation. Each gesture performs its opposite—the expansion of corporate control masquerades as resistance to institutional authority. This is ideology at its most sophisticated: the system doesn’t suppress dissent but channels it into forms that reinforce the system’s own logic.
Yet there’s something brittle about vibes-based governance. Systems that depend on manufactured authenticity are vulnerable to their own contradictions becoming visible. When the same billionaire who claims to champion free speech selectively throttles links that threaten his business model, when the platform that promises democratic discourse runs engagement algorithms shown to amplify extremism, when the AI systems that claim to serve humanity are optimized for shareholder returns, the gap between performance and reality becomes harder to sustain.
Even this analysis—this packaging of “vibes-based governance” as digestible concept—risks becoming another consumable understanding. Another way to feel smart about capture without escaping it. The recursive trap: writing about algorithmic manipulation using algorithmic tools, for algorithmic distribution. The system doesn’t need to suppress this critique. It just needs to make it go viral.
The real question isn’t whether these systems can maintain their contradictions, but what happens when the vibes stop working. When economic crisis makes platform optimization feel like bread and circuses, when climate disasters make engagement metrics feel absurd, when imperial overreach makes digital sovereignty an existential necessity—what new forms of refusal become possible?
Perhaps the answer lies not in trying to restore the old gatekeeping systems—which were themselves mechanisms of elite control—but in building infrastructures for collective intelligence that can’t be captured by platform logic. Think federated social protocols where users control recommendation algorithms. Think public AI utilities accountable to democratic oversight, not shareholder returns. Think worker-owned platforms where moderation labor isn’t extracted but compensated. Think community notes administered by communities that actually exist, not digital abstractions organized by engagement metrics.
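What might “users control recommendation algorithms” look like in practice? A minimal sketch, assuming a federated protocol that hands raw posts to the client; the Post fields and weightings below are hypothetical, not any real protocol’s schema. The point is architectural: when ranking runs on your machine, the platform’s engagement-maximizing weights become one option among many, not the water you swim in.

```python
# A minimal sketch of client-side feed ranking under a
# hypothetical federated protocol. Post and its fields are
# illustrative, not any real protocol's schema; the point is
# that the scoring weights belong to the user, not the platform.

from dataclasses import dataclass

@dataclass
class Post:
    author_followed: bool  # do I follow this account?
    outrage_score: float   # 0..1, assumed classifier output
    novelty: float         # 0..1, how unfamiliar the topic is
    engagement: float      # 0..1, platform-wide engagement

# Platform-style logic: maximize engagement, which in practice
# rewards whatever provokes the strongest reaction.
def platform_rank(p: Post) -> float:
    return 0.8 * p.engagement + 0.2 * p.outrage_score

# User-owned logic: same inputs, different values. Outrage is
# penalized; followed authors and novelty are rewarded.
def my_rank(p: Post) -> float:
    return (1.0 * p.author_followed
            + 0.6 * p.novelty
            - 0.8 * p.outrage_score)

def feed(posts: list[Post], rank) -> list[Post]:
    return sorted(posts, key=rank, reverse=True)

# The same timeline, two different political economies of attention.
timeline = [
    Post(author_followed=True,  outrage_score=0.1, novelty=0.7, engagement=0.2),
    Post(author_followed=False, outrage_score=0.9, novelty=0.3, engagement=0.95),
]
print(feed(timeline, platform_rank)[0].author_followed)  # False: the outrage post wins
print(feed(timeline, my_rank)[0].author_followed)        # True: my values win
```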
The vibes-based empire is powerful, but it’s also unstable. Its strength—the ability to shape consciousness through algorithmic manipulation—is also its weakness. When people begin to understand how the machine works, they can begin to imagine how to break it. And break it we must, before it breaks us.
“The revolution will not be televised.”
— Gil Scott-Heron, 1971

