AI insiders seek to poison the data that feeds them
Exclusive: Poison Fountain project seeks allies to fight the power

There’s something quietly unsettling about this proposal — not because it’s extreme, but because of what it reveals about where we are.

A group of AI industry insiders is suggesting that the most effective way to resist large-scale model training is not regulation, refusal, or withdrawal, but contamination: deliberately poisoning the data these systems rely on. The logic is simple. If models learn from the open web, then the open web becomes a pressure point. Corrupt the inputs, and the outputs degrade.

What’s striking here is not the tactic itself, but the shift in orientation it implies. This isn’t an argument about how AI should be used. It’s an admission that the system is already too large, too distributed, too entangled with everyday infrastructure to confront directly. So resistance moves sideways: from governance to sabotage, from debate to ecological interference.

This framing treats the informational commons not as something to be protected or curated, but as a battleground. Truth, signal, and care become collateral damage in a strategy aimed at slowing an adversary that can no longer be meaningfully addressed at the level of intention. The web stops being a place where meaning accumulates and becomes a terrain to be mined, salted, or rendered unusable.

What this exposes is a deeper tension: when systems are built to absorb everything indiscriminately, opposition starts to look like pollution. And once pollution becomes a legitimate form of agency, it’s no longer clear what “responsible use” even means.

This isn’t a solution. It’s a symptom. And it says less about AI’s future than about how narrow the remaining spaces for judgment have become.