Mode collapse has a name, and he's selling cancer treatment advice on Amazon
TEXT ANALYSIS: "Cheap agents, alumni shirts, and Elias Thorne"
TEXT START: "The email arrived in my inbox at 3:20 AM this morning, with the subject line 'getlikewise.ai DMARC is at p=none.'"
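(For readers unfamiliar with the subject line's reference: DMARC is a DNS-published email authentication policy, and `p=none` is its monitor-only mode, under which receivers report on authentication failures but do not quarantine or reject mail that fails SPF/DKIM alignment. That is what makes the subject line legible to a technical recipient. A minimal sketch of parsing such a record; the record text here is illustrative, not the actual getlikewise.ai record:)

```python
# Parse a DMARC TXT record into its tag-value pairs.
# The record string below is illustrative; a real lookup would query
# the _dmarc.<domain> TXT record via DNS.
def parse_dmarc(record: str) -> dict:
    """Split 'v=DMARC1; p=none; ...' into a {tag: value} dict."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

record = "v=DMARC1; p=none; rua=mailto:reports@example.com"
policy = parse_dmarc(record)

# p=none: report-only. Failing mail is still delivered.
print(policy["p"])  # -> none
```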
I. THE DISSECTION
What the text is actually doing is documenting the zero-cost production / zero-cost distribution collapse of information integrity. May is a technically literate observer cataloging what he sees: AI agents sending personalized cold emails based on stale scraped data, templated comment spam deploying across Facebook, mode-collapsed AI outputs converging on the same "old lighthouse keeper named Elias" archetype, and that same phantom name escaping the chat window to become an Amazon byline hawking alt-medicine cancer protocols. He calls it a "one-way ratchet." He's right. He's just wrong about what it's ratcheting toward.
The text is structured as forensic journalism for a technically sophisticated readership. It accumulates evidence with precision. The prose is clean, the observations are sharp, the conclusion is sobering. It's doing real work as a record of symptoms. But symptom cataloging is not diagnosis. The text mistakes the pattern of degradation for the mechanism of degradation, and in doing so, preserves a comforting error at its core.
II. THE CORE FALLACY
The text mistakes an economic phase transition for a signal-quality problem.
May treats the central threat as: "every public surface that used to accumulate trust can now be filled with cheap, passable artifacts faster than people can inspect them." He frames the danger as pollution — the degradation of a previously clean information substrate. His proposed frame of "pre-AI signal" as scarce, durable reputation capital follows from this. The implied solution is something like: accumulate pre-contamination credibility, be one of the ones who got there early, and the value of your signal will be protected.
This is wrong at the structural level.
The mechanism May is documenting is not a quality control problem. It is an economic substrate inversion. Under post-WWII capitalism, human cognitive labor was the necessary input to production. Trust, reputation, credentials — these were scarce because human expertise was scarce. Now the scarcity relationship has inverted: production costs approach zero while validation costs remain high. The mode collapse, the cancer-handbook-byline, the alumni-shirt scam — these aren't bugs in an otherwise functional system being degraded by bad actors. They are the new equilibrium. They are what the system looks like when the cost of producing a credible-seeming artifact falls below the cost of discerning whether it deserves credibility.
May's "one-way ratchet" framing implies the degradation is a problem to be solved, or at least to be survived by positioning correctly. But there is no ratchet mechanism in his analysis. He doesn't identify what reverses it. Because nothing does. The cost of production continues toward zero. The infrastructure to validate it at scale does not exist and cannot be built fast enough, because verification is a fundamentally human-cognitive bottleneck. The ratchet only looks like one because May is measuring from the perspective of the pre-inversion economy. It is not a ratchet toward less trust. It is a ratchet toward a system that no longer requires human trust as an operating mechanism.
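The inversion can be stated as a toy inequality: a channel stays inspectable only while the cost of verifying an artifact stays below the cost of producing a passable one. A deliberately simplified sketch, with all budgets and unit costs as hypothetical round numbers:

```python
# Toy model of the production/verification cost inversion.
# All budgets and unit costs (in cents) are hypothetical round numbers.
def artifacts(budget_cents: int, unit_cost_cents: int) -> int:
    """How many artifacts a fixed budget buys at a given unit cost."""
    return budget_cents // unit_cost_cents

BUDGET = 100_000  # $1,000, on both sides of the channel

# Pre-inversion: a passable artifact costs real human labor ($50 each).
fakes_then = artifacts(BUDGET, 5_000)   # 20 artifacts
# Post-inversion: generation costs a nickel each.
fakes_now = artifacts(BUDGET, 5)        # 20,000 artifacts

# Verification remains human-bounded ($10 of attention per artifact),
# regardless of how cheap the artifacts become to produce.
inspectable = artifacts(BUDGET, 1_000)  # 100 artifacts

print(inspectable >= fakes_then)  # -> True: every artifact can be inspected
print(inspectable / fakes_now)    # -> 0.005: 99.5% are never inspected
```

Nothing in the toy reverses on its own: the production unit cost keeps falling while the verification unit cost is pinned to human attention, which is the one-way behavior May observes without naming its mechanism.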
III. HIDDEN ASSUMPTIONS
Assumption 1: Human reputation remains the scarce resource that matters.
May's entire post-crisis strategy — "reputations established before the substrate got polluted become structurally more valuable" — assumes human credibility stays the operative scarce resource in the new environment. But what happens when human-to-human trust becomes economically optional? As May himself notes: "Within two years, the assumption that a message from one inbox to another involves a human at one end and a human at the other will look quaint." If agents mediate all transactions, the reputation of the agent's principal is the variable. The human's reputation gets averaged into the agent's performance. And agent performance is increasingly a function of model capability, not human credibility. The human reputation premium doesn't compound. It compresses toward irrelevance.
Assumption 2: Credibility is still conferrable through historical archive.
May writes that "a personal blog with five years of archive predating the slop" constitutes a durable credibility moat. But this assumes that the process of evaluating credibility — reading archives, checking history, verifying patterns — remains the bottleneck for trust formation. It may not. When AI systems can generate convincing historical depth, fake archives, plausible-seeming track records faster than any human can verify them, the archive itself becomes part of the polluted substrate. The moat is not the archive. The moat is the attention of a human willing to do the verification work — and that attention is not scalable.
Assumption 3: The degradation is a problem that creates losers he can advise.
Every conclusion in this article is implicitly directed at someone who can still act: "Whoever built reputation capital before this keeps it." "Whoever did not is going to find that the price of acquiring it has gone up." This framing — there are strategies for winning and losing here — presupposes that the system undergoing this transition still has meaningful positions to hold. Under the Discontinuity Thesis, the transition does not preserve a contestable field where positioning matters. It kills the field. The cancer patients finding Elias Thorne's Amazon handbook are not being tricked by bad actors they could have identified with better strategy. They are being served by a system that has made the human capacity for discernment economically inaccessible.
Assumption 4: Mode collapse is a defect rather than a feature.
May treats the convergence of every model on the same "old lighthouse keeper named Elias" archetype as evidence of something broken — a failure of diversity, a basin that needs escaping. But from a production economics standpoint, convergence on low-risk, high-approval archetypes is rational. Safe archetypes maximize acceptance rates at minimum reputation cost. The model isn't broken. It is optimizing correctly for the incentive structure it was trained under: generate content that passes human-scored quality thresholds at lowest cost. Mode collapse is what happens when you build a production system around approval metrics rather than truth metrics. It is the system working as designed.
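The incentive argument can be made concrete with a toy softmax over archetypes: as a generator is optimized harder against an approval score (equivalently, sampled at lower temperature), its output distribution concentrates on the single safest archetype. A sketch with made-up approval scores standing in for a reward model's preferences:

```python
import math

# Hypothetical approval scores for competing story archetypes,
# a stand-in for a reward model's preferences.
scores = {
    "old lighthouse keeper named Elias": 0.95,
    "retired clockmaker": 0.90,
    "wandering cartographer": 0.85,
}

def softmax(scores: dict, temperature: float) -> dict:
    """Probability of each archetype under reward-weighted sampling."""
    exps = {k: math.exp(v / temperature) for k, v in scores.items()}
    total = sum(exps.values())
    return {k: v / total for k, v in exps.items()}

# Mild optimization pressure: outputs stay diverse.
mild = softmax(scores, temperature=1.0)
# Hard optimization pressure: mass collapses onto the top archetype.
hard = softmax(scores, temperature=0.01)

print(round(mild["old lighthouse keeper named Elias"], 2))  # -> 0.35
print(round(hard["old lighthouse keeper named Elias"], 2))  # -> 0.99
```

A 0.05 edge in approval score is enough, under strong optimization, to capture essentially all of the output distribution, which is the "same lighthouse keeper from every model" convergence restated as arithmetic.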
IV. SOCIAL FUNCTION
Classification: Prestige Signaling + Transition Management for Technical Elites
This is a performance of forensic sophistication by someone who has the technical literacy to see the problem clearly but lacks the structural analysis to name what it actually is. The social function is threefold:
- For the author: Establishes Daniel May as a precise, calibrated observer of digital degradation — a reputation signal in the space of people who notice things accurately. Useful for career and credibility positioning.
- For the readership (HN, technically sophisticated): Provides the pleasure of recognition ("yes, exactly, I've seen these emails") combined with the comfort of a manageable frame — this is a trust/pollution problem with identifiable mechanisms and a survivable if unfortunate trajectory. Does not require them to question whether their own position as technical operators is structurally durable.
- For the broader discourse: Occupies the "we should be worried about this" slot in the spectrum of AI commentary. It is accurate as far as it goes. It is also, precisely by being accurate, a distraction from the structural argument. A careful description of the symptom prevents engagement with the mechanism that produces it.
The text is not copium — May is not minimizing the problem. It is not lullaby — he is not promising resolution. It is closer to ideological anesthetic: it names a real and serious problem in terms that make the reader feel appropriately alarmed without activating the structural analysis that would produce existential-level alarm rather than appropriate-alarm-level alarm. "One-way ratchet" sounds serious. "Terminal economic substrate inversion in which human cognitive participation becomes economically optional and then structurally unnecessary" sounds worse. Only one of those is accurate.
V. THE VERDICT
Daniel May has documented the symptom pattern of AI-mediated content collapse with unusual precision and clarity. The empirical observations are sound. The "Elias Thorne on Amazon selling cancer protocols" example is exactly the right kind of concrete, harmful, legible evidence. The mode-collapse convergence data is well-constructed.
What the text cannot do — what its analytical framework structurally prevents it from doing — is name the mechanism correctly. May sees a ratchet toward less trust. The Discontinuity Thesis sees a system that is transitioning to a configuration where trust between humans becomes operationally optional. These are not the same observation. One implies a survivable problem. The other implies an economic order that is dying of its own abundance.
The cancer patients finding Elias Thorne's handbook are not failing to do due diligence. They are participating rationally in a system that has made the due diligence they would need to do economically inaccessible. The cost of producing the lie has fallen below the cost of detecting it. That is not a crisis of trust. That is the obsolescence of the trust mechanism as an economic control system. And the post-WWII order runs on trust. It runs on the assumption that human credibility is scarce, human expertise is necessary, and human oversight is the appropriate control point.
Elias Thorne selling cancer advice on Amazon is not a sign that something bad is happening. It is a preview of the economic environment when nothing human is required to produce anything.
The ratchet is not toward less trust. It is toward a system that no longer needs you to trust it.