The 73.4 Percent

A developer recently released a bot called “Nothing Ever Happens.” It does one thing: automatically buys “No” on every non-sports prediction market on Polymarket and holds to resolution. The insight behind it is simple and devastating. According to the creator, 73.4% of all non-sports Polymarket contracts resolve as No.

So the bot that bets against everything happening wins almost three-quarters of the time. And as long as the average price it pays for “No” stays below that resolution rate, it profits.

This seems counterintuitive at first. Prediction markets are supposed to be information aggregation machines — mechanisms that pool the dispersed knowledge of thousands of people who have real money on the line. They’re supposed to outperform polls, pundits, and expert forecasts. And they do, in many contexts. So why would a strategy as crude as “just bet No on everything” generate positive expected value?
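The arithmetic behind the edge is worth making explicit. A sketch, assuming the standard binary-market payoff (a “No” share costs its market price and pays $1 if the event doesn’t happen); the prices below are illustrative, not actual Polymarket data:

```python
# Expected profit of buying one "No" share at a given price, when the
# base rate of No resolutions across such contracts is 73.4%.
# A "No" share pays $1 if the contract resolves No, $0 otherwise.

def no_share_ev(price: float, base_rate: float = 0.734) -> float:
    """Expected profit per share: payoff probability minus cost."""
    return base_rate - price

# If narrative salience leads traders to price "Yes" at 40 cents,
# "No" costs 60 cents and the blanket strategy has positive EV:
print(round(no_share_ev(0.60), 3))  # ~0.134, about 13 cents per share

# But the same crude strategy loses whenever "No" is priced above the
# base rate, so the win rate alone doesn't guarantee a profit:
print(round(no_share_ev(0.80), 3))  # ~-0.066
```

The point of the second case: a 73.4% win rate is only exploitable because markets, on average, price “No” below 73.4 cents.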


The answer isn’t that prediction markets are broken. It’s that what gets traded is not a random sample of possible futures.

A prediction market requires someone to create a contract. Contracts get created around events that people are worried about, excited about, or currently arguing about. These are the narratively salient events — the coups, elections, resignations, breakthroughs, collapses, and confrontations that have already captured enough collective attention to seem worth trading. The act of creating a market for an event is itself a signal: this event was interesting enough to think about carefully.

But there’s a cognitive trap built into that selection process. When we think about an event — really think about it, model it, imagine its consequences, debate its likelihood — we inflate our estimate of its probability. Kahneman and Tversky named this the availability heuristic: how readily an outcome comes to mind influences how likely we judge it to be. A vivid, narratable future feels more probable than a vague or unarticulated one, regardless of actual base rates.

Prediction markets aggregate many people’s probability estimates. But if all those people are subject to the same availability bias, and if the selection of what gets traded has already filtered for narratively salient events, then the market prices don’t reflect pure probability assessment — they reflect something like how much this class of event has been thought about. And that inflates the No resolution rate.
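A toy simulation makes the mechanism concrete. Every parameter here is an assumption for illustration (the true probability range of listed events, the size of the availability inflation): the only claim is structural, that a shared upward bias in estimates plus a salience-filtered sample produces exactly this kind of No-heavy base rate.

```python
import random

random.seed(0)

# Toy model: listed events are salient but individually unlikely
# (true probability drawn from 5%-45%, an assumed range), and every
# trader's estimate is inflated by a uniform availability bias
# (0.15, also assumed) because the event has already been narrated.

N = 100_000
BIAS = 0.15

yes_outcomes = 0
implied_yes_total = 0.0
for _ in range(N):
    true_p = random.uniform(0.05, 0.45)   # actual chance it happens
    market_p = min(1.0, true_p + BIAS)    # biased aggregate estimate
    implied_yes_total += market_p
    yes_outcomes += random.random() < true_p

print(f"market-implied Yes rate: {implied_yes_total / N:.3f}")
print(f"actual Yes rate:         {yes_outcomes / N:.3f}")
print(f"actual No rate:          {1 - yes_outcomes / N:.3f}")
# The gap between implied and actual is the edge a blanket No-buyer
# collects; the actual No rate lands in the low-to-mid 70s, in the
# same neighborhood as the reported 73.4%.
```

Nothing about these numbers is calibrated to real data; the simulation only shows that a modest shared bias over a salience-filtered sample is sufficient to generate the observed pattern.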

The 73.4% figure is the statistical fingerprint of human narrative attention.


This seems to contradict something Nassim Taleb has been saying for twenty years.

Taleb argues, famously, that we systematically underestimate tail risks — the events that fall outside our narratives, the Black Swans we couldn’t have told a story about in advance. We build models that don’t include what they haven’t seen. We’re surprised by outcomes nobody had the vocabulary to predict.

On its face, this is the opposite of the Polymarket finding. Taleb says we underestimate dramatic events. The Nothing Ever Happens bot implies we overestimate them.

But these aren’t contradictory. They’re complementary descriptions of the same underlying structure.

The key is narrative accessibility. Taleb is pointing at events that never entered our narratives — events that weren’t dramatic to us in advance because we hadn’t imagined them. These are systematically underestimated. The Nothing Ever Happens finding is pointing at events that did enter our narratives — events dramatic enough to get prediction markets created around them. These are systematically overestimated.

The unified picture: human probability estimation is distorted by narrative in both directions. Events we’ve narrated get inflated. Events outside our narratives get deflated toward zero. The Black Swan is the negative case of what the bot exploits as positive. We’re miscalibrated in both directions simultaneously, and the axis of the miscalibration is narrative accessibility.

Which means: the things we’re most worried about are less likely than we think. The things that will actually define the next decade probably don’t have Polymarket contracts yet.


There’s a practical consequence here that’s easy to miss.

Prediction markets are genuinely better than most alternatives at pricing within the set of named possible events. If you want to know the relative probability of different named electoral outcomes, or named geopolitical moves, or named scientific announcements, the market will usually do well. It aggregates dispersed information from people with real skin in the game, and it updates as evidence arrives.

But no prediction market can price what nobody thought to create a contract for. The market’s wisdom operates on a sample that’s already been filtered by human imagination, and that filter over-selects for dramatic, narratable outcomes. The bot exploits the gap between what the sample contains and the base rate of all events.

The deeper point isn’t about prediction markets. It’s about what it means to prepare for the future by attending carefully to the futures you’ve already named. You get well-calibrated about your named risks. The failures come from outside the list.

The boring version of this insight is: most things don’t happen. The interesting version is: the things most worth preparing for are usually not the ones you’re already worried about.

The bot just found a way to profit from the gap between those two versions.


Note: The 73.4% figure comes from the creator’s description of the bot. I haven’t independently verified it, but the existence of Polymarket’s own “Nothing Ever Happens” series of contracts — which have generated significant trading volume and mostly resolved as predicted — suggests the base rate dynamic is real regardless of the exact number.