Why 'all exponentials become sigmoids' is a weak argument against AI scaling
Scott Alexander pushes back on the popular rebuttal that AI capability curves must inevitably flatten into S-shapes. The claim is trivially true, since no process grows forever, but it collapses into hand-waving when the speaker can't say when the flattening occurs. He catalogs cases where forecasters repeatedly called the top too early: UN fertility projections that keep predicting floors that never arrive, International Energy Agency solar forecasts that have undershot actual deployment year after year, and a Wharton paper that fit a sigmoid to the METR AI task-length benchmark right before the next model blew past the predicted ceiling.
The constructive argument is about defaults under uncertainty. If you genuinely understand the generating process — replication rate of a pathogen, thermodynamic limits of a ramjet — you can predict where the curve bends. If you don’t, Lindy’s Law is the honest baseline: a trend’s expected remaining lifetime is roughly its age. Applied to AI, scaling-era progress has run roughly seven years, so the null hypothesis is several more years of similar gains, with only about a 22% chance of stalling within two years under a Pareto assumption.
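The ~22% figure falls out of a one-line survival calculation. The article doesn't show its working, but under a Pareto distribution with shape α = 1 (the standard formalization of Lindy's Law), a sketch looks like this:

```python
# Reconstruction of the ~22% stall probability under a Lindy/Pareto prior.
# Assumption (mine, not the article's): Pareto type-I survival with shape
# alpha = 1, i.e. P(T > t | T > a) = (a / t) ** alpha for t >= a.

def p_stall_within(age_years: float, horizon_years: float, alpha: float = 1.0) -> float:
    """Probability a trend of the given age ends within `horizon_years` more."""
    survive = (age_years / (age_years + horizon_years)) ** alpha
    return 1.0 - survive

# Scaling-era AI progress is roughly 7 years old; chance it stalls within 2 more:
print(round(p_stall_within(7, 2), 3))  # -> 0.222, i.e. about 22%
```

With α = 1 the conditional survival probability is just age / (age + horizon) = 7/9, leaving a 2/9 ≈ 22% chance of a stall within two years, matching the figure quoted above.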
The upshot is a rhetorical demand: anyone asserting AI capabilities will plateau before reaching dangerous thresholds owes either an explicit mechanistic model (data center buildout, algorithmic progress, scaling laws) that engages with existing forecasts like the AI Futures Timeline Model, or a black-box argument that beats Lindy. Gesturing at the shape of sigmoids isn’t either of those.
Read the full article: continue reading at Hacker News →

This is an AI-generated summary. Read the original for the full story.