Python core devs revert incremental GC after pathological real-world slowdowns
Python 3.14 and 3.15 are rolling back the incremental garbage collector after users surfaced cases where seemingly innocuous workloads triggered quadratic-time behavior. The trigger wasn't even cycle-heavy code: the new heuristics simply ran portions of the GC far more often than warranted, and the regression only came to light when a user filed a slowdown issue against the main branch without initially suspecting the GC at all.
Longtime contributor Tim Peters frames the episode as another instance of a recurring pattern in CPython internals: sorting, pymalloc, and dict collision strategies have all been tuned almost entirely against synthetic benchmarks, with real-world pathologies surfacing only as scattered Stack Overflow complaints about “inexplicable slowdowns.” Repeated calls for production timing data — including from academic researchers like Sebastian Wild on powersort — almost never yield responses, leaving maintainers to play whack-a-mole with worst cases as they emerge.
The practical takeaway is an argument for shipping incremental GC as an opt-in toggle in a production build rather than abandoning it outright. Without exposure to real applications, the feedback loop required to validate or kill the design effectively doesn’t exist, and the GC problem space is too messy for predictable worst-case analysis to substitute for field data.
Read the full article at Hacker News. (This is an AI-generated summary; read the original for the full story.)