RANDOM CHAOS

Ransomware ships a wiper

A ransomware strain destroys files above 128KB, breaking its own decryption model. What the failure exposes about reversibility assumptions.

7 min read

Opening position

A ransomware operation shipped code that destroys any file larger than 128KB. Once destroyed, those files cannot be decrypted. The attacker collects payment, the victim pays, and the data does not return because the data no longer exists in a recoverable form. Security researchers have suggested the responsible code was partly produced with AI assistance or assembled from an older codebase. The specific authorship pipeline is not confirmed.

This is not a story about attacker sophistication. It is a story about an offensive tool that fails its own success condition. The extortion model depends on reversibility. If the payload cannot reverse what it did, the threat actor has shipped a wiper and charged for decryption. The economic model and the technical behaviour are no longer aligned.

For defenders, the operational meaning is narrow and specific. A paid ransom against this strain returns nothing for any file above the 128KB threshold. Backup posture, not negotiation posture, determines outcome. The attacker’s brand promise is broken at the binary level, and no key delivery resolves it.

What actually failed

The payload executed against victim files and produced an irreversible state for any file larger than 128KB. The observable behaviour is destruction, not encryption, above that size boundary. Below the boundary, the behaviour expected of ransomware may apply. Above it, the file is gone in a way the attacker’s own decryptor cannot undo. The mechanism by which destruction occurs above the threshold is not described in the available facts and is not confirmed.

From a victim’s perspective, the failure surface is the entire population of files above 128KB. Document archives, databases, virtual disks, media, backups stored as single artefacts, and most operational data sit above this line. The 128KB threshold is low enough that, in a real environment, the majority of valuable files fall into the destroyed bucket. The minority that remain encrypted-and-recoverable are unlikely to constitute a meaningful restoration path on their own.

The attacker-side failure is equally direct. The decryption tool, whatever its quality, has no input to operate on for the destroyed set. Payment does not produce recovery because there is no ciphertext left to decrypt. The ransom transaction completes, the keys are delivered, and the victim still has nothing above 128KB. The control the attacker thought they held, leverage through reversible denial, did not exist at execution time.

Why it failed

Researchers have suggested two non-exclusive origins for the defective code: AI-assisted generation, or reuse of an older codebase. Neither is confirmed as the specific cause of the 128KB destruction behaviour. What is confirmed is that the code shipped, ran in the wild, and produced the destructive outcome on victim systems. The defect was not caught before deployment by whoever assembled and released the build.

The failure is a quality assurance failure at the operator level. Ransomware operators who depend on decryption as the product have a direct incentive to test the round trip on representative file sizes before campaign launch. That step either was not performed, was performed on samples that did not exceed 128KB, or was performed and ignored. The available facts do not specify which. What the facts do specify is the outcome: the round trip does not work above the threshold, and the code reached victims in that state.
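The round trip that was evidently never verified is a small check to write. A minimal sketch of what such a test could look like, using a placeholder XOR transform standing in for real encryption; every name and the choice of test sizes here are illustrative assumptions, not details of the actual payload:

```python
import hashlib
import os

def xor_transform(data: bytes, key: bytes) -> bytes:
    # Placeholder reversible transform standing in for real encryption.
    # XOR with a repeating key is its own inverse, so applying it twice
    # must return the original bytes.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def round_trip_ok(data: bytes, key: bytes) -> bool:
    # The property the extortion model depends on: decrypt(encrypt(x)) == x.
    before = hashlib.sha256(data).hexdigest()
    restored = xor_transform(xor_transform(data, key), key)
    return hashlib.sha256(restored).hexdigest() == before

# Test sizes must straddle any suspected boundary, here 128KB (131072 bytes).
key = os.urandom(32)
for size in (1024, 131071, 131072, 131073, 1_048_576):
    sample = os.urandom(size)
    assert round_trip_ok(sample, key), f"round trip failed at {size} bytes"
```

The point of the size list is the boundary: samples that never exceed 128KB would pass this harness and still ship the defect, which is one of the scenarios the available facts leave open.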

The contributing-cause hypotheses, AI-generated code or reused legacy code, point at the same operational gap. Code whose origin is not fully understood by the operator who ships it is code whose failure modes are not fully understood either. A boundary condition at 128KB is the kind of artefact that a buffer size, an integer type, a chunking constant, or an inherited assumption from older code can produce. The specific source of the constant is not confirmed. The fact that it survived to production execution is.
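To make the boundary-condition point concrete, here is a hypothetical sketch, not the confirmed mechanism, of how a single chunking constant can silently destroy everything above 128KB while behaving correctly below it:

```python
BUF = 128 * 1024  # a chunking constant like this can silently become a behavioural boundary

def process_file_broken(src: bytes) -> bytes:
    # Hypothetical defect: the code fills one fixed buffer but never loops,
    # so anything past the first 128KB chunk is dropped on write.
    return src[:BUF]

def process_file_fixed(src: bytes) -> bytes:
    # Correct chunked processing walks the whole input.
    out = bytearray()
    for i in range(0, len(src), BUF):
        out += src[i:i + BUF]
    return bytes(out)

small = b"x" * (64 * 1024)
large = b"x" * (300 * 1024)
assert process_file_broken(small) == small   # below the boundary: looks fine
assert process_file_broken(large) != large   # above it: silent truncation
assert process_file_fixed(large) == large
```

A defect of this shape passes every test run on small files, which is exactly why it can survive to production unnoticed by an operator who never inspected the code.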

What this exposes

The pattern is the divergence between what an operator believes their code does and what the code actually does at runtime. The 128KB boundary is not a strategic choice. It is an artefact of something the person shipping the build did not inspect closely enough to identify before execution against victim data. The mechanism is unverified code reaching production and being trusted to perform a function whose success the operator did not directly confirm. The result is an offensive tool that behaves as a wiper above a threshold the operator did not know existed in their own payload.

The same mechanism applies wherever code of unclear provenance is shipped without round-trip validation against the conditions it will meet in production. A ransomware build assembled from legacy fragments or generated output, then released without a destructive-and-reversible test on files spanning the size distribution it will encounter, is the offensive analogue of any deployment pipeline that ships a binary without verifying its core function under realistic input. The attacker did not test recovery. The defect that resulted is a boundary condition silently corrupting data above a fixed size. The mechanism is the absence of validation on a function the operator’s economic model fully depended on.

What this exposes about the broader ransomware ecosystem is narrow. Operators who outsource code generation, whether to AI tooling or to inherited codebases they do not fully read, take on the failure modes of that code without owning its internals. The 128KB threshold is one such failure mode made visible by victim outcome. Other thresholds, other corruption conditions, and other silent failures in payloads of similar provenance are not confirmed but are consistent with the same mechanism: code whose behaviour at the edges was never verified by the party shipping it. The pattern is operator trust in code the operator did not validate.

Operator position

For any organisation hit by this strain, the operational position is fixed. Files above 128KB are not recoverable through payment. The decryptor cannot return what the payload destroyed. Negotiation does not change the technical state of the data. Restoration depends on backups that exist outside the affected systems, were not in scope of the payload, and can be validated as intact before restore. If those backups do not exist, the data above the threshold is gone. This is not a posture statement. It is the binary outcome of the payload as it ran.

The broader operator position is that ransomware reversibility cannot be assumed at the point of incident. The control that determines outcome is offline, segmented, integrity-validated backup. Negotiation, key escrow services, and decryptor delivery are downstream of whether the payload preserved decryptable ciphertext in the first place. This strain demonstrates that the answer to that question is not always yes. Treating ransom payment as a recovery option presumes a property of the attacker’s code that the attacker themselves did not verify. That presumption is not safe.

The identity-and-trust framing applies directly. The attacker’s payload is untrusted code executing with whatever privileges the initial access vector granted it. The boundary that should have prevented its execution at scale is the same boundary that should prevent any unverified code from acting on production data: identity-bound execution control, segmentation between data tiers, and recoverable state held outside the execution context of any single compromised identity. Backups inside the blast radius of the compromised identity are not backups for this purpose. They are additional victim files. If a single compromised identity can reach both production data and its recovery copies, the recovery copies are part of the production data set and should be treated as such in threat modelling.

Hard closing truth

A ransomware operator shipped code that destroyed the asset they were extorting payment to return. The economic model required reversibility. The code did not provide it. The operator did not detect this before launch. Whether the cause was AI-assisted code, legacy reuse, or something else is not confirmed. What is confirmed is that the payload ran, the data above 128KB was destroyed, and payment cannot undo that outcome.

The defender takeaway is not that ransomware is becoming less dangerous. It is that the assumption of reversibility, which has shaped a decade of incident response playbooks built around payment as a fallback, is not a property of the payload. It is a property of specific payloads that were tested by their operators and shipped in working condition. This strain is not in that category. Future strains, built from the same kinds of unverified code pipelines, may not be either. Recovery posture must assume the payload is a wiper until a working decryptor is demonstrated against representative samples.

Identity is the boundary. Backups outside that boundary, validated and tested, are the control. Everything else, including the attacker’s promise of decryption, is noise. The strain in question made that explicit at the binary level. The operator position is to act as if every ransomware payload is potentially destructive above some unknown threshold, and to design recovery accordingly. If the system allows unverified code to reach production data and its backups simultaneously, that outcome will continue to occur. The 128KB boundary is one instance. The mechanism is general.

