RC RANDOM CHAOS

Forage simulation maps your broken controls

The Mastercard Forage cybersecurity simulation surfaces the same enforcement drift red teamers exploit in mature security programs. Operator breakdown.


Opening position

The Mastercard cybersecurity job simulation on Forage is marketed as a career-exploration exercise. That framing is wrong. Its tasks - phishing triage, vulnerability identification, and security awareness program design - are compressed versions of the exact work that production SOC, GRC, and awareness teams fail at every week. I have run these scenarios on live engagements, from the attacker side. The simulation is not a junior warm-up. It is a rehearsal of the attack surface that most defenders treat as solved and most red teams still clear in under a day.

What this program actually does is expose the gap between how defenders describe their work and how that work behaves under contact. Someone new to the field walks through it and thinks they are learning the basics. Someone who runs red team operations reads the same material and sees a map of the controls that routinely break in production. The scenarios are simplified, but the failure modes they model are not. They are the failure modes that show up in post-incident reports.

Treat the simulation as a diagnostic. Every step a learner struggles with is a step an operational defender also struggles with when the scenario is real, larger, and noisier. The value is not in completing it. The value is in noticing which assumptions it forces you to surface, and whether the controls in your own environment would hold up against the same conditions. That is the frame I am using for the rest of this breakdown.

What actually failed

The phishing analysis task asks the participant to examine a suspicious email and identify indicators. In live environments, this is the step where defence breaks first. Defenders treat phishing triage as URL inspection and header review. On engagements, I have watched triage teams clear obvious lures and miss payloads delivered through legitimate business channels, trusted third-party domains, and previously verified senders. The simulation presents a clean specimen. Production inboxes present a volume problem. The behaviour that fails is not detection of the single email. It is detection at scale, with noise, with time pressure, and with a reputation signal already weighted toward trust.
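The reputation-weighting failure can be sketched in a few lines. This is an illustrative model, not any production filter's logic; the indicator names, reputation values, and threshold are all invented for the example. The structural point is what matters: once sender trust discounts the raw signal, the same indicators that flag a bulk lure silently clear a targeted one.

```python
# Hypothetical triage scorer: raw indicator hits discounted by sender trust.
# All indicator names, scores, and the threshold are invented for illustration.
SUSPICIOUS_INDICATORS = {"credential_form", "urgency_language", "lookalike_url"}

def triage_score(indicators: set[str], sender_reputation: float) -> float:
    """Score an email: indicator hits scaled down by sender trust (0..1)."""
    raw = len(indicators & SUSPICIOUS_INDICATORS)
    return raw * (1.0 - sender_reputation)

ALERT_THRESHOLD = 1.5

# Obvious lure from an unknown sender: flagged.
bulk_lure = triage_score({"urgency_language", "lookalike_url"}, sender_reputation=0.1)

# Identical indicators from a long-trusted vendor domain: cleared.
targeted_lure = triage_score({"urgency_language", "lookalike_url"}, sender_reputation=0.8)

print(bulk_lure >= ALERT_THRESHOLD)      # True  - bulk mail is caught
print(targeted_lure >= ALERT_THRESHOLD)  # False - the trusted-channel lure is not
```

Same payload, same indicators, different reputation weight. The model is trivial, which is the point: the bias is in the weighting, not in the detection logic.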

The vulnerability identification task asks the participant to review systems and flag weaknesses. The failure mode here is not missing a vulnerability. It is ranking them by CVSS score and chasing the highest number. On red team operations, initial access almost never comes from the top-ranked finding. It comes from a chained path through medium-severity issues that individually look acceptable on a dashboard. A stale service account with no MFA, combined with an internal API that trusts any request from a corporate IP, combined with a forgotten VPN split-tunnel rule, produces a full domain takeover. None of the three components would page anyone on their own. The simulation forces a participant to look at systems in isolation. Real defenders do the same thing and call it a program.
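The chained path reads naturally as a graph problem. Here is a minimal sketch under invented data: the nodes, edge labels, and CVSS figures are hypothetical, modelled on the three findings above. Breadth-first search over the access graph surfaces a takeover chain that no per-finding dashboard would show.

```python
from collections import deque

# Hypothetical access graph. Each edge is one medium-severity finding;
# none would page anyone alone, but together they chain to takeover.
EDGES = {
    "external":     [("vpn_segment", "split-tunnel rule left open")],
    "vpn_segment":  [("internal_api", "API trusts any corporate IP")],
    "internal_api": [("svc_account", "stale service account, no MFA")],
    "svc_account":  [("domain_admin", "excess privilege on legacy group")],
}

def attack_paths(start: str, goal: str):
    """Breadth-first enumeration of finding chains from start to goal."""
    queue = deque([(start, [])])
    while queue:
        node, path = queue.popleft()
        if node == goal:
            yield path
            continue
        for nxt, finding in EDGES.get(node, []):
            queue.append((nxt, path + [finding]))

for chain in attack_paths("external", "domain_admin"):
    print(" -> ".join(chain))
```

A real attack-path tool works over a far larger graph, but the prioritisation question is the same: rank chains that reach privilege, not findings that score high in isolation.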

The security awareness training design task asks the participant to build a program to reduce human risk. This is where most real organisations produce their weakest control. Programs get designed for completion rate and compliance reporting, not for behaviour change under adversarial conditions. On engagements, I have bypassed mature awareness programs by targeting the exact behaviours those programs rewarded. Participants were trained to click a report-phish button, so I sent lures that rewarded clicking the button and delivered the payload through the reporting workflow itself. The simulation rewards a learner for producing a training plan. It does not test whether that plan would survive a determined operator. Neither do most real programs.

Why it failed

The underlying failure is a category error about where the security boundary lives. Defenders continue to treat the perimeter, the endpoint, or the email gateway as the boundary. The boundary is identity. Every task in the simulation collapses into an identity and access problem once you pull on it. The phishing email is a credential harvesting attempt or a session hijack attempt. The vulnerability is a path to an identity with more privilege than it should hold. The awareness training is a behavioural control on how identities are used. When defenders misplace the boundary, they buy tooling for the wrong layer and measure effectiveness against the wrong signal.

The second failure is that controls are treated as deployed rather than enforced. A phishing filter is counted as a control the moment it is licensed. An MFA policy is counted as a control the moment it appears in the identity provider configuration. Neither statement describes whether the control actually blocks the behaviour it was deployed to block. On engagements, I regularly find MFA enforced on the login page and bypassed by legacy authentication protocols still enabled on the same tenant. I find email filters configured and bypassed by internal-to-internal routing because the filter only inspects external mail. The control exists. The enforcement does not. The simulation does not catch this distinction because it cannot. Neither do most defenders, because their own reporting hides it.
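The deployed-versus-enforced distinction is easy to encode once the two states are separated. This is a conceptual sketch, not a real tenant query; the flags stand in for whatever your environment actually exposes, and both scenarios mirror the bypasses described above. A control counts as enforced only when a probe confirms the behaviour is blocked.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Control:
    name: str
    deployed: bool                          # what the dashboard reports
    enforcement_probe: Callable[[], bool]   # does it actually block the behaviour?

# Hypothetical tenant state: MFA appears in the IdP configuration,
# but legacy authentication protocols remain enabled on the same tenant.
legacy_auth_enabled = True
mfa = Control("MFA policy", deployed=True,
              enforcement_probe=lambda: not legacy_auth_enabled)

# Hypothetical filter state: external mail is inspected,
# internal-to-internal routing is not.
filter_inspects_internal = False
mail_filter = Control("Phishing filter", deployed=True,
                      enforcement_probe=lambda: filter_inspects_internal)

for c in (mfa, mail_filter):
    status = "enforced" if c.enforcement_probe() else "deployed only"
    print(f"{c.name}: {status}")
```

Both controls report deployed; neither reports enforced. Reporting built on the `deployed` field alone can never see the difference, which is exactly how the gap stays hidden.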

The third failure is that trust is granted once and never re-evaluated. A vendor integration gets approved during procurement and retains the same access level five years later. A service account gets provisioned for a project that has since been decommissioned and still holds domain rights. A user gets a temporary elevation for a migration and keeps it. Every one of these is a path I have used. Trust relationships that are not continuously validated decay into liabilities. The simulation presents static systems. Production systems drift. The failure is not a missed configuration. The failure is the absence of a process that re-validates trust on a clock the defender controls rather than a clock the attacker controls.
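Re-validation on a defender-controlled clock can be as simple as a sweep that flags any grant outside its window. A sketch under invented data: the 90-day window, identities, and dates are illustrative, modelled on the three decay paths above.

```python
from datetime import date, timedelta

# Hypothetical grant records mirroring the three decay paths in the text:
# an old vendor integration, a service account for a dead project,
# and a "temporary" elevation that was never rolled back.
GRANTS = [
    {"identity": "vendor_integration", "last_validated": date(2020, 3, 1)},
    {"identity": "svc_migration",      "last_validated": date(2022, 6, 1)},
    {"identity": "user_temp_elev",     "last_validated": date(2025, 1, 10)},
]

REVALIDATION_WINDOW = timedelta(days=90)  # the clock the defender controls

def stale_grants(grants, today: date) -> list[str]:
    """Return every grant whose trust has not been re-validated inside the window."""
    return [g["identity"] for g in grants
            if today - g["last_validated"] > REVALIDATION_WINDOW]

print(stale_grants(GRANTS, today=date(2025, 6, 1)))
```

The check is deliberately dumb. The hard part is not the code; it is committing to a process that runs it on schedule and revokes what it flags.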

What this exposes

The mechanism underneath all three tasks is control substitution. A control is designed to enforce a specific boundary. Over time, its presence becomes a substitute for its function. The licence is renewed. The dashboard is green. The audit passes. None of these states describe whether the control stops the behaviour it was bought to stop. The simulation surfaces this pattern in compressed form. In production, the same substitution happens across years, and the drift is invisible until an adversary tests it.

Each of the three tasks maps to the same substitution. In phishing triage, the control is the filter plus the trained user. The substitute is completion metrics and click-through rates. Neither metric describes whether a targeted lure reaches an inbox or whether a reporting workflow contains exploitable behaviour. In vulnerability identification, the control is supposed to be reduction of exploitable paths. The substitute is patch count and CVSS-weighted remediation backlog. Neither metric describes whether a chain of medium-severity findings produces privileged access. In awareness program design, the control is behaviour under adversarial pressure. The substitute is training completion and attestation. Neither metric describes whether a user resists a lure that matches their role, their vendor relationships, or their calendar.

Drift compounds because the substitute metric is easier to produce than the underlying control. A team that reports on filter licence renewals produces a clean quarterly report. A team that reports on whether the filter stops targeted lures produces a report full of gaps. The first report is rewarded. The second is defunded. Over several cycles, the organisation stops measuring enforcement entirely. The simulation reproduces this by rewarding the production of artifacts: an indicator list, a vulnerability table, a training plan. A participant who delivers these artifacts is marked complete. An organisation that delivers these artifacts is marked compliant. Neither state confirms that a boundary holds.

Parallel pattern

The same mechanism appears across the reporting surfaces that boards and executives consume. A quarterly security update typically presents the artifact layer: number of phishing emails blocked, number of vulnerabilities patched, number of employees trained. These figures describe activity. They do not describe exposure. An organisation can post strong activity numbers while its enforcement posture degrades, because activity and enforcement are not the same variable. The simulation is a small-scale model of this reporting structure. A learner’s completion state mirrors an organisation’s compliance state. Both are verified at the artifact layer. Neither is verified at the boundary.

The pattern extends to third-party risk management. Vendor assessments collect SOC 2 reports, questionnaire responses, and control attestations. These are artifacts. They describe what the vendor claims about its controls at a point in time. They do not describe whether those controls hold under adversary pressure in the months between assessments. On engagements, I have moved from a trusted vendor into the target environment through integration points that both sides marked as assessed. The assessment was valid. The enforcement was not continuous. The same substitution that marks the simulation as complete marks the vendor relationship as approved.

The pattern also extends to identity lifecycle management. Joiner, mover, and leaver processes are measured by ticket closure rates and provisioning SLAs. These metrics describe throughput. They do not describe whether stale access was actually revoked, whether temporary elevation was actually rolled back, or whether service accounts retain permissions aligned to current function. The simulation touches this in the vulnerability task by presenting systems in isolation. Real identity drift is invisible to any review that does not continuously validate the relationship between identity, access, and current need. Organisations that do not run that validation are running the simulation’s failure mode at production scale.
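The gap between throughput and enforcement is visible the moment both are computed from the same records. A sketch with invented leaver data: every ticket closed on time, a third of the access actually revoked.

```python
# Hypothetical leaver records: ticket closure (the throughput metric)
# and actual revocation (the enforcement state) tracked separately.
LEAVERS = [
    {"user": "a.khan",  "ticket_closed": True, "access_revoked": True},
    {"user": "j.doe",   "ticket_closed": True, "access_revoked": False},
    {"user": "svc_old", "ticket_closed": True, "access_revoked": False},
]

sla_compliance = sum(l["ticket_closed"] for l in LEAVERS) / len(LEAVERS)
enforcement = sum(l["access_revoked"] for l in LEAVERS) / len(LEAVERS)

print(f"Ticket SLA compliance:   {sla_compliance:.0%}")
print(f"Access actually revoked: {enforcement:.0%}")
```

A program reporting only the first number is green. A program reporting both has two open incidents. The data required to compute the second number usually already exists; it is simply never joined to the first.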

Operator position

Treat the simulation as a mirror of your own program. If a learner can complete it by producing artifacts without demonstrating enforcement, your program has the same shape. If the tasks feel like a warm-up, the assumptions you carry into them are the same assumptions an operator will exploit when the scenario is real. The exercise is not beneath a senior practitioner. It is a compressed version of the work that senior practitioners repeatedly fail to enforce at scale. Use it to test your own framing, not to validate a junior candidate’s résumé.

What must now be true in a program that holds: Identity is treated as the boundary and validated continuously rather than at provisioning. Controls are measured by observed enforcement against a defined adversary behaviour, not by licence status or dashboard colour. Phishing defence is measured against targeted lures delivered through trusted channels, not generic bulk mail. Vulnerability management prioritises chains that produce privileged access, not single findings ranked by score. Awareness programs are tested against lures that exploit the behaviours the program itself rewards. Any program that cannot produce evidence against each of these conditions is not a program. It is an artifact.

The simulation does not teach cybersecurity. It reveals the gap between what a defender claims and what a defender enforces. The gap is present in junior participants because they have not yet built the claim. The gap is present in mature organisations because the claim has calcified into reporting and the enforcement has drifted underneath it. Pick one control from each of the three task categories in your own environment. Ask whether its enforcement is observed or assumed. If the answer is assumed, it is not a control. Name it accordingly and change the reporting to match. Anything else is theatre.


#ad Contains an affiliate link.
