
The dashboard pushed every critical CVE to GitHub

Technical analysis of a unified vulnerability dashboard pushed to a public GitHub repo, the scanner token blast radius, and what defenders actually see.

7 min read

A unified vulnerability dashboard was pushed to a public GitHub repository with live critical findings intact. The commit included the rendered HTML, the underlying JSON state files, the scanner configuration, and the API tokens used to query the upstream scanners. The team is in crisis response. The exposure is not the dashboard. The exposure is the operational map of every unpatched critical in the estate, indexed by host, service, and CVE, available to anyone watching public push events on GitHub.

The artefact itself is the problem. A unified dashboard aggregates output from multiple sources - typically a SAST tool, an SCA scanner, a network vulnerability scanner like Tenable or Qualys, a container image scanner like Trivy or Grype, and a cloud posture tool like Prowler or ScoutSuite. The aggregation layer normalises findings against CVE IDs, attaches CVSS v3 vectors, maps assets to internal identifiers, and ranks by exploitability and asset criticality. When that aggregation is committed to source control, the commit captures every column the dashboard rendered. Hostname. Internal IP. Service banner. CVE. CVSS base. Patch status. Owning team. SLA breach state. The data is structured. It is greppable. It is the inventory an attacker would otherwise need months of internal reconnaissance to build.
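
For concreteness, here is a minimal sketch of what one normalised record in the committed JSON state might look like. The field names and values are hypothetical, chosen to match the columns listed above, not taken from the incident artefact:

```python
# Hypothetical shape of one normalised finding in the committed JSON state.
# Field names and values are illustrative, not from the incident artefact.
finding = {
    "asset_id": "srv-payments-03",           # internal asset identifier
    "hostname": "payments-03.corp.example.com",
    "internal_ip": "10.24.7.31",
    "service_banner": "Apache Tomcat/9.0.31",
    "cve": "CVE-2021-44228",                 # normalised CVE ID
    "cvss_v3_base": 10.0,
    "cvss_v3_vector": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:H",
    "patch_status": "open",
    "owning_team": "platform-infra",
    "sla_breached": True,
    "source_scanner": "tenable",
}
```

Every field an attacker needs is one dictionary key away. That is what greppable means in practice.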

The public exposure window starts at the moment of the push and does not close when the repository is made private. GitHub’s push events are mirrored to the public timeline. GHArchive ingests them. Multiple commercial and free services index public commits in near real time - GitGuardian, TruffleHog cloud, the Internet Archive’s GitHub mirror, and a long tail of opportunistic scrapers running against the events API. A repository flipped private at minute fifteen has been cloned at minute two. The Git history persists in any fork created during the window. Force-pushing a clean history does not retract a clone that already happened. This is operationally identical to a credential leak - the secret is burned the moment it appears, and rotation is the only recovery.
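
A sketch of the opportunistic end of that long tail, assuming nothing beyond GitHub's public events API - the endpoint and event shape are real, the matching logic is deliberately naive (real scrapers clone every pushed repository and scan its contents):

```python
import time
import requests

TARGET = "corp.example.com"  # hypothetical string the scraper hunts for

def watch_public_pushes():
    """Poll GitHub's public events feed - the same feed GHArchive ingests."""
    seen = set()
    while True:
        resp = requests.get(
            "https://api.github.com/events",
            headers={"Accept": "application/vnd.github+json"},
        )
        for event in resp.json():
            if event["type"] != "PushEvent" or event["id"] in seen:
                continue
            seen.add(event["id"])
            for commit in event["payload"].get("commits", []):
                if TARGET in commit.get("message", ""):
                    print(event["repo"]["name"], commit["sha"])
        time.sleep(60)  # unauthenticated callers get 60 requests/hour
```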

The scanner tokens are the immediate fire. A unified dashboard authenticates to its sources. Tenable.io uses an access key and secret key pair. Qualys uses Basic auth or a session token. Snyk, Wiz, and Lacework use bearer tokens. Amazon Inspector uses IAM credentials, often a long-lived access key when the dashboard runs outside the AWS organisation. If those credentials were committed alongside the dashboard config - and in the majority of these incidents they are, embedded in a config.yaml, a .env, or a Terraform state file - the attacker now has authenticated read against the scanner platform itself. That is worse than the dashboard. The scanner platform holds the full historical finding set, the asset inventory, the credentials used for authenticated scans, and in some configurations the agent-side keys that allow command execution on scanned hosts. T1552.001, credentials in files. T1078.004, valid cloud accounts.
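
The harvest step either side runs against a clone looks roughly like this. The patterns are loose illustrative approximations of the token formats, not the vendors' published regexes:

```python
import re
from pathlib import Path

# Loose approximations of credential formats - illustrative, not the
# vendors' official patterns.
PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "hex64_api_key":     re.compile(r"\b[0-9a-f]{64}\b"),  # Tenable-style key pair
    "bearer_token":      re.compile(r"(?i)\bbearer\s+[\w.\-]{20,}"),
}
CONFIG_SUFFIXES = {".yaml", ".yml", ".json", ".env", ".tfstate"}

def sweep(repo_root: str) -> None:
    """Scan likely config files in a cloned repo for credential material."""
    for path in Path(repo_root).rglob("*"):
        if not path.is_file():
            continue
        if path.suffix not in CONFIG_SUFFIXES and path.name != ".env":
            continue
        text = path.read_text(errors="ignore")
        for label, pattern in PATTERNS.items():
            for match in pattern.findall(text):
                print(f"{path}: {label}: {match[:12]}...")
```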

The attack path from the artefact is short. The actor clones the repository. They parse the dashboard JSON. They filter for findings where CVSS v3 base is greater than 9.0 and patch status is open. They cross-reference the hostnames against the organisation’s external attack surface - Censys, Shodan, or a direct ASN lookup for the company’s published ranges. Any internet-facing asset with an open critical is now a targeted entry point with a pre-validated exploit primitive. T1595.002, vulnerability scanning, except the scanning was done by the victim and the results were handed over. For internal-only assets, the dashboard is a post-compromise targeting map. Once initial access is established through any other vector - phishing, a compromised contractor, an exposed VPN - the actor pivots directly to the assets with the highest-value bugs without burning detection time on internal scanning.
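
The parse-and-filter step is a dozen lines, which is the point. A sketch against the hypothetical record shape above - the file name is assumed:

```python
import json

# Filter the leaked state for open criticals - the attacker's first pass.
# Uses the hypothetical record shape sketched earlier; file name assumed.
with open("dashboard_state.json") as fh:
    findings = json.load(fh)

targets = [
    rec for rec in findings
    if rec["cvss_v3_base"] > 9.0 and rec["patch_status"] == "open"
]
targets.sort(key=lambda rec: rec["cvss_v3_base"], reverse=True)

for rec in targets:
    print(rec["hostname"], rec["internal_ip"], rec["cve"], rec["cvss_v3_base"])
```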

The threat actor profile that exploits this is not exotic. Initial access brokers monitor GitHub for exactly this class of artefact. The ransomware affiliate ecosystem buys the access. The pattern of harvesting public commits for internal infrastructure data is documented in the Lapsus$ playbook, the ShinyHunters operating pattern, and the Scattered Spider intrusion chain. Lapsus$ specifically demonstrated that internal documentation and configuration data, harvested before the intrusion, collapsed the time from initial access to objective. A unified vuln dashboard is denser than the documentation those groups normally work from. It is the same data a red team operator would build during a two-week engagement, delivered without the engagement.

The scanner token exposure has its own chain. With a Tenable or Qualys read token, the actor pulls the full asset inventory through the API. They enumerate scan policies. They identify which assets are scanned with credentials and which are not - the credentialed scans reveal hosts where the platform holds privileged credentials. If the token has scan-launch permissions, the actor can trigger a custom scan policy against a target the legitimate program does not regularly assess, using the victim’s own scanner as a reconnaissance proxy. The scan traffic originates from a trusted internal source. Network detection treats it as expected. T1590, gather victim network information, executed through the victim’s own tooling.
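
With the leaked key pair, the inventory pull is a few lines. The X-ApiKeys header and the /assets endpoint are Tenable.io's documented API; the keys and field handling here are placeholders:

```python
import requests

ACCESS_KEY = "LEAKED_ACCESS_KEY"  # placeholder
SECRET_KEY = "LEAKED_SECRET_KEY"  # placeholder

# Pull the full asset inventory with a leaked Tenable.io key pair.
resp = requests.get(
    "https://cloud.tenable.com/assets",
    headers={"X-ApiKeys": f"accessKey={ACCESS_KEY};secretKey={SECRET_KEY}"},
)
resp.raise_for_status()

for asset in resp.json().get("assets", []):
    # Hostname, address, and scan source tell the actor what is scanned
    # and whether the platform holds credentials for it.
    print(asset.get("hostname"), asset.get("ipv4"), asset.get("sources"))
```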

The telemetry picture for the leak event itself is asymmetric. The push to GitHub is visible to the organisation only if it operates a GitHub audit log export into a SIEM and has a rule on public repository creation or visibility change. The default configuration in GitHub Enterprise Cloud does emit the relevant events - repo.create, repo.access with the visibility field, git.push - but the rules to alert on a private-to-public flip or a push to a newly public repository are not standard out of the box. Splunk’s GitHub add-on ships the data. Detection content is custom. The window between push and detection is governed by whether someone wrote that rule. For most organisations, the answer is no, and the detection arrives through a third party - a researcher, a customer, or an extortion message.
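
The missing rule is a few lines once the export exists. A sketch over a JSON-lines audit log export, assuming GitHub's standard action names; production content belongs in the SIEM, not a script:

```python
import json
import sys

# Single-rule detection: a repository created public, or flipped from
# private to public. Action names follow GitHub's audit log schema;
# the alert path is illustrative.
def scan_audit_export(path: str) -> None:
    with open(path) as fh:
        for line in fh:
            event = json.loads(line)
            action = event.get("action", "")
            public = event.get("visibility") == "public"
            if public and action in ("repo.create", "repo.access"):
                # repo.access records visibility changes; public here
                # means a private repository was just exposed.
                print("PUBLIC EXPOSURE:", event.get("repo"),
                      event.get("actor"), event.get("@timestamp"),
                      file=sys.stderr)
```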

What fires reliably is downstream. If the scanner tokens are used from a new geolocation or a non-corporate ASN, the scanner platforms themselves emit anomalous authentication events. Tenable.io logs API authentications with source IP. Qualys logs the same. AWS CloudTrail captures GetSessionToken, ListBuckets, and any subsequent enumeration under the leaked IAM key. The IOC is the source IP of the API call diverging from the historical baseline of the CI/CD runner or the dashboard host that normally holds the key. GuardDuty’s UnauthorizedAccess:IAMUser/InstanceCredentialExfiltration.OutsideAWS finding maps directly when an EC2 instance role key is used from outside AWS. The signal exists. The detection requires the token to be used. A patient actor sits on the inventory and waits for an unrelated initial access before correlating.
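
The baseline correlation is mechanical once the runner's ranges are written down. A sketch against CloudTrail records - the CIDRs and key ID are placeholders:

```python
from ipaddress import ip_address, ip_network

# Known ranges for the CI/CD runner and dashboard host - placeholders.
BASELINE = [ip_network("10.24.0.0/16"), ip_network("203.0.113.0/24")]
LEAKED_KEY_ID = "AKIAEXAMPLE123456789"  # placeholder access key ID

def off_baseline(event: dict) -> bool:
    """True if a CloudTrail event shows the leaked key used off-baseline."""
    if event.get("userIdentity", {}).get("accessKeyId") != LEAKED_KEY_ID:
        return False
    try:
        src = ip_address(event["sourceIPAddress"])
    except ValueError:
        return False  # AWS-internal calls carry a service DNS name here
    return not any(src in net for net in BASELINE)
```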

The public exposure surface around the GitHub commit is monitorable but underused. GitGuardian, TruffleHog, and the GitHub secret scanning partner program detect committed secrets and notify the issuer. Tenable, Snyk, AWS, and several other vendors are partners. A leaked AWS access key in a public push triggers an automated quarantine through AWS’s partner integration within minutes - AWS attaches its quarantine policy, AWSCompromisedKeyQuarantineV2, to the owning identity, denying a set of high-risk actions. A Tenable key does not have the same automated revocation pathway across all key types. The window between push and vendor-side quarantine is the live exploitation window, and it is measurable in minutes for AWS, hours to days for most SaaS platforms.

The residual exposure after remediation is the part the crisis response usually under-scopes. Rotating the scanner tokens closes API access. It does not close the dashboard data exposure. The CVE-to-host mapping is now in the wild. Every critical that does not get patched faster than the actor can weaponise remains exploitable with the targeting already done. The remediation backlog is the new attack surface, and its prioritisation needs to invert - internet-facing criticals in the leaked dataset move to immediate, regardless of the SLA they were on yesterday. Internal criticals move up by the same logic, anchored on the assumption that initial access can be acquired by other means. The dashboard told the actor where to go after they get in. The patch sequence has to assume they got the map.

The Git history requires force-pushed deletion and a coordinated request to GitHub support to expunge cached refs, plus an audit of forks created during the exposure window. GitHub’s REST API exposes the forks endpoint. Every fork is a separate clone that survives the source repository’s deletion. Each one is a separate notification target and a separate data exposure to track. The forensic question - who cloned the repository during the window - is partially answerable through the repository’s traffic data, which retains clone counts and referrer data for 14 days. The identities behind those clones are not retained.
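
Both questions - who forked, how many clones - are answerable through documented REST endpoints while the repository still exists. The token and repository names here are placeholders:

```python
import requests

OWNER, REPO = "example-org", "vuln-dashboard"      # placeholders
HEADERS = {
    "Authorization": "Bearer <token>",             # placeholder PAT
    "Accept": "application/vnd.github+json",
}
BASE = f"https://api.github.com/repos/{OWNER}/{REPO}"

# Every fork created during the window is a surviving copy to track.
for fork in requests.get(f"{BASE}/forks", headers=HEADERS).json():
    print("fork:", fork["full_name"], fork["created_at"])

# Clone counts and uniques - retained for 14 days, identities not included.
traffic = requests.get(f"{BASE}/traffic/clones", headers=HEADERS).json()
print("clones:", traffic.get("count"), "uniques:", traffic.get("uniques"))
```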

The technical reality. A unified vulnerability dashboard is a sensitive asset on the same tier as a credential vault. It aggregates the highest-value targeting data the organisation produces. Its storage, access control, and commit boundaries belong in the same control set as secrets management - encrypted at rest, access-logged, never resident in a Git repository regardless of visibility setting. The pre-commit hooks that block AWS keys do not block dashboard JSON. The visibility flip from private to public on a repository holding scanner output is a higher-severity event than most pipelines model. The detection content for it is single-rule simple and absent from most SIEM deployments.
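
Closing the pre-commit gap takes the same mechanism that blocks keys, pointed at the data instead. A sketch of a hook that refuses scanner-shaped JSON - the field heuristic is illustrative and would be tuned to the real schema:

```python
#!/usr/bin/env python3
# Pre-commit hook sketch: refuse to commit JSON that looks like scanner
# output. The field heuristic is illustrative; tune it to the real schema.
import json
import subprocess
import sys

SCANNER_FIELDS = {"cve", "cvss_v3_base", "hostname", "patch_status"}

staged = subprocess.run(
    ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

for name in staged:
    if not name.endswith(".json"):
        continue
    try:
        with open(name) as fh:
            data = json.load(fh)
    except (OSError, json.JSONDecodeError):
        continue
    records = data if isinstance(data, list) else [data]
    if records and isinstance(records[0], dict) \
            and SCANNER_FIELDS <= records[0].keys():
        print(f"blocked: {name} looks like scanner output", file=sys.stderr)
        sys.exit(1)
```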

