## Where Part 1 Left Off
Part 1 demonstrated that WAVESHAPER.V2 C2 traffic is detectable using manual Wireshark filters — four filters applied in sequence, each confirming a different behavioral indicator. The detection worked. The problem is that manual filter application doesn't scale. Every analyst has to know which filters to run, in what order, and what to look for in the results. One missed filter means a missed indicator.
The natural next step was automation. waveshaper_triage.py already existed as a standalone Python script that read the pcap and ran all eight behavioral checks automatically, outputting a structured threat report. Part 2 wraps that script in a Docker container with a Flask web interface — so it runs identically on any system and is usable from a browser without touching the command line. The full script is available on GitHub.
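The script itself isn't reproduced here, but the pattern it implies — a list of independent checks, each returning a finding, rolled up into a single severity — can be sketched in a few lines. Everything below is illustrative: `Finding`, `run_triage`, and the packet-dict shape are my stand-ins, not names from the actual script.

```python
# Hypothetical sketch of a check-registry pattern for a triage script.
# Names and structure are illustrative, not taken from waveshaper_triage.py.
from dataclasses import dataclass

@dataclass
class Finding:
    name: str
    severity: str   # "INFO", "HIGH", or "CRITICAL"
    detail: str

def check_c2_ip(packets):
    # Signature check: any packet destined for the documented C2 address
    hits = [p for p in packets if p.get("dst_ip") == "142.11.206.73"]
    if hits:
        return Finding("C2 IP contact", "CRITICAL",
                       f"{len(hits)} packets to known C2 address")
    return None

CHECKS = [check_c2_ip]  # the real script chains eight such checks

def run_triage(packets):
    findings = [f for chk in CHECKS if (f := chk(packets))]
    worst = max((f.severity for f in findings), default="INFO",
                key=["INFO", "HIGH", "CRITICAL"].index)
    return worst, findings
```

The value of the registry shape is that adding a ninth check is one function plus one list entry, with the severity roll-up unchanged.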
Synthetic dataset disclosure: The training pcap and detection script were reconstructed from publicly documented WAVESHAPER.V2 behavioral indicators — the 60-second beacon interval, IE8/WinXP User-Agent string, C2 IP at 142.11.206.73:8000, and known URI paths — sourced from GTIG's March 2026 public disclosure. This is not a capture from a real incident. It is a training dataset built to demonstrate what documented attack behavior looks like in a controlled environment.
## The Dockerfile and the Build
The initial Dockerfile is four lines — a minimal Python 3.12 base image, the working directory, the script copied in, and the entrypoint. No dependencies beyond the standard library, so no pip install step needed at this stage.
```dockerfile
# minimal Python 3.12 base image, no extras
FROM python:3.12-slim
# working directory inside the container
WORKDIR /app
# copy the script into the image
COPY waveshaper_triage.py .
ENTRYPOINT ["python3", "waveshaper_triage.py"]
```
```bash
docker build -t waveshaper-triage .

docker run --rm \
  -v ~/cybersecurity-portfolio/labs:/data \
  waveshaper-triage \
  /data/waveshaper_v2_training.pcap
```
```
[+] Building 2.8s (9/9) FINISHED
 => [1/3] FROM docker.io/library/python:3.12-slim    1.7s
 => [2/3] WORKDIR /app                               0.1s
 => [3/3] COPY waveshaper_triage.py .                0.0s
 => naming to docker.io/library/waveshaper-triage:latest

Severity : CRITICAL — WAVESHAPER.V2 INFECTION CONFIRMED
Action   : Active C2 beaconing detected. Isolate host immediately.
```
The -v flag mounts the local labs folder into the container as /data, so the pcap is readable without being copied into the image. The --rm flag deletes the container after it exits. The full CRITICAL-severity report came back identical to running the script directly.
The next step was adding a Flask endpoint so the tool could be used from a browser. Flask was added to the Dockerfile with one additional line, and ENTRYPOINT was replaced with CMD to start the server instead of the script directly:
```dockerfile
FROM python:3.12-slim
WORKDIR /app
COPY waveshaper_triage.py .
COPY waveshaper_server.py .
RUN pip install flask --quiet
EXPOSE 5000
CMD ["python3", "waveshaper_server.py"]
```
On rebuild, Docker cached the layers that hadn't changed and only reinstalled Flask, keeping the total rebuild under 3 seconds. The browser interface came up at http://127.0.0.1:5001 (the host port mapped to the container's exposed port 5000), drag-and-drop upload triggered the analysis automatically, and results came back color-coded, with the severity badge and export button matching the Ladon design system.
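For readers who haven't wired a script into Flask before, a minimal version of what waveshaper_server.py could look like is sketched below. The route name, the `analyze_pcap` stand-in, and the form field name `pcap` are all assumptions; the real server's internals may differ.

```python
# Minimal sketch of a Flask wrapper around the triage logic.
# analyze_pcap is a placeholder for the eight behavioral checks;
# route and field names are illustrative, not from the real server.
import tempfile
from flask import Flask, request

app = Flask(__name__)

def analyze_pcap(path):
    # Placeholder: the actual tool parses the pcap and runs its checks here.
    return {"severity": "CRITICAL", "report": f"analyzed {path}"}

@app.route("/analyze", methods=["POST"])
def analyze():
    upload = request.files["pcap"]
    # Write the upload to a temp file so the analyzer can read it by path
    with tempfile.NamedTemporaryFile(suffix=".pcap") as tmp:
        upload.save(tmp.name)
        return analyze_pcap(tmp.name)

# Inside the container, CMD would start the server with something like:
#   app.run(host="0.0.0.0", port=5000)
# binding to 0.0.0.0 so the server is reachable through Docker's port mapping.
```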
## The Pivot — From File Upload to Demo Scenarios
The browser interface worked — upload a pcap, get the triage report back in the browser, color-coded and scrollable. But a problem became obvious when thinking about the portfolio context. The tool is built around a specific synthetic training pcap that readers don't have. There is no point building a file upload interface for a file nobody can upload.
The better approach for a portfolio piece is pre-baked demo scenarios — clickable cards that run a simulated analysis and show the real output from the training dataset. No upload needed. No server needed. Pure static HTML that works on GitHub Pages.
This is actually a more honest representation of what the tool does. Rather than asking a reader to find and upload a pcap file, the demo shows exactly what the tool found in the training dataset, organized by attack stage — the initial payload delivery, the C2 beaconing pattern, and the full picture with all indicators combined.
Three demo scenarios, each as a clickable card:

- Stage 1 — SILKBELL payload delivery at T=30s (HIGH severity).
- Stage 2 — Machine-precise 60-second C2 beacon pattern, 8 POST requests, IE8/WinXP User-Agent (CRITICAL).
- Stage 3 — Full 7-minute capture, all indicators combined (CRITICAL).

Click any card and the triage report appears below, color-coded exactly as it would appear in the real tool. The export button saves the plain-text report.
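Since the static page is pre-baked, the cards reduce to a small data structure plus a severity-to-style mapping. The sketch below shows one way that data could be laid out before being inlined into the HTML; the `badge` class names are invented for illustration, not the Ladon design system's actual classes.

```python
# Hypothetical scenario data behind the three demo cards.
# In the static page this would be inlined as JSON; names are illustrative.
SCENARIOS = [
    {"stage": 1, "title": "SILKBELL payload delivery at T=30s",
     "severity": "HIGH"},
    {"stage": 2, "title": "60-second C2 beacon, 8 POSTs, IE8/WinXP UA",
     "severity": "CRITICAL"},
    {"stage": 3, "title": "Full 7-minute capture, all indicators combined",
     "severity": "CRITICAL"},
]

def badge_class(severity):
    # Maps a severity to a CSS class for the color-coded badge.
    # Class names here are assumptions, not the real design-system names.
    return {"HIGH": "badge-high", "CRITICAL": "badge-critical"}[severity]
```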
The decision to drop Docker for the portfolio version and go static was straightforward once the use case was clear. Docker is the right delivery mechanism when someone needs to run the tool against their own files. For a portfolio demo where the goal is showing what the tool found, static HTML is simpler, faster, and works without any setup. The Docker container still exists and can be run locally against real pcap files — it just isn't the right format for a portfolio page.
## What This Lab Covers and What It Does Not
The triage tool is WAVESHAPER.V2 specific. It checks for a hardcoded C2 IP address, a specific User-Agent string, exact URI paths, and a documented beacon interval. Those checks came directly from GTIG's public disclosure. Against a pcap containing WAVESHAPER.V2 traffic, it fires on every indicator. Against a pcap containing a different RAT with different infrastructure, most of the signature checks would miss — the behavioral checks for machine-precise beaconing and POST requests to raw IP addresses would still fire, but without the IOC-specific context.
This is the same tension documented in Part 1 — behavioral detection catches unknown threats by pattern, signature detection confirms known threats by fingerprint. The triage tool is intentionally specific because it was built to demonstrate a specific documented attack. A more general behavioral detection tool is a different build problem, and one worth tackling separately.
| Check | Type | Catches Other Attacks? |
|---|---|---|
| C2 IP — 142.11.206.73 | Signature | No — IOC specific |
| IE8/WinXP User-Agent | Signature | Partial — any IE8/WinXP UA |
| Known URI paths | Signature | No — IOC specific |
| POST to raw IP | Behavioral | Yes — general indicator |
| Machine-precise beaconing | Behavioral | Yes — general indicator |
| MZ binary from C2 | Behavioral | Yes — general indicator |
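Of the behavioral checks, machine-precise beaconing is the most portable: a human browsing generates irregular request timing, while an automated beacon's inter-request intervals barely deviate. A sketch of that check, with an invented jitter threshold (the real script's tolerance is not documented here):

```python
# Sketch of the "machine-precise beaconing" behavioral check.
# Function name and the 1-second jitter threshold are illustrative choices.
from statistics import pstdev

def is_machine_precise(timestamps, max_jitter_s=1.0):
    """True if successive request intervals deviate by under max_jitter_s."""
    if len(timestamps) < 3:
        return False  # too few requests to establish a rhythm
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return pstdev(intervals) <= max_jitter_s

# Requests ~60 s apart with sub-second jitter, as in the training pcap's
# beacon pattern, would trip this check; human-paced traffic would not.
beacon_times = [30.0, 90.0, 150.1, 209.9, 270.0]
```

The check fires on any RAT with clockwork beaconing, regardless of its C2 infrastructure, which is exactly why it lands in the "general indicator" column above.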
Lab conducted in a controlled environment using a synthetic pcap generated from publicly documented WAVESHAPER.V2 behavioral indicators. IOCs sourced from Google Threat Intelligence Group (GTIG), Tenable Research, and StepSecurity public reporting. No real malware was executed. Training dataset only — not for use with production network captures or environments subject to CMMC, DFARS 252.204-7012, or CUI controls.
Part 3 of this lab moves from synthetic data to a live environment — Kali Linux as the attacker, a Windows VM as the victim, Meterpreter delivered over a host-only network, and Wireshark capturing the traffic in real time. Continue to Part 3 →