What a Network Intrusion Actually Looks Like in Security Onion

4 min read
Cybersecurity · SOC · Security Onion · Incident Response · SIEM · Zeek · Suricata

The alert fired. Here is what came next — a walkthrough of the investigative process behind a simulated ransomware intrusion in Security Onion, from IDS alert triage through log correlation to a coherent incident timeline.

The Suricata alert fired. Then four hundred more did. None of them said "ransomware." That is not how this works.

The mistake people make when they first sit in front of Security Onion is treating alerts as conclusions. They are not. An IDS alert is a hypothesis — a rule matched a pattern. The investigation starts after the alert, not at it.

Starting at the right end of the queue

The first task is not reading every alert. It is reducing the queue to a workable set.

For this investigation — a simulated ransomware intrusion across a corporate network — the initial alert count in the Security Onion SOC dashboard was in the hundreds. The filter that mattered: severity combined with destination. Internal-to-internal traffic alerts, even high-severity ones, tell a different story from outbound alerts hitting IPs with no reverse DNS record and no match against known infrastructure.

Three alerts cleared that filter. Two resolved quickly — one matched a known scanner, one had no Zeek conversation worth pursuing. The third was the thread to pull.

From Suricata alert to Zeek connection

The alert was an ET MALWARE classification — a Suricata rule match on a suspicious outbound HTTP request. The destination IP had no hostname. The source was a single workstation.

This is where the pivot happens. Suricata tells you a rule fired. Zeek tells you what the full conversation looked like.

Pivoting from the Suricata alert to the associated conn.log entry in Kibana showed:

  • Connection duration: 12 seconds
  • Bytes transferred: 847 bytes outbound, 1.2 KB inbound
  • Protocol: TCP/80

That is a small payload, a short connection, and no hostname. Consistent with a C2 beacon — an initial check-in that retrieves a small instruction set, not a bulk data transfer.
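The beacon heuristic can be made explicit. The field names `duration`, `orig_bytes`, and `resp_bytes` are genuine Zeek conn.log fields; the thresholds below are rough guesses for illustration, not tuned detection logic:

```python
# conn.log-style record mirroring the connection above
# (Zeek field names are real; the values match the case described).
conn = {"duration": 12.0, "orig_bytes": 847, "resp_bytes": 1200}

def looks_like_beacon(c, max_duration=60.0, max_bytes=4096):
    """Short connection, tiny payload in both directions: consistent
    with a C2 check-in rather than a bulk transfer. Thresholds are
    illustrative guesses, not production detection values."""
    return (c["duration"] < max_duration
            and c["orig_bytes"] < max_bytes
            and c["resp_bytes"] < max_bytes)

print(looks_like_beacon(conn))
```

A real implementation would also weigh periodicity across repeated connections, which a single conn.log row cannot show.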

The http.log entry for the same source IP and timestamp completed the picture: a POST to a path with a randomised-looking URI, a User-Agent string mismatched with the workstation's operating system, and a 200 response.
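One way to quantify "randomised-looking" is character-level Shannon entropy — higher values suggest machine-generated strings. The URIs below are invented examples, not the actual paths from the capture, and entropy on short strings is a weak signal on its own:

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits per character; higher values suggest random-looking strings."""
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Illustrative URIs -- not taken from the actual investigation.
normal_uri = "/index.html"
odd_uri = "/a9xK2q7Zt4wB.php"
print(f"{normal_uri}: {shannon_entropy(normal_uri):.2f} bits/char")
print(f"{odd_uri}: {shannon_entropy(odd_uri):.2f} bits/char")
```

In practice entropy is one weak indicator stacked with the others — the POST method, the UA mismatch, and the destination reputation did more of the work here.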

Finding the entry point

Knowing the C2 callback happened was not enough. The question was how the attacker got onto the workstation.

The dns.log showed the destination IP had been resolved from a domain 8 minutes before the HTTP connection. A WHOIS lookup on that domain showed it had been registered 11 days earlier. The query came from the same workstation.

Connection timeline: DNS resolution, then an 8-minute gap, then the HTTP POST.

That gap is where the payload executed. Looking at the http.log for that workstation in that window showed a file download from an external source immediately before — a .docx file, filename visible in the URI — followed by the C2 callback.
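The dns.log-to-http.log correlation is just timestamp arithmetic once the two entries are matched on source IP and resolved address. The timestamps below are invented; only the 8-minute gap comes from the investigation:

```python
from datetime import datetime, timedelta

# Illustrative timestamps -- only the 8-minute gap reflects the case.
dns_ts = datetime(2024, 3, 14, 9, 2, 11)    # dns.log: domain resolved
http_ts = datetime(2024, 3, 14, 9, 10, 11)  # http.log: POST to resolved IP

gap = http_ts - dns_ts
print(f"Gap between resolution and callback: {gap}")
assert gap == timedelta(minutes=8)
```

Matching on the resolved IP rather than the domain matters: the HTTP connection in Zeek's logs is keyed by address, and the domain only appears in dns.log.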

Spear-phishing email. User opened the attachment. Macro executed on open. C2 callback 8 minutes later.

What lateral movement looks like

The ransomware did not execute immediately. The attacker moved first.

In conn.log, the infected workstation began making SMB connections — port 445 — to two other internal hosts approximately 40 minutes after the initial C2 contact. Short duration, repeated at intervals. Not normal workstation behaviour.
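The fan-out pattern — one workstation initiating SMB to multiple peers — is easy to surface from conn.log-shaped data. The rows below are hypothetical:

```python
from collections import defaultdict

# Hypothetical conn.log-style rows: (source IP, dest IP, dest port).
conns = [
    ("10.0.0.12", "10.0.0.20", 445),
    ("10.0.0.12", "10.0.0.21", 445),
    ("10.0.0.12", "10.0.0.20", 445),
    ("10.0.0.30", "10.0.0.5", 443),
]

# Workstations rarely initiate SMB to multiple peers; servers do.
smb_targets = defaultdict(set)
for src, dst, port in conns:
    if port == 445:
        smb_targets[src].add(dst)

for src, targets in sorted(smb_targets.items()):
    if len(targets) >= 2:
        print(f"{src} initiated SMB to {len(targets)} hosts -- review")
```

The threshold of two or more distinct SMB peers is an illustrative cutoff; in a real environment it would be baselined against what each host normally does.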

The host event logs provided as part of the investigation confirmed the pattern: authentication events on those target hosts using a domain service account that had no reason to authenticate from the infected workstation. The credential harvesting step was not directly visible in the logs — but the authentication evidence only makes sense if credentials had already been taken from the initial host. The lateral movement was real; the mechanism that enabled it was inferred.

The ransomware executed 3 hours after the initial compromise, triggered from a privileged account reached through that lateral chain.

Building the timeline

The final incident report was not a list of Suricata alerts. It was a timeline:

  1. Spear-phishing email delivered
  2. User opened malicious document — macro executed
  3. C2 callback established (HTTP POST to external IP, 8 minutes post-execution)
  4. Credential access — no direct log evidence; inferred from the authentication pattern that followed
  5. Lateral movement to two internal hosts via SMB
  6. Ransomware deployed from compromised admin account
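The timeline above is ultimately just events sorted by offset from initial compromise. A minimal sketch — the 8-minute and 3-hour offsets come from the investigation; the ~48-minute SMB offset is derived (about 40 minutes after the C2 contact at +8):

```python
# Events as (minutes after initial compromise, description).
# Offsets of 8 and 180 come from the case; 48 is derived from
# "~40 minutes after initial C2 contact".
events = [
    (0, "Spear-phishing document opened; macro executed"),
    (8, "C2 callback (HTTP POST to external IP)"),
    (48, "Lateral movement via SMB to two internal hosts"),
    (180, "Ransomware executed from privileged account"),
]

for minutes, desc in sorted(events):
    print(f"T+{minutes:>3} min  {desc}")
```

Keeping the timeline as data rather than prose makes it trivial to re-sort, re-anchor, or merge in host-log events as the investigation grows.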

The chain maps cleanly against MITRE ATT&CK: spearphishing attachment (T1566.001), macro execution (T1059.005), C2 over HTTP (T1071.001), lateral movement via valid accounts (T1078), and data encryption for impact (T1486).

Every network-side step was visible in Security Onion. The host logs filled in the authentication layer. None of it was visible from alerts alone.

The analytical work is the correlation — moving between Suricata hits, Zeek conn.log, http.log, dns.log, and host authentication logs until the sequence becomes a coherent chain rather than a collection of flagged packets.

That is what SOC investigation actually is. Not watching dashboards. Building timelines from evidence the network already recorded.