OBJECTIVE 4.9 Given a scenario (PBQ-likely)

Use data sources to support an investigation

Covers three areas:

  • Log data: firewall, application, endpoint, OS security, IDS/IPS, network, metadata
  • Data sources: vulnerability scans, automated reports, dashboards, packet captures
  • Log management: syslog, rsyslog, journalctl, NXLog, retention, security considerations

Exam approach: “Given a scenario” — expect to identify which data sources are relevant to an investigation, correlate events across multiple log types, and extract IOCs from packet captures or scan reports. The PBQ will likely present logs and ask you to build a timeline or identify the attack vector.

Offensive context: Packet capture analysis uses the same skills as building an interception proxy — identifying magic bytes, length fields, protocol anomalies. Understanding memory exploitation makes RAM acquisition meaningful (you know what you’re looking for in a heap dump). And deploying honey-files as canary traps gives you a specific, high-confidence investigation data source when they get tripped.

Log Data Sources

Each log type tells a different part of the story. An investigation uses multiple sources correlated by time, IP, and user identity.

Firewall Logs

  • What they show: Allowed and denied connections — source IP, destination IP, port, protocol, action, timestamp
  • Investigation use: Identify external connections to/from compromised hosts, find C2 communication, trace lateral movement between zones, confirm which traffic was blocked
  • Key pattern: Outbound connections to unusual ports or IPs from an internal host — potential C2 or data exfiltration
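That key pattern can be hunted for directly. A minimal sketch, assuming a simplified flow-record format (field names here are illustrative, not any vendor's schema): flag allowed outbound connections from internal hosts to ports outside a small allow-list.

```python
# Sketch: flag internal hosts making allowed outbound connections to ports
# outside an expected set -- a starting point for C2/exfil hunting.
# The flow-record dict shape is hypothetical.
COMMON_OUTBOUND = {80, 443, 53, 123}  # HTTP, HTTPS, DNS, NTP

def odd_outbound(flows, internal_prefix="10."):
    """Return allowed outbound flows from internal hosts to unusual ports."""
    return [f for f in flows
            if f["src"].startswith(internal_prefix)
            and f["action"] == "allow"
            and f["dport"] not in COMMON_OUTBOUND]

flows = [
    {"src": "10.0.0.8", "dst": "93.184.216.34", "dport": 443,  "action": "allow"},
    {"src": "10.0.0.8", "dst": "203.0.113.50",  "dport": 4444, "action": "allow"},
    {"src": "10.0.0.9", "dst": "203.0.113.50",  "dport": 4444, "action": "deny"},
]
hits = odd_outbound(flows)
```

Only the allowed connection to port 4444 survives the filter; the denied attempt is interesting too, but it tells a different story (the control worked).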

Application Logs

  • What they show: Application-specific events — user actions, errors, API calls, business logic events
  • Investigation use: Trace what an attacker did after gaining access. Which records were viewed? What data was exported? What configuration was changed?
  • Key pattern: Authentication success followed by actions outside the user’s normal pattern — accessing data they’ve never touched before, bulk exports, configuration changes

Endpoint Logs

  • What they show: Process execution, file system changes, registry modifications, USB device connections, login events
  • Investigation use: What malware ran? What files were created/modified? What processes spawned from the initial compromise?
  • Key pattern: Unknown process launching child processes (especially cmd.exe, powershell.exe, bash), writing to temp directories, making network connections
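The parent/child pattern above is checkable with a simple lookup. A sketch, assuming a hypothetical simplified event shape (real EDR or Sysmon telemetry has many more fields): flag processes that rarely have a legitimate reason to spawn a shell doing exactly that.

```python
# Sketch: flag suspicious parent -> child process pairs in endpoint telemetry.
# Event dicts here are a hypothetical simplification of EDR/Sysmon output.
SUSPICIOUS_CHILDREN = {"cmd.exe", "powershell.exe", "bash"}
# Parents that rarely spawn shells legitimately: Office apps, web servers.
WATCHED_PARENTS = {"winword.exe", "excel.exe", "httpd", "w3wp.exe"}

def flag_process_events(events):
    """Return events where a watched parent spawned a shell-like child."""
    return [e for e in events
            if e["parent"].lower() in WATCHED_PARENTS
            and e["child"].lower() in SUSPICIOUS_CHILDREN]

events = [
    {"parent": "explorer.exe", "child": "notepad.exe"},
    {"parent": "winword.exe",  "child": "powershell.exe"},  # macro payload?
    {"parent": "httpd",        "child": "bash"},            # webshell?
]
hits = flag_process_events(events)
```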

OS Security Logs

  • Windows Event Log: Security events (logon/logoff, privilege use, audit policy changes), System events, Application events. Key event IDs: 4624 (successful logon), 4625 (failed logon), 4672 (admin logon), 4688 (process creation)
  • Linux auth.log / secure: SSH attempts, sudo usage, PAM authentication events
  • Investigation use: Who logged in, when, from where. Failed authentication attempts. Privilege escalation events.
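Failed-authentication analysis often comes down to counting. A sketch using the 4625 event ID from above, with a hypothetical simplified event shape: group failed logons by source and surface sources above a threshold (brute force or password spraying).

```python
from collections import Counter

# Sketch: spot brute-force activity by counting Windows 4625 (failed logon)
# events per source IP. The event dict shape is a hypothetical simplification.
def failed_logon_sources(events, threshold=3):
    """Return {src_ip: count} for sources with >= threshold failed logons."""
    counts = Counter(e["src_ip"] for e in events if e["event_id"] == 4625)
    return {ip: n for ip, n in counts.items() if n >= threshold}

events = [
    {"event_id": 4625, "src_ip": "203.0.113.7"},
    {"event_id": 4625, "src_ip": "203.0.113.7"},
    {"event_id": 4625, "src_ip": "203.0.113.7"},
    {"event_id": 4624, "src_ip": "10.0.0.5"},   # successful logon, ignored
    {"event_id": 4625, "src_ip": "10.0.0.5"},   # single failure, below threshold
]
suspects = failed_logon_sources(events)
```

The follow-up question is whether any 4624 (success) from a flagged source appears after the failure run; that is the difference between noise and compromise.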

IDS/IPS Logs

  • What they show: Triggered detection rules — signature matches, anomaly alerts, blocked traffic
  • Investigation use: What attacks were detected? What was the earliest alert? Was the IPS blocking the attack or just alerting?
  • Key pattern: Sequence of IDS alerts escalating in severity — reconnaissance (port scan) → exploitation attempt → successful compromise indicator

Network Logs (NetFlow / Traffic Metadata)

  • What they show: Connection metadata without payload — source, destination, ports, bytes transferred, duration, protocol
  • Investigation use: Identify data exfiltration (large outbound transfers), map communication patterns, find beaconing (regular interval connections to external hosts)
  • Key pattern: Regular outbound connections at fixed intervals (every 60 seconds, every 5 minutes) to the same external IP — classic C2 beaconing
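The "fixed interval" pattern can be scored statistically. A sketch, assuming you have connection timestamps (in seconds) per destination from flow records: compute the gaps between connections and flag destinations whose gaps are nearly constant (low coefficient of variation). Real beacons add jitter, so the threshold is a tuning knob, not a fact.

```python
import statistics

# Sketch: flag C2-style beaconing by checking how regular the gaps between
# connections to one destination are. Timestamps are seconds since capture start.
def looks_like_beacon(timestamps, max_jitter=0.1):
    """True if inter-connection intervals are nearly constant
    (coefficient of variation of the gaps below max_jitter)."""
    if len(timestamps) < 4:
        return False  # too few samples to judge regularity
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mean = statistics.mean(gaps)
    return mean > 0 and statistics.pstdev(gaps) / mean < max_jitter

beacon   = [0, 60, 120, 180, 240]         # every 60 s, like clockwork
browsing = [0, 3, 40, 40.5, 90, 300]      # bursty, human-driven traffic
```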

Metadata

  • What it includes: Email headers (sender, recipient, routing, timestamps), file metadata (author, creation date, modification history), document properties
  • Investigation use: Trace phishing emails to their origin (email headers show the actual sending server, not just the display name). Identify document staging (when was the exfiltration file created and by whom).
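Tracing an email's origin through its headers can be done with the standard library. A sketch with a fabricated two-hop message: each mail server prepends a Received header, so the last Received header in the message is the earliest hop, closest to the true sending host. Only the headers added by servers you control are trustworthy; everything earlier can be forged.

```python
from email import message_from_string

# Sketch: walk a phishing email's Received headers. Servers prepend these,
# so the LAST one is the earliest hop -- nearest the actual sender.
# The message below is fabricated for illustration.
raw = """\
Received: from mx.example.com (mx.example.com [198.51.100.10]) by inbox.example.com
Received: from attacker-vps (unknown [203.0.113.99]) by mx.example.com
From: "CEO" <ceo@example.com>
Subject: urgent wire transfer

Please wire funds today.
"""
msg = message_from_string(raw)
hops = msg.get_all("Received")   # in order of appearance: newest hop first
origin_hop = hops[-1]            # earliest hop: closest to the true sender
```

The From display name says "CEO", but the earliest Received header points at 203.0.113.99, which is what the investigation follows.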

Additional Data Sources

Vulnerability Scans

  • Nessus, Qualys, OpenVAS output — known vulnerabilities on systems
  • Investigation use: Was the compromised system running a known-vulnerable service? Did we know about the vulnerability before the breach?
  • Contextualizes the attack: “The attacker exploited CVE-XXXX-YYYY on this host, which our last scan flagged three weeks ago”

Automated Reports

  • SIEM correlation reports, compliance scan results, configuration audit output
  • Investigation use: What was the security posture at the time of the incident? Were there pre-existing misconfigurations?

Dashboards

  • Real-time and historical views of security metrics — alert volume, traffic patterns, authentication trends
  • Investigation use: Identify when anomalous behavior started. Compare current patterns against historical baseline.
  • Dashboards are for initial triage. Detailed investigation requires drilling into raw logs.

Packet Captures (PCAP)

  • Full packet capture — complete record of network traffic including payloads
  • Investigation use: The most detailed network evidence available. Reconstruct exactly what was sent and received.
  • What to look for:
    • Protocol anomalies (HTTP on non-standard ports, DNS with unusually long queries)
    • High-entropy payloads (encrypted C2 traffic stands out against normal traffic)
    • File transfers (extract files from HTTP/FTP streams)
    • Credential exposure (if unencrypted protocols were used)
    • DNS tunneling (data encoded in subdomain labels — aGVsbG8gd29ybGQ.evil.com)
  • Tools: Wireshark, tcpdump, tshark, NetworkMiner
  • Storage: PCAPs are large. Full capture on a busy network generates gigabytes per hour. Usually captured selectively or with a rolling buffer.
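The DNS tunneling indicator above (long, random-looking subdomain labels) can be scored from extracted query names. A sketch, assuming query names have already been pulled from the capture (e.g. with tshark): flag labels that are unusually long or have high Shannon entropy. Thresholds here are illustrative starting points, not calibrated values.

```python
import math

# Sketch: score DNS query names for tunneling. Long labels with high
# Shannon entropy (random-looking encoded data) stand out from hostnames.
def shannon_entropy(s):
    """Bits of entropy per character over the string's character distribution."""
    probs = [s.count(c) / len(s) for c in set(s)]
    return -sum(p * math.log2(p) for p in probs)

def suspicious_dns(qname, max_label=30, min_entropy=3.5):
    """Flag query names whose subdomain labels look like encoded payloads."""
    labels = qname.rstrip(".").split(".")[:-2]   # drop the registered domain
    return any(len(l) > max_label or shannon_entropy(l) > min_entropy
               for l in labels if l)

queries = ["www.example.com",
           "aGVsbG8gd29ybGQgZXhmaWx0cmF0ZWQgZGF0YQ.evil.com"]
flags = [suspicious_dns(q) for q in queries]
```

A single odd query proves little; tunneling shows up as a sustained stream of such names to one domain.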

Log Management

Syslog

  • Standard protocol for log forwarding (UDP/514 or TCP/514, TLS/6514 for encrypted)
  • Structured format: priority, timestamp, hostname, application, message
  • Nearly universal — network devices, Linux systems, applications all speak syslog
  • rsyslog: Enhanced syslog daemon on Linux. Supports filtering, templating, forwarding to remote destinations, database output.
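The structured format is worth being able to decode by hand. A sketch parsing a classic BSD-style (RFC 3164) syslog line with a regex; the sample line is fabricated, and real-world syslog is messy enough that production parsers are more forgiving than this.

```python
import re

# Sketch: decode a classic RFC 3164 syslog line. The leading <N> priority
# encodes facility * 8 + severity; then timestamp, host, tag[pid], message.
LINE = "<34>Oct 11 22:14:15 web01 sshd[1234]: Failed password for root from 203.0.113.9"

m = re.match(r"<(\d+)>(\w{3}\s+\d+ [\d:]+) (\S+) ([^:\[]+)(?:\[(\d+)\])?: (.*)",
             LINE)
pri = int(m.group(1))
facility, severity = divmod(pri, 8)   # 34 -> facility 4 (auth), severity 2 (critical)
host, app, msg = m.group(3), m.group(4), m.group(6)
```

Decoding the priority matters in practice: filtering rules in rsyslog are commonly written in terms of facility and severity.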

journalctl (systemd)

  • Binary log format on modern Linux systems (systemd journal)
  • Rich querying: journalctl -u sshd --since "1 hour ago" shows SSH events from the last hour
  • Can forward to syslog for centralized collection
  • Structured fields beyond what syslog captures (unit name, PID, boot ID)

NXLog

  • Cross-platform log collection agent (Windows, Linux, macOS)
  • Converts between formats (Windows Event Log → syslog, JSON, CEF)
  • Useful for getting Windows event logs into a Linux-based SIEM
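The conversion step is conceptually simple. A sketch of the kind of transformation such an agent performs, rendering a Windows-style event as a syslog-like line; the field names and output shape here are illustrative, not NXLog's actual configuration language or output.

```python
# Sketch of agent-style format conversion: a Windows-style event dict
# rendered as a syslog-like line. Field names are hypothetical, not NXLog's.
def to_syslog_line(ev):
    """Render a simplified Windows event as one forwardable syslog-ish line."""
    return (f'<{ev["pri"]}>{ev["time"]} {ev["host"]} '
            f'WinEvent: EventID={ev["event_id"]} {ev["message"]}')

ev = {"pri": 13, "time": "May  1 10:02:11", "host": "DC01",
      "event_id": 4625, "message": "An account failed to log on"}
line = to_syslog_line(ev)
```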

Retention

  • Hot storage: Searchable, queryable, fast access. Last 30–90 days. SIEM or log aggregation platform.
  • Cold storage: Archived, compressed, slower access. 1+ year. S3/GCS with write-once policy.
  • Retention periods driven by compliance requirements (PCI: 1 year, SOX: 7 years, HIPAA: 6 years) or organizational policy
  • Minimum recommendation: 90 days hot, 1 year cold

Security Considerations for Logs

  • Integrity: Logs must be protected from tampering. If an attacker can modify or delete logs, they can cover their tracks. Ship logs to a separate collector immediately and use write-once storage.
  • Access control: Logs often contain sensitive information (usernames, IPs, sometimes credentials in error messages). Restrict access to security and operations teams.
  • Encryption: Logs in transit should use TLS (syslog over TLS/6514). Logs at rest should be encrypted if they contain sensitive data.
  • Time synchronization: All systems must use NTP. Logs without accurate timestamps are useless for correlation; even a few minutes of clock drift can invert the apparent order of events across systems and wreck timeline reconstruction.

Correlation: Putting It All Together

An investigation is about building a timeline from multiple sources. No single log type tells the complete story.

Example investigation flow:

  1. IDS alert triggers: “SQL injection attempt detected” against web server
  2. Firewall logs confirm: external IP connected to web server on port 443, connection allowed
  3. Application logs show: malformed query followed by successful data access — the injection worked
  4. Endpoint logs reveal: web server process spawned a shell, new files created in /tmp
  5. Network logs show: web server made outbound connection to unknown external IP (C2)
  6. Packet capture confirms: data exfiltrated over DNS tunneling (long encoded subdomain queries)
  7. Email logs reveal: phishing email to developer contained a link to the attacker’s recon page, establishing the initial vector

Timeline built. Attack chain mapped. Blast radius determined. Now you can contain, eradicate, and recover.
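At its core, the correlation step in a flow like the one above is a merge of events from every source into one list ordered by (synchronized!) timestamps. A minimal sketch, assuming events have already been normalized into dicts with ISO 8601 timestamps, which sort correctly as strings:

```python
# Sketch: correlation as a merge of normalized events from multiple sources
# into one timeline. Event dict shape and timestamps are illustrative.
def build_timeline(*sources):
    """Merge event lists and sort by timestamp (ISO 8601 sorts as text)."""
    events = [e for src in sources for e in src]
    return sorted(events, key=lambda e: e["ts"])

ids_alerts = [{"ts": "2024-05-01T10:02:11", "src": "IDS", "msg": "SQLi attempt"}]
fw_logs    = [{"ts": "2024-05-01T10:02:09", "src": "FW",  "msg": "allow 203.0.113.9 -> web:443"}]
ep_logs    = [{"ts": "2024-05-01T10:04:30", "src": "EDR", "msg": "httpd spawned /bin/sh"}]

timeline = build_timeline(ids_alerts, fw_logs, ep_logs)
```

This is also why the NTP requirement is non-negotiable: the sort is only meaningful if every source's clock agrees.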

LABS FOR THIS OBJECTIVE