Linux Logs Investigation: Tools, Scenarios, and Pro Tips for Cybersecurity Operators
In the world of cybersecurity, logs are the breadcrumbs left behind by systems, services, and applications. For Linux systems, these logs are the primary forensic evidence when something suspicious occurs. Whether you're investigating an intrusion, diagnosing a misconfiguration, or verifying compliance, log analysis is at the core of incident response.
Professional cybersecurity operators rely on a variety of tools and strategies to filter noise, detect anomalies, and trace malicious activity in Linux environments.
Common Log Locations in Linux
Most Linux distributions store logs in /var/log/ by default. Here are the key files:
| Log File | Purpose |
|---|---|
| /var/log/auth.log or /var/log/secure | Authentication events, sudo usage, SSH logins |
| /var/log/syslog or /var/log/messages | System-wide events |
| /var/log/kern.log | Kernel messages |
| /var/log/dmesg | Boot-time hardware messages |
| /var/log/apache2/access.log and error.log | Web server activity |
| /var/log/audit/audit.log | SELinux and audit framework logs |
Essential Tools for Log Investigation
1. journalctl
For systems using systemd, journalctl is the go-to tool.
```bash
# View logs since yesterday
sudo journalctl --since "yesterday"

# View only SSH-related logs
sudo journalctl -u ssh.service

# Follow logs in real-time
sudo journalctl -f
```

Pro Tip: Use --grep with regex to search specific patterns directly in the journal:

```bash
sudo journalctl --grep "Failed password"
```

Here's a journalctl cheatsheet with 10+ of the most common use cases, designed for quick reference during Linux log investigations.
View All Logs
```bash
sudo journalctl
```

Shows logs from oldest to newest.

View Logs in Reverse (Newest First)

```bash
sudo journalctl -r
```

Follow Logs in Real-Time

```bash
sudo journalctl -f
```

Like tail -f for the system journal.
View Logs Since a Specific Time
```bash
sudo journalctl --since "2025-08-10 14:00"
sudo journalctl --since "yesterday"
sudo journalctl --since "2 hours ago"
```

View Logs Between Two Dates

```bash
sudo journalctl --since "2025-08-10 14:00" --until "2025-08-10 18:00"
```

Filter by Service (Unit)

```bash
sudo journalctl -u ssh.service
sudo journalctl -u nginx.service --since "today"
```

Filter by Process ID (PID)

```bash
sudo journalctl _PID=1234
```

Search for Keywords

```bash
sudo journalctl --grep "Failed password"
sudo journalctl --grep "Error" --case-sensitive=false   # force case-insensitive matching
```

Note: all-lowercase --grep patterns already match case-insensitively by default.

Show Only Logs from Current Boot

```bash
sudo journalctl -b
```

Previous boot:

```bash
sudo journalctl -b -1
```

Limit Output to N Lines

```bash
sudo journalctl -n 50
```

Show Logs in JSON Format (for scripting)

```bash
sudo journalctl -o json-pretty
```

Show Kernel Logs Only

```bash
sudo journalctl -k
sudo journalctl -k --since "1 hour ago"
```

journalctl Pro Tips
- Combine filters:
  ```bash
  sudo journalctl -u ssh.service --since "yesterday" --grep "Failed"
  ```
- Use less to scroll:
  ```bash
  sudo journalctl | less -S
  ```
- Always run with sudo to see all logs.
2. grep, egrep, and ripgrep (rg)
The classic method for quick filtering.
```bash
# Find failed SSH login attempts
grep "Failed password" /var/log/auth.log

# Fixed-string search for a suspicious IP address
# (-F treats the dots literally instead of as regex wildcards)
grep -F "192.168.10.15" /var/log/syslog

# Ripgrep for blazing speed
rg "error" /var/log/*
```

Pro Tip: Always use grep --color=auto for quick visual parsing.
3. ausearch & aureport (Auditd Framework)
The Linux Audit framework is powerful for forensic logging.
```bash
# Search for failed user commands (e.g. denied sudo attempts)
sudo ausearch -m USER_CMD --success no

# Generate a report of all failed login attempts
sudo aureport --login --failed
```

Scenario: Detect unauthorized privilege escalation attempts.
4. logwatch
A log summarizer for quick overviews.
```bash
sudo logwatch --detail high --range today
```

Scenario: Daily automated security review of system activity.
5. GoAccess
A real-time log analyzer for HTTP access logs.
```bash
goaccess /var/log/apache2/access.log --log-format=COMBINED -o report.html
```

Scenario: Detect web scanning attempts or high-volume brute-force attacks.
6. fail2ban
While primarily a prevention tool, it logs and tracks failed login attempts.
```bash
# View current jail status
sudo fail2ban-client status sshd
```

Scenario: Investigate repeated failed login attempts before IP banning.
7. ELK Stack (Elasticsearch, Logstash, Kibana)
For large-scale environments, professionals centralize logs and use visual dashboards.
Pro Tip: Integrate Filebeat to ship logs from multiple Linux servers into ELK for correlation.
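A minimal Filebeat configuration sketch for that setup is below. The output host is a placeholder and the paths assume a Debian-style layout; adjust both for your environment:

```yaml
# filebeat.yml sketch: ship Linux auth and system logs to Logstash
# (host and paths are placeholders)
filebeat.inputs:
  - type: filestream
    id: linux-auth-logs
    paths:
      - /var/log/auth.log
      - /var/log/syslog

output.logstash:
  hosts: ["logstash.example.internal:5044"]
```

From there, Logstash parsing and Kibana dashboards let you correlate failed logins across an entire fleet instead of one box at a time.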
Real-World Investigation Scenarios
Scenario 1: Suspicious SSH Activity
```bash
grep "Failed password" /var/log/auth.log | awk '{print $(NF-3)}' | sort | uniq -c | sort -nr
```

Purpose: Identify the top offending IPs trying to brute-force SSH.
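Before running that pipeline on a live system, it can be sanity-checked against a few fabricated auth.log lines (the hostnames and IP addresses below are made up for illustration):

```bash
# Sample sshd entries in the standard auth.log format (IPs are fictitious)
cat > /tmp/sample_auth.log <<'EOF'
Aug 10 14:01:22 host sshd[1201]: Failed password for root from 203.0.113.7 port 52100 ssh2
Aug 10 14:01:25 host sshd[1202]: Failed password for root from 203.0.113.7 port 52102 ssh2
Aug 10 14:02:10 host sshd[1203]: Failed password for invalid user admin from 198.51.100.9 port 40022 ssh2
EOF

# Same pipeline: the source IP is always the 4th field from the end,
# even when "invalid user" shifts the earlier fields
grep "Failed password" /tmp/sample_auth.log | awk '{print $(NF-3)}' | sort | uniq -c | sort -nr
# prints counts per IP: 2 for 203.0.113.7, 1 for 198.51.100.9
```

Counting from the end of the line ($(NF-3)) rather than the start is what makes the extraction robust against the variable-length "invalid user" prefix.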
Scenario 2: Detecting Web Attacks
```bash
grep -Ei "union|select|insert|drop" /var/log/apache2/access.log
```

Purpose: Spot possible SQL injection attempts. The -i flag catches lowercase payloads; URL-encoded attacks will still need decoding before they match.
Scenario 3: Finding Deleted File Access
If auditd is enabled and a rule tagged with key "delete" is in place:

```bash
sudo ausearch -m PATH --key delete
```

Purpose: See when and by whom files were deleted.
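The search above only returns events if a matching audit rule has been loaded beforehand. A sketch of such rules is below; the watched path is purely illustrative, and these require root and a running auditd, so treat them as configuration to adapt rather than commands to copy verbatim:

```bash
# Watch a directory of interest for writes and attribute changes,
# tagging events with key "delete" so ausearch --key delete finds them
sudo auditctl -w /etc/critical-configs -p wa -k delete

# Syscall-level coverage of deletions and renames on 64-bit systems
sudo auditctl -a always,exit -F arch=b64 -S unlink,unlinkat,rename,renameat -k delete
```

Rules added with auditctl do not survive a reboot; persistent rules belong in the audit rules files read at startup.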
Pro Tips from the Field
- Log Rotation Awareness: Don't assume logs go back forever. Check /etc/logrotate.conf to ensure historical logs are archived before being rotated.
- Timestamp Synchronization: Always ensure NTP is enabled; mismatched clocks can ruin a forensic timeline.
- Combine Tools: Use grep for quick filtering, awk for extraction, and sort for counting, the UNIX way.
- Whitelist Known Noise: For repetitive benign events, maintain a personal whitelist file and filter it out with grep -v -f whitelist.txt.
- Automate Reports: Cron-based log summaries save you time during daily SOC duties.
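On the log-rotation point: rotated logs are usually gzip-compressed (auth.log.1.gz, auth.log.2.gz, and so on), and zgrep searches them without manual decompression. A minimal self-contained demonstration, using a fabricated entry and a temporary file:

```bash
# Simulate a rotated, compressed auth log (the entry itself is made up)
echo 'Aug 01 09:15:02 host sshd[881]: Failed password for root from 203.0.113.50 port 41000 ssh2' \
  > /tmp/auth.log.1
gzip -f /tmp/auth.log.1   # produces /tmp/auth.log.1.gz

# zgrep searches inside the compressed file directly
zgrep "Failed password" /tmp/auth.log.1.gz
```

In practice a pattern like `zgrep "Failed password" /var/log/auth.log*` covers the live log and every rotated generation in one pass.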
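The whitelist tip can be sketched end to end. The patterns and log entries below are hypothetical examples of benign noise; grep -v -f drops every line matching any pattern in the file, leaving only what deserves a second look:

```bash
# Known-benign patterns, one regex per line (hypothetical examples)
cat > /tmp/whitelist.txt <<'EOF'
session opened for user backup
CRON
EOF

# A small sample log mixing noise with one genuinely suspicious line
cat > /tmp/noisy.log <<'EOF'
Aug 10 03:00:01 host CRON[400]: (root) CMD (run-parts /etc/cron.hourly)
Aug 10 03:00:02 host su[410]: session opened for user backup
Aug 10 03:14:00 host sshd[512]: Failed password for root from 203.0.113.7 port 51000 ssh2
EOF

# Everything matching the whitelist disappears; the sshd line remains
grep -v -f /tmp/whitelist.txt /tmp/noisy.log
```

Keeping the whitelist under version control makes it easy to review what your team has chosen to ignore.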
Summary
Linux log investigation is both an art and a science. The art lies in knowing where to look and recognizing patterns; the science is in methodically filtering, parsing, and correlating events. Whether using built-in tools like journalctl and grep or enterprise-grade stacks like ELK, the core principle is visibility — you can't defend what you can't see.
By combining sharp command-line skills with the right log management tools, cybersecurity operators can quickly detect and respond to threats, turning scattered data into actionable intelligence.