Monthly Manifest and Schema Update (#5697)

This commit is contained in:
shashank-elastic
2026-02-10 09:17:04 +05:30
committed by GitHub
parent 229f3adf75
commit 70d7f2b6b1
88 changed files with 708 additions and 37 deletions
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
+4 -4
@@ -145,11 +145,11 @@
   endgame: "8.4.0"
 "9.3.0":
-  beats: "9.2.3"
-  ecs: "9.3.0-rc1"
+  beats: "9.3.0"
+  ecs: "9.3.0"
   endgame: "8.4.0"
 "9.4.0":
-  beats: "9.2.3"
-  ecs: "9.3.0-rc1"
+  beats: "9.3.0"
+  ecs: "9.3.0"
   endgame: "8.4.0"
+1 -1
@@ -1,6 +1,6 @@
 [project]
 name = "detection_rules"
-version = "1.5.40"
+version = "1.5.41"
 description = "Detection Rules is the home for rules used by Elastic Security. This repository is used for the development, maintenance, testing, validation, and release of rules for Elastic Security's Detection Engine."
 readme = "README.md"
 requires-python = ">=3.12"
@@ -4,7 +4,7 @@ integration = ["cloud_defend"]
maturity = "production"
min_stack_comments = "Defend for Containers integration was re-introduced in 9.3.0"
min_stack_version = "9.3.0"
-updated_date = "2026/02/06"
+updated_date = "2026/02/09"
[rule]
author = ["Elastic"]
@@ -21,6 +21,37 @@ interval = "5m"
language = "eql"
license = "Elastic License v2"
name = "Kubelet Certificate File Access Detected via Defend for Containers"
note = """ ## Triage and analysis
> **Disclaimer**:
> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs.
### Investigating Kubelet Certificate File Access Detected via Defend for Containers
This detection flags an interactive process inside a Linux container opening files under `/var/lib/kubelet/pki/`, which includes the kubelet client certificate and key used to authenticate to the Kubernetes API. Attackers who obtain these credentials can impersonate the node, enumerate cluster resources, and pivot to secrets or workloads. A common pattern is an operator exec'ing into a compromised pod, locating the kubelet cert/key pair, copying it out, then using it to query the API server from outside the container.
### Possible investigation steps
- Identify the pod/namespace/node and owning controller for the container, then confirm whether it should ever have access to host kubelet PKI (e.g., privileged DaemonSet, hostPath mount, node-agent tooling) or if this is an unexpected breakout indicator.
- Review the interactive session context (exec/attach/ssh), including who initiated it and the command history/TTY telemetry around the alert time, to determine whether this was routine debugging or suspicious enumeration.
- Inspect the container filesystem and recent file operations for evidence of credential harvesting (reads of kubelet client cert/key pairs, copies to temporary paths, archive creation, or outbound transfer tooling) and preserve artifacts for forensics.
- Correlate immediately after the access event for Kubernetes API activity using node credentials (unusual discovery, secret access, or cluster-wide queries) originating from the same workload identity, node, or egress address.
- Validate whether kubelet credentials were reused by reviewing API server audit logs for unexpected node identity actions, and rotate kubelet client certs/keys and isolate the workload if misuse is suspected.
### False positive analysis
- A cluster operator or SRE may exec into a privileged pod (e.g., a DaemonSet with hostPath access to `/var/lib/kubelet`) for node troubleshooting and use interactive shell commands to inspect or validate kubelet PKI files during incident response or routine maintenance.
- A legitimate containerized node-management or diagnostic workflow that runs interactively (e.g., invoked manually for verification) may open files under `/var/lib/kubelet/pki/` as part of validating kubelet certificate presence/permissions after upgrades, certificate rotation, or node reconfiguration.
### Response and remediation
- Immediately isolate the affected workload by scaling the pod/controller to zero or cordoning and draining the node if a privileged pod has host access to `/var/lib/kubelet/pki/`, and preserve the container filesystem and process list for forensics before teardown.
- Remove the execution path that enabled access by deleting or patching the pod/DaemonSet to drop `privileged`, `hostPID/hostNetwork`, and any `hostPath` mounts that expose `/var/lib/kubelet` and redeploy only from a known-good image and manifest.
- Rotate and reissue kubelet client certificates/keys on the impacted node(s) (or replace the node from autoscaling/immutable infrastructure) and verify the old credentials can no longer authenticate to the Kubernetes API server.
- Review Kubernetes API server audit logs for activity using the node identity around the access time (cluster-wide discovery, secret reads, token reviews, exec into other pods) and revoke/rotate any exposed service account tokens or secrets accessed during the window.
- Escalate to the Kubernetes platform/on-call security team immediately if the files include a kubelet client key, if the pod was privileged or had host mounts, or if API audit logs show node credential use from unexpected sources or unusual resource enumeration.
- Harden the cluster by enforcing policies that block hostPath access to `/var/lib/kubelet` and privileged pods (Pod Security Admission/Gatekeeper/Kyverno), limiting interactive exec/attach via RBAC, and monitoring for subsequent access attempts to kubelet PKI paths and related credential exfiltration tooling.
"""
references = [
"https://heilancoos.github.io/research/2025/12/16/kubernetes.html#kubelet-api",
"https://www.cyberark.com/resources/threat-research-blog/using-kubelet-client-to-attack-the-kubernetes-cluster",
@@ -35,6 +66,7 @@ tags = [
"OS: Linux",
"Use Case: Threat Detection",
"Tactic: Discovery",
"Resources: Investigation Guide",
]
timestamp_override = "event.ingested"
type = "eql"
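The file-access pattern this rule describes can be sketched in Python against a simplified event shape. This is an illustrative sketch only: the field names (`event_action`, `process_interactive`, `file_path`) are assumptions, not the rule's actual EQL fields, and the real query is not shown in this diff.

```python
# Illustrative sketch: flag interactive in-container file opens under the
# kubelet PKI directory, mirroring the rule's intent. Event shape is assumed.
KUBELET_PKI_DIR = "/var/lib/kubelet/pki/"

def is_kubelet_pki_access(event: dict) -> bool:
    """True if an interactive container process opened a kubelet PKI file."""
    path = event.get("file_path", "")
    return (
        event.get("event_action") == "open"
        and event.get("process_interactive", False)
        and path.startswith(KUBELET_PKI_DIR)
    )

events = [
    {"event_action": "open", "process_interactive": True,
     "file_path": "/var/lib/kubelet/pki/kubelet-client-current.pem"},
    {"event_action": "open", "process_interactive": False,   # non-interactive: ignored
     "file_path": "/var/lib/kubelet/pki/kubelet.crt"},
    {"event_action": "open", "process_interactive": True,
     "file_path": "/etc/hosts"},                             # outside the PKI dir
]

hits = [e for e in events if is_kubelet_pki_access(e)]
print(len(hits))  # 1
```

The prefix check is the key design choice: it catches both the current client cert/key symlink and any rotated files under the same directory.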
@@ -4,7 +4,7 @@ integration = ["cloud_defend"]
maturity = "production"
min_stack_comments = "Defend for Containers integration was re-introduced in 9.3.0"
min_stack_version = "9.3.0"
-updated_date = "2026/02/02"
+updated_date = "2026/02/09"
[rule]
author = ["Elastic"]
@@ -27,6 +27,37 @@ interval = "5m"
language = "eql"
license = "Elastic License v2"
name = "Potential Kubeletctl Execution Detected via Defend for Containers"
note = """ ## Triage and analysis
> **Disclaimer**:
> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs.
### Investigating Potential Kubeletctl Execution Detected via Defend for Containers
This rule detects interactive execution of kubeletctl, a tool that simplifies direct access to the node's Kubelet API, inside a Linux container. It matters because kubeletctl can expose pod and node details and enable actions that support discovery and lateral movement from a compromised container. A common attacker pattern is running `kubeletctl scan` against the Kubelet endpoint, then using `pods` or `exec/attach` to reach other workloads.
### Possible investigation steps
- Determine how an interactive shell was obtained in the container (e.g., kubectl exec, docker exec, or an app RCE) by correlating the timestamp with Kubernetes audit logs and upstream access logs for the initiating user or workload.
- Review the full kubeletctl invocation to identify the intended operation and target Kubelet endpoint (node IP/hostname and port), then validate whether that endpoint should be reachable from this pod in the cluster design.
- Correlate container network activity around the alert for connections to node addresses on Kubelet ports (commonly 10250/10255) and look for scanning patterns across multiple nodes indicating discovery or lateral movement attempts.
- Check for access to Kubernetes credentials within the container (service account token, mounted certificates, kubeconfig, cloud metadata credentials) and verify whether any were used to authenticate to the Kubelet API.
- Hunt for follow-on actions consistent with lateral movement or impact, such as kubeletctl exec/attach/portForward usage, access to other pod namespaces, or subsequent Kubernetes API activity that creates/patches workloads.
### False positive analysis
- An administrator or developer may have executed kubeletctl interactively inside the container during an incident response or troubleshooting session to enumerate pods/runningpods or validate Kubelet API connectivity, which can resemble discovery activity.
- A container image or entrypoint script that includes kubeletctl may be invoked manually for routine diagnostics (e.g., running scan/pods/cri or using --server/-s to target a node), producing an interactive exec event without malicious intent.
### Response and remediation
- Isolate the affected pod by scaling it to zero or applying a deny-all egress policy while preserving the container filesystem and process history needed to reconstruct the kubeletctl command, its target node address, and any output artifacts.
- Block and alert on pod-to-node access to the Kubelet API (typically 10250/10255) at the network layer, and rotate/revoke any Kubernetes service account tokens or kubeconfigs present in the container if kubeletctl attempted authenticated actions like exec/attach/portForward.
- Remove kubeletctl and related tooling from the image and redeploy from a known-good build, then perform node/pod hygiene by evicting/restarting the workload and checking for persistence indicators such as added binaries, modified entrypoints, or unexpected cron/init scripts.
- Recover by re-creating the workload in a clean namespace with least-privilege RBAC, validating no unauthorized pods/replicasets were created and that the service account permissions and mounts match the expected deployment spec.
- Escalate to the incident response team immediately if kubeletctl targeted multiple nodes, invoked exec/attach/portForward/run/scan, or if there is evidence of access to other namespaces or credential material (service account tokens, cloud metadata credentials) from the container.
- Harden by enforcing Pod Security Standards (no privileged pods, hostNetwork/hostPID/hostPath restrictions), restricting interactive exec into production pods, and limiting node API exposure by disabling unauthenticated Kubelet endpoints and requiring authenticated/authorized access.
"""
references = [
"https://www.cyberark.com/resources/threat-research-blog/using-kubelet-client-to-attack-the-kubernetes-cluster",
"https://github.com/cyberark/kubeletctl",
@@ -41,6 +72,7 @@ tags = [
"Use Case: Threat Detection",
"Tactic: Execution",
"Tactic: Discovery",
"Resources: Investigation Guide",
]
timestamp_override = "event.ingested"
type = "eql"
@@ -4,7 +4,7 @@ integration = ["cloud_defend"]
maturity = "production"
min_stack_comments = "Defend for Containers integration was re-introduced in 9.3.0"
min_stack_version = "9.3.0"
-updated_date = "2026/02/02"
+updated_date = "2026/02/09"
[rule]
author = ["Elastic"]
@@ -20,6 +20,37 @@ interval = "5m"
language = "eql"
license = "Elastic License v2"
name = "Potential Direct Kubelet Access via Process Arguments Detected via Defend for Containers"
note = """ ## Triage and analysis
> **Disclaimer**:
> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs.
### Investigating Potential Direct Kubelet Access via Process Arguments Detected via Defend for Containers
This detection flags an interactive process started inside a Linux container that includes an HTTP request targeting the Kubelet API on port 10250, a common pivot point for gaining execution and visibility across nodes. Attackers use direct Kubelet access to enumerate pods, fetch logs, or run commands that can lead to broader cluster access and lateral movement. A typical pattern is invoking curl or wget from a container shell against `https://<node-ip>:10250/` endpoints to probe or execute actions.
### Possible investigation steps
- Identify the originating pod/workload and container image for the interactive session, then determine whether the container was expected to provide diagnostic tooling or shell access and whether it recently changed.
- Extract the full command line and reconstruct the requested Kubelet endpoint path (for example `/pods`, `/exec`, `/run`, `/logs`) to infer intent (enumeration vs remote execution) and capture any embedded tokens or client cert usage.
- Correlate the process start time with Kubernetes audit logs and API server events to see if there were concurrent pod exec/attach, secret reads, or workload modifications suggesting follow-on activity.
- Verify whether the destination node IP/hostname is the local node or a remote node and review network flow logs/egress policies to confirm the container could reach port 10250 and whether other nodes were contacted.
- Check node and Kubelet configuration for exposure and auth bypass risk (anonymous auth, webhook mode, client certs), and inspect Kubelet logs around the timestamp for the corresponding request and response status codes.
### False positive analysis
- A cluster operator or SRE opens an interactive shell in a troubleshooting container and manually curls `https://<node-ip>:10250/` (or `/pods`/`/metrics`) to validate Kubelet reachability, authentication behavior, or node health during incident triage.
- A legitimate in-container diagnostic workflow uses an interactive session to probe the local node's Kubelet port 10250 for environment verification (e.g., confirming node IP mapping or TLS/cert configuration), embedding the URL in process arguments without any intent to enumerate or execute actions across the cluster.
### Response and remediation
- Isolate the affected pod by removing service exposure and applying a temporary egress deny rule to block traffic to node port 10250 from that namespace/pod label, then terminate the interactive shell session and restart the workload from a known-good image.
- Capture and preserve the full command line, container filesystem changes, and relevant Kubelet and Kubernetes audit log entries around the timestamp, then hunt for additional in-cluster attempts to reach `https://<node>:10250/` from other pods or namespaces.
- Rotate any credentials that may have been exposed or used (service account tokens, client certificates, kubeconfig files) and revoke or redeploy affected service accounts, then validate no unauthorized `exec/attach`, secret reads, or workload changes occurred after the access attempt.
- Escalate to the platform security/on-call incident commander immediately if the Kubelet request targeted sensitive endpoints like `/exec`, `/run`, `/containerLogs`, or returned successful responses (2xx/3xx) or if similar commands are seen across multiple nodes.
- Harden by enforcing Kubelet authentication/authorization (disable anonymous access, require webhook authz, restrict client cert issuance), and implement network controls that prevent pods from reaching node Kubelet ports except from approved node-local agents.
- Reduce recurrence by removing shell and HTTP tooling from application images, limiting interactive access (disable `kubectl exec` for non-admins), and tightening RBAC and admission policies to block privileged pods/host networking that increase node API reachability.
"""
references = [
"https://heilancoos.github.io/research/2025/12/16/kubernetes.html#kubelet-api",
"https://www.cyberark.com/resources/threat-research-blog/using-kubelet-client-to-attack-the-kubernetes-cluster",
@@ -35,6 +66,7 @@ tags = [
"Use Case: Threat Detection",
"Tactic: Execution",
"Tactic: Discovery",
"Resources: Investigation Guide",
]
timestamp_override = "event.ingested"
type = "eql"
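The investigation step of reconstructing the requested Kubelet endpoint from process arguments can be sketched in Python. The regex, path list, and intent labels are illustrative assumptions; the rule's actual EQL matching logic is not shown in this diff.

```python
# Illustrative sketch: extract Kubelet API targets (port 10250) from a process
# argument list and classify the endpoint path to infer intent.
import re
from urllib.parse import urlparse

KUBELET_URL = re.compile(r"https?://[^\s\"']+:10250[^\s\"']*")
EXEC_PATHS = ("/exec", "/run", "/attach", "/portForward")  # assumed high-risk paths

def kubelet_targets(args: list[str]) -> list[dict]:
    """Find Kubelet API URLs in process args and label each as enumeration or execution."""
    findings = []
    for arg in args:
        for url in KUBELET_URL.findall(arg):
            path = urlparse(url).path or "/"
            intent = "execution" if path.startswith(EXEC_PATHS) else "enumeration"
            findings.append({"url": url, "path": path, "intent": intent})
    return findings

print(kubelet_targets(["curl", "-k", "https://10.0.0.7:10250/pods"]))
```

Separating enumeration from execution intent matters for the escalation step: `/exec`-style paths warrant an immediate page, while `/pods` probes may only need triage.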
@@ -2,7 +2,7 @@
creation_date = "2026/02/02"
integration = ["kubernetes"]
maturity = "production"
-updated_date = "2026/02/02"
+updated_date = "2026/02/09"
[rule]
author = ["Elastic"]
@@ -15,6 +15,37 @@ This behavior is uncommon for regular Kubernetes clusters.
language = "esql"
license = "Elastic License v2"
name = "Kubernetes Potential Endpoint Permission Enumeration Attempt by Anonymous User Detected"
note = """ ## Triage and analysis
> **Disclaimer**:
> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs.
### Investigating Kubernetes Potential Endpoint Permission Enumeration Attempt by Anonymous User Detected
This detects a burst of Kubernetes API requests from an unauthenticated identity that probes many different endpoints and resource types, producing mostly forbidden/unauthorized/not found responses within a small window. It matters because this pattern maps the cluster's exposed surface and reveals which APIs might be reachable before an attacker commits to credential theft or exploitation. A common usage pattern is scripted GET/LIST sweeps across core and custom resources (for example pods, secrets, namespaces, and CRDs) from one source IP and user agent.
### Possible investigation steps
- Review the specific request URIs and resource types queried and their sequence to fingerprint common reconnaissance tooling and whether high-value endpoints (e.g., secrets, tokenreviews, subjectaccessreviews, CRDs) were probed.
- Determine whether the apparent source IP is internal or Internet-routable and confirm the true originating client by correlating load balancer/ingress/firewall logs (including X-Forwarded-For) with the audit event timestamps.
- Validate Kubernetes API server authentication/authorization posture during the window to identify misconfiguration that permits anonymous access and confirm whether any requests returned successful responses that indicate real data exposure.
- Hunt for follow-on activity from the same origin or user agent such as authenticated requests, service account token usage, RBAC/ClusterRoleBinding changes, pod exec, or secret/configmap reads to assess escalation beyond discovery.
- If the API endpoint is publicly reachable, apply immediate containment by restricting network access to the API server (allowlisting, VPN/private endpoint, temporary IP blocks) while preserving relevant audit and network logs for forensics.
### False positive analysis
- Misconfigured or transitional API server authentication (e.g., anonymous auth briefly enabled or a failing authn proxy/fronting component) can cause legitimate clients to appear as `system:anonymous` and generate multiple 401/403/404 responses across several endpoints during normal cluster access attempts.
- Internal cluster health checks or component discovery behavior that hits multiple API paths without presenting credentials (or uses requests that the audit log records with empty/null usernames) can resemble enumeration when it produces a short burst of failed requests across diverse resources from a single source IP and user agent.
### Response and remediation
- Immediately restrict Kubernetes API server network exposure by allowlisting known admin/VPN IPs and temporarily blocking the observed source IP(s) and user agent at the load balancer/firewall while preserving audit logs and reverse-proxy access logs for the timeframe.
- Eradicate the anonymous access path by disabling anonymous authentication on the API server, fixing any misconfigured auth proxy that forwards unauthenticated traffic, and removing any RBAC bindings that grant permissions to `system:anonymous` or `system:unauthenticated`.
- Validate whether any requests from the same source returned successful responses (especially reads of secrets/configmaps, tokenreviews/subjectaccessreviews, or CRDs) and, if so, rotate impacted service account tokens and credentials and perform a targeted review of recently issued tokens and new ClusterRoleBindings.
- Recover by re-enabling API access in a controlled manner (private endpoint/VPN, bastion, or mTLS), confirming expected kubectl and controller functionality, and monitoring for renewed bursts of failed requests across many request URIs from unauthenticated identities.
- Escalate to the incident response lead and platform security team if any anonymous request succeeded, if the probing repeats from multiple external IPs, or if follow-on activity appears (new privileged RBAC, pod exec, or secret reads) within 24 hours of the enumeration attempt.
- Harden by enforcing least-privilege RBAC, enabling and retaining full audit logging for authn/authz failures, applying API server rate limits/WAF rules for repeated 401/403/404 sweeps, and continuously validating that the API endpoint is not publicly reachable.
"""
references = [
"https://heilancoos.github.io/research/2025/12/16/kubernetes.html#unauthenticated-api-access"
]
@@ -26,6 +57,7 @@ tags = [
"Domain: Kubernetes",
"Use Case: Threat Detection",
"Tactic: Discovery",
"Resources: Investigation Guide",
]
timestamp_override = "event.ingested"
type = "esql"
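The burst logic this rule describes (many distinct request URIs from one unauthenticated identity, mostly failing, inside a short window) can be sketched in Python over simplified audit events. The thresholds, window size, and field names here are assumptions; the rule's actual ES|QL aggregation is not shown in this diff.

```python
# Illustrative sketch of the enumeration-burst heuristic over audit events.
# Event shape and thresholds are assumed, not taken from the rule.
from collections import defaultdict

FAIL_CODES = {401, 403, 404}

def find_enumeration_bursts(events, window_s=60, min_uris=10, min_fail_ratio=0.8):
    """Flag (user, source_ip) pairs that sweep many URIs with mostly failed responses."""
    buckets = defaultdict(list)
    for e in events:
        buckets[(e["user"], e["source_ip"])].append(e)
    alerts = []
    for (user, ip), evs in buckets.items():
        evs.sort(key=lambda e: e["ts"])
        start = 0
        for end in range(len(evs)):
            # Shrink the sliding window until it spans at most window_s seconds.
            while evs[end]["ts"] - evs[start]["ts"] > window_s:
                start += 1
            window = evs[start:end + 1]
            uris = {e["uri"] for e in window}
            fails = sum(e["code"] in FAIL_CODES for e in window)
            if len(uris) >= min_uris and fails / len(window) >= min_fail_ratio:
                alerts.append({"user": user, "source_ip": ip,
                               "distinct_uris": len(uris)})
                break  # one alert per actor is enough
    return alerts

events = [{"user": "system:anonymous", "source_ip": "198.51.100.9",
           "ts": i, "uri": f"/api/v1/resource{i}", "code": 403} for i in range(12)]
print(find_enumeration_bursts(events))
```

Counting distinct URIs rather than raw request volume is the key discriminator: retries against one endpoint stay quiet, while a sweep across many resources trips the threshold.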
@@ -2,7 +2,7 @@
creation_date = "2026/02/02"
integration = ["kubernetes"]
maturity = "production"
-updated_date = "2026/02/02"
+updated_date = "2026/02/09"
[rule]
author = ["Elastic"]
@@ -14,6 +14,36 @@ can detect automated permission enumeration attempts. This behavior is uncommon
language = "esql"
license = "Elastic License v2"
name = "Kubernetes Potential Endpoint Permission Enumeration Attempt Detected"
note = """ ## Triage and analysis
> **Disclaimer**:
> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs.
### Investigating Kubernetes Potential Endpoint Permission Enumeration Attempt Detected
Detects a single Kubernetes identity from one IP issuing a burst of API calls across many resources and URLs with a mix of allowed and denied outcomes, consistent with automated RBAC probing rather than normal operations. This matters because attackers use it to map what they can access and identify high-value objects (secrets, pods, nodes) before escalation or lateral movement. A common pattern is running a script that iterates list/get/watch on dozens of API endpoints until it finds ones that return data.
### Possible investigation steps
- Expand the timeline around the alert for the same identity and source to reconstruct the full API-call sequence and identify which resource types returned successful data, prioritizing secrets, configmaps, nodes, pods, and RBAC objects.
- Determine whether the source IP maps to a cluster node, pod egress/NAT, VPN, or an external host using infrastructure and network telemetry, and confirm it matches expected administrative or automation origins.
- Validate whether the acting identity is a human user, service account, or external auth integration and review recent sign-ins/token issuance and current RBAC bindings for unexpected or overly broad access.
- Hunt for follow-on actions from the same identity or IP that indicate escalation or execution, such as modifying role bindings, creating privileged pods, accessing secret data, or initiating exec/port-forward operations.
- If the activity is not clearly legitimate, contain by rotating or disabling the credential and tightening permissions, then search for the same enumeration behavior across other identities and sources to scope impact.
### False positive analysis
- A cluster administrator or platform engineer using kubectl from a single workstation/VPN IP to troubleshoot RBAC issues may rapidly test get/list/watch across multiple resources and endpoints, producing a mix of allowed and forbidden responses within a short window.
- A newly deployed or updated in-cluster component using a service account may probe several Kubernetes API resources during initialization or capability detection and encounter intermittent authorization denials due to incomplete RBAC bindings, generating diverse requestURIs/resources with both success and failure outcomes.
### Response and remediation
- Quarantine the actor by disabling the implicated user or service account (revoke kubeconfig/token and delete associated Secrets for service-account tokens) and, if the source IP is external, block it at the API server ingress/load balancer while preserving access for known admin networks.
- Eradicate the access path by rotating any credentials the identity could have used (OIDC refresh tokens, client certs, static kubeconfigs) and removing unexpected RBAC RoleBindings/ClusterRoleBindings or groups that grant broad read access discovered during review.
- Validate impact and recover by reviewing what endpoints returned successful data during the burst (especially secrets, configmaps, nodes, pods, and RBAC objects), rotating any exposed application secrets, and restarting affected workloads after credential updates.
- Escalate immediately to incident response if the same identity subsequently creates/patches RBAC bindings, deploys privileged pods/daemonsets, performs exec/port-forward, or accesses secret data across multiple namespaces.
- Harden by enforcing least-privilege RBAC for humans and service accounts, segmenting API access with network controls (private endpoint/VPN allowlists), and enabling short-lived tokens with regular rotation plus alerting on repeated mixed allow/deny probing across many resources.
"""
references = [
"https://heilancoos.github.io/research/2025/12/16/kubernetes.html#unauthenticated-api-access"
]
@@ -25,6 +55,7 @@ tags = [
"Domain: Kubernetes",
"Use Case: Threat Detection",
"Tactic: Discovery",
"Resources: Investigation Guide",
]
timestamp_override = "event.ingested"
type = "esql"
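The mixed allow/deny probing pattern this rule describes can be sketched in Python: a single identity touching many distinct resources with both allowed and denied outcomes. The threshold and field names are assumptions (the `allow`/`forbid` values mirror the Kubernetes `authorization.k8s.io/decision` audit annotation); the rule's actual ES|QL source is not shown in this diff.

```python
# Illustrative sketch: flag one identity probing many resources with a mix of
# allowed and denied authorization outcomes. Event shape/threshold are assumed.
from collections import defaultdict

def find_rbac_probing(events, min_resources=8):
    """Flag actors whose requests span many resources with both allow and forbid decisions."""
    by_actor = defaultdict(list)
    for e in events:
        by_actor[(e["user"], e["source_ip"])].append(e)
    alerts = []
    for (user, ip), evs in by_actor.items():
        resources = {e["resource"] for e in evs}
        decisions = {e["decision"] for e in evs}
        if len(resources) >= min_resources and {"allow", "forbid"} <= decisions:
            alerts.append({"user": user, "source_ip": ip,
                           "resources_probed": len(resources)})
    return alerts

probe = [{"user": "dev-sa", "source_ip": "10.2.3.4",
          "resource": f"res{i}", "decision": "allow" if i % 2 else "forbid"}
         for i in range(10)]
print(find_rbac_probing(probe))
```

Requiring both decisions in the set is what separates RBAC probing from the anonymous-sweep case above: a probe that only ever fails looks like the unauthenticated pattern, while mixed outcomes imply a credentialed identity mapping its effective permissions.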
@@ -2,7 +2,7 @@
creation_date = "2026/02/04"
integration = ["kubernetes"]
maturity = "production"
-updated_date = "2026/02/04"
+updated_date = "2026/02/09"
[rule]
author = ["Elastic"]
@@ -15,6 +15,37 @@ index = ["logs-kubernetes.audit_logs-*"]
language = "kuery"
license = "Elastic License v2"
name = "Kubernetes Cluster-Admin Role Binding Created"
note = """ ## Triage and analysis
> **Disclaimer**:
> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs.
### Investigating Kubernetes Cluster-Admin Role Binding Created
This rule flags when someone creates a RoleBinding or ClusterRoleBinding that assigns the cluster-admin role, which grants unrestricted control over every Kubernetes resource and enables rapid privilege escalation or persistence. Attackers often abuse a stolen namespace service account to bind it to cluster-admin, then pivot to read secrets, change security controls, or deploy a privileged DaemonSet across all nodes to maintain control.
### Possible investigation steps
- Identify who created the binding by reviewing the audit event user identity, groups, source IP, and user agent, and confirm whether it matches an approved admin workflow or automation.
- Inspect the created RoleBinding or ClusterRoleBinding to determine which subject received cluster-admin, whether it targets a service account or external identity, and whether the subject is expected to have cluster-wide privileges.
- Correlate the creator and bound subject with recent authentication events and credential changes to spot compromised accounts, unusual access locations, or use of long-lived tokens.
- Review subsequent Kubernetes audit activity from the same actor or newly privileged subject for rapid follow-on actions such as listing secrets, creating privileged pods/daemonsets, modifying RBAC, or disabling admission controls.
- Validate the change against change management records and repository-based RBAC manifests, and if unauthorized, assess scope by enumerating other recent privileged RBAC grants created around the same time.
### False positive analysis
- A cluster bootstrap, upgrade, or recovery workflow legitimately creates or re-creates a ClusterRoleBinding/RoleBinding to `cluster-admin` for a break-glass admin user or core control-plane service account as part of restoring expected RBAC state.
- An approved operational change temporarily grants `cluster-admin` to a namespace service account or automation identity to perform broad maintenance tasks (e.g., installing cluster-scoped resources), and the binding creation is captured during the allowed change window.
### Response and remediation
- Immediately identify and delete or edit the newly created RoleBinding/ClusterRoleBinding granting `cluster-admin`, then revoke the bound subject's access by rotating the affected service account token or disabling the implicated user/identity provider account.
- Quarantine likely-abused workloads by scaling down or deleting pods/deployments created by the newly privileged subject and blocking its network access with namespace isolation policies while you preserve relevant audit logs and YAML manifests.
- Enumerate and undo follow-on changes made after the binding creation, including additional RBAC grants, new cluster-scoped resources (CRDs, webhooks), privileged DaemonSets, secret reads, or changes to admission controllers, and rotate any exposed credentials found in Secrets.
- Recover by restoring RBAC and critical cluster resources from GitOps or known-good backups, then re-apply least-privilege roles and validate access with `kubectl auth can-i` for impacted identities and namespaces.
- Escalate to incident response leadership immediately if the binding targets a service account, an external identity not in the admin group, or if there is evidence of secret access, privileged workload creation, or persistence mechanisms (e.g., new webhooks or DaemonSets).
- Harden by enforcing RBAC via GitOps-only change control, restricting `cluster-admin` binding creation with admission policy (ValidatingAdmissionPolicy/Kyverno/OPA Gatekeeper), requiring MFA and short-lived tokens for admins, and alerting on any creation or modification of cluster-wide RBAC bindings.
"""
references = [
"https://heilancoos.github.io/research/2025/12/16/kubernetes.html#overly-permissive-role-based-access-control",
]
@@ -27,6 +58,7 @@ tags = [
"Use Case: Threat Detection",
"Tactic: Persistence",
"Tactic: Privilege Escalation",
"Resources: Investigation Guide",
]
timestamp_override = "event.ingested"
type = "query"
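The binding-creation check this rule describes can be sketched in Python against a simplified Kubernetes audit event. The event shape follows the audit log's `verb`/`responseObject` fields but is trimmed for illustration; the rule itself is a kuery query not shown in this diff.

```python
# Illustrative sketch: match audit events that create a RoleBinding or
# ClusterRoleBinding whose roleRef is cluster-admin. Simplified audit shape.
def cluster_admin_binding_created(event: dict) -> bool:
    """True if this audit event records creation of a cluster-admin binding."""
    obj = event.get("responseObject", {})
    return (
        event.get("verb") == "create"
        and obj.get("kind") in ("RoleBinding", "ClusterRoleBinding")
        and obj.get("roleRef", {}).get("name") == "cluster-admin"
    )

event = {
    "verb": "create",
    "responseObject": {
        "kind": "ClusterRoleBinding",
        "roleRef": {"apiGroup": "rbac.authorization.k8s.io",
                    "kind": "ClusterRole", "name": "cluster-admin"},
        "subjects": [{"kind": "ServiceAccount", "name": "default",
                      "namespace": "dev"}],
    },
}
print(cluster_admin_binding_created(event))  # True
```

The `subjects` list in the matched object is what the first investigation step pivots on: a ServiceAccount subject in an application namespace is far more suspicious than a known admin group.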
@@ -2,7 +2,7 @@
creation_date = "2026/02/04"
integration = ["kubernetes"]
maturity = "production"
-updated_date = "2026/02/04"
+updated_date = "2026/02/09"
[rule]
author = ["Elastic"]
@@ -14,6 +14,37 @@ privilege escalation or unauthorized access within the cluster.
language = "esql"
license = "Elastic License v2"
name = "Kubernetes Creation or Modification of Sensitive Role"
note = """ ## Triage and analysis
> **Disclaimer**:
> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs.
### Investigating Kubernetes Creation or Modification of Sensitive Role
This rule detects allowed create, update, or patch actions on Roles and ClusterRoles that introduce high-risk RBAC permissions, including wildcard access and escalation verbs like bind, escalate, or impersonate. These changes matter because they can silently expand privileges and enable persistence or lateral movement across the cluster. Attackers commonly add a new ClusterRole with `*` verbs/resources and then use it to bind themselves or a service account to cluster-admin-equivalent access.
### Possible investigation steps
- Identify the responsible identity and origin by reviewing the audit event's user/service account, userAgent, and source IPs, then confirm whether the action came from approved automation (e.g., GitOps/CI) or an interactive session.
- Retrieve and diff the Role/ClusterRole manifest before vs after the change to pinpoint newly added wildcards, escalation verbs (bind/escalate/impersonate), or permissions over RBAC resources that enable privilege escalation.
- Enumerate RoleBindings/ClusterRoleBindings that reference the modified role and determine which users/groups/service accounts gained effective permissions, prioritizing bindings created/changed near the same time.
- Validate authorization intent by correlating the change with a change ticket/PR and the expected namespace/cluster scope, and flag any out-of-band edits (kubectl apply/edit) that bypass the normal workflow.
- If suspicious, contain by reverting the role and removing or disabling newly privileged bindings/subjects, then hunt for follow-on activity from the same identity (e.g., creation of new service accounts, secrets access, or additional RBAC changes) within the incident window.
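The before/after manifest diff in the steps above can be sketched as a small Python helper (hypothetical, not part of the rule; rule dictionaries follow the Kubernetes RBAC `PolicyRule` schema of `verbs`/`resources`):

```python
# Hypothetical helper: compare the rule lists of a Role/ClusterRole manifest
# before and after a change and report newly introduced high-risk grants
# (wildcards and the escalation verbs bind/escalate/impersonate).

ESCALATION_VERBS = {"bind", "escalate", "impersonate"}

def risky_grants(rules):
    """Return the set of (verb, resource) pairs considered high risk."""
    found = set()
    for rule in rules:
        for verb in rule.get("verbs", []):
            for resource in rule.get("resources", []):
                if verb == "*" or resource == "*" or verb in ESCALATION_VERBS:
                    found.add((verb, resource))
    return found

def new_risky_grants(before, after):
    """High-risk grants present after the change but not before."""
    return risky_grants(after) - risky_grants(before)

before = [{"verbs": ["get", "list"], "resources": ["pods"]}]
after = before + [{"verbs": ["*"], "resources": ["*"]},
                  {"verbs": ["bind"], "resources": ["clusterroles"]}]

print(sorted(new_risky_grants(before, after)))
# → [('*', '*'), ('bind', 'clusterroles')]
```

In practice the `before` manifest would come from Git/GitOps history and the `after` manifest from the audit event's request object.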
### False positive analysis
- Cluster administrators or platform automation legitimately create or update Roles/ClusterRoles to include wildcard verbs/resources or escalation-related verbs (bind/escalate/impersonate) during initial cluster bootstrapping, feature enablement, or maintenance, especially when enabling broad operational access for system components.
- Routine RBAC refactoring such as consolidating multiple granular roles into a single reusable role, migrating permissions across namespaces, or adjusting access for incident response can temporarily add permissions over RBAC resources (roles/rolebindings/clusterroles/clusterrolebindings) and trigger the rule even when the change is approved and tracked.
### Response and remediation
- Immediately locate and quarantine the changed Role/ClusterRole by reverting it to the last known-good manifest (from Git/GitOps) or deleting it if unauthorized, and remove any new RoleBinding/ClusterRoleBinding subjects that reference it.
- Contain the actor by disabling or rotating credentials for the responsible user/service account (and its tokens), and if the change came from a workload, isolate the namespace/workload (scale down, deny egress) until provenance is confirmed.
- Eradicate persistence by searching for and removing additional RBAC changes made in the same window (new roles, bindings, service accounts) and by revoking any newly granted access to secrets or cluster-scoped resources discovered during review.
- Recover by redeploying RBAC from a controlled pipeline, validating effective permissions for impacted subjects, and monitoring for re-creation of the same role name or re-binding attempts after rollback.
- Escalate to platform security/incident response immediately if the role grants wildcard permissions, includes `impersonate`/`escalate`/`bind`, is cluster-scoped, or is bound to non-admin subjects or external identities without an approved change record.
- Harden by enforcing RBAC guardrails (OPA Gatekeeper/Kyverno policies blocking wildcard/escalation verbs except for approved groups), restricting who can create/update RBAC objects, and requiring all RBAC changes to flow through code review and signed GitOps automation.
"""
references = [
"https://heilancoos.github.io/research/2025/12/16/kubernetes.html#overly-permissive-role-based-access-control",
]
@@ -26,6 +57,7 @@ tags = [
"Use Case: Threat Detection",
"Tactic: Persistence",
"Tactic: Privilege Escalation",
"Resources: Investigation Guide",
]
timestamp_override = "event.ingested"
type = "esql"
@@ -2,7 +2,7 @@
creation_date = "2026/02/04"
integration = ["kubernetes"]
maturity = "production"
updated_date = "2026/02/04"
updated_date = "2026/02/09"
[rule]
author = ["Elastic"]
@@ -14,6 +14,37 @@ index = ["logs-kubernetes.audit_logs-*"]
language = "kuery"
license = "Elastic License v2"
name = "Kubernetes Creation of a RoleBinding Referencing a ServiceAccount"
note = """ ## Triage and analysis
> **Disclaimer**:
> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs.
### Investigating Kubernetes Creation of a RoleBinding Referencing a ServiceAccount
This rule detects creation of a RoleBinding or ClusterRoleBinding that grants permissions to a ServiceAccount, a common way to delegate access inside the cluster and a frequent precursor to stealthy privilege escalation or persistence. Attackers who gain the ability to create bindings often attach an over-privileged role (or cluster-wide role) to an existing ServiceAccount used by a workload, then use that workload's token to operate with elevated rights without creating new user identities.
### Possible investigation steps
- Identify the principal that created the binding (user/service account), along with the source IP, user-agent, and authentication method, to determine whether it originated from an expected controller, CI/CD system, or a suspicious client.
- Review the new RoleBinding/ClusterRoleBinding's `roleRef` and subjects to determine what permissions were granted, and assess blast radius by inspecting the referenced Role/ClusterRole rules for high-impact verbs/resources (e.g., secrets, pods/exec, nodes, RBAC).
- Determine where the referenced ServiceAccount is used by enumerating pods/deployments in the namespace that run under it, checking whether service account tokens are mounted, and whether this SA is associated with privileged workloads or externally reachable services.
- Correlate nearby audit activity for additional RBAC or identity changes (new roles, bindings, service accounts, token requests) and for follow-on actions performed using the ServiceAccount that indicate attempted privilege escalation or persistence.
- Validate the change against approved deployment/change records and, if unauthorized or overly permissive, remove/roll back the binding and rotate or invalidate the ServiceAccount credentials while tightening RBAC to least privilege.
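The blast-radius check in the steps above can be sketched as a minimal Python helper (hypothetical, not part of the rule) that flags the high-impact resources called out, using the `resources` field from the referenced role's `PolicyRule` entries:

```python
# Hypothetical sketch: surface high-impact resources granted by the
# Role/ClusterRole that a new RoleBinding references. A wildcard resource
# covers everything, so it short-circuits to the full high-impact set.
HIGH_IMPACT = {"secrets", "pods/exec", "nodes", "roles", "rolebindings",
               "clusterroles", "clusterrolebindings"}

def high_impact_access(rules):
    hits = []
    for rule in rules:
        resources = set(rule.get("resources", []))
        if "*" in resources:
            return sorted(HIGH_IMPACT)  # wildcard grants all of them
        hits.extend(sorted(resources & HIGH_IMPACT))
    return hits

role_rules = [{"verbs": ["get", "create"], "resources": ["secrets", "configmaps"]},
              {"verbs": ["create"], "resources": ["pods/exec"]}]
print(high_impact_access(role_rules))  # → ['secrets', 'pods/exec']
```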
### False positive analysis
- A cluster administrator or GitOps-driven deployment legitimately creates or updates RoleBindings/ClusterRoleBindings to grant a workload ServiceAccount the minimal permissions required for a new release, namespace onboarding, or routine RBAC refactoring.
- Kubernetes controllers or automation running under authorized identities (e.g., internal operators, admission policies, or namespace provisioning jobs) create bindings for default or system ServiceAccounts as part of standard cluster bootstrap, reconciliation, or multi-tenant namespace setup.
### Response and remediation
- Immediately fetch and snapshot the created RoleBinding/ClusterRoleBinding manifest, its referenced Role/ClusterRole, and recent Kubernetes audit events around the creator and the ServiceAccount to preserve evidence and establish scope.
- Contain potential misuse by deleting or scaling down workloads that use the referenced ServiceAccount and temporarily revoking the new binding (or applying an emergency deny policy via admission controls) until the change is validated.
- Eradicate unauthorized privilege delegation by removing the binding, replacing it with a least-privilege Role/RoleBinding scoped to the required namespace/resources, and rotating credentials by recreating the ServiceAccount or forcing token/key rotation for any dependent workloads.
- Recover safely by redeploying affected applications with the corrected RBAC, validating that required operations succeed without cluster-admin-equivalent rights, and monitoring for repeated binding creation or follow-on access to secrets, pod exec, or node-level resources.
- Escalate to platform security/incident response immediately if the binding references a high-privilege ClusterRole (e.g., cluster-admin), targets a broadly used ServiceAccount, or is followed by suspicious actions such as secret reads, new token requests, or pod exec sessions from the same identity.
- Harden by enforcing RBAC guardrails with admission policies that restrict who can create RoleBindings/ClusterRoleBindings and which roles may be referenced, disabling auto-mounting of service account tokens where not needed, and adopting GitOps-only RBAC changes with mandatory review.
"""
references = [
"https://heilancoos.github.io/research/2025/12/16/kubernetes.html#overly-permissive-role-based-access-control",
]
@@ -26,6 +57,7 @@ tags = [
"Use Case: Threat Detection",
"Tactic: Persistence",
"Tactic: Privilege Escalation",
"Resources: Investigation Guide",
]
timestamp_override = "event.ingested"
type = "query"
@@ -2,7 +2,7 @@
creation_date = "2026/02/04"
integration = ["kubernetes"]
maturity = "production"
updated_date = "2026/02/04"
updated_date = "2026/02/09"
[rule]
author = ["Elastic"]
@@ -17,6 +17,37 @@ index = ["logs-kubernetes.audit_logs-*"]
language = "eql"
license = "Elastic License v2"
name = "Kubernetes Sensitive RBAC Change Followed by Workload Modification"
note = """ ## Triage and analysis
> **Disclaimer**:
> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs.
### Investigating Kubernetes Sensitive RBAC Change Followed by Workload Modification
This rule detects when a user grants or broadens high-risk permissions in a Role/ClusterRole and then quickly creates or patches a DaemonSet, Deployment, or CronJob, a strong signal of RBAC-driven privilege escalation followed by payload deployment. Attackers often add wildcard access or escalation verbs to a new role, bind it to their identity, then patch a workload to run a malicious container across nodes or on a schedule to establish persistence.
### Possible investigation steps
- Review the Role/ClusterRole change diff to identify newly granted wildcard resources/verbs or escalation permissions (e.g., bind, impersonate, escalate) and determine the effective access increase for the actor.
- Identify any RoleBinding/ClusterRoleBinding creations or updates around the same time to see whether the modified role was bound to the same principal or a newly created service account.
- Inspect the subsequent DaemonSet/Deployment/CronJob spec changes for malicious indicators such as new images, added initContainers, elevated securityContext (privileged/hostPID/hostNetwork), hostPath mounts, or suspicious command/args.
- Correlate pod runtime activity from the modified workload (image pulls, container starts, outbound connections, and access to secrets/configmaps) to confirm execution and scope of impact.
- Validate the actor's legitimacy by checking whether the request originated from expected IP/user-agent and whether the identity is associated with approved CI/CD automation or an unusual interactive session.
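The workload-spec inspection step above can be sketched as a small Python helper (hypothetical, not part of the rule; keys mirror the Kubernetes pod spec schema):

```python
# Hypothetical sketch: flag the malicious workload-spec indicators listed
# above — privileged securityContext, hostPID/hostNetwork, and hostPath
# mounts — in a pod spec pulled from the modified workload.
def risky_indicators(pod_spec):
    flags = []
    if pod_spec.get("hostPID"):
        flags.append("hostPID")
    if pod_spec.get("hostNetwork"):
        flags.append("hostNetwork")
    for vol in pod_spec.get("volumes", []):
        if "hostPath" in vol:
            flags.append(f"hostPath:{vol['hostPath'].get('path')}")
    for c in pod_spec.get("containers", []) + pod_spec.get("initContainers", []):
        if c.get("securityContext", {}).get("privileged"):
            flags.append(f"privileged:{c.get('name')}")
    return flags

spec = {"hostNetwork": True,
        "volumes": [{"name": "host", "hostPath": {"path": "/"}}],
        "containers": [{"name": "x", "securityContext": {"privileged": True}}]}
print(risky_indicators(spec))  # → ['hostNetwork', 'hostPath:/', 'privileged:x']
```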
### False positive analysis
- A platform engineer performing an urgent, legitimate RBAC adjustment (e.g., expanding a Role/ClusterRole for a new feature rollout) and then immediately patching or deploying a DaemonSet/Deployment/CronJob as part of the same change window can match this sequence.
- A CI/CD pipeline or GitOps-style workflow using a non-system:masters identity may update RBAC manifests and then apply workload updates within minutes during routine releases, producing this pattern without malicious intent.
### Response and remediation
- Immediately revoke or roll back the risky Role/ClusterRole changes and remove any new/updated RoleBinding/ClusterRoleBinding that ties the elevated permissions to the triggering user or service account.
- Quarantine the modified Deployment/DaemonSet/CronJob by scaling it to zero or deleting it and cordon/drain affected nodes if pods ran privileged, used hostPath mounts, or executed on many nodes.
- Rotate credentials and access paths exposed through the workload (service account tokens, kubeconfig files, mounted secrets, cloud keys) and invalidate any newly issued tokens tied to the actor.
- For eradication and recovery, redeploy workloads from trusted Git/registry sources, block the suspicious images/digests in admission controls, and verify no persistence remains via CronJobs, DaemonSets, webhook configurations, or additional RBAC bindings.
- Escalate to incident response and platform leadership if the RBAC change included wildcard permissions or escalation verbs, if the workload ran privileged/hostNetwork/hostPID, or if sensitive secrets were accessed or exfiltration is suspected.
- Harden by enforcing least-privilege RBAC, requiring peer approval for RBAC changes, restricting workload mutations via GitOps-only service accounts, and using admission policies to deny privileged pods, hostPath mounts, and unapproved registries.
"""
references = [
"https://heilancoos.github.io/research/2025/12/16/kubernetes.html#overly-permissive-role-based-access-control",
]
@@ -29,6 +60,7 @@ tags = [
"Use Case: Threat Detection",
"Tactic: Privilege Escalation",
"Tactic: Persistence",
"Resources: Investigation Guide",
]
timestamp_override = "event.ingested"
type = "eql"
@@ -2,7 +2,7 @@
creation_date = "2026/02/04"
integration = ["kubernetes"]
maturity = "production"
updated_date = "2026/02/04"
updated_date = "2026/02/09"
[rule]
author = ["Elastic"]
@@ -16,6 +16,37 @@ index = ["logs-kubernetes.audit_logs-*"]
language = "kuery"
license = "Elastic License v2"
name = "Kubernetes Service Account Modified RBAC Objects"
note = """ ## Triage and analysis
> **Disclaimer**:
> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs.
### Investigating Kubernetes Service Account Modified RBAC Objects
This rule detects Kubernetes service accounts performing allowed write actions on RBAC resources such as Roles and RoleBindings, which is atypical because service accounts rarely administer permissions. It matters because stolen or over-privileged service account tokens can silently alter authorization to gain or retain elevated access across the cluster. An attacker commonly uses a compromised workload's token to create or patch a binding that grants cluster-admin privileges to their service account for persistent control.
### Possible investigation steps
- Retrieve the full audit event and diff the before/after RBAC object to identify newly granted subjects, verbs, resources, and cluster-admin or wildcard permissions.
- Trace the acting service account to its owning workload (Deployment/Pod) and node, then review recent image changes, restarts, exec sessions, and container logs around the event time for compromise indicators.
- Determine whether the change is attributable to an expected controller or GitOps/CI automation by correlating with change tickets, pipeline runs, and repository commits for RBAC manifests.
- Validate whether the service account token may be abused by checking for unusual API access patterns, source IPs/user agents, and cross-namespace activity compared to its baseline behavior.
- Contain if suspicious by reverting the RBAC change, rotating the service account token (and any mounted secrets), and tightening the service account's Role/ClusterRole to least privilege.
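The pattern this rule targets can be sketched as a simple predicate (a hypothetical illustration, not the rule's actual query; field names are simplified from the Kubernetes audit log schema):

```python
# Hypothetical sketch: an allowed write verb on an RBAC resource where the
# actor is a service account. Kubernetes service account usernames carry the
# "system:serviceaccount:<namespace>:<name>" prefix.
RBAC_RESOURCES = {"roles", "rolebindings", "clusterroles", "clusterrolebindings"}
WRITE_VERBS = {"create", "update", "patch", "delete"}

def is_sa_rbac_write(event):
    return (event.get("user", "").startswith("system:serviceaccount:")
            and event.get("verb") in WRITE_VERBS
            and event.get("resource") in RBAC_RESOURCES
            and event.get("decision") == "allow")

evt = {"user": "system:serviceaccount:dev:app-sa", "verb": "patch",
       "resource": "rolebindings", "decision": "allow"}
print(is_sa_rbac_write(evt))  # → True
```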
### False positive analysis
- A platform automation running in-cluster (e.g., a controller or CI job using a service account) legitimately applies RBAC manifests during routine deployment, upgrades, or namespace onboarding, resulting in create/patch/update of Roles or RoleBindings.
- A Kubernetes operator or housekeeping workflow running under a service account intentionally adjusts RBAC as part of maintenance (e.g., rotating access, reconciling drift, or cleaning up obsolete bindings) and triggers allowed delete or update actions on RBAC resources.
### Response and remediation
- Immediately remove or quarantine the offending service account by deleting its RoleBindings/ClusterRoleBindings and restarting or scaling down the owning workload to stop further RBAC writes.
- Revert the unauthorized RBAC object changes by restoring the last known-good Roles/Bindings from GitOps/manifests (or `kubectl rollout undo` where applicable) and verify no new subjects gained wildcard or cluster-admin-equivalent access.
- Rotate credentials by recreating the service account or triggering token re-issuance, deleting any mounted legacy token secrets, and redeploying workloads to ensure old tokens cannot be reused.
- Hunt and eradicate persistence by searching for additional recently modified RBAC objects and newly created service accounts in the same namespaces, then remove unauthorized accounts/bindings and scan the implicated container images for backdoors.
- Escalate to incident response and cluster administrators immediately if any change grants `cluster-admin`, introduces `*` verbs/resources, or binds a service account to privileged ClusterRoles across namespaces.
- Harden going forward by enforcing least-privilege RBAC, enabling admission controls to restrict RBAC modifications to approved identities/namespaces, and using short-lived projected service account tokens with workload identity constraints.
"""
references = [
"https://heilancoos.github.io/research/2025/12/16/kubernetes.html#overly-permissive-role-based-access-control",
]
@@ -28,6 +59,7 @@ tags = [
"Use Case: Threat Detection",
"Tactic: Privilege Escalation",
"Tactic: Persistence",
"Resources: Investigation Guide",
]
timestamp_override = "event.ingested"
type = "query"
@@ -2,7 +2,7 @@
creation_date = "2026/01/30"
integration = ["endpoint"]
maturity = "production"
updated_date = "2026/01/30"
updated_date = "2026/02/09"
[rule]
author = ["Elastic"]
@@ -16,6 +16,37 @@ index = ["logs-endpoint.events.process-*", "logs-endpoint.events.file-*"]
language = "eql"
license = "Elastic License v2"
name = "Discovery Command Output Written to Suspicious File"
note = """ ## Triage and analysis
> **Disclaimer**:
> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs.
### Investigating Discovery Command Output Written to Suspicious File
This rule flags a macOS discovery utility launched from an interactive shell and, within seconds, the same process writing to an unusual or hidden file location, indicating staged reconnaissance for later theft. Adversaries commonly run commands like `whoami`, `ifconfig`, `dscl`, or `system_profiler` and redirect output into `/tmp`, `/Users/Shared`, or a dotfile path to bundle host details before exfiltrating the collected text.
### Possible investigation steps
- Review the created/modified file's contents, size, and timestamps to confirm it contains discovery output and whether it is being appended across multiple executions.
- Pivot from the initiating process to identify subsequent child processes or shell commands that compress, encrypt, move, or delete the file, indicating staging and cleanup.
- Examine concurrent network activity from the same process tree for outbound connections, file uploads, or suspicious DNS/HTTP requests immediately after the write event.
- Validate the interactive session context by correlating to the logged-in user, terminal/TTY (if available), remote access artifacts (SSH/VPN/remote management), and recent authentication events for that account.
- Hunt on the host for related staging patterns such as additional hidden files in common drop locations, recent archive creation, or persistence changes (LaunchAgents/LaunchDaemons/crontab) around the alert time.
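The staging-location heuristic used in the hunt step above can be sketched in Python (a hypothetical helper for triage tooling, not the rule's query):

```python
# Hypothetical sketch: flag writes into the drop locations named above —
# /tmp, /private/tmp, /var/tmp, /Users/Shared — or hidden dotfile paths.
from pathlib import PurePosixPath

STAGING_DIRS = ("/tmp/", "/private/tmp/", "/var/tmp/", "/Users/Shared/")

def suspicious_staging_path(path):
    p = PurePosixPath(path)
    if any(str(p).startswith(d) for d in STAGING_DIRS):
        return True
    # hidden dotfile or dot-directory anywhere below the root
    return any(part.startswith(".") for part in p.parts[1:])

print(suspicious_staging_path("/tmp/.info.txt"))        # → True
print(suspicious_staging_path("/Users/bob/.recon"))     # → True
print(suspicious_staging_path("/Users/bob/notes.txt"))  # → False
```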
### False positive analysis
- An administrator or troubleshooting script run from bash/zsh may execute built-in discovery commands (e.g., `system_profiler`, `ifconfig`, `dscl`) and redirect the output into `/tmp`, `/private/tmp`, or `/Users/Shared` as a temporary log or support bundle artifact.
- A login/profile shell customization (e.g., `.zshrc`/`.bash_profile`) or local diagnostic routine may run `whoami`/`arch`/`csrutil` and append results into a hidden dotfile path (e.g., `/*/.*`) for auditing or environment validation, creating a short command-then-write pattern.
### Response and remediation
- Isolate the macOS host from the network and suspend or terminate the implicated shell/process tree that executed the discovery command and immediately wrote into locations like `/tmp`, `/Users/Shared`, or hidden dotfiles to prevent further staging or exfiltration.
- Quarantine the written file(s) and any adjacent artifacts (archives, encrypted blobs, renamed copies) from the same directories, preserve them for analysis, and remove the staged data once collection is complete.
- Identify and eradicate the launch point by reviewing the invoking shell history and user startup scripts (e.g., `.zshrc`, `.bash_profile`) for redirection or scripted discovery, and delete any associated persistence (LaunchAgents/LaunchDaemons, cron entries) tied to the same user or file path.
- Rotate credentials and invalidate active sessions for the logged-in user that ran the command, and audit recent remote access methods (SSH, remote management, VPN) used on the host to ensure the account was not compromised.
- Restore the host to a known-good state by reinstalling or reimaging if tampering is suspected, then monitor for re-creation of the same suspicious file paths and repeat discovery-to-file-write behavior from any interactive shell.
- Escalate to IR leadership immediately if the staged file contains host/user inventory data and there is evidence of outbound transfer attempts (new external connections, upload utilities like `curl`/`scp`, or rapid archive creation) following the write event.
"""
risk_score = 47
rule_id = "60da1bd7-c0b9-4ba2-b487-50a672274c04"
severity = "medium"
@@ -25,7 +56,8 @@ tags = [
"Use Case: Threat Detection",
"Tactic: Collection",
"Tactic: Discovery",
"Data Source: Elastic Defend"
"Data Source: Elastic Defend",
"Resources: Investigation Guide",
]
type = "eql"
query = '''
@@ -2,7 +2,7 @@
creation_date = "2026/01/30"
integration = ["endpoint"]
maturity = "production"
updated_date = "2026/02/03"
updated_date = "2026/02/09"
[rule]
author = ["Elastic"]
@@ -16,6 +16,37 @@ from = "now-9m"
language = "esql"
license = "Elastic License v2"
name = "Suspicious AWS S3 Connection via Script Interpreter"
note = """ ## Triage and analysis
> **Disclaimer**:
> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs.
### Investigating Suspicious AWS S3 Connection via Script Interpreter
This rule flags macOS script interpreters (AppleScript, Node.js, Python) that repeatedly initiate outbound connections to AWS S3 or CloudFront with little or no command context, a common sign of scripted automation rather than normal app traffic. Attackers often use a short Python or Node one-liner to fetch a second-stage payload from an S3 bucket and then poll the same bucket or a CloudFront-backed URL for commands or to upload stolen data.
### Possible investigation steps
- Pivot from the flagged executable to full process ancestry and command-line/script file to determine what code initiated the S3/CloudFront traffic and whether it was launched interactively, by a LaunchAgent/Daemon, or by another app.
- Identify the specific bucket/distribution and object paths involved using available URL/SNI/HTTP telemetry, then validate ownership and reputation by correlating with cloud account inventory, known-good tooling, and threat intel.
- Review concurrent endpoint activity from the same process and user such as file downloads to writable/temp locations, new executable creation, permission changes, or immediate execution of newly written payloads.
- Hunt for follow-on behaviors consistent with C2 or exfiltration including repeated polling intervals, unusually large outbound byte counts, multipart upload patterns, and matching connections from other hosts using the same domain.
- If suspicious, capture and preserve the script contents and related artifacts (Python/Node packages, AppleScript files, launch plist, cron entries) and isolate the host while blocking the destination domain at egress.
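The destination-identification step above can be sketched as a simple domain matcher (a hypothetical triage helper; the patterns approximate common S3 and CloudFront hostname shapes and are not exhaustive):

```python
# Hypothetical sketch: classify observed SNI/DNS destinations as AWS S3
# endpoints (bucket.s3.region.amazonaws.com, s3.amazonaws.com) or
# CloudFront distributions (*.cloudfront.net).
import re

AWS_CDN = re.compile(
    r"(^|\.)("
    r"s3[.-][\w-]+\.amazonaws\.com"   # regional / virtual-hosted S3 endpoints
    r"|s3\.amazonaws\.com"            # legacy global S3 endpoint
    r"|[\w-]+\.cloudfront\.net"       # CloudFront distribution hostnames
    r")$",
    re.IGNORECASE,
)

def is_s3_or_cloudfront(domain):
    return bool(AWS_CDN.search(domain))

print(is_s3_or_cloudfront("mybucket.s3.us-east-1.amazonaws.com"))  # → True
print(is_s3_or_cloudfront("d1234abcd.cloudfront.net"))             # → True
print(is_s3_or_cloudfront("example.com"))                          # → False
```

A match only establishes that the destination is AWS-hosted; bucket ownership still has to be confirmed against cloud account inventory.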
### False positive analysis
- A developer or build/CI workflow runs Python/Node scripts on macOS to fetch artifacts or dependencies from an organization-owned S3 bucket or CloudFront distribution, producing repeated connections during installs, tests, or packaging.
- A legitimate AppleScript/Python/Node automation (e.g., user logon script, LaunchAgent task, or scheduled job) periodically uploads logs/backups or syncs configuration to S3/CloudFront, resulting in bursty, minimal-argument interpreter network starts that exceed the connection threshold.
### Response and remediation
- Isolate the affected macOS host from the network and immediately block the observed S3/CloudFront domain(s) and resolved IPs at egress while allowing access needed for forensics and management.
- Acquire and preserve the initiating script and execution context by collecting the interpreter's on-disk script/one-liner source, parent process details, relevant LaunchAgents/LaunchDaemons plist files, and any newly written binaries or archives associated with the same time window.
- Eradicate persistence and tooling by removing or disabling the malicious launch plist/cron entries, deleting the identified script and any downloaded payloads, and revoking/quarantining any Python/Node packages or AppleScript components tied to the outbound S3 activity.
- Reset and revoke credentials exposed on the host by rotating the user's passwords/tokens, removing any AWS keys found in environment variables/config files (e.g., CLI config, application secrets), and invalidating active sessions associated with the user or host.
- Recover by reimaging or restoring the endpoint from a known-good baseline if payload execution or system modification is confirmed, then reintroduce it to the network only after validating no recurring connections to the same S3/CloudFront endpoints.
- Escalate to incident response and cloud security if multiple hosts show the same destination domain or bucket, the script performs uploads or handles sensitive files, or you identify AWS credentials, data staging, or active command polling indicative of C2 or exfiltration.
"""
risk_score = 47
rule_id = "05f2b649-dc03-4e9a-8c4e-6762469e8249"
severity = "medium"
@@ -24,7 +55,8 @@ tags = [
"OS: macOS",
"Use Case: Threat Detection",
"Tactic: Command and Control",
"Data Source: Elastic Defend"
"Data Source: Elastic Defend",
"Resources: Investigation Guide",
]
type = "esql"
query = '''
@@ -2,7 +2,7 @@
creation_date = "2026/01/30"
integration = ["endpoint"]
maturity = "production"
updated_date = "2026/01/30"
updated_date = "2026/02/09"
[rule]
author = ["Elastic"]
@@ -15,6 +15,37 @@ index = ["logs-endpoint.events.network-*", "logs-endpoint.events.file-*"]
language = "eql"
license = "Elastic License v2"
name = "Executable File Download via Wget"
note = """ ## Triage and analysis
> **Disclaimer**:
> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs.
### Investigating Executable File Download via Wget
This rule detects wget pulling down a Mach-O executable and writing it into commonly abused transient or shared directories on macOS, which often signals payload staging during ingress tool transfer. Attackers frequently run wget from a shell or scripted installer to fetch a second-stage binary into /tmp or /Users/Shared, then immediately execute it to establish command and control or deploy additional tooling.
### Possible investigation steps
- Pivot from the detected wget process to identify its parent process, user context, and full command line to determine whether it was launched by an interactive shell, script, or installer package.
- Review the network connection details from the wget execution (remote IP/domain, URL path, TLS certificate/SNI if available) and assess reputation plus whether it aligns with known internal software distribution.
- Inspect the downloaded Mach-O at the destination path by collecting its hash, signature/notarization status, and basic static traits (strings, embedded URLs, ad-hoc signing) to quickly judge legitimacy.
- Check for immediate follow-on activity from the same host such as execution of the new file, creation of persistence (LaunchAgents/Daemons, cron, login items), or additional tool downloads within the next few minutes.
- Scope for reuse by searching across endpoints for the same downloaded hash, filename, URL, or destination directory pattern to determine blast radius and whether this is a recurring delivery mechanism.
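The static-traits check above can be sketched as a magic-byte test (a hypothetical helper; note that the fat-binary magic `0xcafebabe` also prefixes Java class files, so treat it as an indicator, not proof):

```python
# Hypothetical sketch: identify a downloaded file as Mach-O by its magic
# bytes (32/64-bit, both endiannesses, plus universal/fat binaries).
import os
import tempfile

MACHO_MAGICS = {
    b"\xfe\xed\xfa\xce",  # MH_MAGIC    (32-bit, big-endian)
    b"\xce\xfa\xed\xfe",  # MH_CIGAM    (32-bit, little-endian)
    b"\xfe\xed\xfa\xcf",  # MH_MAGIC_64 (64-bit, big-endian)
    b"\xcf\xfa\xed\xfe",  # MH_CIGAM_64 (64-bit, little-endian)
    b"\xca\xfe\xba\xbe",  # FAT_MAGIC   (universal binary; also Java class!)
}

def looks_like_macho(path):
    with open(path, "rb") as f:
        return f.read(4) in MACHO_MAGICS

# Demo against a synthetic file carrying the 64-bit little-endian magic.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\xcf\xfa\xed\xfe" + b"\x00" * 12)
print(looks_like_macho(f.name))  # → True
os.unlink(f.name)
```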
### False positive analysis
- An administrator or developer uses wget in a script to fetch an internal build or test Mach-O binary and stages it in /tmp or /Users/Shared for immediate execution during troubleshooting or CI-style workflows.
- A legitimate installation or update routine invokes wget to download a helper executable to a transient directory (for unpacking or preflight checks) before moving it into an application bundle, causing a short-lived write to /tmp-like paths.
### Response and remediation
- Isolate the affected macOS host from the network and terminate the active `wget` process to stop additional payload transfers or execution.
- Quarantine the downloaded Mach-O from `/tmp`, `/private/tmp`, `/var/tmp`, or `/Users/Shared` and preserve a copy plus the originating `wget` command/URL for analysis before removal.
- Hunt on the host for immediate follow-on execution of the downloaded file and remove any persistence artifacts created around the same time, such as new LaunchAgents/LaunchDaemons, login items, or cron entries pointing to the staged path.
- Block the observed download URL/domain/IP at egress controls and add allowlisting controls for approved internal distribution sources to reduce future misuse of `wget` for tool transfer.
- Escalate to incident response if the staged Mach-O is executed, unsigned/ad-hoc signed, establishes outbound connections to unapproved infrastructure, or the same hash/URL is found on multiple endpoints.
- Harden endpoints by restricting `wget` usage where possible, enforcing Gatekeeper/notarization and least-privilege execution, and adding monitoring/controls for executable writes and executions from world-writable directories.
"""
references = [
"https://attack.mitre.org/techniques/T1105/"
]
@@ -26,7 +57,8 @@ tags = [
"OS: macOS",
"Use Case: Threat Detection",
"Tactic: Command and Control",
"Data Source: Elastic Defend"
"Data Source: Elastic Defend",
"Resources: Investigation Guide",
]
type = "eql"
query = '''
@@ -2,7 +2,7 @@
creation_date = "2026/01/30"
integration = ["endpoint"]
maturity = "production"
updated_date = "2026/01/30"
updated_date = "2026/02/09"
[rule]
author = ["Elastic"]
@@ -16,6 +16,37 @@ index = ["logs-endpoint.events.network-*", "logs-endpoint.events.process-*"]
language = "eql"
license = "Elastic License v2"
name = "Perl Outbound Network Connection"
note = """ ## Triage and analysis
> **Disclaimer**:
> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs.
### Investigating Perl Outbound Network Connection
This rule detects Perl starting on macOS and then initiating an outbound connection to a public (non-private) IP, a pattern that stands out because Perl rarely performs direct network reach-outs in normal macOS workflows. Attackers often abuse Perl as a built-in “living off the land” runtime to beacon to external command-and-control over HTTP/S or to fetch and execute a second-stage payload from an internet host.
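The rule's gating condition boils down to "destination is not a private address"; a portable sketch of that classification, with the destination IP hypothetical (taken here from the TEST-NET-3 documentation range):

```shell
dest="203.0.113.7"  # hypothetical destination from the alert
case "$dest" in
  # RFC 1918, loopback, and link-local ranges the rule treats as in-scope-exempt
  10.*|127.*|169.254.*|192.168.*|172.1[6-9].*|172.2[0-9].*|172.3[01].*)
    scope="private" ;;
  *)
    scope="public" ;;
esac
echo "destination is $scope"
```

A `public` result is what, combined with the preceding Perl execution, satisfies the rule's pattern.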
### Possible investigation steps
- Review the full command line, parent/child process tree, execution context (user, TTY, working directory), and referenced script/module paths to determine whether the run was expected or suspicious.
- Pivot on the external destination (IP, port, and any resolved domain from DNS telemetry) to assess reputation, hosting characteristics, and whether other endpoints have recently contacted the same infrastructure.
- Examine connection characteristics (protocol, TLS SNI/certificate, HTTP headers/user-agent, data volume, and timing) to identify staged downloads or beacon-like periodicity.
- Correlate nearby file activity for newly created or modified scripts, temp artifacts, or downloaded payloads, and validate them via hashes, signatures, and known-good baselines.
- Check for follow-on behavior consistent with persistence or lateral movement, such as new launchd/cron items, suspicious login items, or additional interpreters and shells spawned from the same lineage.
### False positive analysis
- A legitimate Perl script run by an administrator or scheduled maintenance task (e.g., log rotation, health checks, or API polling) may connect to a public service endpoint over HTTP/S, matching the Perl exec followed by a non-private destination IP pattern.
- A developer workflow that uses Perl one-liners or project scripts to fetch dependencies, query internet-hosted resources, or validate external URLs can generate outbound connections to public IPs that appear unusual on endpoints without an established baseline for Perl network use.
### Response and remediation
- Isolate the affected macOS host from the network (or block the specific destination IP/port at the egress firewall) and terminate the suspicious `perl` process to stop any active command-and-control or payload download.

- Collect and preserve the Perl command line, referenced script paths, current working directory, any newly written files (especially in `/tmp`, `/var/tmp`, and the user's `~/Library`), and the full process tree for forensic review before cleanup.
- Remove or quarantine the identified Perl script and any downloaded payloads, then eradicate persistence by deleting malicious `launchd` agents/daemons, cron entries, and suspicious Login Items created around the time of the outbound connection.
- Reimage or restore the endpoint from a known-good source if integrity cannot be confidently validated, rotate credentials used on the device, and invalidate active sessions/tokens that may have been exposed to the Perl process.
- Escalate to IR/forensics immediately if the destination infrastructure is contacted by multiple hosts, the Perl process runs under a privileged context, or you observe repeated beacon-like connections or evidence of persistence beyond a single script execution.
- Harden by restricting interpreter execution (Perl, Python, Ruby) via endpoint controls, enforcing outbound allowlisting/proxying for user endpoints, and adding detections for Perl launching network tools or writing executable content into user-writable directories.
"""
risk_score = 47
rule_id = "aba3bc11-e02f-4a03-8889-d86ea1a44f76"
severity = "medium"
@@ -25,7 +56,8 @@ tags = [
"Use Case: Threat Detection",
"Tactic: Command and Control",
"Tactic: Execution",
"Data Source: Elastic Defend"
"Data Source: Elastic Defend",
"Resources: Investigation Guide",
]
type = "eql"
query = '''
@@ -2,7 +2,7 @@
creation_date = "2026/01/30"
integration = ["endpoint"]
maturity = "production"
updated_date = "2026/02/03"
updated_date = "2026/02/09"
[rule]
author = ["Elastic"]
@@ -16,6 +16,37 @@ index = ["logs-endpoint.events.process-*", "logs-endpoint.events.network-*"]
language = "eql"
license = "Elastic License v2"
name = "Script Interpreter Connection to Non-Standard Port"
note = """ ## Triage and analysis
> **Disclaimer**:
> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs.
### Investigating Script Interpreter Connection to Non-Standard Port
This rule detects a macOS script interpreter launch (Python, Node, or Ruby) quickly followed by an outbound connection to a raw IP address over a non-standard port. It matters because implants and initial access scripts often bypass domain-based controls and blend into developer tooling while using unusual ports for C2. A common pattern is a one-liner Python or Node stager that beacons directly to an external IP on a high-but-not-ephemeral port (e.g., 4444/8081) to fetch or execute a second stage.
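A minimal triage sketch of the port dimension: flag destinations outside a small set of common service ports. The allowlist here is illustrative only; the rule's actual port logic lives in its EQL query:

```shell
port=4444  # hypothetical destination port from the alert
case "$port" in
  # illustrative "standard" set, not the rule's real definition
  22|53|80|443|8080|8443)
    verdict="standard port" ;;
  *)
    verdict="non-standard port" ;;
esac
echo "$verdict"
```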
### Possible investigation steps
- Review the interpreter's full command line, parent/ancestry, execution path, and working directory to determine whether this was an interactive developer action, a scheduled task, or a hidden launcher.
- Identify the script/module being executed (including any temp paths or inline code), collect it for analysis, and check for obfuscation, encoded payloads, or remote-fetch logic.
- Pivot on the destination IP and port to assess reputation, hosting/ASN, geolocation, and whether the host has contacted the same endpoint before or other endpoints on the same unusual port.
- Correlate around the event time for follow-on activity such as file downloads, new processes, credential access attempts, persistence creation (LaunchAgents/LaunchDaemons), or security tool tampering.
- Validate the initiating user context and host posture (new user/login, recent software installs, unsigned binaries, quarantine attributes, or MDM exceptions) to decide on containment and scoping to peer endpoints.
### False positive analysis
- A developer runs a short Python/Node/Ruby script with a single argument to test a service by connecting directly to a public IP on an application-specific port (e.g., staging APIs, custom web services, or test listeners), resulting in a raw-IP outbound connection outside common ports.
- An administrative or diagnostic script (e.g., a quick health check or connectivity probe) executed via an interpreter uses an IP literal for reliability and targets a non-standard port for internal tooling exposed to the internet, producing the same interpreter-to-raw-IP network pattern without malicious intent.
### Response and remediation
- Isolate the affected macOS host from the network (or block only the observed destination IP:port at the firewall) and terminate the Python/Node/Ruby process that initiated the outbound raw-IP connection.
- Acquire volatile and on-disk artifacts including the interpreter command line, referenced script file, current working directory contents, recent downloads, and any temporary directories used at execution time, then submit the script and any fetched payloads for malware analysis.
- Hunt for persistence and re-infection by checking for new or modified LaunchAgents/LaunchDaemons, cron entries, login items, and recently added executable files, and remove/rollback any items tied to the interpreter or the suspicious IP:port.
- Reset potentially impacted credentials and revoke active tokens for the initiating user if the script accessed keychain material, SSH keys, browser sessions, or cloud CLIs near the event time.
- Restore the endpoint from a known-good snapshot or reimage if the script/payload cannot be confidently eradicated, then validate recovery by confirming no further connections to the same IP:port and no recurrence of the interpreter one-liner.
- Escalate to IR leadership and initiate broader scoping if multiple hosts contact the same external IP:port, the destination is confirmed malicious, or persistence/credential theft is detected, and harden by restricting script interpreter execution via MDM, enforcing full disk access controls, and adding egress allow-listing for non-standard ports.
"""
references = [
"https://github.com/blackorbird/APT_REPORT/blob/master/lazarus/2024-10-14%20Lazarus%20InvisibleFerret.pdf"
]
@@ -28,7 +59,8 @@ tags = [
"Use Case: Threat Detection",
"Tactic: Command and Control",
"Tactic: Execution",
"Data Source: Elastic Defend"
"Data Source: Elastic Defend",
"Resources: Investigation Guide",
]
type = "eql"
query = '''
@@ -2,7 +2,7 @@
creation_date = "2026/01/30"
integration = ["endpoint"]
maturity = "production"
updated_date = "2026/01/30"
updated_date = "2026/02/09"
[rule]
author = ["Elastic"]
@@ -16,6 +16,37 @@ index = ["logs-endpoint.events.network-*"]
language = "eql"
license = "Elastic License v2"
name = "DNS Request for IP Lookup Service via Unsigned Binary"
note = """ ## Triage and analysis
> **Disclaimer**:
> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs.
### Investigating DNS Request for IP Lookup Service via Unsigned Binary
This detects an unsigned or untrusted process on macOS performing DNS lookups for common “what is my public IP” and geolocation services, a frequent reconnaissance step before external communications. It matters because malware uses the host's external IP and region to choose C2 infrastructure, gate payload delivery, or evade sandboxing. A typical pattern is a dropped, unsigned Mach-O or script resolving api.ipify.org or ipinfo.io immediately after execution, then initiating outbound beacons.
### Possible investigation steps
- Identify the initiating process and parent chain, then validate whether the binary is expected for the host/user and whether it is actually unsigned versus a transient signature collection issue.
- Review the same process's near-term network activity for follow-on HTTP(S) requests to the resolved service and any subsequent connections to rare/new domains or IPs that could indicate C2 staging.
- Pivot from the resolved domain to other endpoints to determine prevalence and timing, then prioritize isolated single-host hits with recent first-seen binaries.
- Examine how the binary was introduced by correlating with recent downloads, archive mounts, installer executions, or quarantine/Gatekeeper events around the process start time.
- Acquire and analyze the binary (hash reputation, static strings, entitlements, persistence mechanisms, and launch agents/daemons) to confirm intent and scope of compromise.
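When checking provenance as above, the `com.apple.quarantine` extended attribute is a quick pivot: its value is semicolon-delimited (flags, hex timestamp, downloading agent, event UUID). A sketch with a simulated value, since reading the real attribute requires macOS `xattr`:

```shell
# On macOS the real value comes from:
#   xattr -p com.apple.quarantine /path/to/binary
# Simulated, hypothetical value for illustration:
quarantine="0083;65bcd123;Safari;ABCD-1234-EF56"

# Third field is the agent that wrote the file (browser, installer, etc.)
downloader=$(printf '%s' "$quarantine" | cut -d';' -f3)
echo "downloaded via: $downloader"
```

An absent quarantine attribute on a recently created binary is itself notable, since it suggests the file bypassed the normal download path.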
### False positive analysis
- A developer-built or locally compiled macOS utility/script (run from a user directory) performs a “what is my public IP” DNS lookup for telemetry, diagnostics, or environment detection, and is flagged because it lacks a trusted code signature.
- An unsigned helper binary dropped by a legitimate installer/updater workflow briefly runs during setup to validate external connectivity or geolocation by resolving an IP-lookup domain, and is detected before the binary is signed or placed in its final trusted location.
### Response and remediation
- Isolate the affected macOS host from the network if the unsigned process continues to resolve IP-lookup domains (e.g., api.ipify.org, ipinfo.io) or initiates new outbound connections immediately after the lookup.
- Quarantine the unsigned executable and any associated scripts from disk (preserving path, hashes, and a copy for analysis) and remove its persistence artifacts such as newly created LaunchAgents/LaunchDaemons, login items, or cron entries tied to the same binary.
- Block the observed IP-lookup domains used by the unsigned process at DNS/web egress and add temporary deny rules for any follow-on suspicious destinations the process contacted after resolution.
- Reset compromised credentials and invalidate active sessions for the logged-in user if the process originated from user-writable locations (Downloads, Desktop, /tmp) or if additional discovery/collection behavior is found on the host.
- Reimage or restore the endpoint from a known-good state when persistence or tampering is confirmed, then verify Gatekeeper/XProtect status, re-enable security tooling, and monitor for recurrence of the same binary hash or domain pattern.
- Escalate to the incident response team if the unsigned binary is newly seen in the environment, appears on multiple hosts, or is followed by connections to rare domains/IPs indicative of staging or command-and-control.
"""
risk_score = 47
rule_id = "47e46d85-3963-44a0-b856-bccff48f8676"
severity = "medium"
@@ -24,7 +55,8 @@ tags = [
"OS: macOS",
"Use Case: Threat Detection",
"Tactic: Discovery",
"Data Source: Elastic Defend"
"Data Source: Elastic Defend",
"Resources: Investigation Guide",
]
timestamp_override = "event.ingested"
type = "eql"
@@ -2,7 +2,7 @@
creation_date = "2026/01/30"
integration = ["endpoint"]
maturity = "production"
updated_date = "2026/01/30"
updated_date = "2026/02/09"
[rule]
author = ["Elastic"]
@@ -16,6 +16,37 @@ index = ["logs-endpoint.events.process-*"]
language = "eql"
license = "Elastic License v2"
name = "External IP Address Discovery via Curl"
note = """ ## Triage and analysis
> **Disclaimer**:
> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs.
### Investigating External IP Address Discovery via Curl
This rule detects macOS processes launching curl (or nscurl) to query common public “what is my IP” and geolocation services, often from unusual parent applications or untrusted/unsigned code. Attackers use this to learn the victim's outward-facing address and network context to guide follow-on targeting, routing, or staging decisions. A typical pattern is a script or dropped binary spawning curl with a short command that hits ipify/ipinfo/ifconfig-style endpoints immediately after execution.
### Possible investigation steps
- Review the full process tree and timeline around the curl execution to identify the initiating app/script, preceding download or execution activity, and any rapid follow-on discovery or persistence commands.
- Examine the curl/nscurl command line and stdout/stderr capture (if available) to confirm the external-IP lookup intent and whether results were written to disk, environment variables, or passed to subsequent processes.
- Correlate with network telemetry for the same host and time window to verify the outbound connection, destination IP/ASN, DNS resolution, TLS/SNI details, and any additional unexpected egress to non-lookup infrastructure.
- Validate the provenance of the parent executable by checking its path, quarantine/notarization status, signature trust, and recent file creation/modification events to assess whether it was dropped or launched from a user-writable location.
- Hunt for repeat occurrences across the endpoint (and fleet) that share the same parent, script content, or destination services, then check for associated indicators like new launch agents/daemons, cron jobs, or suspicious login items.
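The command-line review above can be sketched as a simple classifier: flag the captured invocation when it targets a commonly abused lookup service. The service list is illustrative, not the rule's full set:

```shell
cmdline='curl -s https://api.ipify.org'  # hypothetical command line from the alert
case "$cmdline" in
  *ipify.org*|*ipinfo.io*|*ifconfig.me*|*icanhazip.com*)
    verdict="external-IP lookup" ;;
  *)
    verdict="other curl activity" ;;
esac
echo "$verdict"
```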
### False positive analysis
- A user or admin runs a short shell one-liner (bash/zsh with an http-containing command line) that uses curl to quickly confirm the Macs external IP during routine troubleshooting, VPN verification, or connectivity checks.
- A legitimate but unsigned/not-yet-trusted macOS app launched from /Applications, a mounted /Volumes installer/dmg, or a temporary /private/var/folders path performs an external IP lookup via curl as part of initialization, telemetry, or network diagnostics.
### Response and remediation
- Isolate the affected Mac from the network if the curl/nscurl external-IP lookup is spawned by an unsigned/untrusted parent or from user-writable paths (e.g., /private/var/folders, mounted /Volumes) to prevent follow-on command-and-control.
- Quarantine and remove the initiating artifact (app/script/binary) and any associated installers or DMGs, then block its hash and the specific lookup domains contacted (e.g., ipinfo.io, api.ipify.org, ifconfig.me) at egress/DNS to stop repeat discovery.
- Hunt for and delete persistence created around the event (LaunchAgents/LaunchDaemons, login items, cron entries) and terminate any remaining suspicious processes that inherit environment/output from the curl call.
- Reset exposed credentials and invalidate active sessions if the same parent process also accessed browsers, keychain, SSH keys, or configuration files shortly before/after the lookup, and rotate VPN/API tokens used on the host.
- Reimage or restore the endpoint from a known-good snapshot if additional unknown binaries, repeated external-IP lookups, or unexpected outbound connections are observed after cleanup, then validate with a follow-up scan and a clean process baseline.
- Escalate to IR leadership immediately if the external-IP lookup is followed by downloads/execution, persistence creation, or connections to newly registered/rare domains, and harden by restricting curl execution for non-admin contexts and tightening macOS app execution controls (Gatekeeper/notarization).
"""
risk_score = 21
rule_id = "3ad362a9-40cb-4536-8f8b-6a8b5cc24d3c"
severity = "low"
@@ -24,7 +55,8 @@ tags = [
"OS: macOS",
"Use Case: Threat Detection",
"Tactic: Discovery",
"Data Source: Elastic Defend"
"Data Source: Elastic Defend",
"Resources: Investigation Guide",
]
timestamp_override = "event.ingested"
type = "eql"
@@ -2,7 +2,7 @@
creation_date = "2026/01/30"
integration = ["endpoint"]
maturity = "production"
updated_date = "2026/01/30"
updated_date = "2026/02/09"
[rule]
author = ["Elastic"]
@@ -16,6 +16,37 @@ index = ["logs-endpoint.events.file-*"]
language = "eql"
license = "Elastic License v2"
name = "Full Disk Access Permission Check"
note = """ ## Triage and analysis
> **Disclaimer**:
> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs.
### Investigating Full Disk Access Permission Check
This rule detects suspicious reads of the Time Machine preferences plist that adversaries use as a quick litmus test for Full Disk Access, a permission that unlocks broad visibility into user and system data. Attackers commonly launch a scriptable or unsigned helper (for example via Python or AppleScript) to open this file, confirm FDA is present, then proceed to enumerate and collect protected artifacts like browser stores, messages, or backups.
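The litmus test itself is nothing more than a read attempt against a TCC-protected path; a sketch of the check's shape (on a non-macOS analysis box the file is simply absent, which exercises the failure branch):

```shell
plist="/Library/Preferences/com.apple.TimeMachine.plist"

# On macOS, this read succeeds only when the calling context holds
# Full Disk Access; that success/failure is the signal being probed.
if cat "$plist" >/dev/null 2>&1; then
  fda="likely granted"
else
  fda="not granted (or file absent)"
fi
echo "Full Disk Access: $fda"
```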
### Possible investigation steps
- Validate the opening process lineage (parent/child chain, launch method, user session) to determine whether the access originated from an interactive admin action or an unexpected background helper.
- Review the process code-signing identity (signer, notarization, team ID) and binary provenance (download attributes, install source, first-seen time) to distinguish legitimate tooling from potentially dropped or trojanized executables.
- Pivot to other file and directory access by the same process around the alert time to identify follow-on discovery/collection of protected user data (e.g., browser profiles, keychain-related paths, Messages, Mail, backups).
- Check recent and concurrent macOS privacy permission changes and TCC/FDA-related events for the user and process to see if FDA was newly granted, prompted, or bypassed preceding the access.
- Correlate with network activity from the same process or host after the check (new outbound connections, uploads, DNS to uncommon domains) to assess whether the FDA verification preceded staging or exfiltration.
### False positive analysis
- An administrator or power user running an interactive shell (Terminal, bash/sh, python) executes a local audit or troubleshooting script that reads /Library/Preferences/com.apple.TimeMachine.plist to confirm Time Machine configuration and permissions.
- A developer or IT engineer uses a scripting runtime (osascript, node, ruby, perl, python) during endpoint diagnostics to check whether the current session/app context has Full Disk Access by attempting to open the Time Machine preference plist.
### Response and remediation
- Isolate the Mac from the network or apply host firewall blocks if the plist access comes from an unexpected/unsigned process or occurs outside an active user session to prevent follow-on collection and exfiltration.
- Terminate the offending process and remove its persistence (LaunchAgents/LaunchDaemons, cron, login items) and any newly dropped executables or scripts found in user-writable paths that launched the plist check.
- Revoke Full Disk Access for the suspicious app in Privacy & Security settings, reset TCC permissions if tampering is suspected, and rotate credentials/tokens exposed on the host (browser sessions, SSH keys, API keys) if protected data access is likely.
- Collect and preserve evidence (the binary and its hash, quarantine/xattr info, parent process, relevant unified logs, and a copy of the accessed plist) before cleanup, then run a full endpoint malware scan and validate no additional sensitive files were accessed immediately after the check.
- Restore system integrity by updating macOS and security tools, reinstalling or re-imaging if core components were modified, and confirm normal Time Machine configuration after remediation to ensure operational recovery.
- Escalate to IR/SECOPS immediately if the process is unsigned/notarization-missing, shows persistence, or makes outbound connections after the plist read, and harden by enforcing MDM controls that restrict FDA grants and block execution of untrusted scripting runtimes where feasible.
"""
risk_score = 47
rule_id = "e26c0f76-2e80-445b-9e98-ab5532ccc46f"
severity = "medium"
@@ -24,7 +55,8 @@ tags = [
"OS: macOS",
"Use Case: Threat Detection",
"Tactic: Discovery",
"Data Source: Elastic Defend"
"Data Source: Elastic Defend",
"Resources: Investigation Guide",
]
timestamp_override = "event.ingested"
type = "eql"
@@ -2,7 +2,7 @@
creation_date = "2026/01/30"
integration = ["endpoint"]
maturity = "production"
updated_date = "2026/01/30"
updated_date = "2026/02/09"
[rule]
author = ["Elastic"]
@@ -16,6 +16,37 @@ index = ["logs-endpoint.events.process-*"]
language = "eql"
license = "Elastic License v2"
name = "Suspicious SIP Check by macOS Application"
note = """ ## Triage and analysis
> **Disclaimer**:
> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs.
### Investigating Suspicious SIP Check by macOS Application
This rule detects a macOS application bundle launching `csrutil status` and explicitly parsing for “enabled,” an uncommon behavior that often signals preflight environment checks. Attackers use this to confirm System Integrity Protection constraints before deciding whether to attempt persistence, injection, or privilege escalation, or to abort execution to avoid analysis. A common pattern is a trojanized app from a mounted disk image performing the SIP check immediately after first launch, then conditionally unpacking and running a secondary payload.
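The flagged behavior reduces to parsing `csrutil status` output for the word “enabled”; a sketch with the output simulated so it runs anywhere:

```shell
# Simulated output; on a real macOS host: sip_status="$(csrutil status)"
sip_status="System Integrity Protection status: enabled."

if printf '%s' "$sip_status" | grep -q "enabled"; then
  sip="enforced"   # a preflighting payload would typically abort or constrain itself
else
  sip="disabled"   # ...or proceed with riskier persistence or injection
fi
echo "SIP: $sip"
```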
### Possible investigation steps
- Identify the initiating application bundle and validate its provenance by reviewing its code signature, notarization status, Team ID, and download origin (e.g., Gatekeeper quarantine attributes and DMG mount source).
- Build a short timeline around the SIP check to see what executed next from the same parent chain (new processes, scripts, installers, or command interpreters) and whether execution diverged after reading “enabled.”
- Inspect the app's bundle contents and related file activity for dropped binaries, launch agents/daemons, login items, or modified plist files that indicate persistence or staged payload execution.
- Look for follow-on discovery and defense-evasion behavior on the host (e.g., VM/sandbox checks, system profiling, security tool enumeration, permission prompts abuse) that would support a malware preflight workflow.
- If suspicious, isolate the host and collect the app bundle, associated DMG, and execution artifacts for detonation and reverse engineering, then hunt for the same app hash/Team ID across the fleet.
### False positive analysis
- A legitimate enterprise-managed macOS application performing a preflight compatibility or supportability check may invoke `csrutil status` and look for “enabled” to decide whether to proceed with installing drivers, configuring system settings, or enabling features that require SIP-related constraints awareness.
- A user-initiated security/compliance workflow from a GUI app (e.g., a system configuration, diagnostic, or remediation utility distributed as an `.app` from `/Applications` or a mounted volume) may run `csrutil status` and parse for “enabled” to display a health report or to gate remediation instructions without any malicious follow-on activity.
### Response and remediation
- Isolate the affected macOS host from the network and prevent further execution by quitting the initiating `.app` and blocking its bundle identifier/hash via MDM/EDR policy.
- Acquire and preserve artifacts for analysis, including the full `.app` bundle, the originating DMG/ZIP (if launched from `/Volumes`), Gatekeeper quarantine metadata, and recent install logs to trace the download source and execution chain.
- Eradicate by removing the suspicious application and any follow-on components it created (new LaunchAgents/LaunchDaemons, Login Items, cron entries, and dropped executables in user and system Library paths), then terminate any child processes spawned after the SIP check.
- Recover by reinstalling trusted software from known-good sources, rotating credentials used on the host since the first execution, and monitoring for re-creation of persistence files or repeated `csrutil status` checks from application bundles.
- Escalate to incident response if the app is unsigned/notarization-failed, originates from a mounted volume or user Downloads, or if post-check activity includes attempts to modify security settings, write to persistence locations, or launch interpreters like `bash`, `zsh`, `python`, or `osascript`.
- Harden by enforcing only notarized/signed app execution (Gatekeeper/MDM restrictions), blocking untrusted apps from removable/mounted volumes, and deploying detections for app-bundled execution of `csrutil` and subsequent persistence creation.
"""
risk_score = 47
rule_id = "1615230f-beb7-48d8-9b3f-6d10674703bf"
severity = "medium"
@@ -25,7 +56,8 @@ tags = [
"Use Case: Threat Detection",
"Tactic: Discovery",
"Tactic: Defense Evasion",
"Data Source: Elastic Defend"
"Data Source: Elastic Defend",
"Resources: Investigation Guide",
]
timestamp_override = "event.ingested"
type = "eql"
@@ -2,7 +2,7 @@
creation_date = "2026/01/30"
integration = ["endpoint"]
maturity = "production"
updated_date = "2026/02/03"
updated_date = "2026/02/09"
[rule]
author = ["Elastic"]
@@ -16,6 +16,37 @@ index = ["logs-endpoint.events.file-*"]
language = "eql"
license = "Elastic License v2"
name = "System and Network Configuration Check"
note = """ ## Triage and analysis
> **Disclaimer**:
> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs.
### Investigating System and Network Configuration Check
This rule flags suspicious processes reading macOS SystemConfiguration preferences, which can reveal network interfaces, DNS settings, and other environment details used to plan lateral movement or data exfiltration. Attackers commonly run scripting runtimes (e.g., Python, AppleScript, Node) or binaries staged in temporary/shared directories to open the preferences plist during early discovery. Catching this access helps identify stealthy reconnaissance before overt network activity begins.
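For context on what the read exposes: the file is a binary plist, normally inspected with `plutil -p`; the fields below are a simulated, hypothetical extract of the recon-relevant data it yields:

```shell
# On macOS: plutil -p /Library/Preferences/SystemConfiguration/preferences.plist
# Simulated extract (all values hypothetical) of what an attacker harvests:
sc_hostname="corp-mbp-042"
sc_dns="10.0.0.53"
sc_proxy="proxy.internal:8080"
printf 'LocalHostName: %s\nDNS: %s\nProxy: %s\n' "$sc_hostname" "$sc_dns" "$sc_proxy"
```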
### Possible investigation steps
- Identify the parent process and full execution chain for the accessing process, including any script path/arguments, to determine whether it was launched by an interactive user, management tooling, or a suspicious launcher.
- Review the accessing binary's provenance by checking code signature/notarization status, file hash reputation, and whether it was recently created or executed from temporary/shared directories indicating staging.
- Correlate nearby discovery activity on the host (e.g., reads of other system/network plists, execution of `scutil`, `ifconfig`, `networksetup`, or `defaults read`) to assess whether this is part of a broader reconnaissance sequence.
- Examine concurrent network activity from the same process (outbound connections, DNS lookups, proxy changes) to identify follow-on behavior consistent with environment mapping or command-and-control.
- Validate the behavior against legitimate software on the host (IT management, VPN/endpoint tools, developer workflows) by matching timestamps to user logins, scheduled jobs, and recent installs/updates.
### False positive analysis
- A legitimate IT/admin or troubleshooting script run interactively (e.g., a Python/AppleScript wrapper) may read `/Library/Preferences/SystemConfiguration/preferences.plist` to collect network settings during support, onboarding, or diagnostics.
- A developer or automation workflow may execute a temporary or shared-directory runtime (e.g., `node`/`python` unpacked to `/tmp` or `/Users/Shared`) that reads the plist to detect interfaces, DNS, or proxy configuration for environment-aware builds or tests.
### Response and remediation
- Isolate the affected Mac from the network and terminate the offending process tree, then quarantine the on-disk script/binary (especially if staged in /tmp, /private/tmp, /var/tmp, or /Users/Shared) to stop further discovery or follow-on execution.
- Collect and preserve artifacts before cleanup, including the suspicious executable/script, its launch mechanism (LaunchAgents/LaunchDaemons, cron, login items), recent shell history, and a copy of /Library/Preferences/SystemConfiguration/preferences.plist metadata for later scoping and forensics.
- Eradicate persistence by removing unauthorized launch entries and deleting the staged payloads, then re-scan the host with EDR/AV and verify no additional suspicious interpreters or unsigned tools remain in temporary/shared directories.
- Recover by rotating credentials used on the host, reviewing and resetting network settings (DNS, proxy, VPN) if changed, and returning the system to service only after repeated checks show no re-creation of the removed artifacts across a full reboot cycle.
- Escalate to incident response immediately if the same process also makes outbound connections, modifies SystemConfiguration plists, or appears on multiple hosts, and initiate enterprise-wide hunting for the file hash and the associated launcher.
- Harden by restricting execution from temporary/shared directories, enforcing signed/notarized code where possible, auditing who can read sensitive configuration files, and adding allowlists for known management tools that legitimately access the preferences plist.
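The temporary/shared staging locations called out above can be swept programmatically. The sketch below uses only the Python standard library; the directory list mirrors the bullets above, while the 7-day window, the helper name, and the scratch-directory demo are illustrative assumptions rather than part of the rule:

```python
import pathlib
import stat
import tempfile
import time

# Staging directories named in the guide; the real sweep would pass these.
STAGING_DIRS = ["/tmp", "/private/tmp", "/var/tmp", "/Users/Shared"]

def staged_executables(dirs, max_age_days=7):
    """List regular files with the owner-execute bit set that were
    modified within the window. The 7-day window is an assumption."""
    cutoff = time.time() - max_age_days * 86400
    hits = []
    for d in dirs:
        root = pathlib.Path(d)
        if not root.is_dir():
            continue
        for p in root.rglob("*"):
            try:
                st = p.stat()
            except OSError:
                continue  # file vanished or is unreadable; skip it
            if stat.S_ISREG(st.st_mode) and st.st_mode & stat.S_IXUSR and st.st_mtime >= cutoff:
                hits.append(str(p))
    return hits

# Self-contained demo on a scratch directory instead of the live system.
with tempfile.TemporaryDirectory() as scratch:
    dropper = pathlib.Path(scratch, "dropper.sh")
    dropper.write_text("#!/bin/sh\n")
    dropper.chmod(0o755)
    pathlib.Path(scratch, "notes.txt").write_text("benign\n")  # not executable, ignored
    print(staged_executables([scratch]))  # only dropper.sh is flagged
```

Anything the sweep returns should be triaged against known management tooling before quarantine, per the false-positive notes above.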
"""
risk_score = 47
rule_id = "6e5189c4-d3a5-4114-8cb3-bd3a65713f19"
severity = "medium"
@@ -24,7 +55,8 @@ tags = [
"OS: macOS",
"Use Case: Threat Detection",
"Tactic: Discovery",
"Data Source: Elastic Defend"
"Data Source: Elastic Defend",
"Resources: Investigation Guide",
]
timestamp_override = "event.ingested"
type = "eql"
@@ -2,7 +2,7 @@
creation_date = "2026/01/30"
integration = ["endpoint"]
maturity = "production"
updated_date = "2026/01/30"
updated_date = "2026/02/09"
[rule]
author = ["Elastic"]
@@ -16,6 +16,37 @@ index = ["logs-endpoint.events.file-*"]
language = "eql"
license = "Elastic License v2"
name = "Suspicious Apple Mail Rule Plist Modification"
note = """ ## Triage and analysis
> **Disclaimer**:
> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs.
### Investigating Suspicious Apple Mail Rule Plist Modification
This rule detects processes other than Apple Mail creating or modifying `SyncedRules.plist`, the file that stores Apple Mail rules. This is a persistence path because rules can trigger actions on incoming mail: attackers commonly drop a script to disk, then edit the rules file so a crafted email (often from an attacker-controlled sender or with a specific subject) launches that script when it arrives.
### Possible investigation steps
- Identify the application that modified the plist and validate its legitimacy by checking code signature, bundle path, quarantine/download origin, and recent installation history.
- Diff the current SyncedRules.plist against a known-good or previous version (including backups/snapshots) to pinpoint what rule entries changed and when.
- Decode and review the plist contents to find any rule actions that execute scripts/binaries or reference external paths, then record the exact target command/path.
- Locate and inspect any referenced script or executable (hash, signature, contents, timestamps, and network indicators) and determine whether it is newly created or staged nearby.
- Correlate the modification time with surrounding system activity (process tree, file writes in user Library paths, network connections, and recent email-related events) to determine whether this is persistence setup versus benign automation.
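The decode-and-flag steps above can be scripted with Python's standard `plistlib`. This is an illustrative sketch run against a synthetic rule list; the `AppleScript`/`ScriptPath` action keys are assumptions about the rule schema, not a documented Apple format:

```python
import plistlib

def rules_with_script_actions(plist_bytes, action_keys=("AppleScript", "ScriptPath")):
    """Return (rule name, key, value) for rules whose actions reference
    an on-disk script. Key names are assumed, not documented."""
    flagged = []
    for rule in plistlib.loads(plist_bytes):
        for key in action_keys:
            if rule.get(key):
                flagged.append((rule.get("RuleName", "<unnamed>"), key, rule[key]))
    return flagged

# Synthetic data standing in for the real SyncedRules.plist contents.
sample = plistlib.dumps([
    {"RuleName": "Move newsletters", "MailboxURL": "imap://example"},
    {"RuleName": "backdoor", "AppleScript": "/Users/Shared/.run.scpt"},
])
print(rules_with_script_actions(sample))
# → [('backdoor', 'AppleScript', '/Users/Shared/.run.scpt')]
```

Any flagged path then feeds the script-inspection and correlation steps above.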
### False positive analysis
- After a macOS reinstall, user migration, or restore from backup, `SyncedRules.plist` may be recreated or overwritten by a non-Mail restore/migration process when Mail data is copied back into the user's MailData directory.
- User-initiated or administrative automation that standardizes, repairs, or deploys Mail rules can modify SyncedRules.plist via command-line file operations or plist editing outside of Mail.app, especially during initial user provisioning or troubleshooting.
### Response and remediation
- Isolate the affected Mac from the network and temporarily disable Apple Mail rule processing by moving `SyncedRules.plist` out of the MailData directory to prevent any rule-triggered script execution while preserving evidence.
- Collect and preserve the modified `SyncedRules.plist`, its extended attributes/quarantine flags, and the modifying process binary/app bundle, then decode the plist to identify any rule actions that reference on-disk scripts or executables.
- Remove malicious persistence by deleting the offending rule entries (or restoring `SyncedRules.plist` from a known-good backup) and deleting/quarantining any referenced scripts/binaries and their launch points if they were dropped on disk.
- Hunt for and eradicate the originator by reviewing recently installed or unsigned apps and user-level agents/daemons that wrote into `~/Library/Mail/**/MailData/`, and reimage the endpoint if additional persistence or tampering is found.
- Recover by re-enabling Mail with a clean ruleset, forcing credential/session resets for affected mail accounts, and monitoring for recurrence of `SyncedRules.plist` changes or rule-triggered execution when new mail arrives.
- Escalate to incident response immediately if the plist contains rules invoking `sh`, `osascript`, `python`, or a non-Apple executable path, if the modifying process is unsigned/untrusted, or if the referenced script shows network beacons or data access behavior.
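When restoring `SyncedRules.plist` from a known-good backup, it helps to first enumerate what the current file contains that the backup does not. A minimal stdlib sketch; the list-of-dicts layout and the `RuleName` key are assumptions about the file's schema, not a documented format:

```python
import plistlib

def rule_diff(known_good: bytes, current: bytes, key: str = "RuleName"):
    """Names of rules present in the current plist but absent from the
    known-good backup. Assumes a top-level list of rule dictionaries."""
    baseline = {r.get(key) for r in plistlib.loads(known_good)}
    return [r.get(key) for r in plistlib.loads(current) if r.get(key) not in baseline]

# Synthetic backup vs. current contents for illustration.
backup = plistlib.dumps([{"RuleName": "Move newsletters"}])
current = plistlib.dumps([
    {"RuleName": "Move newsletters"},
    {"RuleName": "run payload"},
])
print(rule_diff(backup, current))  # → ['run payload']
```

An empty diff after restoration is one signal, alongside the recurrence monitoring above, that the malicious rule did not come back.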
"""
references = [
"https://www.n00py.io/2016/10/using-email-for-persistence-on-os-x/"
]
@@ -27,7 +58,8 @@ tags = [
"OS: macOS",
"Use Case: Threat Detection",
"Tactic: Persistence",
"Data Source: Elastic Defend"
"Data Source: Elastic Defend",
"Resources: Investigation Guide",
]
timestamp_override = "event.ingested"
type = "eql"