Add investigation guides (#5630)

shashank-elastic
2026-01-27 14:28:06 +05:30
committed by GitHub
parent 7ff19b3497
commit 3ee0a72a65
12 changed files with 381 additions and 12 deletions
@@ -4,7 +4,7 @@ integration = ["cloud_defend", "kubernetes"]
maturity = "production"
min_stack_comments = "Defend for Containers integration was re-introduced in 9.3.0"
min_stack_version = "9.3.0"
updated_date = "2026/01/21"
updated_date = "2026/01/27"
[rule]
author = ["Elastic"]
@@ -33,6 +33,36 @@ interval = "5m"
language = "eql"
license = "Elastic License v2"
name = "Direct Interactive Kubernetes API Request by Common Utilities"
note = """ ## Triage and analysis
> **Disclaimer**:
> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs.
### Investigating Direct Interactive Kubernetes API Request by Common Utilities
This detection links an interactive invocation of common networking utilities or kubectl inside a container to a near-simultaneous Kubernetes API response, indicating hands-on-keyboard access to the API server for discovery or lateral movement. A common attacker pattern is compromising a pod, reading its mounted service account token, then running curl or kubectl interactively to query /api or /apis endpoints to list pods and secrets and map cluster scope.
### Possible investigation steps
- From Kubernetes audit logs linked to the pod, capture the authenticated principal, namespace, verbs, and request URIs to determine whether the activity focused on discovery or sensitive resources like secrets or RBAC objects.
- Correlate the interactive container activity with kubelet exec/attach or terminal session telemetry to identify who initiated the session and through which source IP or control-plane endpoint.
- Inspect the pod's service account by validating access to the mounted token path and enumerating its RoleBindings and ClusterRoleBindings to quantify effective privileges and decide on immediate revocation or rotation.
- Review the container image provenance and available shell history or command logs to confirm use of networking utilities or kubectl and identify any reads of secrets, kubeconfig files, or /api and /apis endpoints.
- Expand the time window to find prior or subsequent API calls from the same pod, namespace, or node, and quarantine or cordon the workload if you observe sustained enumeration or cross-namespace access.
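The audit-log review in the steps above can be sketched as a small filter over Kubernetes audit entries. The field paths follow the Kubernetes audit event schema, but the `triage_audit_events` helper, its two buckets, and the sample events are illustrative assumptions, not part of the rule:

```python
# Sketch: split one principal's audit entries into discovery calls and
# touches on sensitive resources. Field paths follow the Kubernetes audit
# event schema; the helper and bucket names are illustrative only.
SENSITIVE_RESOURCES = {"secrets", "roles", "rolebindings", "clusterroles", "clusterrolebindings"}
DISCOVERY_VERBS = {"get", "list", "watch"}

def triage_audit_events(events, principal):
    """Return (discovery, sensitive) request URIs made by the given principal."""
    discovery, sensitive = [], []
    for ev in events:
        if ev.get("user", {}).get("username") != principal:
            continue
        resource = ev.get("objectRef", {}).get("resource", "")
        if resource in SENSITIVE_RESOURCES:
            sensitive.append(ev["requestURI"])
        elif ev.get("verb") in DISCOVERY_VERBS:
            discovery.append(ev["requestURI"])
    return discovery, sensitive

# Fabricated sample entries for illustration.
events = [
    {"user": {"username": "system:serviceaccount:app:web"}, "verb": "list",
     "objectRef": {"resource": "pods"}, "requestURI": "/api/v1/namespaces/app/pods"},
    {"user": {"username": "system:serviceaccount:app:web"}, "verb": "get",
     "objectRef": {"resource": "secrets"}, "requestURI": "/api/v1/namespaces/app/secrets/db-creds"},
]
disc, sens = triage_audit_events(events, "system:serviceaccount:app:web")
```

A non-empty `sensitive` bucket is the stronger signal to prioritize, per the first step above.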
### False positive analysis
- An operator uses kubectl exec -it to enter a pod and runs kubectl or curl to list resources or verify RBAC, producing interactive process starts and near-simultaneous Kubernetes audit responses that are expected during troubleshooting.
- During routine connectivity or certificate checks, an engineer attaches to a container that includes curl/openssl/socat/ncat and interactively tests the Kubernetes API server endpoint, generating correlated audit events without malicious intent.
### Response and remediation
- Immediately isolate the implicated pod by terminating the interactive shell and curl/kubectl processes, applying a deny-all NetworkPolicy in its namespace, and temporarily blocking pod egress to the kube-apiserver address.
- Revoke and rotate the service account credentials used by the pod, invalidate the token at /var/run/secrets/kubernetes.io/serviceaccount/token, and remove excess RoleBindings or ClusterRoleBindings tied to that identity.
- Delete and restore the workload from a trusted image that excludes curl/wget/openssl/socat/ncat, with automountServiceAccountToken disabled and least-privilege RBAC enforced.
- Escalate to incident response if the pod read Secrets or ConfigMaps, modified RBAC objects, attempted create/patch/delete on cluster-scoped resources, or originated from an unapproved operator workstation or bastion.
- Harden by restricting kubectl exec/attach to a small admin group with MFA, enabling admission controls (Pod Security Admission, Gatekeeper, or Kyverno) to block shells or kubectl/netcat in images, and applying egress NetworkPolicies so only approved namespaces can reach https://kubernetes.default.svc.
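A deny-all NetworkPolicy of the kind described in the first remediation step might look like the following sketch; the policy name and namespace are placeholders, and an empty `podSelector` with `Egress` listed but no rules denies all outbound traffic in that namespace:

```yaml
# Illustrative quarantine policy; metadata values are placeholders.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: quarantine-deny-egress     # placeholder
  namespace: affected-namespace    # placeholder
spec:
  podSelector: {}                  # all pods in the namespace
  policyTypes:
    - Egress                       # no egress rules listed => all egress denied
```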
"""
risk_score = 47
rule_id = "9d312839-339a-4e10-af2e-a49b15b15d13"
severity = "medium"
@@ -45,6 +75,7 @@ tags = [
"Use Case: Threat Detection",
"Tactic: Execution",
"Tactic: Discovery",
"Resources: Investigation Guide",
]
timestamp_override = "event.ingested"
type = "eql"
@@ -4,7 +4,7 @@ integration = ["cloud_defend", "kubernetes"]
maturity = "production"
min_stack_comments = "Defend for Containers integration was re-introduced in 9.3.0"
min_stack_version = "9.3.0"
updated_date = "2026/01/21"
updated_date = "2026/01/27"
[rule]
author = ["Elastic"]
@@ -34,6 +34,37 @@ interval = "5m"
language = "eql"
license = "Elastic License v2"
name = "Forbidden Direct Interactive Kubernetes API Request"
note = """ ## Triage and analysis
> **Disclaimer**:
> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs.
### Investigating Forbidden Direct Interactive Kubernetes API Request
This rule correlates an interactive command execution inside a container with a Kubernetes API request that is explicitly forbidden, signaling hands-on-keyboard probing and unauthorized access attempts. It matters because attackers use live shells to enumerate cluster resources and test privileges for lateral movement or escalation. Example: after compromising a pod, an operator opens a shell and runs kubectl get secrets or curls the API server with the pod's token, repeatedly receiving 403 Forbidden.
### Possible investigation steps
- Correlate the pod, container, namespace, node, and service account from the alert, then quickly pull the matching audit entries to see the verb, resource, requestURI, and userAgent for the forbidden calls.
- Determine whether the container image normally includes utilities like kubectl/curl/openssl or if they were dropped into the pod, and review recent file writes and package installs to differentiate admin debugging from hands-on-keyboard activity.
- Inspect the pod's service account bindings and effective RBAC in the target namespace to confirm least privilege and understand why the request was denied, then check for other successful API requests from the same identity around the same timeframe.
- Review network connections from the pod to the API server (and any proxies) during the session to validate direct access paths, source IPs, and whether a mounted service account token from /var/run/secrets was used.
- Validate whether this was an authorized SRE/debug session by contacting the workload owner and checking for recent kubectl exec or ephemeral debug activity; if not expected, expand the search for similar forbidden attempts from other pods.
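The first step above, pulling the forbidden calls for a given identity, can be sketched as a per-principal summary of 403 responses. Field paths follow the Kubernetes audit event schema; the `forbidden_requests` helper and the sample entries are illustrative assumptions:

```python
# Sketch: summarize forbidden (HTTP 403) API requests per identity from
# Kubernetes audit entries. The summary shape is an illustrative choice.
from collections import defaultdict

def forbidden_requests(events):
    """Map username -> list of (verb, requestURI) pairs denied with 403."""
    denied = defaultdict(list)
    for ev in events:
        if ev.get("responseStatus", {}).get("code") == 403:
            user = ev.get("user", {}).get("username", "<unknown>")
            denied[user].append((ev.get("verb"), ev.get("requestURI")))
    return dict(denied)

# Fabricated sample entries for illustration.
events = [
    {"user": {"username": "system:serviceaccount:app:web"}, "verb": "list",
     "requestURI": "/api/v1/secrets", "responseStatus": {"code": 403}},
    {"user": {"username": "system:serviceaccount:app:web"}, "verb": "get",
     "requestURI": "/api/v1/namespaces/app/pods", "responseStatus": {"code": 200}},
]
denied = forbidden_requests(events)
```

Repeated denials on secrets or RBAC resources from one identity, mixed with successful requests nearby, is the pattern the later steps ask you to distinguish from routine debugging.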
### False positive analysis
- An authorized kubectl exec or ephemeral debug session inside a pod where an engineer runs kubectl or curl to probe API resources and, because the pod's service account is intentionally least-privileged, the requests are forbidden as expected.
- Benign interactive troubleshooting that mistakenly uses the wrong namespace or queries cluster-scoped endpoints from within the container (e.g., curl/openssl to the API server), causing the audit logs to show forbid decisions even though no malicious access was attempted.
### Response and remediation
- Immediately terminate the interactive shell (e.g., sh/bash) in the offending container and isolate the pod by applying a deny-egress NetworkPolicy in its namespace that blocks outbound connections to https://kubernetes.default.svc and the API server IPs.
- Revoke and rotate credentials by deleting the pod and its ServiceAccount token Secret, temporarily setting automountServiceAccountToken: false on the workload, and redeploying with a new ServiceAccount after validating RBAC least privilege.
- Remove attacker tooling and persistence by rebuilding the container image to exclude kubectl/curl/openssl/socat/ncat, clearing writable volume mounts that contain dropped binaries or scripts, and redeploying from a trusted registry.
- Sweep for spread by identifying pods running the same image or on the same node and terminating any interactive processes issuing Kubernetes API requests from within containers, then restart those workloads cleanly.
- Escalate to incident response if you observe successful API operations (200/201) on secrets, configmaps, or RBAC objects, exec into other pods, or privileged container settings (privileged=true, hostNetwork, or hostPID), indicating lateral movement or credential compromise.
- Harden going forward by tightening RBAC on the new ServiceAccount, enforcing Gatekeeper/OPA policies to deny images that include kubectl/curl and block interactive shells, setting readOnlyRootFilesystem and dropping NET_ADMIN, and restricting API server access via egress controls.
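The token-automount change named in the second remediation step can be sketched as a workload fragment; the Deployment and ServiceAccount names are placeholders:

```yaml
# Illustrative fragment: redeploy with automount disabled so no token is
# mounted at /var/run/secrets/kubernetes.io/serviceaccount. Names are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                               # placeholder
spec:
  template:
    spec:
      serviceAccountName: web-sa          # placeholder, newly created
      automountServiceAccountToken: false
```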
"""
risk_score = 47
rule_id = "5d1c962d-5d2a-48d4-bdcf-e980e3914947"
severity = "medium"
@@ -46,6 +77,7 @@ tags = [
"Use Case: Threat Detection",
"Tactic: Execution",
"Tactic: Discovery",
"Resources: Investigation Guide",
]
timestamp_override = "event.ingested"
type = "eql"
@@ -4,7 +4,7 @@ integration = ["cloud_defend", "kubernetes"]
maturity = "production"
min_stack_comments = "Defend for Containers integration was re-introduced in 9.3.0"
min_stack_version = "9.3.0"
updated_date = "2026/01/21"
updated_date = "2026/01/27"
[rule]
author = ["Elastic"]
@@ -32,6 +32,37 @@ interval = "5m"
language = "eql"
license = "Elastic License v2"
name = "Direct Interactive Kubernetes API Request by Unusual Utilities"
note = """ ## Triage and analysis
> **Disclaimer**:
> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs.
### Investigating Direct Interactive Kubernetes API Request by Unusual Utilities
This rule detects interactive commands executed inside containers that use atypical utilities to hit the Kubernetes API, paired with near-simultaneous API activity on pods, secrets, service accounts, roles/bindings, or pod exec/attach/log/portforward. It surfaces hands-on-keyboard discovery and lateral movement using custom scripts that evade common tool allowlists; for example, an intruder opens a shell in a pod, uses Python to query the in-cluster API to list secrets, then triggers pods/exec to pivot into another workload.
### Possible investigation steps
- Identify the implicated pod, container image, and executing service account, then quickly review its RBAC bindings and effective permissions to determine blast radius.
- Inspect the container's interactive session context by pulling recent command lines, shell history, environment variables, and mounted service account tokens, and look for custom scripts or binaries issuing HTTP requests.
- Correlate nearby Kubernetes audit entries tied to the same principal and pod to map accessed resources and verbs, noting any exec/attach/portforward or sensitive object interactions across namespaces.
- Review network activity from the pod to the API server and any in-pod proxies, including DNS lookups and outbound connections, to spot nonstandard clients or tunneling behavior.
- If suspicious, isolate the pod or node, capture runtime artifacts (e.g., process memory or HTTP client traffic), revoke and rotate the service account credentials, and verify image provenance and integrity.
### False positive analysis
- An operator interactively attaches to a pod and uses a Python REPL or bash with /dev/tcp to call the in-cluster API for routine troubleshooting (e.g., list pods, read ConfigMaps, or run selfsubjectaccessreviews), producing normal audit entries that match the rule signature.
- A correlation artifact arises when two namespaces have pods with the same name: one pod starts an interactive shell while another independently performs get/list/watch calls, and the 1-second sequence keyed only on pod-name links the unrelated events.
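The second false positive above comes from correlating only on pod name; keying on the (namespace, pod) pair removes the cross-namespace collision. A minimal sketch with fabricated event shapes:

```python
# Sketch: correlating shell starts with API activity. Keying on pod name
# alone can link unrelated pods that share a name across namespaces;
# keying on (namespace, pod) does not. Event shapes are illustrative.
def correlate(shell_events, api_events, key_fields):
    key = lambda ev: tuple(ev[f] for f in key_fields)
    api_keys = {key(ev) for ev in api_events}
    return [ev for ev in shell_events if key(ev) in api_keys]

shells = [{"namespace": "team-a", "pod": "worker-0"}]
apis = [{"namespace": "team-b", "pod": "worker-0"}]   # same name, other namespace

by_name = correlate(shells, apis, ["pod"])                  # spurious match
by_ns_name = correlate(shells, apis, ["namespace", "pod"])  # no match
```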
### Response and remediation
- Immediately isolate the implicated pod that issued direct API calls using a nonstandard utility by applying a deny-all egress NetworkPolicy in its namespace (including to kubernetes.default.svc:443), terminating the interactive session, and scaling its owning Deployment/Job/StatefulSet to zero replicas.
- Before teardown, capture a runtime snapshot of the container and node including the binary or script used to query the API (e.g., files under /tmp or /dev/tcp usage), shell history, environment, and the mounted service account token and CA bundle at /var/run/secrets/kubernetes.io/serviceaccount/.
- Revoke access by removing the service account's RoleBindings/ClusterRoleBindings, deleting all pods that mount that service account to force token rotation, rotating any Secrets and ConfigMaps that were read or created during the window, and deleting any unauthorized Jobs, CronJobs, or Deployments created by the same principal.
- Restore workloads from a known-good image digest, re-enable the Deployment only after image scan and integrity checks pass, and monitor subsequent Kubernetes audit logs for pods/exec, portforward, and access to secrets across the affected namespaces.
- Escalate to incident response leadership and consider cluster-wide containment if audit logs show create/patch of ClusterRoleBindings, access to secrets outside the workloads namespace, or use of pods/exec to pivot into other nodes or system namespaces such as kube-system.
- Harden access by enforcing least-privilege RBAC that denies pods/exec and attach for application service accounts, setting automountServiceAccountToken: false on workloads that do not need it, restricting egress to the API server with NetworkPolicies, and requiring just-in-time break-glass roles for interactive access.
"""
risk_score = 21
rule_id = "02275e05-57a1-46ab-a443-7fb444da6b28"
severity = "low"
@@ -44,6 +75,7 @@ tags = [
"Use Case: Threat Detection",
"Tactic: Execution",
"Tactic: Discovery",
"Resources: Investigation Guide",
]
timestamp_override = "event.ingested"
type = "eql"
@@ -4,7 +4,7 @@ integration = ["cloud_defend", "kubernetes"]
maturity = "production"
min_stack_comments = "Defend for Containers integration was re-introduced in 9.3.0"
min_stack_version = "9.3.0"
updated_date = "2026/01/21"
updated_date = "2026/01/27"
[rule]
author = ["Elastic"]
@@ -32,6 +32,36 @@ interval = "5m"
language = "eql"
license = "Elastic License v2"
name = "Service Account Token or Certificate Access Followed by Kubernetes API Request"
note = """ ## Triage and analysis
> **Disclaimer**:
> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs.
### Investigating Service Account Token or Certificate Access Followed by Kubernetes API Request
This rule correlates interactive access to a pod's service account token or CA certificate with a near-immediate Kubernetes API request, signaling credential harvesting to query the cluster and potential lateral movement. An attacker execs into a container, reads /var/run/secrets/kubernetes.io/serviceaccount/token and ca.crt, then uses curl or kubectl with that token and CA to list pods, get secrets, or create a privileged pod to pivot across nodes.
### Possible investigation steps
- Attribute the activity by identifying the pod, container image, node, and interactive session initiator (e.g., kubectl exec) from Kubernetes events and cluster logs to determine whether a human or automation accessed the credentials.
- Retrieve the pod's service account and enumerate its RBAC bindings to assess effective privileges, highlighting any ability to read secrets, create pods, or modify roles.
- Reconstruct the full sequence of audit log requests tied to that pod/user around the alert, noting resources, verbs, namespaces, response codes, and userAgent to distinguish legitimate controller behavior from reconnaissance.
- Examine the container for signs of token abuse or exfiltration by reviewing shell history and filesystem artifacts, and correlate with network egress from the pod to external destinations.
- Validate that the API request originated from the same pod by matching source IP, node, and TLS client identity, and check for concurrent suspicious activity on the node or other pods.
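The attribution steps above hinge on tying the API request back to the identity in the mounted token. A service account token is a JWT, so decoding its payload segment (without signature verification) exposes the subject and audience claims. The decoder below is a sketch and the token is fabricated:

```python
# Sketch: decode the unverified payload of a JWT to recover the identity
# behind a harvested service account token. The sample token is fabricated.
import base64
import json

def jwt_payload(token):
    """Decode the (unverified) payload segment of a JWT."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)   # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a fabricated token for illustration.
header = base64.urlsafe_b64encode(b'{"alg":"RS256"}').decode().rstrip("=")
claims = {"sub": "system:serviceaccount:app:web",
          "aud": ["https://kubernetes.default.svc"]}
payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
token = f"{header}.{payload}.fake-signature"

decoded = jwt_payload(token)
```

The `sub` claim is the principal to search for in audit logs, and it should match the service account of the pod whose token file was read.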
### False positive analysis
- A cluster operator troubleshooting an issue execs into a pod, inspects the service account token or CA certificate, and then uses the pod's credentials to make a quick Kubernetes API request to verify permissions or list resources.
- A workload running with TTY/stdin enabled is marked as interactive, and the application legitimately reads the service account token (e.g., on startup or token refresh) to perform routine API operations such as leader election or informer watches, producing the observed file access followed by audit log activity.
### Response and remediation
- Immediately isolate the pod that read /var/run/secrets/kubernetes.io/serviceaccount/token or ca.crt by deleting the pod or scaling its deployment to zero, cordoning its node if similar behavior is seen on other pods, and applying a NetworkPolicy that blocks the pod's access to the API server while you capture its filesystem.
- Revoke access by removing the implicated service account's RBAC bindings, recreating the service account to invalidate tokens, restarting any workloads that mount /var/run/secrets/kubernetes.io/serviceaccount, and rotating the service-account signing key if compromise is suspected.
- Validate and recover by reviewing audit records for unauthorized actions (e.g., secrets reads, pod or role changes), rolling back or deleting any malicious resources, and redeploying affected workloads from trusted images with signed releases.
- Escalate to incident response immediately if you observe API requests from the pod that read secrets, create pods in other namespaces, alter Role or ClusterRoleBindings, or transmit the token/ca.crt via curl or similar tooling to external addresses.
- Harden by disabling automountServiceAccountToken on pods that don't require it, scoping service accounts to a single namespace with least-privilege RBAC, enforcing Pod Security Admission to block privileged/interactive shells, and restricting exec/attach via RBAC or admission policies.
"""
risk_score = 47
rule_id = "4bd306f9-ee89-4083-91af-e61ed5c42b9a"
severity = "medium"
@@ -45,6 +75,7 @@ tags = [
"Tactic: Execution",
"Tactic: Credential Access",
"Tactic: Discovery",
"Resources: Investigation Guide",
]
timestamp_override = "event.ingested"
type = "eql"
@@ -4,7 +4,7 @@ integration = ["cloud_defend"]
maturity = "production"
min_stack_comments = "Defend for Containers integration was re-introduced in 9.3.0"
min_stack_version = "9.3.0"
updated_date = "2026/01/22"
updated_date = "2026/01/27"
[rule]
author = ["Elastic"]
@@ -26,6 +26,28 @@ interval = "5m"
language = "eql"
license = "Elastic License v2"
name = "Curl SOCKS Proxy Detected via Defend for Containers"
note = """ ## Triage and analysis
> **Disclaimer**:
> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs.
### Investigating Curl SOCKS Proxy Detected via Defend for Containers
This detection flags interactive curl invocations inside Linux containers that use SOCKS proxy options, indicating traffic tunneling to evade egress controls and enable data exfiltration or C2 communications. A common pattern is an operator with shell access launching curl -x socks5h://localhost:1080 or --socks5-hostname via a dynamically created SSH -D port, then fetching payloads, beaconing to external endpoints, or posting stolen data through the proxy.
### Possible investigation steps
- Retrieve the full curl command line to extract the SOCKS proxy host:port, target URLs, and signs of uploads or auth (e.g., -d/--data, -T/--upload-file, -H Authorization), then pivot those IOCs across container and cluster telemetry.
- If the proxy points to localhost or an internal address, confirm a SOCKS listener in the container or node network namespace and identify the owning process to reveal the tunneling mechanism.
- Examine the process ancestry and same TTY/session context to attribute the action and spot precursor activity such as ssh -D, chisel, cloudflared, tor, or 3proxy establishing the proxy.
- Correlate Kubernetes metadata (pod, namespace, service account, node, image) and kube-apiserver audit logs for exec/attach to identify the actor, verify legitimacy, and find similar events in sibling pods or earlier revisions.
- Review network flows and DNS from this container through the proxy to external destinations to quantify data volume and destination reputation, and check for access to sensitive internal services.
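The first step above, extracting the proxy endpoint and upload indicators from the captured command line, can be sketched with a simple pattern match. The flag list mirrors the curl options named in the guide; the `inspect_curl` helper and its deliberately simple parsing are illustrative assumptions:

```python
# Sketch: pull the SOCKS proxy endpoint and data-upload indicators out of a
# captured curl command line. Parsing is intentionally naive (no quoting,
# no combined short flags); a real triage script would tokenize properly.
import re

SOCKS_FLAG = re.compile(r"(?:-x|--proxy|--socks5(?:-hostname)?|--preproxy)\s+(\S+)")
UPLOAD_FLAGS = ("-d", "--data", "-T", "--upload-file")

def inspect_curl(cmdline):
    m = SOCKS_FLAG.search(cmdline)
    proxy = m.group(1) if m else None
    uploads = any(f" {flag} " in f" {cmdline} " for flag in UPLOAD_FLAGS)
    return proxy, uploads

proxy, uploads = inspect_curl(
    "curl -x socks5h://localhost:1080 -T /tmp/dump.tar https://203.0.113.7/drop")
```

A localhost proxy plus an upload flag is exactly the combination the second step asks you to chase back to a SOCKS listener and its owning process.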
### False positive analysis
- A developer troubleshooting from an interactive shell inside a container runs curl with -x/--proxy socks5 options to validate egress or reach internal endpoints through an approved proxy (e.g., localhost or an internal host), generating a benign match.
- Shell profiles or container environment configuration automatically route curl through --preproxy/--socks5-hostname to access internal APIs or artifact mirrors during interactive checks, causing expected activity to be flagged.
"""
references = ["https://www.trendmicro.com/en_us/research/25/f/tor-enabled-docker-exploit.html"]
risk_score = 47
rule_id = "eb958cb3-dead-42b6-94ff-b9de6721fab2"
@@ -36,6 +58,7 @@ tags = [
"OS: Linux",
"Use Case: Threat Detection",
"Tactic: Command and Control",
"Resources: Investigation Guide",
]
timestamp_override = "event.ingested"
type = "eql"
@@ -4,7 +4,7 @@ integration = ["cloud_defend"]
maturity = "production"
min_stack_comments = "Defend for Containers integration was re-introduced in 9.3.0"
min_stack_version = "9.3.0"
updated_date = "2026/01/21"
updated_date = "2026/01/27"
[rule]
author = ["Elastic"]
@@ -27,6 +27,36 @@ interval = "5m"
language = "eql"
license = "Elastic License v2"
name = "Service Account Token or Certificate Read Detected via Defend for Containers"
note = """ ## Triage and analysis
> **Disclaimer**:
> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs.
### Investigating Service Account Token or Certificate Read Detected via Defend for Containers
This rule flags when a container's service account token or CA certificate is opened interactively, signaling direct access to credentials that grant API permissions within the cluster. Attackers often exec into a running pod, read /var/run/secrets/kubernetes.io/serviceaccount/token and ca.crt, then use the token with kubectl or curl to enumerate namespaces, list secrets, or spawn privileged workloads via the Kubernetes API.
### Possible investigation steps
- Map the container to its pod, namespace, node, image, and service account, and use Kubernetes audit and runtime logs to identify any recent exec/attach/tty into the pod and the originating user and source IP.
- Examine process ancestry and command activity around the event to detect shells, kubectl/curl/wget usage, token exfiltration patterns (for example, piping the token to a network client), or the token being copied elsewhere.
- Correlate with network telemetry to see if the container initiated connections to the API server or external endpoints immediately after the read, including unusual DNS lookups or spikes in egress.
- Search Kubernetes audit logs for requests authenticated as the implicated service account after the event, prioritizing secrets access, pod exec/attach, workload create/update, and role or clusterrole binding changes, and verify the source IP matches the pod.
- Run a SubjectAccessReview or review RBAC bindings for the service account to quickly determine effective permissions and the potential blast radius if the token was abused.
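The last step above is normally answered in-cluster via a SubjectAccessReview; the blast-radius logic it performs can be sketched as matching a verb and resource against the account's RBAC policy rules. The `allowed` helper and the rule shapes below are illustrative stand-ins, not the API server's full evaluator (it also checks API groups, resource names, and non-resource URLs):

```python
# Sketch: minimal RBAC check -- does any PolicyRule grant this verb on this
# resource? Wildcards ("*") match everything. Rule shapes are illustrative.
def allowed(rules, verb, resource):
    for rule in rules:
        verbs_ok = "*" in rule["verbs"] or verb in rule["verbs"]
        res_ok = "*" in rule["resources"] or resource in rule["resources"]
        if verbs_ok and res_ok:
            return True
    return False

# Fabricated rules bound to the implicated service account.
sa_rules = [{"verbs": ["get", "list"], "resources": ["pods", "configmaps"]}]

can_list_pods = allowed(sa_rules, "list", "pods")
can_read_secrets = allowed(sa_rules, "get", "secrets")
```

If the account cannot read secrets or create workloads, the blast radius of a stolen token is correspondingly small.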
### False positive analysis
- During legitimate troubleshooting, an engineer attaches an interactive shell to the container and opens /var/run/secrets/kubernetes.io/serviceaccount/token or /var/run/secrets/kubernetes.io/serviceaccount/ca.crt to verify the service account mount and certificate trust.
- An interactive in-container diagnostic or training session reads the token or CA certificate to confirm connectivity or inspect claims, with no subsequent API calls or credential exfiltration, resulting in a benign open event.
### Response and remediation
- Immediately delete the affected pod (namespace/pod-name) to invalidate /var/run/secrets/kubernetes.io/serviceaccount/token, scale the owning Deployment/StatefulSet to zero to stop respawns, and apply a temporary deny-all egress NetworkPolicy to the namespace to contain any misuse.
- On the node hosting the pod, kill any interactive shells (e.g., bash or sh) attached to the container and remove any copied credential artifacts such as /tmp/token or files containing contents of /var/run/secrets/kubernetes.io/serviceaccount/token or ca.crt.
- Rotate credentials and access by recreating the service account and its tokens, purge unauthorized RoleBinding/ClusterRoleBinding entries granting high privileges to that identity, and redeploy the workload from a trusted image with a clean start.
- Escalate to incident response if audit logs show the token being used to read Secrets, create or modify Pods/Jobs/Deployments, or alter RoleBinding/ClusterRoleBinding, or if API requests originate from an IP not matching the affected pod's node.
- Harden by setting automountServiceAccountToken: false for pods that do not need API access, using projected service account tokens with short TTL and audience binding, denying kubectl exec/attach in production via RBAC/admission, and trimming the service account's RBAC to least privilege.
"""
risk_score = 47
rule_id = "66229f32-c460-410d-bc37-4b32322cd4bb"
severity = "medium"
@@ -36,6 +66,7 @@ tags = [
"OS: Linux",
"Use Case: Threat Detection",
"Tactic: Credential Access",
"Resources: Investigation Guide",
]
timestamp_override = "event.ingested"
type = "eql"
@@ -4,7 +4,7 @@ integration = ["cloud_defend"]
maturity = "production"
min_stack_comments = "Defend for Containers integration was re-introduced in 9.3.0"
min_stack_version = "9.3.0"
updated_date = "2026/01/21"
updated_date = "2026/01/27"
[rule]
author = ["Elastic"]
@@ -26,6 +26,36 @@ interval = "5m"
language = "eql"
license = "Elastic License v2"
name = "DNS Enumeration Detected via Defend for Containers"
note = """ ## Triage and analysis
> **Disclaimer**:
> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs.
### Investigating DNS Enumeration Detected via Defend for Containers
This rule flags interactive use of DNS utilities (nslookup, dig, host, or getent hosts) inside a Linux container that query internal Kubernetes names such as kubernetes.default or *.svc.cluster.local, signaling in-cluster service discovery. After compromising a pod, an attacker opens an interactive shell and runs nslookup kubernetes.default or dig *.svc.cluster.local to enumerate services and namespaces, map reachable endpoints, and stage lateral movement toward the API server, internal dashboards, or other pods.
### Possible investigation steps
- Correlate with Kubernetes audit logs to see if a kubectl exec/attach or ephemeral container was launched into this pod at the same time and identify the requesting user, source IP, and service account identity.
- Pull pod metadata (namespace, owner workload, node, image digest, service account) and verify whether this is a sanctioned troubleshooting/debug context by checking image baseline and change tickets.
- Examine command history and process lineage within the container around the alert to spot follow-on actions such as reading /var/run/secrets/kubernetes.io/serviceaccount, querying the API server, installing tools, or scanning internal ranges.
- Review DNS and network telemetry from the pod (CoreDNS logs, CNI flow logs) to map the queried hostnames and any subsequent connections to internal services or the API server.
- If unauthorized activity is suspected, isolate the pod via network policy or eviction, rotate the pods service account token and related secrets, and redeploy from a trusted image while investigating node and registry access paths.
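When mapping queried hostnames per the DNS-telemetry step above, the in-cluster names this rule cares about can be separated from external lookups with a simple classifier. The suffix checks mirror the names cited in the guide; the helper itself is an illustrative sketch:

```python
# Sketch: classify queried hostnames as in-cluster Kubernetes service names
# (kubernetes.default and *.svc.cluster.local) versus external domains.
def is_cluster_name(hostname):
    h = hostname.rstrip(".").lower()
    return (h in ("kubernetes.default", "kubernetes.default.svc")
            or h.endswith(".svc.cluster.local"))

queries = ["kubernetes.default", "db.payments.svc.cluster.local", "example.com"]
internal = [q for q in queries if is_cluster_name(q)]
```

A burst of distinct `*.svc.cluster.local` names from one pod is the service-enumeration pattern this rule describes, as opposed to a single lookup during a readiness probe.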
### False positive analysis
- An authorized operator attaches an interactive shell to a pod for in-cluster troubleshooting and runs nslookup/dig/host or getent hosts against kubernetes.default or *.svc.cluster.local, matching the rule but expected.
- A pod or initContainer configured with tty: true or stdin enabled executes startup/readiness logic that calls getent hosts or nslookup on internal service FQDNs (*.svc.cluster.local), which is benign but appears as interactive enumeration.
### Response and remediation
- Immediately isolate the affected pod by applying a denyegress NetworkPolicy to the CoreDNS service and cluster service CIDRs, terminate the interactive shell that ran nslookup/dig/host or getent hosts, and temporarily block kubectl exec/attach to the workload.
- Evict the pod and redeploy from a trusted image digest, removing any ephemeral containers added for debugging and uninstalling adhoc packages (e.g., dnsutils, busybox) that were introduced into the container.
- Rotate credentials by deleting and reissuing the workloads service account token and any mounted secrets or registry credentials, then verify the new pod does not perform interactive lookups of kubernetes.default or *.svc.cluster.local.
- Escalate to incident response if enumeration originates from a production namespace, a privileged service account, or is followed by reading /var/run/secrets/kubernetes.io/serviceaccount or curl/wget to https://kubernetes.default, or if similar activity is observed across multiple pods.
- Harden going forward by restricting exec/attach via RBAC, enforcing Admission Controls/Pod Security to disallow tty/stdin and unauthorized ephemeral containers, limiting egress to CoreDNS and internal services with NetworkPolicies, and using distroless/stripped images that omit nslookup/dig/host.
"""
risk_score = 21
rule_id = "74ee9a2d-5ed3-40c8-9e6c-523d2e6a17ef"
severity = "low"
@@ -35,6 +65,7 @@ tags = [
"OS: Linux",
"Use Case: Threat Detection",
"Tactic: Discovery",
"Resources: Investigation Guide",
]
timestamp_override = "event.ingested"
type = "eql"
@@ -4,7 +4,7 @@ integration = ["cloud_defend"]
maturity = "production"
min_stack_comments = "Defend for Containers integration was re-introduced in 9.3.0"
min_stack_version = "9.3.0"
updated_date = "2026/01/21"
updated_date = "2026/01/27"
[rule]
author = ["Elastic"]
@@ -27,6 +27,37 @@ interval = "5m"
language = "eql"
license = "Elastic License v2"
name = "Environment Variable Enumeration Detected via Defend for Containers"
note = """ ## Triage and analysis
> **Disclaimer**:
> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs.
### Investigating Environment Variable Enumeration Detected via Defend for Containers
This rule flags interactive execution of env or printenv inside a Linux container, a common discovery step to expose environment variables that frequently store credentials, tokens, and service configuration. A typical pattern: after gaining shell access to a pod, the attacker lists variables to harvest cloud keys, Kubernetes service account tokens, database URLs, and internal endpoints, enabling authenticated API calls, lateral movement within the cluster, or exfiltration via trusted services.
### Possible investigation steps
- Correlate with Kubernetes audit logs (exec/attach or debug) to identify the initiator identity, source IP, and command, and confirm whether the session was expected.
- Review the pod/container context (namespace, deployment, image digest/tag, node, and service account) and compare against baselines to catch unusual targets or ephemeral/privileged debug containers.
- Retrieve the enumerated variables to spot high-risk secrets such as cloud credentials, database passwords, or tokens, and immediately rotate/disable any discovered keys while reviewing provider audit logs for post-alert use.
- Trace subsequent activity within the container after the event for credential usage or exfiltration, including access to 169.254.169.254/metadata, calls to Kubernetes/cloud APIs, outbound network connections, or tooling like curl, wget, base64, and grep.
- Pivot to related signals on the same pod or node around the timestamp (new shells, service account token file reads, package installs, or suspicious downloads) to determine if this is part of a broader compromise.
### False positive analysis
- An operator opens an interactive shell in a container during routine troubleshooting and runs env or printenv to verify configuration, service endpoints, feature flags, or propagated secrets.
- Interactive shell initialization or entrypoint scripts in certain base images automatically invoke env or printenv upon TTY login to display or log variables, producing this event in benign sessions.
### Response and remediation
- Quarantine the affected pod where env/printenv was run by deleting the pod to drop the interactive session, applying a deny-all egress NetworkPolicy targeting its labels, and temporarily blocking kubectl exec/attach to that workload.
- Immediately rotate any secrets exposed in that container's environment (cloud access keys, database passwords, API tokens, Kubernetes service account token), revoke active sessions at providers, and invalidate cached credentials on dependent services.
- Redeploy the application from a verified image digest with a fresh service account and newly issued secrets, and remove debug images or shell entrypoints that enabled interactive access.
- Escalate to incident response if env output contained credentials or the pod's IP contacted 169.254.169.254, cloud/Kubernetes APIs, or external endpoints after the enumeration, indicating possible secret use or exfiltration.
- Replace environment-based secret injection with a secrets manager or projected volumes, set automountServiceAccountToken to false where not required, right-size RBAC for the workload, and block egress to the metadata service and the internet.
- Enforce preventive controls by disabling kubectl exec/attach for production (break-glass only with approval), enabling admission policies to block images with shells or package managers, and adding runtime rules to alert on interactive env/printenv followed by curl/wget/base64 or token file reads.
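To illustrate the secret-spotting step above, a minimal sketch that flags environment variables whose names suggest credentials; the name patterns are illustrative, not an exhaustive standard:

```python
import re

# Name fragments that commonly indicate secrets in environment variables.
# The pattern list is an assumption for illustration, not a complete set.
SECRET_NAME_PATTERN = re.compile(
    r"(SECRET|TOKEN|PASSWORD|PASSWD|API_?KEY|ACCESS_?KEY|CREDENTIAL)",
    re.IGNORECASE,
)

def flag_risky_vars(env):
    """Return names of environment variables that look like secrets."""
    return sorted(name for name in env if SECRET_NAME_PATTERN.search(name))

example_env = {
    "PATH": "/usr/bin",
    "AWS_SECRET_ACCESS_KEY": "...",
    "DB_PASSWORD": "...",
    "HOME": "/root",
}
print(flag_risky_vars(example_env))  # → ['AWS_SECRET_ACCESS_KEY', 'DB_PASSWORD']
```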
"""
risk_score = 21
rule_id = "f66a6869-d4c7-4d20-ab13-beefd03b63b4"
severity = "low"
@@ -36,6 +67,7 @@ tags = [
"OS: Linux",
"Use Case: Threat Detection",
"Tactic: Discovery",
"Resources: Investigation Guide",
]
timestamp_override = "event.ingested"
type = "eql"
@@ -4,7 +4,7 @@ integration = ["cloud_defend"]
maturity = "production"
min_stack_comments = "Defend for Containers integration was re-introduced in 9.3.0"
min_stack_version = "9.3.0"
updated_date = "2026/01/21"
updated_date = "2026/01/27"
[rule]
author = ["Elastic"]
@@ -26,6 +26,36 @@ interval = "5m"
language = "eql"
license = "Elastic License v2"
name = "Service Account Namespace Read Detected via Defend for Containers"
note = """ ## Triage and analysis
> **Disclaimer**:
> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs.
### Investigating Service Account Namespace Read Detected via Defend for Containers
This rule flags an interactive process inside a container opening /var/run/secrets/kubernetes.io/serviceaccount/namespace, which reveals the pod's Kubernetes namespace; adversaries use this quick check to orient themselves and scope discovery. A common pattern is: after landing a shell in a pod, the attacker reads the namespace file, then issues namespace-scoped kubectl or direct API calls to list Deployments, Secrets, and ServiceAccounts, map privileges and service endpoints, and plan targeted lateral movement within that namespace.
### Possible investigation steps
- Build the process tree and shell session timeline around the event to identify the parent and child processes, interactive TTY, and exact commands executed before and after.
- Map the container to its pod, namespace, node, image, owner, and service account, and verify whether any kubectl exec/attach or debug container activity targeting this pod is expected at that time.
- Correlate Kubernetes audit logs for that service account and pod around the timestamp to spot list/get calls, Secrets or ServiceAccounts enumeration, and the user/IP that initiated any exec/attach.
- Inspect network flows and DNS from the container to the Kubernetes API server immediately after the event to confirm follow-on API access or token validation attempts.
- Review the service account's RBAC bindings and search the container for reads of the token/ca.crt or the presence of kubectl/kubeconfig or scripts that could leverage the token.
### False positive analysis
- During legitimate troubleshooting, a user opens an interactive shell in the container and the shells profile or prompt customization reads /var/run/secrets/kubernetes.io/serviceaccount/namespace to display the current namespace.
- A documented operational check or training exercise instructs staff to manually open the service account namespace file inside a container to confirm the environment before changes, producing a benign detection.
### Response and remediation
- Immediately kill the interactive shell process inside the container (e.g., bash/sh attached via kubectl exec) and quarantine the pod/namespace by applying a deny-all NetworkPolicy and pausing the owning Deployment/StatefulSet.
- Escalate to a major incident and page the on-call cluster security team if you observe subsequent reads of /var/run/secrets/kubernetes.io/serviceaccount/token or ca.crt or API calls from the pod to the Kubernetes API (443) that list Secrets or ServiceAccounts.
- Rotate the service account credentials by deleting the pod to force a new projected token, set automountServiceAccountToken: false on the workload if not needed, and remove pods/exec and pods/attach privileges from users or roles that accessed this pod.
- Redeploy the workload from a trusted, signed image without embedded shells/kubectl, verify image digest on rollout, and only lift quarantine after confirming no unauthorized containers, cronjobs, or startup scripts were added.
- Enforce least-privilege RBAC for the service account (deny list/get on Secrets, ConfigMaps, and Pods in the namespace), enable Pod Security Admission restricted with readOnlyRootFilesystem and dropped Linux capabilities, and require approvals for kubectl exec using ephemeral containers for debugging.
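As a sketch of the token-hardening step above, a workload that never calls the Kubernetes API can opt out of token projection entirely; the names and image below are placeholders:

```yaml
# Hypothetical Deployment fragment: with automountServiceAccountToken set to
# false, no service account token is mounted for an attacker to read.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-workload                          # placeholder
spec:
  template:
    spec:
      automountServiceAccountToken: false
      containers:
        - name: app
          image: registry.example.com/app:1.0.0   # placeholder image
```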
"""
risk_score = 21
rule_id = "f7c64a1b-9d00-4b92-9042-d3bb4196899a"
severity = "low"
@@ -35,6 +65,7 @@ tags = [
"OS: Linux",
"Use Case: Threat Detection",
"Tactic: Discovery",
"Resources: Investigation Guide",
]
timestamp_override = "event.ingested"
type = "eql"
@@ -4,7 +4,7 @@ integration = ["cloud_defend"]
maturity = "production"
min_stack_comments = "Defend for Containers integration was re-introduced in 9.3.0"
min_stack_version = "9.3.0"
updated_date = "2026/01/21"
updated_date = "2026/01/27"
[rule]
author = ["Elastic"]
@@ -26,6 +26,37 @@ interval = "5m"
language = "eql"
license = "Elastic License v2"
name = "Tool Enumeration Detected via Defend for Containers"
note = """ ## Triage and analysis
> **Disclaimer**:
> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs.
### Investigating Tool Enumeration Detected via Defend for Containers
Detects interactive use of `which` inside a Linux container to list installed networking, container-control, compiler, and scanning utilities. Adversaries do this to quickly assess built-in tools for living-off-the-land actions like payload download, cluster manipulation, or reconnaissance without dropping new binaries. Example: after compromising a Kubernetes pod, an operator runs `which curl wget kubectl nmap python` to decide how to transfer data, interact with the API, or probe the network.
### Possible investigation steps
- Correlate Kubernetes audit logs to determine whether the pod was accessed via kubectl exec, attach, or an ephemeral container and to identify the requesting user, source IP, and user agent.
- Review the containers process tree and TTY session around the alert time to see if the same session subsequently executed the enumerated utilities or performed network reconnaissance or data transfer.
- Analyze outbound network connections and DNS queries from the pod around the event for unfamiliar destinations or cluster control-plane endpoints, and compare them against expected egress policy.
- Inspect pod and container metadata (namespace, service account, image, node) and evaluate RBAC bindings and mounted secrets to gauge potential impact and access scope.
- Confirm with the service owner whether this interactive container access aligns with an approved maintenance or debugging task and gather the corresponding change ticket or runbook reference.
### False positive analysis
- An engineer opens an interactive shell in a container for approved troubleshooting and runs which on utilities like curl, wget, kubectl, and python to confirm tool availability before debugging.
- During routine post-deployment checks, an operator follows a runbook that uses which to verify paths for expected binaries such as openssl and gcc inside the container, resulting in a benign alert.
### Response and remediation
- Immediately terminate any active TTY/shell in the affected pod (namespace/name) and isolate it by applying a temporary deny-all NetworkPolicy and removing exec/attach permissions from its service account.
- Delete the pod and any attached ephemeral debug container, redeploy from a known-good image, and rotate mounted secrets, cloud credentials, and the service account token present in the container.
- Restore service from clean deployments and verify the workload behaves as expected by running smoke tests and confirming the pod's outbound connections are limited to approved destinations and ports.
- Escalate to the incident response team if the same session executed kubectl or docker, ran scanning tools such as nmap/masscan, accessed /var/run/secrets or changed RBAC, or connected to unfamiliar external IPs or the Kubernetes API server, and preserve evidence (container filesystem snapshot, shell history, Kubernetes audit logs, and node syslogs).
- Harden by enforcing admission controls to block interactive kubectl exec/attach to production pods and requiring runAsNonRoot, a read-only root filesystem, and dropped Linux capabilities on this workload.
- Reduce living-off-the-land options by rebuilding images to distroless/minimal and omitting utilities enumerated by which (curl, wget, nc, python, kubectl), and restrict egress with NetworkPolicies and service account RBAC to prevent cluster manipulation from inside containers.
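As a sketch of what this enumeration reveals, the following mimics `which`-style lookups against a throwaway PATH; the fake curl binary is a stand-in created for the demonstration, not real telemetry:

```python
import os
import shutil
import stat
import tempfile

def enumerate_tools(candidates, path):
    """Resolve each candidate utility against a PATH, which-style."""
    return {tool: shutil.which(tool, path=path) for tool in candidates}

# Demonstrate with a throwaway directory standing in for a container's PATH.
with tempfile.TemporaryDirectory() as fake_bin:
    curl_path = os.path.join(fake_bin, "curl")
    with open(curl_path, "w") as f:
        f.write("#!/bin/sh\n")                     # fake binary, never executed
    os.chmod(curl_path, os.stat(curl_path).st_mode | stat.S_IXUSR)

    found = enumerate_tools(["curl", "nmap"], path=fake_bin)
    print(found["curl"] is not None, found["nmap"])  # → True None
```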
"""
risk_score = 21
rule_id = "b84264aa-37a3-49f8-8bbc-60acbe9d4f86"
severity = "low"
@@ -35,6 +66,7 @@ tags = [
"OS: Linux",
"Use Case: Threat Detection",
"Tactic: Discovery",
"Resources: Investigation Guide",
]
timestamp_override = "event.ingested"
type = "eql"
@@ -4,7 +4,7 @@ integration = ["cloud_defend"]
maturity = "production"
min_stack_comments = "Defend for Containers integration was re-introduced in 9.3.0"
min_stack_version = "9.3.0"
updated_date = "2026/01/21"
updated_date = "2026/01/27"
[rule]
author = ["Elastic"]
@@ -29,6 +29,36 @@ interval = "5m"
language = "eql"
license = "Elastic License v2"
name = "Direct Interactive Kubernetes API Request Detected via Defend for Containers"
note = """ ## Triage and analysis
> **Disclaimer**:
> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs.
### Investigating Direct Interactive Kubernetes API Request Detected via Defend for Containers
The rule flags interactive use of curl, wget, openssl, busybox ssl_client, socat/ncat, or kubectl from inside a container to call the Kubernetes API with a bearer token, often with custom CA or insecure TLS options. A typical pattern: after landing in a pod, an attacker reads the mounted service account token and queries the API to list namespaces, pods, or secrets, or issues kubectl get/patch to probe or modify workloads, enabling lateral movement or privilege escalation.
### Possible investigation steps
- Map the container ID to its pod, namespace, node, image, and owning controller, and confirm whether this workload is expected to make direct Kubernetes API calls or allow interactive access.
- Determine how the interactive session was initiated and by whom by correlating with Kubernetes events and audit logs for exec/attach/ephemeral-container activity and runtime logs for TTY sessions, including the initiating principal and source IP.
- Correlate with API server audit logs to retrieve the exact requests (verbs, resources, namespaces), the authenticated subject (service account or user), and response codes to identify any successful access to sensitive resources like Secrets or workload-modifying actions.
- Inspect the pod for credential use and operator traces by checking recent process activity, shell history, environment variables, and access to service account token or kubeconfig files at expected mount paths.
- Assess scope and potential persistence by listing recent cluster objects created or modified by the same identity across namespaces (Pods, CronJobs, RoleBindings, Secrets) within the timeframe around the alert.
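The audit-log correlation above can be sketched as a filter over audit events serialized as JSON lines; the field names follow the audit.k8s.io/v1 Event schema, while the sample events and sensitivity choices are invented for illustration:

```python
import json

# Illustrative choices of what counts as "sensitive"; not part of the rule.
SENSITIVE_RESOURCES = {"secrets", "serviceaccounts", "rolebindings"}
WRITE_VERBS = {"create", "update", "patch", "delete"}

def flag_audit_events(lines):
    """Flag requests by in-pod service accounts that read sensitive
    resources or modified cluster objects."""
    hits = []
    for line in lines:
        event = json.loads(line)
        user = event.get("user", {}).get("username", "")
        if not user.startswith("system:serviceaccount:"):
            continue  # only in-pod (service account) identities are of interest
        verb = event.get("verb", "")
        resource = event.get("objectRef", {}).get("resource", "")
        if resource in SENSITIVE_RESOURCES or verb in WRITE_VERBS:
            hits.append((user, verb, resource))
    return hits

sample = [
    json.dumps({"user": {"username": "system:serviceaccount:dev:default"},
                "verb": "list", "objectRef": {"resource": "secrets"}}),
    json.dumps({"user": {"username": "kubernetes-admin"},
                "verb": "get", "objectRef": {"resource": "pods"}}),
]
print(flag_audit_events(sample))
# → [('system:serviceaccount:dev:default', 'list', 'secrets')]
```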
### False positive analysis
- An administrator used kubectl interactively within a maintenance container to run get/list/patch commands during routine operations such as inspecting pods or updating labels, which matches expected administrative behavior.
- A developer ran openssl s_client, socat with SSL, or ncat --ssl interactively from within the container to troubleshoot TLS connectivity to a service endpoint, not the Kubernetes API server, causing the rule to fire despite benign intent.
### Response and remediation
- Immediately delete the affected pod to terminate interactive access, and apply a temporary NetworkPolicy in its namespace that blocks egress to the default/kubernetes service (API server) while you patch its ServiceAccount to set automountServiceAccountToken: false.
- Use API server audit logs and kubectl to enumerate actions taken by the pod's ServiceAccount and revert any unauthorized objects it created or modified (Pods, CronJobs, RoleBindings, Secrets), and remove any attached ephemeral containers across the namespace.
- Rotate credentials and restore workloads by deleting any legacy ServiceAccount token Secret, restarting pods to issue new bound tokens, rebuilding the image from a trusted base, and redeploying with read-only rootfs and minimal RBAC verified via kubectl auth can-i.
- Escalate to incident response if audit logs show Secrets access or create/patch/update on workloads, if the ServiceAccount holds cluster-admin, or if the observed commands used curl -k/--insecure, wget --no-check-certificate, or openssl/socat/ncat with SSL to the API server.
- Harden the cluster by enforcing admission controls that deny kubectl exec/attach for non-admins, requiring automountServiceAccountToken: false by default and short-lived bound tokens where needed, restricting NetworkPolicies so only designated controllers can reach the API server, and adopting distroless images that omit curl/wget/openssl/ncat.
"""
risk_score = 21
rule_id = "26a989d2-010e-4dae-b46b-689d03cc22b3"
severity = "low"
@@ -39,6 +69,7 @@ tags = [
"Use Case: Threat Detection",
"Tactic: Execution",
"Tactic: Discovery",
"Resources: Investigation Guide",
]
timestamp_override = "event.ingested"
type = "eql"
@@ -4,7 +4,7 @@ integration = ["cloud_defend"]
maturity = "production"
min_stack_comments = "Defend for Containers integration was re-introduced in 9.3.0"
min_stack_version = "9.3.0"
updated_date = "2026/01/21"
updated_date = "2026/01/27"
[rule]
author = ["Elastic"]
@@ -25,6 +25,37 @@ interval = "5m"
language = "eql"
license = "Elastic License v2"
name = "Tool Installation Detected via Defend for Containers"
note = """ ## Triage and analysis
> **Disclaimer**:
> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs.
### Investigating Tool Installation Detected via Defend for Containers
This rule flags interactive package installs inside Linux containers for network utilities or interpreters, a strong signal of hands-on activity to enumerate, fetch payloads, and pivot. Example attacker pattern: after gaining a shell in a pod, they run apt/apk/pacman to add curl, netcat, or socat, pull a second stage from an external host, and open a reverse shell to move from the container into adjacent services.
### Possible investigation steps
- Attribute the interactive install by correlating Kubernetes audit events (exec/attach) and runtime logs to the pod/namespace, a specific user or service account, source IP, and client, and verify alignment with approved break-glass procedures.
- Diff the running container filesystem against the image baseline or SBOM to enumerate newly added binaries and libraries, review package manager logs/cache, and capture hashes and paths for forensics.
- Examine pod-level network telemetry and DNS logs around the event for outbound connections, downloads, or reverse shell patterns, and isolate the workload if beaconing or exfiltration is observed.
- Verify the pod's security context and mounts for privilege escalation vectors (privileged, hostPID/IPC, hostPaths, docker.sock) and inventory exposed credentials (service account tokens, cloud metadata, env vars, ~/.ssh, .aws), rotating any secrets at risk.
- Hunt across the cluster for similar interactive installs or exec sessions using audit and Defend for Containers telemetry, and review recent image builds and deployments to detect in-cluster modifications before quarantining or restarting affected workloads.
### False positive analysis
- A developer attaches an interactive shell to a running container to debug connectivity, using apt or apk to install curl or netcat for quick tests, which matches the rule's interactive install of network utilities.
- During an approved break-glass fix, an operator interactively installs python or openssl with yum or dnf in a minimal container to run a temporary diagnostic script, triggering the same package-install signature.
### Response and remediation
- Immediately isolate the pod/container that performed interactive installs (e.g., apt-get install curl, apk add netcat) by applying a deny-all NetworkPolicy, terminating active kubectl exec/attach sessions, and cordoning the node if the pod is privileged or has hostPath/docker.sock mounts.
- Stop and delete the compromised workload, snapshot the container filesystem, then redeploy the deployment/statefulset from a trusted image and purge added tools by removing packages and caches (/var/lib/apt, /var/cache/apk, /var/cache/yum) from any retained volumes.
- Rotate the pod's service account token and any exposed credentials found in env vars, ~/.ssh, or cloud provider metadata, and invalidate outbound connections established by newly installed binaries like curl, wget, socat, or netcat via egress firewall rules.
- Restore normal connectivity only after confirming no unauthorized binaries remain by diffing against the image SBOM and checking package history files like /var/log/dpkg.log, /var/log/yum.log, or /etc/apk/world, then validating app readiness/liveness probes.
- Escalate to incident response if the install included tor/torsocks, openssl used to generate new keys, reverse-shell behavior (e.g., netcat -e or socat TCP:external_ip), or if activity occurred in production without a change request.
- Enforce immutability and least privilege by rebuilding images without package managers or shells (distroless), enabling read-only root filesystems, disallowing kubectl exec via RBAC, using admission controls to block privileged pods and hostPath/docker.sock mounts, and tightening egress to only approved destinations.
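The SBOM-diff check above reduces to a set difference between the build-time baseline and the packages currently installed; the package names below are illustrative:

```python
def added_packages(sbom_packages, installed_packages):
    """Packages present in the running container but absent from the image
    baseline (e.g., the SBOM generated at build time)."""
    return sorted(set(installed_packages) - set(sbom_packages))

baseline = {"busybox", "musl", "zlib"}                   # from the image SBOM
running = {"busybox", "musl", "zlib", "curl", "socat"}   # e.g., /etc/apk/world
print(added_packages(baseline, running))  # → ['curl', 'socat']
```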
"""
risk_score = 21
rule_id = "527d23e6-8b67-4a8e-a6bd-5169b90ab2a8"
severity = "low"
@@ -34,6 +65,7 @@ tags = [
"OS: Linux",
"Use Case: Threat Detection",
"Tactic: Execution",
"Resources: Investigation Guide",
]
timestamp_override = "event.ingested"
type = "eql"