* [Rule Tuning] AWS Service Quotas Multi-Region GetServiceQuota Requests
This rule is alerting as expected, with very few instances in telemetry (we only have data from one cluster).
- added more fields for context in the query.
- added metadata fields to query
- reduced execution window
- added highlighted fields
#### Screenshot of working query with additional context
* Update rules/integrations/aws/discovery_servicequotas_multi_region_service_quota_requests.toml
Co-authored-by: Mika Ayenson, PhD <Mikaayenson@users.noreply.github.com>
---------
Co-authored-by: Mika Ayenson, PhD <Mikaayenson@users.noreply.github.com>
This rule is triggering as expected with moderate telemetry volume (high spikes from what look like expected cleanup jobs) in a specific cluster. No changes needed to the rule query.
- updated description, FP and IG
- reduced execution window
- updated highlighted fields
### AWS Config Resource Deletion
- added exclusions for services that perform Config modifications by design, reducing noise by 97% over the last 30 days.
- added success criteria to query as well
- increased severity to medium as this alert should be triaged
- updated description, false positive and investigation guide sections
- reduced execution window
- updated MITRE
- updated tags
- added highlighted fields
### AWS Configuration Recorder Stopped
No major query changes needed for this rule; it is performing as expected in telemetry with low volume, as this is rarer activity.
- updated description, false positive and investigation guide sections
- reduced execution window
- updated MITRE
- updated tags
- added highlighted fields
* [Rule Tunings] AWS Lambda Rules
#### AWS Lambda Layer Added to Existing Function
This rule was missing alerts for the `UpdateFunctionConfiguration` action due to a missing wildcard.
- added missing wildcard to query
- reduced execution window
- updated description, FP and IG sections
- added highlighted fields
#### AWS Lambda Function Policy Updated to Allow Public Invocation
- changed this query to use EQL instead of KQL to optimize wildcard usage
- uses `event.type` as `event_category_override`
- reduced execution window
- updated description, FP and IG sections
- added highlighted fields
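A minimal sketch of what such an EQL conversion looks like in rule TOML (the provider and action values here are illustrative, not the shipped rule):

```toml
[rule]
type = "eql"
language = "eql"
# event.category is unmapped for these CloudTrail events, so the EQL
# category slot is overridden with event.type:
event_category_override = "event.type"
query = '''
any where event.dataset == "aws.cloudtrail"
  and event.provider == "lambda.amazonaws.com"
  and event.action like "AddPermission*"
'''
```

EQL's `like` handles the trailing wildcard without the query-time cost of KQL leading wildcards.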
* Update rules/integrations/aws/persistence_lambda_backdoor_invoke_function_for_any_principal.toml
Co-authored-by: Mika Ayenson, PhD <Mikaayenson@users.noreply.github.com>
---------
Co-authored-by: Mika Ayenson, PhD <Mikaayenson@users.noreply.github.com>
There was a mistake in the query for this rule: it was looking for `event.provider: eventbridge.amazonaws.com` instead of `events.amazonaws.com`, so we have no existing telemetry for this rule. However, I have tested the behavior properly and ensured the new query does alert as expected. I will monitor this rule in telemetry moving forward to gauge its performance.
- query change `event.provider: events.amazonaws.com`
- reduced execution window
- updated description, FP and IG sections
- updated tags
- added highlighted fields
* [Rule Tuning] AWS CLI with Kali Linux Fingerprint Identified
This rule is performing well in telemetry as expected. I changed this to EQL to avoid the multiple wildcards needed with KQL.
- changed rule type to EQL
- reduced execution window
- updated description, false positive and investigation guide
Script for testing this rule:
Manually perform any action against our AWS account using a Kali Linux distribution
#### Screenshot showing the working EQL query; it still captures the BitPanda behavior this rule was initially designed around.
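As an illustration of why EQL is a better fit here, a single `any where` query with substring checks replaces what would need multiple wildcards in KQL (dataset and user-agent substrings below are assumptions, not the shipped rule):

```toml
[rule]
type = "eql"
language = "eql"
query = '''
any where event.dataset == "aws.cloudtrail"
  and stringContains(user_agent.original, "aws-cli")
  and stringContains(user_agent.original, "kali")
'''
```

`stringContains` does a plain substring match, so no leading-wildcard expansion is needed.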
* add highlighted fields
add highlighted fields
* Update initial_access_kali_user_agent_detected_with_aws_cli.toml
I reduced the history window for new terms rules that were either:
- `now-14 days`
- showing slow performance metrics
There are still several AWS rules with a `now-10d` window but they are not showing any performance issues so I'd like to leave them as is for now.
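For reference, the history window of a new terms rule lives in the rule TOML roughly like this (the exact nesting is from memory and may differ from the repo's schema; values are illustrative):

```toml
[rule.new_terms]
field = "new_terms_fields"
value = ["user.name"]

[[rule.new_terms.history_window_start]]
field = "history_window_start"
value = "now-7d"  # illustrative; reduced from e.g. now-14d
```

Shortening `history_window_start` bounds how far back the rule compares terms, which is what drives the performance improvement.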
First Time Seen AWS Secret Value Accessed in Secrets Manager
- removed `BatchGetSecretValue` API call since this calls `GetSecretValue`
- removed the user_agent exclusions from this one as well; they are too easy to bypass.
AWS EC2 User Data Retrieval for EC2 Instance
- excluded more benign AWS services from telemetry
AWS IAM Assume Role Policy Update
- removed use of cloudformation exclusion, this should be captured as well
* [Tuning] Windows BruteForce Rules Tuning
#1 Multiple Logon Failure from the same Source Address: converted to ES|QL and raised the threshold to 100 failed auths. Alert quality should be better since it aggregates all failed-auth info into one alert vs. multiple EQL matches (expected reduction of more than 50%).
#2 Privileged Account Brute Force: converted to ES|QL and set the threshold to 50 per minute. This should drop noise volume by more than 50%.
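A rough sketch of the ES|QL aggregation approach for rule #1 (index pattern and field names are illustrative, not the actual rule query):

```toml
[rule]
type = "esql"
language = "esql"
query = '''
FROM logs-system.security*
| WHERE event.category == "authentication" AND event.outcome == "failure"
| STATS failed_count = COUNT(*), user_count = COUNT_DISTINCT(user.name) BY source.ip, host.id
| WHERE failed_count >= 100
'''
```

Because `STATS` collapses all failures per source into one row, a single alert carries the aggregate context instead of one alert per matching event.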
* ++
* Update execution_shell_evasion_linux_binary.toml
* Update execution_shell_evasion_linux_binary.toml
* Update defense_evasion_indirect_exec_forfiles.toml
* Update lateral_movement_remote_file_copy_hidden_share.toml
* Update lateral_movement_remote_file_copy_hidden_share.toml
* Update persistence_service_windows_service_winlog.toml
* Update credential_access_lsass_openprocess_api.toml
* Update persistence_suspicious_scheduled_task_runtime.toml
* Update impact_hosts_file_modified.toml
* Update defense_evasion_process_termination_followed_by_deletion.toml
* Update rules/windows/credential_access_lsass_openprocess_api.toml
* Update rules/windows/credential_access_bruteforce_admin_account.toml
Co-authored-by: Ruben Groenewoud <78494512+Aegrah@users.noreply.github.com>
* Update rules/windows/credential_access_lsass_openprocess_api.toml
Co-authored-by: Ruben Groenewoud <78494512+Aegrah@users.noreply.github.com>
* Update rules/windows/credential_access_bruteforce_multiple_logon_failure_same_srcip.toml
Co-authored-by: Ruben Groenewoud <78494512+Aegrah@users.noreply.github.com>
* Update credential_access_lsass_openprocess_api.toml
* Update impact_hosts_file_modified.toml
* Update credential_access_dollar_account_relay.toml
* Update credential_access_new_terms_secretsmanager_getsecretvalue.toml
---------
Co-authored-by: Ruben Groenewoud <78494512+Aegrah@users.noreply.github.com>
This rule is performing well in telemetry and producing alerts as expected for both explicit external account sharing and making snapshots public. Both scenarios tested.
- updated description, FP and IG
- added highlighted fields
- added `event.type` as `event_category_override` field because `event.category` is not populated for these events.
Co-authored-by: shashank-elastic <91139415+shashank-elastic@users.noreply.github.com>
This rule is performing well in telemetry, low volume and expected alerts. No major changes to rule query.
- reduced execution window
- updated description and IG
- added highlighted fields
Co-authored-by: shashank-elastic <91139415+shashank-elastic@users.noreply.github.com>
AWS S3 Bucket Replicated to Another Account
- updated description and IG
- added `event.type` as `event_category_override` field
- adjusted query to use `info` instead of `any` and added `Account=` instead of `Account` to help reduce chances of capturing unintended requests.
- added highlighted fields
AWS S3 Bucket Policy Added to Share with External Account
- added `event.outcome = success` to query to reduce noise from failed attempts
Co-authored-by: shashank-elastic <91139415+shashank-elastic@users.noreply.github.com>
* [Rule Tunings] AWS Multiple API Calls rules
AWS EC2 Multi-Region DescribeInstances API Calls
Over 2,000 alerts in the last 24 hours. This is a very noisy rule, by design it is alerting on quite normal behavior. There is not much in-the-wild threat behavior that justifies keeping this rule as a standalone alert. As a threat indicator, this is best used as a hunting rule or in correlation with another rule, for example: (GetCallerIdentity new terms + multi region DescribeInstances by same principal) or (Multiple Discovery API calls + multi region DescribeInstances by same principal) or (multi region DescribeInstances + snapshot/AMI activity by same principal). However, on its own it’s not adding much value over the noise.
- I’m keeping this as ESQL rule but converting it to a BBR
- keeping more fields for further context
- Changing investigation guide to be more relevant for hunting/correlation rule
AWS Discovery API Calls via CLI from a Single Resource
This rule is alerting as expected with low telemetry. It has to remain an ES|QL rule, as no other rule type can narrow the time window to 10 seconds while looking for a threshold of unique API calls from a single user.
- Keeping as ESQL rule
- Reduced execution window
- Keeping more fields for further context
- Adding highlighted fields
- Updated Investigation guide
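To illustrate the point about ES|QL being the only rule type that can work in 10-second windows, here is a sketch (all field names and thresholds are illustrative, not the actual rule):

```toml
[rule]
type = "esql"
language = "esql"
query = '''
FROM logs-aws.cloudtrail-*
| WHERE user_agent.original LIKE "aws-cli*"
| EVAL time_window = DATE_TRUNC(10 seconds, @timestamp)
| STATS unique_api_calls = COUNT_DISTINCT(event.action) BY time_window, aws.cloudtrail.user_identity.arn
| WHERE unique_api_calls >= 5
'''
```

`DATE_TRUNC` buckets events into 10-second windows, something neither KQL nor EQL threshold rules can express.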
* adding highlighted fields to keep parameter
* Apply suggestions from code review
Co-authored-by: Mika Ayenson, PhD <Mikaayenson@users.noreply.github.com>
* Apply suggestion from @imays11
---------
Co-authored-by: Mika Ayenson, PhD <Mikaayenson@users.noreply.github.com>
Co-authored-by: Samirbous <64742097+Samirbous@users.noreply.github.com>
Co-authored-by: Terrance DeJesus <99630311+terrancedejesus@users.noreply.github.com>
This rule is performing as expected in telemetry, low volume rare behavior. No query changes needed.
- increased the severity and risk score
- reduced execution window
- reduced lookback window for new terms
- updated description and investigation guide
- slight edits to highlighted fields
Rule is alerting as expected, with low telemetry volume. Updates to rule query are to provide more alert context as an ESQL rule.
- reduced execution window
- added additional fields for more alert context, include customer-requested `data_stream.namespace` field
- added highlighted fields
- updated description and investigation guide
* [Rule Tuning] AWS Access Token Used from Multiple Addresses
This rule is extremely loud in telemetry, with ~2,612 alerts in the last 24 hours. There have also been a couple of community requests for changes.
- reduced the scope of the alerts to only surface the "high" fidelity_score cases for `"multiple_ip_network_city"` or `"multiple_ip_network_city_user_agent"` criteria. This reduced telemetry by ~90%
- excluded two more benign service providers, including `support`, which reduced volume by another 6%.
- added the `data_stream.namespace` field as requested.
- kept the rest of the rule logic visible so that if customers would like to broaden the scope of this rule again, they can duplicate the rules and revert back to the broader condition `Esql.activity_type != "normal_activity"`. This has been included as a comment in the rule query.
I will keep an eye on this rule in telemetry to determine its value moving forward.
* nit IG format changes
`CreateCluster` is a common Redshift lifecycle operation that occurs frequently in normal workflows. Creating a new Redshift cluster offers no real advantage to an attacker and outside of cost, does not produce material impact for a target environment. This behavior aligns more with cloud infrastructure monitoring or posture management, which is important but not the focus of our detection ruleset.
Real world Redshift abuse centers on misuse of existing resources, such as snapshot sharing or copying or exposing the cluster through permissive VPC security group changes. These threat paths should be covered by other rules. Deprecating this creation-focused rule reduces noise and keeps the AWS ruleset aligned with real threat surfaces rather than infrastructure management.
* [Rule Tuning] AWS IAM Brute Force of Assume Role Policy
The description and primary tactic for this rule were misleading. The rule captures an IAM principal enumeration technique used by tools like PACU; it does not capture AssumeRole brute-force attempts. I've changed the primary tactic to Discovery, changed the rule name, and updated the rule description and Investigation Guide to more clearly reflect what behavior is being captured.
The query itself and the threshold values remain the same. I changed the execution window to the standard 5 min + 1 min lookback and was still able to capture the behavior.
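In rule TOML, that standard 5-minute interval with a 1-minute lookback is expressed roughly as:

```toml
[rule]
interval = "5m"     # how often the rule runs
from = "now-6m"     # 5m interval + 1m lookback to avoid gaps
```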
* Apply suggestions from code review
Co-authored-by: Ruben Groenewoud <78494512+Aegrah@users.noreply.github.com>
* adding rule.threshold values
adding ["cloud.account.id", "user.name", "source.ip"] as group by fields
---------
Co-authored-by: Ruben Groenewoud <78494512+Aegrah@users.noreply.github.com>
`DeleteFileSystem` permanently removes an Amazon EFS file system and all stored data. This operation has no recovery path and represents a clear Impact-level destructive action when performed unintentionally or by an unauthorized actor. It is rare in most environments and typically limited to infrastructure teardown or automated provisioning workflows.
Currently this rule also matches `DeleteMountTarget` events. This action appears frequently in normal EFS lifecycle workflows and is not, by itself, a strong indicator of malicious intent. Since only `DeleteFileSystem` represents irreversible destructive impact, the rule has been narrowed to focus exclusively on the meaningful threat behavior.
- removed `DeleteMountTarget` scope from query
- rule name change and toml file name change to match new scope
- reduced execution window
- updated tags
- updated description, FP and IG
- added highlighted fields
* [Rule Tunings] AWS RDS Rules
#### AWS RDS DB Instance Made Public
- updated description and investigation guide
- added highlighted fields
#### AWS RDS DB Instance or Cluster Deletion Protection Disabled
- updated description and investigation guide
- added highlighted fields
#### AWS RDS Snapshot Deleted
- excluded `backup.amazonaws.com` as this is expected behavior. This exclusion reduces noise in telemetry by ~77%
- updated description and investigation guide
- added highlighted fields
#### AWS Deletion of RDS Instance or Cluster > AWS RDS DB Instance or Cluster Deleted
- reduced execution window
- slight name change to align with other rules
- updated description and investigation guide
- added highlighted fields
#### AWS RDS DB Instance Restored
- `event.type` used for `event_category_override` because event.category is not mapped for these API calls
- updated description and investigation guide
- added highlighted fields
#### AWS RDS DB Instance or Cluster Password Modified
- `event.type` used for `event_category_override` because event.category is not mapped for these API calls
- updated description and investigation guide
- added highlighted fields
#### AWS RDS Snapshot Export
- reduced execution window
- updated mitre mapping
- updated description and investigation guide
- added highlighted fields
* Rule type change from EQL to KQL
Changing the rule type to KQL since no EQL-specific functions are needed for the query.
Both of these rules have low volume in telemetry as expected; this is quite rare behavior. No major changes to the rule logic itself.
AWS IAM Roles Anywhere Profile Creation
- updated description and investigation guide
- reduced execution window
- added highlighted fields
AWS IAM Roles Anywhere Trust Anchor Created with External CA
- changed rule type to EQL to use `stringContains` instead of leading wildcard
- uses `event.type` as event category override field
- reduced execution window
- updated description and investigation guide
- added highlighted fields
This rule is performing as expected in telemetry, no query changes needed.
- reduced execution window
- updated description and investigation guide
- updated tags
- added highlighted fields
Noticed some false positives in alert telemetry leading to high alert volume. For all 3 rules I've added `source.ip:*` and excluded `user_agent.original: AWS Internal` to reduce noise from internal AWS services and backend API calls made on a user's behalf. These rules will instead stay focused on direct SDK/CLI triggered API calls.
- reduced execution window
- updated description, false positive and investigation guide sections
- added highlighted fields
- updated tags and MITRE mapping
This rule is quite loud in telemetry. However, over 80% of alerts come from a single cluster, which seems to have a lot of role usage and operation-type IAM Users which assume other roles constantly. This makes me think the majority of these alerts may be from multiple role sessions retrieving secrets, rather than a single user retrieving multiple secrets rapidly. I've looked at the prod telemetry data we have access to and found a few instances where a single secret is accessed multiple times or certain AWS services retrieve multiple secrets rapidly. All of these instances have been accounted for in the tunings here.
- the query has been changed to remove `BatchGetSecretValue`, since this API call actually calls `GetSecretValue` for each of the secrets it retrieves, so we only need to look for `GetSecretValue`
- I've added the AWS services found in prod data that contribute to rapid secret retrieval
- Changed the threshold parameters to look for a single `user.id` retrieving more than 20 unique secret values (`aws.cloudtrail.request_parameters`)
- updated the description and investigation guide
- reduced execution window
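The cardinality-based threshold described above might look roughly like this in rule TOML (exact layout from memory; treat as a sketch):

```toml
# a single user.id retrieving more than 20 unique secret values
[rule.threshold]
field = ["user.id"]
value = 20

[[rule.threshold.cardinality]]
field = "aws.cloudtrail.request_parameters"
value = 20
```

The cardinality clause is what distinguishes "many unique secrets" from one secret polled repeatedly.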
* [Rule Tuning] AWS IAM API Calls via Temporary Session Tokens
Came across some false positives as I was rule testing.
Temporary credentials are also created for AWS Internal operations and when an AWS service operates on your behalf. These both should be excluded from results. I saw this in my own testing, in prod data and in our alert telemetry. These cases can be excluded by looking for `source.ip:*` as this field is not populated for those internal AWS operations.
NOTE: Another false positive instance has been highlighted in the investigation guide. A legitimate AWS console login session is given temporary (ASIA) credentials which are populated for all the operations performed during that session. The only way to distinguish between these events and other temporary session token events, like those granted via AssumeRole or GetSessionToken, is with a field populated as `sessionCredentialFromConsole: true`. Right now our Integration does not map this field and it can only be found in `event.original`. I am putting in an Integration request to populate this field, which we could then use to further reduce false positives for rules like this.
- updated description, false positives, and investigation guide
- added `source.ip:*` to query
* excluding AWS Internal user agent
excluding AWS Internal user agent
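A sketch of the two exclusions as a KQL fragment (not the full query of any of the three rules): `source.ip: *` requires the field to be populated, which filters out internal AWS operations where it is absent.

```toml
[rule]
type = "query"
language = "kuery"
query = '''
event.dataset: "aws.cloudtrail" and source.ip: *
  and not user_agent.original: "AWS Internal"
'''
```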
#### Deprecate RDS DB Instance/Cluster lifecycle detections
`CreateDBInstance`, `CreateDBCluster`, `StopDBInstance`, `StopDBCluster`. These events occur frequently in normal workflows and do not reflect known attacker techniques. They are simply RDS lifecycle operations, with no real impact from an attacker-target perspective. These actions don't have a meaningful benefit for an attacker or cause a meaningful impact for a target. Threat activity around RDS is typically centered around snapshot sharing, export, and public exposure, which is already covered by other rules. There is also a theoretical case to be made for detecting destructive actions against RDS resources like `instance|cluster|snapshot Deletion`, this is covered by other rules. Removing these creation and stoppage rules reduces noise and keeps the AWS ruleset more aligned with real threat surfaces rather than infrastructure management.
#### Deprecate Outdated DBSecurityGroup API rules
`CreateDBSecurityGroup` and `DeleteDBSecurityGroup` were only used by RDS deployments on EC2-Classic, which AWS has fully retired. Modern RDS uses VPC Security Groups, making these APIs obsolete. These rules can no longer trigger and provide no threat-detection value.
Network-permission manipulation is fully covered by our existing VPC Security Group rule - "AWS EC2 Security Group Configuration Change".
Completing Deprecation Process for this rule. It has now been included in our ruleset with `Deprecated -` prefix for 2 release cycles and should now be moved to our `_deprecated` folder.
ElastiCache cache security groups are only used with EC2-Classic deployments. AWS officially retired EC2-Classic and no longer supports launching ElastiCache clusters in EC2-Classic networking environments. All modern ElastiCache deployments run in a VPC and rely on standard EC2 security groups (ec2.amazonaws.com APIs) rather than CacheSecurityGroup APIs (elasticache.amazonaws.com).
This behavior is covered by this existing rule:
- https://github.com/elastic/detection-rules/blob/fe642a879a412db71492f5d776e1e3338a531266/rules/integrations/aws/persistence_ec2_security_group_configuration_change_detection.toml
These rules no longer match any behavior in supported AWS environments and so should be deprecated. This PR:
- Marks both rules with a `Deprecated - ` title to start the deprecation process
- Updates the rule descriptions to clarify that they are only relevant for historical EC2-Classic log analysis
- Recommends relying on the existing EC2 security group rule for network-control changes impacting ElastiCache in VPC-based deployments
I've tested this scenario by creating an ElastiCache cluster and creating and modifying security group rules. Below is a screenshot verifying that the activity is indeed captured by the normal EC2/VPC security group rule. There were no alerts triggered for the "ElastiCache Security Group" rules.