
# Experimental ML Jobs and Rules

The ingest pipeline enriches process events with additional fields, which power several rules. The experimental rules and jobs are staged separately from the model bundles under releases, with the tag `ML-experimental-detections-YYYYMMDD-N`. New releases with this tag may contain updates to existing rules or new experimental detections.

Note that a rule with `type = "machine_learning"` may depend on a machine learning job being uploaded and running first. When this is the case, the dependency is typically called out in the rule's `note` field.
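For illustration, a minimal ndjson rule entry of this type might look like the following. This is a hypothetical sketch: the job id, note text, and scores are placeholders, not values from an actual release.

```json
{"name": "Example Experimental Detection", "type": "machine_learning", "machine_learning_job_id": "example-experimental-job", "anomaly_threshold": 50, "risk_score": 21, "severity": "low", "note": "Requires the example-experimental-job ML job to be uploaded and running first."}
```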

## Uploading rules

Unzip the release bundle and upload these rules individually.

Rules are now stored in `ndjson` format and can be imported into Kibana via the Security app's detections page.
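As an alternative to the detections page UI, the same ndjson bundle can be pushed through Kibana's detection-engine rule import API. The sketch below assumes placeholder URL and credentials; adjust to your deployment.

```shell
# Import an ndjson rule bundle via Kibana's detection-engine API.
# KIBANA_URL, KIBANA_USER, and KIBANA_PASS are placeholders (assumption).
KIBANA_URL="${KIBANA_URL:-https://localhost:5601}"

import_rules() {
  curl -s -X POST "$KIBANA_URL/api/detection_engine/rules/_import" \
    -H 'kbn-xsrf: true' \
    -u "$KIBANA_USER:$KIBANA_PASS" \
    -F "file=@$1"
}
```

Usage: `import_rules rules.ndjson`.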

Earlier releases stored the rules in `toml` format. These can be uploaded with the CLI from the 7.12 branch using the `kibana import-rules` command.

## Uploading ML Jobs and Datafeeds

Unzip the release bundle and then run `python -m detection_rules es <args> experimental ml upload-job <ml_job.json>`.
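The per-file upload can be scripted. The helper below is a loose sketch, where `ES_ARGS` is a placeholder for your Elasticsearch connection flags and the bundle layout (flat directory of `.json` files) is an assumption.

```shell
# Upload every job/datafeed JSON from an unzipped release bundle.
# ES_ARGS is a placeholder for your Elasticsearch connection flags (assumption).
upload_ml_bundle() {
  local dir="$1"
  local f
  for f in "$dir"/*.json; do
    [ -e "$f" ] || continue   # skip if the glob matched nothing
    python -m detection_rules es $ES_ARGS experimental ml upload-job "$f"
  done
}
```

Usage: `unzip <release-bundle>.zip -d bundle && upload_ml_bundle bundle`.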

To delete a job or datafeed, run `python -m detection_rules es <args> experimental ml delete-job <job-name> <job-type>`.

The CLI automatically identifies whether the provided input file is an ML job or datafeed.
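One plausible way to tell the two apart (an assumption about the heuristic, not the CLI's actual code) is by the keys in the JSON: exported datafeed configs carry a top-level `datafeed_id`, while anomaly detection jobs carry a `job_id` with an `analysis_config`.

```shell
# Guess whether an exported Elastic ML JSON file is a job or a datafeed.
# Heuristic sketch only: datafeed configs include a "datafeed_id" key.
classify_ml_file() {
  if grep -q '"datafeed_id"' "$1"; then
    echo datafeed
  else
    echo job
  fi
}
```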

Take note of any errors: jobs and datafeeds may depend on each other, which may require stopping and/or removing the referenced jobs or datafeeds first.