Add Fibratus EDR profile + dashboard cache + GrumpyCats package split
Fibratus EDR profile (kind: fibratus). Pull-from-event-log model, same
shape DetonatorAgent's FibratusEdrPlugin.cs uses: operator configures
Fibratus on the EDR VM with alertsenders.eventlog: {enabled: true,
format: json}; rule matches land in the Application log. Whiskers gains
GET /api/alerts/fibratus/since which wevtutil-queries the log,
extracts <TimeCreated SystemTime> + <EventID> + <Data>, ships the raw
JSON blobs back. The new FibratusEdrAnalyzer mirrors Elastic's
two-phase shape — Phase 1 exec, Phase 2 polls Whiskers — and normalizes
Fibratus's actual schema (events[].proc.{name,exe,cmdline,parent_name,
parent_cmdline,ancestors} + bare tactic.id/technique.id/subtechnique.id
labels) into the saved-view renderer's dict.
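The normalization step can be sketched as follows. This is a hypothetical outline, not the actual FibratusEdrAnalyzer: the input field names come from the schema above (`events[].proc.{...}` plus bare `tactic.id`/`technique.id`/`subtechnique.id` labels), but the flattened output shape the saved-view renderer expects is an assumption.

```python
# Hypothetical sketch — not the real FibratusEdrAnalyzer. Output key names
# are illustrative; only the input fields are taken from the commit message.
def normalize_fibratus_event(event: dict) -> dict:
    proc = event.get("proc") or {}
    return {
        "process_name": proc.get("name"),
        "process_path": proc.get("exe"),
        "command_line": proc.get("cmdline"),
        "parent_name": proc.get("parent_name"),
        "parent_command_line": proc.get("parent_cmdline"),
        "ancestors": proc.get("ancestors") or [],
        "mitre": {
            "tactic": event.get("tactic.id"),
            "technique": event.get("technique.id"),
            "subtechnique": event.get("subtechnique.id"),
        },
    }
```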
Whiskers /api/info now reports telemetry_sources: ['fibratus'] when
fibratus.exe is at C:\Program Files\Fibratus\Bin\, so the
orchestrator can preflight before dispatching. wevtutil's single-quoted
attribute output is now parsed correctly.
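The quoting fix amounts to accepting either quote style when pulling attributes out of the record XML. A minimal sketch (the real agent is Rust; this only illustrates the pitfall — a double-quote-only pattern silently drops every timestamp):

```python
import re
from typing import Optional

# Quote-agnostic attribute extraction: wevtutil can emit
# SystemTime='...' with single quotes instead of double quotes.
_SYSTIME = re.compile(r"SystemTime=['\"]([^'\"]+)['\"]")

def system_time(record_xml: str) -> Optional[str]:
    m = _SYSTIME.search(record_xml)
    return m.group(1) if m else None
```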
Dashboard reachability cache (services.edr_health). 30s TTL +
background poller every 15s. Per-probe timeouts dropped 4s/5s -> 2s.
First load post-boot waits at most one probe cycle; every subsequent
load <5ms (cache hit).
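The cache shape above can be sketched like this. Names and API are illustrative, not the actual services.edr_health module: a TTL-guarded value, a background refresher, and a cold path that does at most one probe cycle.

```python
import threading
import time

class TTLCache:
    """Minimal sketch of the reachability cache (illustrative, not the
    real services.edr_health API): 30 s TTL, 15 s background poller."""

    def __init__(self, probe, ttl=30.0, poll_interval=15.0):
        self._probe = probe              # callable doing the expensive probes
        self._ttl = ttl
        self._poll_interval = poll_interval
        self._lock = threading.Lock()
        self._value = None
        self._stamp = 0.0

    def _refresh(self):
        value = self._probe()            # run probes outside the lock
        with self._lock:
            self._value, self._stamp = value, time.monotonic()

    def get(self):
        with self._lock:
            if self._value is not None and time.monotonic() - self._stamp < self._ttl:
                return self._value       # warm hit: no probe, sub-millisecond
        self._refresh()                  # cold path: one probe cycle
        with self._lock:
            return self._value

    def start_poller(self):
        def loop():
            while True:                  # daemon thread dies with the process
                self._refresh()
                time.sleep(self._poll_interval)
        threading.Thread(target=loop, daemon=True).start()
```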
GrumpyCats package split: the 1085-line monolith becomes:
  grumpycat.py — orchestrator (14 lines)
  cli/ — parser, handlers, runner
  litterbox_client/ — base + per-domain mixins (files, analysis,
    doppelganger, results, edr, reports, system) composed into
    LitterBoxClient.
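The mixin composition pattern looks roughly like this. The method names on the mixins match the client surface listed in the CHANGELOG, but the transport stub and class layout are illustrative, not the real litterbox_client internals:

```python
class BaseClient:
    """Transport layer — the one place that knows how to talk to LitterBox."""
    def __init__(self, base_url: str):
        self.base_url = base_url

    def _get(self, path: str) -> str:
        return f"GET {self.base_url}{path}"  # stand-in for the real HTTP call


class FilesMixin:
    def get_files_summary(self):
        return self._get("/api/files")


class EdrMixin:
    def list_edr_profiles(self):
        return self._get("/api/edr/profiles")


class LitterBoxClient(FilesMixin, EdrMixin, BaseClient):
    """Each domain contributes one mixin; callers import exactly one name."""
```

Each mixin stays small enough to read in one sitting, and `from litterbox_client import LitterBoxClient` remains the only public entry point.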
LitterBoxMCP.py rewires its one import. New CLI subcommand
fibratus-alerts and matching MCP tool fibratus_alerts_since pull
Fibratus alerts via a LitterBox passthrough endpoint
(/api/edr/fibratus/<profile>/alerts/since) for wire-checking the agent
without dispatching a payload.
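The wire-check request can be sketched as a URL builder. This assumes the passthrough forwards the same `from`/`until` window params the Whiskers endpoint takes; the exact param names on the passthrough side are an assumption:

```python
from datetime import datetime, timezone
from urllib.parse import quote, urlencode

# Sketch of the request the fibratus-alerts tool makes (param names on the
# LitterBox passthrough are assumed to mirror the Whiskers endpoint).
def fibratus_alerts_url(base_url: str, profile: str, since_iso: str,
                        until_iso: str = None) -> str:
    until_iso = until_iso or datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    path = f"/api/edr/fibratus/{quote(profile, safe='')}/alerts/since"
    query = urlencode({"from": since_iso, "until": until_iso})
    return f"{base_url.rstrip('/')}{path}?{query}"
```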
CHANGELOG updated.
@@ -11,6 +11,7 @@ All notable changes to this project will be documented in this file.
- RedEdr now captures Microsoft-Windows-Kernel-File / -Network / -Audit-API-Calls / Antimalware-Engine ETW events; new tabs surface File Ops / Network / Audit API / Defender with Process Tree panel and ETW Provider Diagnostics
- Defender threat verdicts at runtime contribute +50 to the Detection Score (only verdicts; scan activity stays descriptive)
- Whiskers — single-binary Rust HTTP agent (`Whiskers/`) for dispatching payloads to a separate EDR-instrumented Windows VM
+- Fibratus profile kind (`kind: fibratus` in `Config/edr_profiles/<name>.yml`) — pull-from-event-log alternative to Elastic Defend, matching DetonatorAgent's `FibratusEdrPlugin.cs` integration shape. The operator configures Fibratus on the EDR VM with `alertsenders.eventlog: {enabled: true, format: json}`; rule matches land in the Windows Application event log under `Provider=Fibratus`. Whiskers gains a `GET /api/alerts/fibratus/since?from=…&until=…` endpoint that wevtutil-queries the log for records inside the run window and returns the raw JSON `<Data>` blobs (the agent does no parsing). The new `FibratusEdrAnalyzer` mirrors the Elastic two-phase shape — Phase 1 exec, Phase 2 polls Whiskers — and normalizes Fibratus's actual schema (`events[].proc.{name,exe,cmdline,parent_name,parent_cmdline,ancestors}` + bare `tactic.id`/`technique.id`/`subtechnique.id` labels) into the saved-view renderer's dict shape. No new alert-source coupling on Whiskers beyond one event-log query endpoint.
- Whiskers `--install` / `--uninstall` flags register an `ONLOGON` Windows scheduled task so the agent auto-starts at user logon (no UAC, runs as the invoking user)
- Whiskers `--samples-dir` flag; default drop path is now `<agent-exe-dir>/samples/` (auto-created on first write) instead of `C:\Users\Public\Downloads\`
- Whiskers chunked-XOR write (64 KiB working buffer) — multi-MB payloads finish in milliseconds instead of the 10+ seconds the byte-by-byte loop took, which had been timing out the orchestrator
@@ -25,7 +26,7 @@ All notable changes to this project will be documented in this file.
- New pages: system dashboard at `/` (live scanner availability + EDR agent reachability, polls every minute), `/whiskers` agent inventory, `/analyze/all/<target>` "All" pipeline coordinator, `/results/edr/<profile>/<target>` saved-view (rich detail — MITRE chips, call stack, expandable per-alert detail, raw `_source` — backed by the same renderer as the live scan view)
- "All" analysis mode: client-side coordinator runs Static + every EDR profile in parallel; Dynamic waits only for Static (EDR is on a remote VM, no local resource contention)
- DLL execution support — payloads ending in `.dll` spawn via `rundll32.exe <path>,<entry> [args...]` in both the Whiskers agent and the local Dynamic analyzer; entry point comes from the executable-args field
-- GrumpyCats client gained EDR + scanner-health methods (`analyze_edr`, `get_edr_results`, `get_edr_index`, `list_edr_profiles`, `get_edr_agents_status`, `get_scanners_status`, `wait_for_edr_completion`); CLI subcommands `edr-run` / `edr-results` / `edr-profiles` / `edr-status` / `scanners`; matching MCP tools surface the same to LLM clients
+- GrumpyCats client gained EDR + scanner-health methods (`analyze_edr`, `get_edr_results`, `get_edr_index`, `list_edr_profiles`, `get_edr_agents_status`, `get_scanners_status`, `wait_for_edr_completion`, `fibratus_alerts_since`); CLI subcommands `edr-run` / `edr-results` / `edr-profiles` / `edr-status` / `scanners` / `fibratus-alerts`; matching MCP tools surface the same to LLM clients

### Changed

- Drop-zone moved from `/` to `/upload` (GET); `/` is now the system dashboard. Existing POST `/upload` for file uploads is unchanged.
@@ -34,8 +35,8 @@ All notable changes to this project will be documented in this file.
- Status flow simplified — `blocked_polling_alerts` collapsed into `polling_alerts` plus an orthogonal `summary.blocked_by_av` flag (the polling itself is purely LitterBox→Elastic, so the EDR-VM blocked/clean state shouldn't affect the polling status).
- Sidebar nav: added Dashboard entry; Upload nav points to `/upload`; "Agents" renamed to "Whiskers" (route also at `/whiskers`).
- File-info detection score no longer folds in EDR results (EDR is its own analysis type with its own page; it shouldn't bleed into the static + dynamic + PE score).

### Changed

- Dashboard `/api/edr/agents/status` is now backed by a 30s TTL cache pre-warmed by a background poller (`services.edr_health`); per-probe timeouts dropped from 4s/5s to 2s. Cold path under 2s, every subsequent dashboard load <5ms (warm cache hit), and the auto-refresh tick stays within cache TTL.
- Whiskers `/api/info` now reports `telemetry_sources: ["fibratus"]` when Fibratus is installed at `C:\Program Files\Fibratus\Bin\fibratus.exe` so the orchestrator can preflight before dispatching to a Fibratus profile.
- Backend split into Flask blueprints, services, and a `utils/` package; subprocess analyzers consolidated under `BaseSubprocessAnalyzer`
- Frontend split into per-tool ES6 modules with shared utils; reusable Jinja macros for scanner tables
- Full UI redesign on a terminal/IDE shell with new `.lb-*` design tokens and JetBrains Mono throughout

@@ -0,0 +1,86 @@
# Fibratus EDR profile for LitterBox.
#
# Copy this file to `fibratus.yml` (drop the .example) and fill in your values.
# Real profiles (*.yml without .example) are gitignored.
#
# Architecture (pull-from-event-log model — the same shape DetonatorAgent's
# FibratusEdrPlugin.cs uses):
#
#   payload --> Whiskers.exe (your EDR VM, port 8080) --> spawned process
#                                                               |
#                                                               v  ETW
#                                                           Fibratus
#                                                               |
#                                                               v  alertsenders.eventlog
#                                            Windows Application Event Log
#                                                               ^
#                                                               |  wevtutil qe Application ...
#                                       Whiskers /api/alerts/fibratus/since
#                                                               ^
#                                                               |  GET (LitterBox initiates)
#                                             this profile's analyzer
#
# Whiskers stays the same one-binary deployment as for Elastic profiles —
# only difference is the new GET /api/alerts/fibratus/since endpoint, which
# wevtutil-queries the local Application log for `Provider=Fibratus`
# records inside the run's time window. No inbound network direction
# from the VM, no shared secrets, no buffer.
#
# Required setup on the EDR VM (one-time):
#
# 1. Install Fibratus from https://github.com/rabbitstack/fibratus/releases
#    (Whiskers expects the default install path, C:\Program Files\Fibratus\,
#    to detect Fibratus presence; /api/info will report
#    `telemetry_sources: ["fibratus"]` once it's there.)
#
# 2. Edit `%PROGRAMFILES%\Fibratus\Config\fibratus.yml` so rule matches
#    land in the Application event log as JSON:
#
#      alertsenders:
#        eventlog:
#          enabled: true
#          format: json   # CRITICAL — the analyzer parses the <Data> field as JSON
#
#    No other section needs to change. Leave `output.console` / `output.http`
#    / `output.elasticsearch` alone — we don't use the event-stream outputs.
#
# 3. Make sure `filters.rules.enabled: true` and `filters.rules.from-paths`
#    points at the rule pack you want active. Fibratus ships a useful
#    default set under `%PROGRAMFILES%\Fibratus\Rules\`.
#
# 4. Restart the Fibratus service:
#      net stop fibratus
#      net start fibratus
#
# 5. Smoke-test the path: trigger a known rule (e.g. spawn `mshta.exe http://...`)
#    and confirm an entry appears under Event Viewer > Windows Logs >
#    Application with Source `Fibratus` and a JSON `Data` payload.

name: "fibratus"
display_name: "Fibratus"

# Kind discriminator — selects the analyzer + alert source. Without this,
# the loader assumes elastic and will demand elastic_url / elastic_apikey.
kind: "fibratus"

# Whiskers agent on the EDR VM. The agent self-reports its hostname via
# GET /api/info — no need to set it here. Move the agent to a different
# VM and the integration follows it.
agent_url: "http://192.168.1.100:8080"

# How long to wait after a successful payload exit before giving up on
# alert correlation. Fibratus writes alerts to the event log in real-time
# so this is much shorter than the Elastic equivalent.
wait_seconds_for_alerts: 30

# Schema parity with Elastic profiles — Fibratus is detection-only so the
# AV-block path doesn't really apply, but the analyzer reads this anyway.
av_block_wait_seconds: 30

# Hard cap on payload runtime. Whiskers kills the process after this many
# seconds and the analyzer collects whatever stdout/stderr was captured.
exec_timeout_seconds: 60

# Optional: where on the EDR VM to drop the payload. Defaults to Whiskers's
# own samples/ directory if unset.
# drop_path: "C:\\Users\\Public\\sample.exe"

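The event-log pull the diagram above describes can be sketched as the command the agent would run. This is an assumption expressed in Python (the actual agent is Rust and its exact XPath filter is not shown in this commit); `wevtutil qe <log> /q:<xpath> /f:xml` is the real CLI shape.

```python
# Assumption: a plausible wevtutil invocation for the Provider=Fibratus
# time-window query; the real agent's filter may differ.
def wevtutil_args(since_iso: str, until_iso: str) -> list:
    xpath = (
        "*[System[Provider[@Name='Fibratus']"
        f" and TimeCreated[@SystemTime>='{since_iso}'"
        f" and @SystemTime<='{until_iso}']]]"
    )
    return ["wevtutil", "qe", "Application", f"/q:{xpath}", "/f:xml"]
```

On Windows the returned list could be handed to `subprocess.run(..., capture_output=True)` and the XML records split out of stdout.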
@@ -20,7 +20,7 @@ sys.path.insert(0, str(Path(__file__).parent))
from mcp.server.fastmcp import FastMCP
from pydantic import Field

-from grumpycat import LitterBoxClient
+from litterbox_client import LitterBoxClient

# CRITICAL: stdio transport speaks JSON-RPC over stdout. Logs MUST go to
# stderr or they corrupt the protocol stream and break the connection.
@@ -237,6 +237,25 @@ async def get_edr_index(
    return await _call(client.get_edr_index, file_hash)


@mcp.tool()
async def fibratus_alerts_since(
    profile: Annotated[str, Field(description="Fibratus profile name (must be kind=fibratus).")],
    since_iso: Annotated[str, Field(description="ISO8601 lower bound in UTC, e.g. '2026-04-30T00:00:00Z'.")],
    until_iso: Annotated[Optional[str], Field(description="ISO8601 upper bound in UTC; defaults to now.")] = None,
) -> dict:
    """Test/debug: pull Fibratus rule-match alerts via Whiskers without dispatching a payload.

    LitterBox proxies the request to the registered profile's Whiskers agent,
    which `wevtutil`-queries the EDR VM's Application event log for
    `Provider=Fibratus` records inside the time window. Useful right after
    Fibratus is set up on a new VM to verify `alertsenders.eventlog` with
    `format: json` is actually emitting alerts before running real payloads.
    Returns `{supported, events: [{time_created, event_id, data}]}` —
    `data` is the raw JSON Fibratus produced.
    """
    return await _call(client.fibratus_alerts_since, profile, since_iso, until_iso)


# =============================================================================
# System health — local scanner inventory
# =============================================================================

@@ -0,0 +1,11 @@
"""CLI parser + per-command handlers.

The orchestrator (`grumpycat.py`) imports `build_parser`, `setup_client`,
`COMMAND_HANDLERS`, and `run` from here and stays minimal.
"""

from .handlers import COMMAND_HANDLERS
from .parser import build_parser, setup_client
from .runner import run

__all__ = ["build_parser", "setup_client", "COMMAND_HANDLERS", "run"]

@@ -0,0 +1,292 @@
"""Command handlers — one `_cmd_*` per CLI subcommand, plus the
`COMMAND_HANDLERS` dispatch table. Each handler is intentionally thin:
it pulls fields off `args`, calls the right client method(s), and
prints / json.dumps the result.

Grouped by domain via comment headers to mirror the parser layout.
"""

import json
import sys
from typing import Dict

from litterbox_client import LitterBoxClient


# =============================================================================
# Intake — upload, analyze-pid, delete
# =============================================================================


def _cmd_upload(client: LitterBoxClient, args):
    result = client.upload_file(args.file, file_name=args.name)
    file_hash = result["file_info"]["md5"]
    print(f"File uploaded successfully. Hash: {file_hash}")

    for analysis_type in args.analysis or []:
        print(f"Running {analysis_type} analysis...")
        analysis_args = args.args if analysis_type == "dynamic" else None
        result = client.analyze_file(
            file_hash, analysis_type,
            cmd_args=analysis_args, wait_for_completion=True,
        )
        _print_analysis_result(result)


def _cmd_upload_driver(client: LitterBoxClient, args):
    result = client.upload_and_analyze_driver(
        args.file, file_name=args.name, run_holygrail=args.holygrail,
    )
    file_hash = result["upload"]["file_info"]["md5"]
    print(f"Driver uploaded successfully. Hash: {file_hash}")

    if args.holygrail and result["holygrail"]:
        if "error" in result["holygrail"]:
            print(f"HolyGrail analysis failed: {result['holygrail']['error']}")
        else:
            print("HolyGrail analysis completed")
            print(json.dumps(result["holygrail"], indent=2))


def _cmd_analyze_pid(client: LitterBoxClient, args):
    print(f"Analyzing process {args.pid}...")
    result = client.analyze_file(
        str(args.pid), "dynamic",
        cmd_args=args.args, wait_for_completion=args.wait,
    )
    _print_analysis_result(result)


def _cmd_delete(client: LitterBoxClient, args):
    print("Deletion Results:")
    print(json.dumps(client.delete_file(args.hash), indent=2))


# =============================================================================
# Results
# =============================================================================


def _cmd_results(client: LitterBoxClient, args):
    if args.comprehensive:
        result = client.get_comprehensive_results(args.target)
        print("Comprehensive Results:")
        print(json.dumps(result, indent=2))
        return

    if not args.type:
        print("Please specify --type or use --comprehensive")
        sys.exit(1)

    if args.type == "holygrail":
        result = client.get_holygrail_results(args.target)
    else:
        result = client.get_results(args.target, args.type)
    print(json.dumps(result, indent=2))


# =============================================================================
# EDR — Whiskers + Elastic / Fibratus
# =============================================================================


def _cmd_edr_run(client: LitterBoxClient, args):
    print(f"Dispatching {args.hash} to EDR profile '{args.profile}'...")
    phase1 = client.analyze_edr(
        args.hash, args.profile,
        cmd_args=args.args, xor_key=args.xor_key,
    )
    print("Phase-1 result:")
    print(json.dumps(phase1, indent=2))

    if args.wait and (phase1 or {}).get("status") == "polling_alerts":
        print(f"\nPolling for Phase-2 settle (timeout {args.timeout}s)...")
        final = client.wait_for_edr_completion(args.hash, args.profile, timeout=args.timeout)
        print("Phase-2 result:")
        print(json.dumps(final, indent=2))


def _cmd_edr_results(client: LitterBoxClient, args):
    if args.profile:
        result = client.get_edr_results(args.hash, args.profile)
    else:
        result = client.get_edr_index(args.hash)
    print(json.dumps(result, indent=2))


def _cmd_edr_profiles(client: LitterBoxClient, args):
    print(json.dumps(client.list_edr_profiles(), indent=2))


def _cmd_edr_status(client: LitterBoxClient, args):
    print(json.dumps(client.get_edr_agents_status(), indent=2))


def _cmd_scanners(client: LitterBoxClient, args):
    print(json.dumps(client.get_scanners_status(), indent=2))


def _cmd_fibratus_alerts(client: LitterBoxClient, args):
    result = client.fibratus_alerts_since(args.profile, args.since, args.until)
    if not result.get("supported", True):
        print(f"Profile '{args.profile}' agent reports Fibratus is NOT installed (telemetry_sources empty).")
        print("Confirm C:\\Program Files\\Fibratus\\Bin\\fibratus.exe exists on the VM.")
        return
    events = result.get("events") or []
    print(f"Fibratus alerts for profile '{args.profile}' between {args.since} and {args.until or 'now'}:")
    print(f"  count: {len(events)}")
    if events:
        # First 5 alert titles + severities for a quick sanity read.
        # Full JSON for everything follows so tools can pipe it.
        for ev in events[:5]:
            try:
                payload = json.loads(ev.get("data") or "{}")
            except (ValueError, TypeError):
                payload = {}
            t = payload.get("title") or "<unparsed>"
            sev = payload.get("severity") or "?"
            ts = ev.get("time_created") or "?"
            print(f"  - [{sev:>8}] {ts}  {t}")
        if len(events) > 5:
            print(f"    ... and {len(events) - 5} more")
    print()
    print(json.dumps(result, indent=2))


# =============================================================================
# Doppelganger
# =============================================================================


def _cmd_doppelganger_scan(client: LitterBoxClient, args):
    print(f"Running doppelganger scan with type: {args.type}")
    print(json.dumps(client.run_blender_scan(), indent=2))


def _cmd_doppelganger_analyze(client: LitterBoxClient, args):
    print(f"Running doppelganger analysis with type: {args.type}")
    if args.type == "blender":
        result = client.compare_with_blender(args.hash)
    else:
        result = client.analyze_with_fuzzy(args.hash, args.threshold)
    print(json.dumps(result, indent=2))


def _cmd_doppelganger_db(client: LitterBoxClient, args):
    print("Creating doppelganger fuzzy database...")
    print(json.dumps(client.create_fuzzy_database(args.folder, args.extensions), indent=2))


# =============================================================================
# Reports
# =============================================================================


def _cmd_report(client: LitterBoxClient, args):
    if args.browser:
        print(f"Opening report for {args.target} in browser...")
        if not client.open_report_in_browser(args.target):
            print("Failed to open report in browser.")
            sys.exit(1)
    elif args.download:
        print(f"Downloading report for {args.target}...")
        output_path = client.download_report(args.target, args.output)
        print(f"Report saved to: {output_path}")
    else:
        print(client.get_report(args.target))


# =============================================================================
# System
# =============================================================================


def _cmd_status(client: LitterBoxClient, args):
    if args.full:
        result = client.get_system_status()
        print("System Status:")
    else:
        result = client.check_health()
        print("Health Check:")
    print(json.dumps(result, indent=2))


def _cmd_health(client: LitterBoxClient, args):
    result = client.check_health()
    status = result.get("status", "unknown")
    print("Service is healthy" if status == "ok" else f"Service status: {status}")
    print(json.dumps(result, indent=2))


def _cmd_files(client: LitterBoxClient, args):
    print("Files Summary:")
    print(json.dumps(client.get_files_summary(), indent=2))


def _cmd_cleanup(client: LitterBoxClient, args):
    if args.all:
        args.uploads = args.results = args.analysis = True
    result = client.cleanup(
        include_uploads=args.uploads,
        include_results=args.results,
        include_analysis=args.analysis,
    )
    print("Cleanup Results:")
    print(json.dumps(result, indent=2))


# =============================================================================
# Shared formatting helper
# =============================================================================


def _print_analysis_result(result: Dict):
    """Pretty-print an `analyze_file` result, branching on status."""
    status = result.get("status", "unknown")

    if status == "early_termination":
        print("Process terminated early:")
        print(f"  Error: {result.get('error')}")
        details = result.get("details", {})
        if details:
            print("  Details:")
            for key, value in details.items():
                print(f"    {key}: {value}")
    elif status == "error":
        print(f"Analysis failed: {result.get('error')}")
        if "details" in result:
            print(f"  Details: {result['details']}")
    elif status == "success":
        print("Analysis completed successfully")
        print(json.dumps(result.get("results", result), indent=2))
    else:
        print(json.dumps(result, indent=2))


# =============================================================================
# Dispatch table — keyed by the CLI subcommand name. The orchestrator
# (grumpycat.py) looks up the right handler here.
# =============================================================================


COMMAND_HANDLERS = {
    "upload": _cmd_upload,
    "upload-driver": _cmd_upload_driver,
    "analyze-pid": _cmd_analyze_pid,
    "delete": _cmd_delete,
    "results": _cmd_results,
    "edr-run": _cmd_edr_run,
    "edr-results": _cmd_edr_results,
    "edr-profiles": _cmd_edr_profiles,
    "edr-status": _cmd_edr_status,
    "fibratus-alerts": _cmd_fibratus_alerts,
    "scanners": _cmd_scanners,
    "doppelganger-scan": _cmd_doppelganger_scan,
    "doppelganger-analyze": _cmd_doppelganger_analyze,
    "doppelganger-db": _cmd_doppelganger_db,
    "report": _cmd_report,
    "status": _cmd_status,
    "health": _cmd_health,
    "files": _cmd_files,
    "cleanup": _cmd_cleanup,
}

@@ -0,0 +1,218 @@
|
||||
"""argparse parser for `grumpycat` + the helper that wires CLI args
|
||||
into a configured `LitterBoxClient`.
|
||||
|
||||
Subcommand definitions are grouped by domain via comment headers so the
|
||||
parser stays readable as the surface grows.
|
||||
"""
|
||||
|
||||
import argparse
|
||||
import logging
|
||||
|
||||
from litterbox_client import LitterBoxClient
|
||||
|
||||
|
||||
def build_parser() -> argparse.ArgumentParser:
|
||||
"""Build the full CLI parser including every subcommand."""
|
||||
parser = argparse.ArgumentParser(
|
||||
description="LitterBox Payload-Analysis Client",
|
||||
formatter_class=argparse.RawDescriptionHelpFormatter,
|
||||
epilog=_EPILOG,
|
||||
)
|
||||
|
||||
# Global options apply to every subcommand.
|
||||
parser.add_argument("--debug", action="store_true", help="Enable debug logging")
|
||||
parser.add_argument("--url", default="http://127.0.0.1:1337", help="LitterBox server URL")
|
||||
parser.add_argument("--timeout", type=int, default=120, help="Request timeout in seconds")
|
||||
parser.add_argument("--no-verify-ssl", action="store_true", help="Disable SSL verification")
|
||||
parser.add_argument("--proxy", help="Proxy URL (e.g., http://proxy:8080)")
|
||||
|
||||
subparsers = parser.add_subparsers(dest="command", help="Command to execute")
|
||||
|
||||
_add_intake(subparsers)
|
||||
_add_results(subparsers)
|
||||
_add_edr(subparsers)
|
||||
_add_doppelganger(subparsers)
|
||||
_add_reports(subparsers)
|
||||
_add_system(subparsers)
|
||||
|
||||
return parser
|
||||
|
||||
|
||||
def setup_client(args) -> LitterBoxClient:
|
||||
"""Build a `LitterBoxClient` from parsed CLI args."""
|
||||
client_kwargs = {
|
||||
"base_url": args.url,
|
||||
"timeout": args.timeout,
|
||||
"verify_ssl": not args.no_verify_ssl,
|
||||
"logger": logging.getLogger("litterbox"),
|
||||
}
|
||||
if args.proxy:
|
||||
client_kwargs["proxy_config"] = {"http": args.proxy, "https": args.proxy}
|
||||
return LitterBoxClient(**client_kwargs)
|
||||
|
||||
|
||||
# =============================================================================
|
||||
# Subcommand groups — one private helper per domain so build_parser() stays
|
||||
# easy to scan top-down.
|
||||
# =============================================================================
|
||||
|
||||
|
||||
def _add_intake(subparsers):
|
||||
"""upload, upload-driver, analyze-pid, delete."""
|
||||
upload = subparsers.add_parser("upload", help="Upload file for analysis")
|
||||
upload.add_argument("file", help="File to upload")
|
||||
upload.add_argument("--name", help="Custom name for the file")
|
||||
upload.add_argument(
|
||||
"--analysis", nargs="+", choices=["static", "dynamic"],
|
||||
help="Run analysis after upload",
|
||||
)
|
||||
upload.add_argument("--args", nargs="+", help="Command line arguments for dynamic analysis")
|
||||
|
||||
driver = subparsers.add_parser("upload-driver", help="Upload kernel driver")
|
||||
driver.add_argument("file", help="Driver file to upload")
|
||||
driver.add_argument("--name", help="Custom name for the driver")
|
||||
driver.add_argument("--holygrail", action="store_true", help="Run HolyGrail analysis")
|
||||
|
||||
apid = subparsers.add_parser("analyze-pid", help="Analyze running process")
|
||||
apid.add_argument("pid", type=int, help="Process ID to analyze")
|
||||
apid.add_argument("--wait", action="store_true", help="Wait for analysis completion")
|
||||
apid.add_argument("--args", nargs="+", help="Command line arguments")
|
||||
|
||||
dl = subparsers.add_parser("delete", help="Delete file and its results")
|
||||
dl.add_argument("hash", help="File hash to delete")
|
||||
|
||||
|
||||
def _add_results(subparsers):
|
||||
"""results."""
|
||||
res = subparsers.add_parser("results", help="Get analysis results")
|
||||
res.add_argument("target", help="File hash or PID")
|
||||
res.add_argument(
|
||||
"--type", choices=["static", "dynamic", "info", "holygrail"],
|
||||
help="Type of results to retrieve",
|
||||
)
|
||||
res.add_argument(
|
||||
"--comprehensive", action="store_true",
|
||||
help="Get all available results in parallel",
|
||||
)
|
||||
|
||||
|
||||
def _add_edr(subparsers):
|
||||
"""edr-run, edr-results, edr-profiles, edr-status, scanners, fibratus-alerts."""
|
||||
run = subparsers.add_parser(
|
||||
"edr-run", help="Dispatch a payload to an EDR profile (Whiskers + Elastic / Fibratus)",
|
||||
)
|
||||
run.add_argument("hash", help="File hash returned by upload")
|
||||
run.add_argument("--profile", required=True, help="EDR profile name (Config/edr_profiles/<name>.yml)")
|
||||
run.add_argument("--args", nargs="+", help="Command-line arguments passed to the payload")
|
||||
run.add_argument("--xor-key", type=int, help="Single byte (0-255) used to XOR-encode the payload in transit")
|
||||
run.add_argument("--wait", action="store_true", help="Poll until Phase-2 settles")
|
||||
run.add_argument("--timeout", type=float, default=180.0, help="Phase-2 wait timeout in seconds (default 180)")
|
||||
|
||||
edrres = subparsers.add_parser("edr-results", help="Read saved EDR findings for a target")
|
||||
edrres.add_argument("hash", help="File hash")
|
||||
edrres.add_argument("--profile", help="Specific profile (omit for the cross-profile index)")
|
||||
|
||||
subparsers.add_parser("edr-profiles", help="List registered EDR profiles")
|
||||
subparsers.add_parser("edr-status", help="Live probe of every EDR profile (Whiskers + backend reachability)")
|
||||
subparsers.add_parser("scanners", help="Inventory of configured local analyzers and whether their binaries exist")
|
||||
|
||||
# Fibratus testing helper — wire-check the agent's event-log query
|
||||
# path without running a payload. Useful right after Fibratus is set
|
||||
# up on a new VM to confirm `format: json` + rule matches reach us.
|
||||
fib = subparsers.add_parser(
|
||||
"fibratus-alerts",
|
||||
help="Test/debug: pull Fibratus alerts via Whiskers for a registered profile (no exec)",
|
||||
)
|
||||
fib.add_argument("--profile", required=True, help="Fibratus profile name (kind=fibratus)")
|
||||
fib.add_argument(
|
||||
"--from", dest="since", required=True,
|
||||
help="ISO8601 lower bound (UTC), e.g. 2026-04-30T00:00:00Z",
|
||||
)
|
||||
fib.add_argument("--until", dest="until", help="ISO8601 upper bound (UTC); defaults to now")
|
||||
|
||||
|
||||
def _add_doppelganger(subparsers):
    """doppelganger-scan / -analyze / -db."""
    scan = subparsers.add_parser("doppelganger-scan", help="Run doppelganger scan")
    scan.add_argument(
        "--type", choices=["blender"], default="blender",
        help="Type of scan to perform",
    )

    ana = subparsers.add_parser("doppelganger-analyze", help="Doppelganger analysis")
    ana.add_argument("hash", help="File hash to analyze")
    ana.add_argument(
        "--type", choices=["blender", "fuzzy"], required=True,
        help="Type of analysis",
    )
    ana.add_argument(
        "--threshold", type=int, default=1,
        help="Similarity threshold for fuzzy analysis",
    )

    db = subparsers.add_parser("doppelganger-db", help="Create doppelganger database")
    db.add_argument("--folder", required=True, help="Folder path to process")
    db.add_argument("--extensions", nargs="+", help="File extensions to include")


def _add_reports(subparsers):
    """report."""
    rpt = subparsers.add_parser("report", help="Generate analysis report")
    rpt.add_argument("target", help="File hash or process ID")
    rpt.add_argument("--download", action="store_true", help="Download the report")
    rpt.add_argument("--output", help="Output path for downloaded report")
    rpt.add_argument("--browser", action="store_true", help="Open report in browser")


def _add_system(subparsers):
    """status, health, files, cleanup."""
    status = subparsers.add_parser("status", help="Get system status")
    status.add_argument("--full", action="store_true", help="Get comprehensive status")

    subparsers.add_parser("health", help="Check service health")
    subparsers.add_parser("files", help="Get summary of all analyzed files")

    cu = subparsers.add_parser("cleanup", help="Clean up analysis artifacts")
    cu.add_argument("--all", action="store_true", help="Clean all artifacts")
    cu.add_argument("--uploads", action="store_true", help="Clean upload directory")
    cu.add_argument("--results", action="store_true", help="Clean results directory")
    cu.add_argument("--analysis", action="store_true", help="Clean analysis artifacts")


_EPILOG = """
Examples:
  # Upload and analyze a file
  %(prog)s upload malware.exe --analysis static dynamic

  # Upload and analyze a kernel driver
  %(prog)s upload-driver rootkit.sys --holygrail

  # Analyze a running process
  %(prog)s analyze-pid 1234 --wait

  # Get comprehensive results
  %(prog)s results abc123def --comprehensive

  # Run Doppelganger operations
  %(prog)s doppelganger-scan --type blender
  %(prog)s doppelganger-analyze abc123def --type fuzzy --threshold 85

  # EDR (Whiskers + Elastic Defend or Fibratus)
  %(prog)s edr-profiles
  %(prog)s edr-status
  %(prog)s edr-run abc123def --profile elastic --wait
  %(prog)s edr-run abc123def --profile fibratus --wait
  %(prog)s edr-results abc123def --profile fibratus

  # EDR testing — verify the Fibratus alert wire without running a payload
  %(prog)s fibratus-alerts --profile fibratus --from 2026-04-30T00:00:00Z

  # System operations
  %(prog)s status --full
  %(prog)s scanners
  %(prog)s cleanup --all

  # Report operations
  %(prog)s report abc123def --browser
  %(prog)s report abc123def --download --output ./reports/
"""
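`_EPILOG` is presumably attached in `build_parser` via `argparse.RawDescriptionHelpFormatter`, which preserves the hand-made layout while argparse still interpolates `%(prog)s` when rendering help; the old monolithic parser used the same pattern. A self-contained sketch (the `prog` and `description` values are illustrative):

```python
import argparse

# Trimmed copy of the module-level epilog, for illustration only.
_EPILOG = """
Examples:
  %(prog)s upload malware.exe --analysis static dynamic
"""


def build_parser() -> argparse.ArgumentParser:
    # RawDescriptionHelpFormatter keeps the epilog's layout verbatim;
    # argparse still substitutes %(prog)s when formatting help output.
    return argparse.ArgumentParser(
        prog="grumpycat",
        description="LitterBox payload-analysis client",
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog=_EPILOG,
    )
```

With the default `HelpFormatter`, argparse would re-wrap the epilog and destroy the per-line example layout, which is why the raw variant matters here.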
@@ -0,0 +1,59 @@
"""CLI runner — parse args, dispatch, handle exceptions uniformly.

Lifted out of `grumpycat.py` so the orchestrator stays a few lines.
"""

import logging
import sys

from litterbox_client import LitterBoxAPIError, LitterBoxError

from .handlers import COMMAND_HANDLERS
from .parser import build_parser, setup_client


def run(argv=None) -> int:
    """Parse `argv`, dispatch to the right handler, return an exit code.

    Catches the client's structured exceptions and turns them into clean
    error messages + non-zero exit codes. Unexpected exceptions print
    their type/message; with `--debug`, the full stack trace.
    """
    parser = build_parser()
    args = parser.parse_args(argv)

    log_level = logging.DEBUG if args.debug else logging.INFO
    logging.basicConfig(
        level=log_level,
        format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
    )

    if not args.command:
        parser.print_help()
        return 0

    handler = COMMAND_HANDLERS.get(args.command)
    if handler is None:
        parser.print_help()
        return 1

    try:
        with setup_client(args) as client:
            handler(client, args)
        return 0
    except LitterBoxAPIError as e:
        logging.error(f"API Error (Status {e.status_code}): {str(e)}")
        if args.debug and e.response:
            logging.debug(f"Response data: {e.response}")
        return 1
    except LitterBoxError as e:
        logging.error(f"Client Error: {str(e)}")
        return 1
    except KeyboardInterrupt:
        print("\nOperation cancelled by user")
        return 130
    except Exception as e:
        logging.error(f"Unexpected Error: {str(e)}")
        if args.debug:
            logging.exception("Detailed error information:")
        return 1
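The exit-code contract `run()` implements (0 on success or bare help, 1 on errors and unknown commands, 130 on Ctrl-C) is what the slim `grumpycat.py` orchestrator passes straight to `sys.exit`. A self-contained sketch of that contract, with stand-in stubs for `cli.parser` / `cli.handlers` (the stub names are hypothetical, for illustration only):

```python
import sys
from typing import Callable, Dict

# Stand-in for cli.handlers.COMMAND_HANDLERS; real handlers take (client, args).
COMMAND_HANDLERS: Dict[str, Callable] = {
    "health": lambda client, args: None,
}


def run(argv=None) -> int:
    """Mirror cli.run's dispatch shape: 0 ok, 1 unknown command."""
    argv = argv or []
    if not argv:
        return 0  # no command: the real runner prints help and exits 0
    handler = COMMAND_HANDLERS.get(argv[0])
    if handler is None:
        return 1  # unknown command: help + non-zero exit
    handler(None, argv)
    return 0


if __name__ == "__main__":
    # The real grumpycat.py is just: sys.exit(cli.run())
    sys.exit(run(sys.argv[1:]))
```

Keeping the exit code as `run()`'s return value (rather than calling `sys.exit` inside it) is what makes the runner unit-testable without a subprocess.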
+15
-1005
@@ -1,1018 +1,28 @@
import argparse
import json
import logging
import os
import re
import sys
import tempfile
import webbrowser
from concurrent.futures import ThreadPoolExecutor
from datetime import datetime
from pathlib import Path
from typing import Optional, Dict, List, Union, BinaryIO
"""GrumpyCats — CLI orchestrator for the LitterBox payload-analysis sandbox.

import requests
from requests.adapters import HTTPAdapter, Retry
from urllib.parse import urljoin
The CLI surface and the API client are split into packages:

    litterbox_client/ — the LitterBoxClient class, composed from
        per-domain mixins (files / analysis / doppelganger /
        results / edr / reports / system).

class LitterBoxError(Exception):
    """Base exception for LitterBox client errors"""
    pass
    cli/ — argparse parser, per-command handlers, and the
        runner that ties them together.


class LitterBoxAPIError(LitterBoxError):
    """Exception for API-related errors"""
    def __init__(self, message: str, status_code: Optional[int] = None, response: Optional[Dict] = None):
        super().__init__(message)
        self.status_code = status_code
        self.response = response


class LitterBoxClient:
    """Optimized Python client for LitterBox payload-analysis sandbox API."""

    def __init__(self,
                 base_url: str = "http://127.0.0.1:1337",
                 timeout: int = 120,
                 max_retries: int = 3,
                 verify_ssl: bool = True,
                 logger: Optional[logging.Logger] = None,
                 proxy_config: Optional[Dict] = None,
                 headers: Optional[Dict] = None):
        """Initialize the LitterBox client."""
        self.base_url = base_url.rstrip('/')
        self.timeout = timeout
        self.verify_ssl = verify_ssl
        self.logger = logger or logging.getLogger(__name__)
        self.proxy_config = proxy_config
        self.headers = headers or {}
        self.session = self._create_session(max_retries)

    def _create_session(self, max_retries: int) -> requests.Session:
        """Create and configure requests session with retries."""
        session = requests.Session()

        retry_strategy = Retry(
            total=max_retries,
            backoff_factor=0.5,
            status_forcelist=[429, 500, 502, 503, 504],
            allowed_methods=["HEAD", "GET", "POST", "PUT", "DELETE", "OPTIONS", "TRACE"]
        )
        adapter = HTTPAdapter(max_retries=retry_strategy)
        session.mount("http://", adapter)
        session.mount("https://", adapter)

        if self.proxy_config:
            session.proxies.update(self.proxy_config)
        if not self.verify_ssl:
            session.verify = False
            # Suppress SSL warnings
            import urllib3
            urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

        session.headers.update(self.headers)
        return session

    def _make_request(self, method: str, endpoint: str, **kwargs) -> requests.Response:
        """Make HTTP request with enhanced error handling."""
        url = urljoin(self.base_url, endpoint)
        self.logger.debug(f"Making {method} request to {url}")

        try:
            kwargs.setdefault('timeout', self.timeout)
            response = self.session.request(method, url, **kwargs)

            # Log response details for debugging
            self.logger.debug(f"Response status: {response.status_code}")

            response.raise_for_status()
            return response

        except requests.exceptions.HTTPError as e:
            try:
                error_data = response.json()
            except (ValueError, AttributeError):
                error_data = {'error': response.text}

            error_msg = error_data.get('error', f'HTTP {response.status_code} error')
            raise LitterBoxAPIError(
                error_msg,
                status_code=response.status_code,
                response=error_data
            )
        except requests.exceptions.RequestException as e:
            raise LitterBoxError(f"Request failed: {str(e)}")

    def _validate_command_args(self, cmd_args: Optional[List[str]]) -> Dict:
        """Enhanced command line argument validation."""
        if cmd_args is None:
            return {}

        if not isinstance(cmd_args, list):
            raise ValueError("Arguments must be provided as a list")

        if not all(isinstance(arg, str) for arg in cmd_args):
            raise ValueError("All arguments must be strings")

        # Enhanced security validation
        dangerous_chars = [';', '&', '|', '`', '$', '(', ')', '{', '}']
        for arg in cmd_args:
            if any(char in arg for char in dangerous_chars):
                raise ValueError(f"Dangerous character detected in argument: {arg}")

        return {'args': cmd_args}

    def _validate_analysis_type(self, analysis_type: str, valid_types: List[str]):
        """Validate analysis type with better error messages."""
        if analysis_type not in valid_types:
            raise ValueError(f"Invalid analysis_type '{analysis_type}'. Must be one of: {', '.join(valid_types)}")

    def _prepare_file_upload(self, file_path: Union[str, Path, BinaryIO], file_name: Optional[str] = None):
        """Prepare file for upload with better error handling."""
        if isinstance(file_path, (str, Path)):
            path = Path(file_path)
            if not path.exists():
                raise LitterBoxError(f"File not found: {path}")
            if not path.is_file():
                raise LitterBoxError(f"Path is not a file: {path}")

            return {'file': (file_name or path.name, open(path, 'rb'), 'application/octet-stream')}
        else:
            if not file_name:
                raise ValueError("file_name is required when uploading file-like objects")
            return {'file': (file_name, file_path, 'application/octet-stream')}

    # =============================================================================
    # CORE FILE OPERATIONS
    # =============================================================================

    def upload_file(self, file_path: Union[str, Path, BinaryIO], file_name: Optional[str] = None) -> Dict:
        """Upload a file for analysis."""
        files = self._prepare_file_upload(file_path, file_name)
        try:
            response = self._make_request('POST', '/upload', files=files)
            return response.json()
        finally:
            # If we opened the file ourselves (path-based), close it. For
            # caller-provided BinaryIO objects, the caller owns the handle.
            if isinstance(file_path, (str, Path)):
                files['file'][1].close()

    def validate_process(self, pid: Union[str, int]) -> Dict:
        """Validate if a process ID exists and is accessible."""
        response = self._make_request('POST', f'/validate/{pid}')
        return response.json()

    def delete_file(self, file_hash: str) -> Dict:
        """Delete a file and its analysis results."""
        response = self._make_request('DELETE', f'/file/{file_hash}')
        return response.json()

    # =============================================================================
    # ANALYSIS OPERATIONS
    # =============================================================================

    def analyze_file(self, target: str, analysis_type: str, cmd_args: Optional[List[str]] = None,
                     wait_for_completion: bool = True, verify_file: bool = False) -> Dict:
        """Run analysis on a file or process with enhanced validation."""
        self._validate_analysis_type(analysis_type, ['static', 'dynamic'])

        # Enhanced PID validation for dynamic analysis
        if analysis_type == 'dynamic' and target.isdigit():
            try:
                self.validate_process(target)
            except LitterBoxAPIError as e:
                if e.status_code == 404:
                    raise LitterBoxError(f"Process with PID {target} not found or not accessible")
                raise
        elif analysis_type == 'static' and target.isdigit():
            raise ValueError("Cannot perform static analysis on PID")

        # For non-PID targets, verify the file exists if requested
        if not target.isdigit() and verify_file:
            try:
                self.get_file_info(target)
            except LitterBoxAPIError as e:
                if e.status_code == 404:
                    raise LitterBoxError(f"File {target} not found or not yet available")

        params = {'wait': '1' if wait_for_completion else '0'}
        data = self._validate_command_args(cmd_args)

        response = self._make_request('POST', f'/analyze/{analysis_type}/{target}',
                                      params=params, json=data)

        result = response.json()

        # Enhanced result handling
        if result.get('status') == 'early_termination':
            self.logger.warning(f"Analysis terminated early: {result.get('error')}")
        elif result.get('status') == 'error':
            self.logger.error(f"Analysis failed: {result.get('error')}")

        return result

    def analyze_holygrail(self, file_hash: str, wait_for_completion: bool = True) -> Dict:
        """Run HolyGrail BYOVD analysis on a kernel driver."""
        params = {'hash': file_hash}
        if wait_for_completion:
            params['wait'] = '1'

        response = self._make_request('GET', '/holygrail', params=params)
        return response.json()

    def upload_and_analyze_driver(self, file_path: Union[str, Path, BinaryIO],
                                  file_name: Optional[str] = None,
                                  run_holygrail: bool = True) -> Dict:
        """Upload a kernel driver and optionally run HolyGrail analysis."""
        # Upload the driver
        upload_result = self.upload_file(file_path, file_name)
        file_hash = upload_result['file_info']['md5']

        results = {
            'upload': upload_result,
            'holygrail': None
        }

        if run_holygrail:
            try:
                results['holygrail'] = self.analyze_holygrail(file_hash)
            except LitterBoxError as e:
                self.logger.error(f"HolyGrail analysis failed: {e}")
                results['holygrail'] = {'error': str(e)}

        return results

    # =============================================================================
    # DOPPELGANGER OPERATIONS
    # =============================================================================

    def _validate_doppelganger_params(self, analysis_type: str, operation: str,
                                      file_hash: Optional[str], folder_path: Optional[str]):
        """Enhanced doppelganger parameter validation."""
        if analysis_type not in ['blender', 'fuzzy']:
            raise ValueError("analysis_type must be either 'blender' or 'fuzzy'")

        if operation == 'scan' and analysis_type != 'blender':
            raise ValueError("scan operation is only available for blender analysis")

        if operation == 'create_db' and not folder_path:
            raise ValueError("folder_path is required for create_db operation")

        if operation == 'analyze' and not file_hash:
            raise ValueError("file_hash is required for analyze operation")

    def doppelganger_operation(self, analysis_type: str, operation: str,
                               file_hash: Optional[str] = None, folder_path: Optional[str] = None,
                               extensions: Optional[List[str]] = None, threshold: int = 1) -> Dict:
        """Unified doppelganger operations with enhanced error handling."""
        self._validate_doppelganger_params(analysis_type, operation, file_hash, folder_path)

        # For GET requests (comparisons)
        if file_hash and operation in ['compare', 'analyze']:
            params = {'type': analysis_type, 'hash': file_hash}
            if operation == 'analyze' and analysis_type == 'fuzzy':
                params['threshold'] = threshold
            response = self._make_request('GET', '/doppelganger', params=params)
            return response.json()

        # For POST requests
        data = {'type': analysis_type, 'operation': operation}

        if operation == 'create_db':
            data['folder_path'] = folder_path
            if extensions:
                data['extensions'] = extensions
        elif operation == 'analyze':
            data['hash'] = file_hash
            data['threshold'] = threshold

        response = self._make_request('POST', '/doppelganger', json=data)
        return response.json()

    def run_blender_scan(self) -> Dict:
        """Run a system-wide Blender scan."""
        return self.doppelganger_operation('blender', 'scan')

    def compare_with_blender(self, file_hash: str) -> Dict:
        """Compare a file with current system state using Blender."""
        return self.doppelganger_operation('blender', 'compare', file_hash=file_hash)

    def create_fuzzy_database(self, folder_path: str, extensions: Optional[List[str]] = None) -> Dict:
        """Create fuzzy hash database from folder."""
        return self.doppelganger_operation('fuzzy', 'create_db',
                                           folder_path=folder_path, extensions=extensions)

    def analyze_with_fuzzy(self, file_hash: str, threshold: int = 1) -> Dict:
        """Analyze file using fuzzy hash comparison."""
        return self.doppelganger_operation('fuzzy', 'analyze',
                                           file_hash=file_hash, threshold=threshold)

    # =============================================================================
    # RESULT RETRIEVAL
    # =============================================================================

    def get_results(self, target: str, analysis_type: str) -> Dict:
        """Get results for a specific analysis type."""
        self._validate_analysis_type(analysis_type, ['static', 'dynamic', 'info'])
        response = self._make_request('GET', f'/results/{analysis_type}/{target}')
        return response.json()

    def get_file_info(self, target: str) -> Dict:
        """Get file information via API endpoint."""
        response = self._make_request('GET', f'/api/results/info/{target}')
        return response.json()

    def get_static_results(self, target: str) -> Dict:
        """Get static analysis results via API endpoint."""
        response = self._make_request('GET', f'/api/results/static/{target}')
        return response.json()

    def get_dynamic_results(self, target: str) -> Dict:
        """Get dynamic analysis results via API endpoint."""
        response = self._make_request('GET', f'/api/results/dynamic/{target}')
        return response.json()

    def get_holygrail_results(self, target: str) -> Dict:
        """Get HolyGrail/BYOVD analysis results via API endpoint."""
        response = self._make_request('GET', f'/api/results/holygrail/{target}')
        return response.json()

    def get_risk_assessment(self, target: str) -> Dict:
        """Get the computed detection assessment (score, level, triggering indicators) for a target."""
        response = self._make_request('GET', f'/api/results/risk/{target}')
        return response.json()

    # =============================================================================
    # EDR (Whiskers + Elastic Defend) OPERATIONS
    # =============================================================================

    def list_edr_profiles(self) -> Dict:
        """List EDR profiles registered under Config/edr_profiles/."""
        response = self._make_request('GET', '/api/edr/profiles')
        return response.json()

    def get_edr_agents_status(self) -> Dict:
        """Probe each registered EDR profile's Whiskers agent + Elastic stack.

        Returns reachability, hostname, agent version, lock state, cluster
        info etc. The server fans out probes in parallel; total wall time
        is the slowest probe."""
        response = self._make_request('GET', '/api/edr/agents/status')
        return response.json()

    def analyze_edr(self, file_hash: str, profile: str,
                    cmd_args: Optional[List[str]] = None,
                    xor_key: Optional[int] = None) -> Dict:
        """Dispatch a payload to a registered EDR profile.

        Returns the Phase-1 result immediately (status='polling_alerts'
        when execution succeeded; 'blocked_by_av' / 'agent_unreachable' /
        'busy' / 'error' otherwise). Phase-2 (Elastic alert correlation)
        runs in a server-side daemon thread; poll
        `get_edr_results(file_hash, profile)` until status is no longer
        'polling_alerts'.
        """
        data = {}
        if cmd_args:
            data.update(self._validate_command_args(cmd_args))
        if xor_key is not None:
            if not 0 <= xor_key <= 255:
                raise ValueError(f"xor_key must be 0-255, got {xor_key}")
            data['xor_key'] = xor_key

        response = self._make_request(
            'POST', f'/analyze/edr/{profile}/{file_hash}', json=data,
        )
        return response.json()

    def get_edr_results(self, file_hash: str, profile: str) -> Dict:
        """Fetch the saved findings for a specific EDR profile run."""
        response = self._make_request(
            'GET', f'/api/results/edr/{profile}/{file_hash}',
        )
        return response.json()

    def get_edr_index(self, file_hash: str) -> Dict:
        """Fetch every saved EDR run for a target (one entry per profile)."""
        response = self._make_request('GET', f'/api/results/edr/{file_hash}')
        return response.json()

    def wait_for_edr_completion(self, file_hash: str, profile: str,
                                interval: float = 3.0,
                                timeout: float = 180.0) -> Dict:
        """Block until Phase-2 settles (status leaves 'polling_alerts'),
        the saved JSON appears for the first time, or `timeout` seconds
        elapse. Returns the last-seen findings dict (may still be
        'polling_alerts' on timeout — caller decides what to do)."""
        import time
        deadline = time.monotonic() + timeout
        last = None
        while time.monotonic() < deadline:
            try:
                last = self.get_edr_results(file_hash, profile)
                if (last or {}).get('status') and last.get('status') != 'polling_alerts':
                    return last
            except LitterBoxAPIError as e:
                # 404 just means Phase-1 hasn't been kicked off yet — keep polling.
                if e.status_code != 404:
                    raise
            time.sleep(interval)
        return last or {'status': 'timeout', 'error': f'Phase-2 timeout after {timeout}s'}

    # =============================================================================
    # SYSTEM HEALTH
    # =============================================================================

    def get_scanners_status(self) -> Dict:
        """Inventory of configured analyzers and whether their binaries
        exist on disk (drives the dashboard scanner panel)."""
        response = self._make_request('GET', '/api/system/scanners')
        return response.json()

    def get_files_summary(self) -> Dict:
        """Get summary of all analyzed files and processes."""
        response = self._make_request('GET', '/files')
        return response.json()

    def get_comprehensive_results(self, target: str) -> Dict:
        """Get all available results for a target in one call.

        The five GETs are independent, so they fan out across a small
        thread pool — wall time on a populated target drops from
        sequential (~5×) to roughly the slowest single response.
        Includes EDR runs (across every profile) when present.
        """
        fetchers = [
            ('file_info', self.get_file_info),
            ('static_results', self.get_static_results),
            ('dynamic_results', self.get_dynamic_results),
            ('holygrail_results', self.get_holygrail_results),
            ('edr_index', self.get_edr_index),
        ]

        def _safe_fetch(method):
            try:
                return method(target)
            except LitterBoxAPIError as e:
                # 404 means "this analysis type wasn't run for this target",
                # which is normal and shouldn't bubble up as an error.
                return None if e.status_code == 404 else {'error': str(e)}
            except LitterBoxError as e:
                return {'error': str(e)}

        with ThreadPoolExecutor(max_workers=len(fetchers)) as executor:
            futures = {key: executor.submit(_safe_fetch, fn) for key, fn in fetchers}
            results = {key: fut.result() for key, fut in futures.items()}

        results['target'] = target
        return results

    # =============================================================================
    # REPORT OPERATIONS
    # =============================================================================

    def get_report(self, target: str, download: bool = False) -> Union[str, bytes]:
        """Get analysis report for a file or process."""
        params = {'download': 'true' if download else 'false'}
        response = self._make_request('GET', f'/api/report/{target}', params=params)
        return response.content if download else response.text

    def download_report(self, target: str, output_path: Optional[str] = None) -> str:
        """Download analysis report and save it to disk."""
        response = self._make_request('GET', f'/api/report/{target}',
                                      params={'download': 'true'}, stream=True)

        filename = self._extract_filename_from_response(response, target)

        # Determine final output path
        if output_path:
            if os.path.isdir(output_path):
                save_path = os.path.join(output_path, filename)
            else:
                save_path = output_path
        else:
            save_path = filename

        try:
            with open(save_path, 'wb') as f:
                for chunk in response.iter_content(chunk_size=8192):
                    if chunk:  # filter out keep-alive chunks
                        f.write(chunk)
            self.logger.info(f"Report saved to {save_path}")
            return save_path
        except Exception as e:
            raise LitterBoxError(f"Failed to save report: {str(e)}")

    def _extract_filename_from_response(self, response: requests.Response, target: str) -> str:
        """Extract filename from Content-Disposition header or create default."""
        content_disposition = response.headers.get('Content-Disposition', '')

        if 'filename=' in content_disposition:
            match = re.search(r'filename="([^"]+)"', content_disposition)
            if match:
                return match.group(1)

        # Default filename with timestamp
        timestamp = datetime.now().strftime('%Y%m%d_%H%M%S')
        return f"LitterBox_Report_{target[:8]}_{timestamp}.html"

    def open_report_in_browser(self, target: str) -> bool:
        """Generate a report and open it in the default web browser."""
        try:
            report_content = self.get_report(target, download=False)

            # Create temporary file
            fd, path = tempfile.mkstemp(suffix='.html', prefix='litterbox_report_')
            try:
                with os.fdopen(fd, 'w', encoding='utf-8') as tmp:
                    tmp.write(report_content)

                webbrowser.open('file://' + path)
                self.logger.info(f"Report opened in browser from {path}")
                return True
            except Exception as e:
                self.logger.error(f"Failed to open report in browser: {str(e)}")
                return False
        except Exception as e:
            self.logger.error(f"Failed to generate report: {str(e)}")
            return False

    # =============================================================================
    # SYSTEM OPERATIONS
    # =============================================================================

    def cleanup(self, include_uploads: bool = True, include_results: bool = True,
                include_analysis: bool = True) -> Dict:
        """Clean up analysis artifacts and uploaded files."""
        data = {
            'cleanup_uploads': include_uploads,
            'cleanup_results': include_results,
            'cleanup_analysis': include_analysis
        }
        response = self._make_request('POST', '/cleanup', json=data)
        return response.json()

    def check_health(self) -> Dict:
        """Check the health status of the LitterBox service."""
        # Use direct request without retries for health check
        url = urljoin(self.base_url, '/health')

        try:
            response = requests.get(url, timeout=self.timeout, verify=self.verify_ssl)
            if response.status_code in [200, 503]:  # Both OK and degraded are valid
                return response.json()
            else:
                response.raise_for_status()
        except requests.exceptions.RequestException as e:
            self.logger.error(f"Health check failed: {e}")
            return {
                "status": "error",
                "message": "Unable to connect to service",
                "details": str(e)
            }

    def get_system_status(self) -> Dict:
        """Get comprehensive system status including health and file summary."""
        try:
            health = self.check_health()
            files_summary = self.get_files_summary()

            return {
                'health': health,
                'files_summary': files_summary,
                'status': 'healthy' if health.get('status') == 'ok' else 'degraded'
            }
        except Exception as e:
            return {
                'health': {'status': 'error', 'error': str(e)},
                'files_summary': None,
                'status': 'error'
            }

    # =============================================================================
    # CONTEXT MANAGERS AND CLEANUP
    # =============================================================================

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.close()

    def close(self):
        """Close the session and cleanup resources."""
        if hasattr(self, 'session'):
            self.session.close()


# =============================================================================
# ENHANCED COMMAND LINE INTERFACE
# =============================================================================

def create_enhanced_parser():
    """Create enhanced argument parser with all available operations."""
    parser = argparse.ArgumentParser(
        description="Enhanced LitterBox Payload-Analysis Client",
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog="""
Examples:
  # Upload and analyze a file
  %(prog)s upload malware.exe --analysis static dynamic

  # Upload and analyze a kernel driver
  %(prog)s upload-driver rootkit.sys --holygrail

  # Analyze a running process
  %(prog)s analyze-pid 1234 --wait

  # Get comprehensive results
  %(prog)s results abc123def --comprehensive

  # Run Doppelganger operations
  %(prog)s doppelganger-scan --type blender
  %(prog)s doppelganger-analyze abc123def --type fuzzy --threshold 85

  # EDR (Whiskers + Elastic Defend)
  %(prog)s edr-profiles
  %(prog)s edr-status
  %(prog)s edr-run abc123def --profile elastic --wait
  %(prog)s edr-results abc123def --profile elastic

  # System operations
  %(prog)s status --full
  %(prog)s scanners
  %(prog)s cleanup --all

  # Report operations
  %(prog)s report abc123def --browser
  %(prog)s report abc123def --download --output ./reports/
This module stays a few lines on purpose — it just dispatches into
`cli.run`. Adding a new command means: write a method in the right
client mixin, add a subparser in `cli/parser.py`, write a `_cmd_*`
handler in `cli/handlers.py`, register it in `COMMAND_HANDLERS`. Done.
"""
    )

    # Global options
    parser.add_argument('--debug', action='store_true', help='Enable debug logging')
    parser.add_argument('--url', default='http://127.0.0.1:1337', help='LitterBox server URL')
    parser.add_argument('--timeout', type=int, default=120, help='Request timeout in seconds')
    parser.add_argument('--no-verify-ssl', action='store_true', help='Disable SSL verification')
    parser.add_argument('--proxy', help='Proxy URL (e.g., http://proxy:8080)')

    subparsers = parser.add_subparsers(dest='command', help='Command to execute')

    # Upload command
    upload_parser = subparsers.add_parser('upload', help='Upload file for analysis')
    upload_parser.add_argument('file', help='File to upload')
    upload_parser.add_argument('--name', help='Custom name for the file')
    upload_parser.add_argument('--analysis', nargs='+', choices=['static', 'dynamic'],
                               help='Run analysis after upload')
    upload_parser.add_argument('--args', nargs='+', help='Command line arguments for dynamic analysis')

    # Upload driver command
    driver_parser = subparsers.add_parser('upload-driver', help='Upload kernel driver')
    driver_parser.add_argument('file', help='Driver file to upload')
    driver_parser.add_argument('--name', help='Custom name for the driver')
|
||||
driver_parser.add_argument('--holygrail', action='store_true', help='Run HolyGrail analysis')
|
||||
|
||||
# Analyze PID command
|
||||
analyze_pid_parser = subparsers.add_parser('analyze-pid', help='Analyze running process')
|
||||
analyze_pid_parser.add_argument('pid', type=int, help='Process ID to analyze')
|
||||
analyze_pid_parser.add_argument('--wait', action='store_true', help='Wait for analysis completion')
|
||||
analyze_pid_parser.add_argument('--args', nargs='+', help='Command line arguments')
|
||||
|
||||
# Results command
|
||||
results_parser = subparsers.add_parser('results', help='Get analysis results')
|
||||
results_parser.add_argument('target', help='File hash or PID')
|
||||
results_parser.add_argument('--type', choices=['static', 'dynamic', 'info', 'holygrail'],
|
||||
help='Type of results to retrieve')
|
||||
results_parser.add_argument('--comprehensive', action='store_true',
|
||||
help='Get all available results')
|
||||
|
||||
# EDR commands
|
||||
edr_run_parser = subparsers.add_parser('edr-run', help='Dispatch a payload to an EDR profile (Whiskers + Elastic)')
|
||||
edr_run_parser.add_argument('hash', help='File hash returned by upload')
|
||||
edr_run_parser.add_argument('--profile', required=True, help='EDR profile name (Config/edr_profiles/<name>.yml)')
|
||||
edr_run_parser.add_argument('--args', nargs='+', help='Command-line arguments passed to the payload')
|
||||
edr_run_parser.add_argument('--xor-key', type=int, help='Single byte (0-255) used to XOR-encode the payload in transit')
|
||||
edr_run_parser.add_argument('--wait', action='store_true', help='Poll until Phase-2 (Elastic alert correlation) settles')
|
||||
edr_run_parser.add_argument('--timeout', type=float, default=180.0, help='Phase-2 wait timeout in seconds (default 180)')
|
||||
|
||||
edr_results_parser = subparsers.add_parser('edr-results', help='Read saved EDR findings for a target')
|
||||
edr_results_parser.add_argument('hash', help='File hash')
|
||||
edr_results_parser.add_argument('--profile', help='Specific profile (omit to get the index across every profile)')
|
||||
import sys
|
||||
|
||||
    subparsers.add_parser('edr-profiles', help='List registered EDR profiles')
    subparsers.add_parser('edr-status', help='Live probe of every EDR profile (Whiskers + Elastic reachability)')
    subparsers.add_parser('scanners', help='Inventory of configured local analyzers and whether their binaries exist')

    # Doppelganger scan command
    doppelganger_scan_parser = subparsers.add_parser('doppelganger-scan', help='Run doppelganger scan')
    doppelganger_scan_parser.add_argument('--type', choices=['blender'], default='blender',
                                          help='Type of scan to perform')

    # Doppelganger analyze command
    doppelganger_analyze_parser = subparsers.add_parser('doppelganger-analyze', help='Doppelganger analysis')
    doppelganger_analyze_parser.add_argument('hash', help='File hash to analyze')
    doppelganger_analyze_parser.add_argument('--type', choices=['blender', 'fuzzy'], required=True,
                                             help='Type of analysis')
    doppelganger_analyze_parser.add_argument('--threshold', type=int, default=1,
                                             help='Similarity threshold for fuzzy analysis')

    # Doppelganger database command
    db_parser = subparsers.add_parser('doppelganger-db', help='Create doppelganger database')
    db_parser.add_argument('--folder', required=True, help='Folder path to process')
    db_parser.add_argument('--extensions', nargs='+', help='File extensions to include')

    # Report command
    report_parser = subparsers.add_parser('report', help='Generate analysis report')
    report_parser.add_argument('target', help='File hash or process ID')
    report_parser.add_argument('--download', action='store_true', help='Download the report')
    report_parser.add_argument('--output', help='Output path for downloaded report')
    report_parser.add_argument('--browser', action='store_true', help='Open report in browser')

    # System commands
    subparsers.add_parser('status', help='Get system status').add_argument(
        '--full', action='store_true', help='Get comprehensive status')
    subparsers.add_parser('health', help='Check service health')
    subparsers.add_parser('files', help='Get summary of all analyzed files')

    # Cleanup command
    cleanup_parser = subparsers.add_parser('cleanup', help='Clean up analysis artifacts')
    cleanup_parser.add_argument('--all', action='store_true', help='Clean all artifacts')
    cleanup_parser.add_argument('--uploads', action='store_true', help='Clean upload directory')
    cleanup_parser.add_argument('--results', action='store_true', help='Clean results directory')
    cleanup_parser.add_argument('--analysis', action='store_true', help='Clean analysis artifacts')

    # Delete command
    delete_parser = subparsers.add_parser('delete', help='Delete file and its results')
    delete_parser.add_argument('hash', help='File hash to delete')

    return parser


def setup_enhanced_client(args) -> LitterBoxClient:
    """Create client instance from command line arguments."""
    client_kwargs = {
        'base_url': args.url,
        'timeout': args.timeout,
        'verify_ssl': not args.no_verify_ssl,
        'logger': logging.getLogger('litterbox'),
    }

    if args.proxy:
        client_kwargs['proxy_config'] = {'http': args.proxy, 'https': args.proxy}

    return LitterBoxClient(**client_kwargs)


def handle_enhanced_analysis_result(result: Dict, analysis_type: str):
    """Enhanced result handling with better formatting."""
    status = result.get('status', 'unknown')

    if status == 'early_termination':
        print("Process terminated early:")
        print(f"  Error: {result.get('error')}")
        details = result.get('details', {})
        if details:
            print("  Details:")
            for key, value in details.items():
                print(f"    {key}: {value}")
    elif status == 'error':
        print(f"Analysis failed: {result.get('error')}")
        if 'details' in result:
            print(f"  Details: {result['details']}")
    elif status == 'success':
        print("Analysis completed successfully")
        print(json.dumps(result.get('results', result), indent=2))
    else:
        print(json.dumps(result, indent=2))


# =============================================================================
# COMMAND HANDLERS
# Each handler does the work for one CLI subcommand. main() looks up the
# right handler in the COMMAND_HANDLERS dispatch table below.
# =============================================================================

def _cmd_upload(client: LitterBoxClient, args):
    result = client.upload_file(args.file, file_name=args.name)
    file_hash = result['file_info']['md5']
    print(f"File uploaded successfully. Hash: {file_hash}")

    for analysis_type in args.analysis or []:
        print(f"Running {analysis_type} analysis...")
        analysis_args = args.args if analysis_type == 'dynamic' else None
        result = client.analyze_file(file_hash, analysis_type,
                                     cmd_args=analysis_args, wait_for_completion=True)
        handle_enhanced_analysis_result(result, analysis_type)


def _cmd_upload_driver(client: LitterBoxClient, args):
    result = client.upload_and_analyze_driver(args.file, file_name=args.name,
                                              run_holygrail=args.holygrail)
    file_hash = result['upload']['file_info']['md5']
    print(f"Driver uploaded successfully. Hash: {file_hash}")

    if args.holygrail and result['holygrail']:
        if 'error' in result['holygrail']:
            print(f"HolyGrail analysis failed: {result['holygrail']['error']}")
        else:
            print("HolyGrail analysis completed")
            print(json.dumps(result['holygrail'], indent=2))


def _cmd_analyze_pid(client: LitterBoxClient, args):
    print(f"Analyzing process {args.pid}...")
    result = client.analyze_file(str(args.pid), 'dynamic',
                                 cmd_args=args.args, wait_for_completion=args.wait)
    handle_enhanced_analysis_result(result, 'dynamic')


def _cmd_results(client: LitterBoxClient, args):
    if args.comprehensive:
        result = client.get_comprehensive_results(args.target)
        print("Comprehensive Results:")
        print(json.dumps(result, indent=2))
        return

    if not args.type:
        print("Please specify --type or use --comprehensive")
        sys.exit(1)

    if args.type == 'holygrail':
        result = client.get_holygrail_results(args.target)
    else:
        result = client.get_results(args.target, args.type)
    print(json.dumps(result, indent=2))


def _cmd_edr_run(client: LitterBoxClient, args):
    print(f"Dispatching {args.hash} to EDR profile '{args.profile}'...")
    phase1 = client.analyze_edr(args.hash, args.profile,
                                cmd_args=args.args, xor_key=args.xor_key)
    print("Phase-1 result:")
    print(json.dumps(phase1, indent=2))

    if args.wait and (phase1 or {}).get('status') == 'polling_alerts':
        print(f"\nPolling for Phase-2 settle (timeout {args.timeout}s)...")
        final = client.wait_for_edr_completion(args.hash, args.profile, timeout=args.timeout)
        print("Phase-2 result:")
        print(json.dumps(final, indent=2))


def _cmd_edr_results(client: LitterBoxClient, args):
    if args.profile:
        result = client.get_edr_results(args.hash, args.profile)
    else:
        result = client.get_edr_index(args.hash)
    print(json.dumps(result, indent=2))


def _cmd_edr_profiles(client: LitterBoxClient, args):
    print(json.dumps(client.list_edr_profiles(), indent=2))


def _cmd_edr_status(client: LitterBoxClient, args):
    print(json.dumps(client.get_edr_agents_status(), indent=2))


def _cmd_scanners(client: LitterBoxClient, args):
    print(json.dumps(client.get_scanners_status(), indent=2))


def _cmd_doppelganger_scan(client: LitterBoxClient, args):
    print(f"Running doppelganger scan with type: {args.type}")
    print(json.dumps(client.run_blender_scan(), indent=2))


def _cmd_doppelganger_analyze(client: LitterBoxClient, args):
    print(f"Running doppelganger analysis with type: {args.type}")
    if args.type == 'blender':
        result = client.compare_with_blender(args.hash)
    else:
        result = client.analyze_with_fuzzy(args.hash, args.threshold)
    print(json.dumps(result, indent=2))


def _cmd_doppelganger_db(client: LitterBoxClient, args):
    print("Creating doppelganger fuzzy database...")
    print(json.dumps(client.create_fuzzy_database(args.folder, args.extensions), indent=2))


def _cmd_report(client: LitterBoxClient, args):
    if args.browser:
        print(f"Opening report for {args.target} in browser...")
        if not client.open_report_in_browser(args.target):
            print("Failed to open report in browser.")
            sys.exit(1)
    elif args.download:
        print(f"Downloading report for {args.target}...")
        output_path = client.download_report(args.target, args.output)
        print(f"Report saved to: {output_path}")
    else:
        print(client.get_report(args.target))


def _cmd_status(client: LitterBoxClient, args):
    if args.full:
        result = client.get_system_status()
        print("System Status:")
    else:
        result = client.check_health()
        print("Health Check:")
    print(json.dumps(result, indent=2))


def _cmd_health(client: LitterBoxClient, args):
    result = client.check_health()
    status = result.get('status', 'unknown')
    print("Service is healthy" if status == 'ok' else f"Service status: {status}")
    print(json.dumps(result, indent=2))


def _cmd_files(client: LitterBoxClient, args):
    print("Files Summary:")
    print(json.dumps(client.get_files_summary(), indent=2))


def _cmd_cleanup(client: LitterBoxClient, args):
    if args.all:
        args.uploads = args.results = args.analysis = True
    result = client.cleanup(include_uploads=args.uploads,
                            include_results=args.results,
                            include_analysis=args.analysis)
    print("Cleanup Results:")
    print(json.dumps(result, indent=2))


def _cmd_delete(client: LitterBoxClient, args):
    print("Deletion Results:")
    print(json.dumps(client.delete_file(args.hash), indent=2))


# Dispatch table — keyed by the CLI subcommand name.
COMMAND_HANDLERS = {
    'upload': _cmd_upload,
    'upload-driver': _cmd_upload_driver,
    'analyze-pid': _cmd_analyze_pid,
    'results': _cmd_results,
    'edr-run': _cmd_edr_run,
    'edr-results': _cmd_edr_results,
    'edr-profiles': _cmd_edr_profiles,
    'edr-status': _cmd_edr_status,
    'scanners': _cmd_scanners,
    'doppelganger-scan': _cmd_doppelganger_scan,
    'doppelganger-analyze': _cmd_doppelganger_analyze,
    'doppelganger-db': _cmd_doppelganger_db,
    'report': _cmd_report,
    'status': _cmd_status,
    'health': _cmd_health,
    'files': _cmd_files,
    'cleanup': _cmd_cleanup,
    'delete': _cmd_delete,
}
from cli import run


def main():
    """CLI entry point. Parses args, dispatches to the right handler."""
    parser = create_enhanced_parser()
    args = parser.parse_args()

    log_level = logging.DEBUG if args.debug else logging.INFO
    logging.basicConfig(
        level=log_level,
        format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
    )

    if not args.command:
        parser.print_help()
        return

    handler = COMMAND_HANDLERS.get(args.command)
    if handler is None:
        parser.print_help()
        sys.exit(1)

    try:
        with setup_enhanced_client(args) as client:
            handler(client, args)
    except LitterBoxAPIError as e:
        logging.error(f"API Error (Status {e.status_code}): {str(e)}")
        if args.debug and e.response:
            logging.debug(f"Response data: {e.response}")
        sys.exit(1)
    except LitterBoxError as e:
        logging.error(f"Client Error: {str(e)}")
        sys.exit(1)
    except KeyboardInterrupt:
        print("\nOperation cancelled by user")
        sys.exit(130)
    except Exception as e:
        logging.error(f"Unexpected Error: {str(e)}")
        if args.debug:
            logging.exception("Detailed error information:")
        sys.exit(1)
    sys.exit(run())


if __name__ == "__main__":
    main()

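The `COMMAND_HANDLERS` dispatch-table pattern above (and kept by the split in `cli/handlers.py`) can be sketched standalone; the names below are toy stand-ins, not the package's real API:

```python
import argparse

# Hypothetical mini-client standing in for LitterBoxClient.
class DemoClient:
    def health(self):
        return {"status": "ok"}

def _cmd_health(client, args):
    return client.health()["status"]

# Dispatch table keyed by subcommand name, mirroring COMMAND_HANDLERS.
HANDLERS = {"health": _cmd_health}

def run(argv):
    parser = argparse.ArgumentParser(prog="demo")
    sub = parser.add_subparsers(dest="command")
    sub.add_parser("health")
    args = parser.parse_args(argv)
    handler = HANDLERS.get(args.command)
    if handler is None:
        # No subcommand given: real code prints help and returns.
        return "no-command"
    return handler(DemoClient(), args)

print(run(["health"]))  # -> ok
```

Adding a command is then purely additive: a new handler function plus one dict entry, with no change to `run`.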
@@ -0,0 +1,14 @@
"""Python client for the LitterBox payload-analysis sandbox API.

The client class is split into per-domain mixins (files, analysis,
doppelganger, results, edr, reports, system) composed onto a small
`_BaseClient` that owns the requests Session and the generic
`_make_request` helper. Public callers import `LitterBoxClient` from
this package and treat it as one class — the split is purely an
internal organization win for maintenance.
"""

from .client import LitterBoxClient
from .exceptions import LitterBoxAPIError, LitterBoxError

__all__ = ["LitterBoxClient", "LitterBoxError", "LitterBoxAPIError"]
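The mixin-onto-base composition this package uses can be sketched standalone; the classes below are toy stand-ins, not the real `litterbox_client` API:

```python
# Minimal sketch of the mixin-composition pattern: each domain mixin
# calls self._make_request, which only the base class defines.
class _Base:
    def __init__(self, base_url):
        self.base_url = base_url.rstrip("/")

    def _make_request(self, method, endpoint):
        # The real client issues HTTP here; the sketch just echoes.
        return f"{method} {self.base_url}{endpoint}"

class FilesMixin:
    def upload(self):
        return self._make_request("POST", "/upload")

class SystemMixin:
    def health(self):
        return self._make_request("GET", "/health")

# Mixins first, base last in the MRO, as in LitterBoxClient.
class Client(FilesMixin, SystemMixin, _Base):
    pass

c = Client("http://127.0.0.1:1337/")
print(c.upload())   # POST http://127.0.0.1:1337/upload
print(c.health())   # GET http://127.0.0.1:1337/health
```

Because the mixins never reference each other, each domain module can be read and tested in isolation while callers still see a single class.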
@@ -0,0 +1,90 @@
"""Run static / dynamic / HolyGrail analyses.

Static + dynamic both go through `analyze_file`; BYOVD analyses use the
dedicated `/holygrail` endpoint. `upload_and_analyze_driver` is a
convenience wrapper for the typical "upload .sys then run HolyGrail"
flow.
"""

from pathlib import Path
from typing import BinaryIO, Dict, List, Optional, Union

from .exceptions import LitterBoxAPIError, LitterBoxError


class AnalysisMixin:
    def analyze_file(
        self,
        target: str,
        analysis_type: str,
        cmd_args: Optional[List[str]] = None,
        wait_for_completion: bool = True,
        verify_file: bool = False,
    ) -> Dict:
        """Run analysis on a file or PID. `target` is either a file MD5
        or a numeric PID (dynamic only)."""
        self._validate_analysis_type(analysis_type, ["static", "dynamic"])

        # Pre-validate the PID for dynamic-on-pid analysis so the caller
        # gets a clean client-side error rather than a server-side 404.
        if analysis_type == "dynamic" and target.isdigit():
            try:
                self.validate_process(target)
            except LitterBoxAPIError as e:
                if e.status_code == 404:
                    raise LitterBoxError(f"Process with PID {target} not found or not accessible")
                raise
        elif analysis_type == "static" and target.isdigit():
            raise ValueError("Cannot perform static analysis on PID")

        # Optional file existence check before the (potentially expensive) analysis.
        if not target.isdigit() and verify_file:
            try:
                self.get_file_info(target)
            except LitterBoxAPIError as e:
                if e.status_code == 404:
                    raise LitterBoxError(f"File {target} not found or not yet available")
                raise

        params = {"wait": "1" if wait_for_completion else "0"}
        data = self._validate_command_args(cmd_args)

        response = self._make_request(
            "POST", f"/analyze/{analysis_type}/{target}",
            params=params, json=data,
        )

        result = response.json()
        if result.get("status") == "early_termination":
            self.logger.warning(f"Analysis terminated early: {result.get('error')}")
        elif result.get("status") == "error":
            self.logger.error(f"Analysis failed: {result.get('error')}")
        return result

    def analyze_holygrail(self, file_hash: str, wait_for_completion: bool = True) -> Dict:
        """Run HolyGrail BYOVD analysis on a kernel driver."""
        params = {"hash": file_hash}
        if wait_for_completion:
            params["wait"] = "1"
        response = self._make_request("GET", "/holygrail", params=params)
        return response.json()

    def upload_and_analyze_driver(
        self,
        file_path: Union[str, Path, BinaryIO],
        file_name: Optional[str] = None,
        run_holygrail: bool = True,
    ) -> Dict:
        """Upload a kernel driver and (by default) immediately run HolyGrail."""
        upload_result = self.upload_file(file_path, file_name)
        file_hash = upload_result["file_info"]["md5"]

        results = {"upload": upload_result, "holygrail": None}

        if run_holygrail:
            try:
                results["holygrail"] = self.analyze_holygrail(file_hash)
            except LitterBoxError as e:
                self.logger.error(f"HolyGrail analysis failed: {e}")
                results["holygrail"] = {"error": str(e)}

        return results
@@ -0,0 +1,163 @@
"""Foundation class for `LitterBoxClient` — owns the requests Session,
the generic `_make_request` helper, and shared validation utilities.

Each domain mixin (files / analysis / doppelganger / ...) inherits from
this through the final `LitterBoxClient` composition and uses
`self._make_request` plus the validation helpers without depending on
each other.
"""

import logging
from pathlib import Path
from typing import BinaryIO, Dict, List, Optional, Union

import requests
from requests.adapters import HTTPAdapter, Retry
from urllib.parse import urljoin

from .exceptions import LitterBoxAPIError, LitterBoxError


class _BaseClient:
    """Session + generic HTTP helpers + shared validation primitives."""

    def __init__(
        self,
        base_url: str = "http://127.0.0.1:1337",
        timeout: int = 120,
        max_retries: int = 3,
        verify_ssl: bool = True,
        logger: Optional[logging.Logger] = None,
        proxy_config: Optional[Dict] = None,
        headers: Optional[Dict] = None,
    ):
        """Initialize the LitterBox client."""
        self.base_url = base_url.rstrip("/")
        self.timeout = timeout
        self.verify_ssl = verify_ssl
        self.logger = logger or logging.getLogger(__name__)
        self.proxy_config = proxy_config
        self.headers = headers or {}
        self.session = self._create_session(max_retries)

    # ---- session lifecycle ---------------------------------------------

    def _create_session(self, max_retries: int) -> requests.Session:
        """Create and configure requests session with retries."""
        session = requests.Session()

        retry_strategy = Retry(
            total=max_retries,
            backoff_factor=0.5,
            status_forcelist=[429, 500, 502, 503, 504],
            allowed_methods=["HEAD", "GET", "POST", "PUT", "DELETE", "OPTIONS", "TRACE"],
        )
        adapter = HTTPAdapter(max_retries=retry_strategy)
        session.mount("http://", adapter)
        session.mount("https://", adapter)

        if self.proxy_config:
            session.proxies.update(self.proxy_config)
        if not self.verify_ssl:
            session.verify = False
            # Suppress SSL warnings
            import urllib3
            urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

        session.headers.update(self.headers)
        return session

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.close()

    def close(self):
        """Close the session and cleanup resources."""
        if hasattr(self, "session"):
            self.session.close()

    # ---- HTTP --------------------------------------------------------

    def _make_request(self, method: str, endpoint: str, **kwargs) -> requests.Response:
        """Issue one HTTP request, raise structured errors on failure.

        Wraps `requests.request` so callers get `LitterBoxAPIError` on
        HTTP 4xx/5xx (with the parsed body) and `LitterBoxError` on
        transport-level failures. Default timeout is the per-call
        `self.timeout`; callers can override with `timeout=` in kwargs.
        """
        url = urljoin(self.base_url, endpoint)
        self.logger.debug(f"Making {method} request to {url}")

        try:
            kwargs.setdefault("timeout", self.timeout)
            response = self.session.request(method, url, **kwargs)
            self.logger.debug(f"Response status: {response.status_code}")
            response.raise_for_status()
            return response

        except requests.exceptions.HTTPError:
            try:
                error_data = response.json()
            except (ValueError, AttributeError):
                error_data = {"error": response.text}

            error_msg = error_data.get("error", f"HTTP {response.status_code} error")
            raise LitterBoxAPIError(
                error_msg,
                status_code=response.status_code,
                response=error_data,
            )
        except requests.exceptions.RequestException as e:
            raise LitterBoxError(f"Request failed: {str(e)}")
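One subtlety in `_make_request` worth spelling out: it joins `base_url` and `endpoint` with `urljoin`, and the mixins always pass leading-slash endpoints. With `urljoin`, an absolute-path endpoint replaces any path component on the base, which is exactly what the client relies on:

```python
from urllib.parse import urljoin

base = "http://127.0.0.1:1337"
# Leading-slash endpoints, as used throughout the mixins, resolve
# against the host alone.
print(urljoin(base, "/api/edr/profiles"))
# -> http://127.0.0.1:1337/api/edr/profiles

# Any path on the base URL is discarded by an absolute-path endpoint,
# so a base like ".../app" would NOT be prefixed onto requests.
print(urljoin(base + "/app", "/doppelganger"))
# -> http://127.0.0.1:1337/doppelganger
```

This means the client works with a bare host base URL, but a reverse-proxy path prefix in `base_url` would be silently dropped.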

    # ---- validation helpers -------------------------------------------

    def _validate_command_args(self, cmd_args: Optional[List[str]]) -> Dict:
        """Validate a list of payload command-line arguments and shape
        them into the JSON body the analysis endpoints expect."""
        if cmd_args is None:
            return {}

        if not isinstance(cmd_args, list):
            raise ValueError("Arguments must be provided as a list")

        if not all(isinstance(arg, str) for arg in cmd_args):
            raise ValueError("All arguments must be strings")

        # Block shell-meta characters that could enable command injection
        # if a downstream consumer fails to quote them properly.
        dangerous_chars = [";", "&", "|", "`", "$", "(", ")", "{", "}"]
        for arg in cmd_args:
            if any(char in arg for char in dangerous_chars):
                raise ValueError(f"Dangerous character detected in argument: {arg}")

        return {"args": cmd_args}

    def _validate_analysis_type(self, analysis_type: str, valid_types: List[str]):
        """Validate analysis type with better error messages."""
        if analysis_type not in valid_types:
            raise ValueError(
                f"Invalid analysis_type '{analysis_type}'. "
                f"Must be one of: {', '.join(valid_types)}"
            )

    def _prepare_file_upload(
        self,
        file_path: Union[str, Path, BinaryIO],
        file_name: Optional[str] = None,
    ):
        """Prepare a multipart `file` payload from a path or file-like."""
        if isinstance(file_path, (str, Path)):
            path = Path(file_path)
            if not path.exists():
                raise LitterBoxError(f"File not found: {path}")
            if not path.is_file():
                raise LitterBoxError(f"Path is not a file: {path}")
            return {"file": (file_name or path.name, open(path, "rb"), "application/octet-stream")}

        if not file_name:
            raise ValueError("file_name is required when uploading file-like objects")
        return {"file": (file_name, file_path, "application/octet-stream")}
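The argument filter rejects shell metacharacters outright instead of trying to quote them; a standalone reproduction of that contract (same character list, hypothetical free function rather than the mixin method) shows the accept/reject behavior:

```python
# Same blocklist as _validate_command_args above.
DANGEROUS_CHARS = [";", "&", "|", "`", "$", "(", ")", "{", "}"]

def validate_command_args(cmd_args):
    """Standalone copy of the _validate_command_args contract."""
    if cmd_args is None:
        return {}
    if not isinstance(cmd_args, list) or not all(isinstance(a, str) for a in cmd_args):
        raise ValueError("Arguments must be a list of strings")
    for arg in cmd_args:
        if any(ch in arg for ch in DANGEROUS_CHARS):
            raise ValueError(f"Dangerous character detected in argument: {arg}")
    return {"args": cmd_args}

print(validate_command_args(["-r", "payload.bin"]))
# -> {'args': ['-r', 'payload.bin']}
try:
    validate_command_args(["a; rm -rf /"])
except ValueError as e:
    print("rejected:", e)
```

Rejecting rather than quoting is deliberate: a payload argument containing `;` or `$` is more likely an injection attempt than a legitimate flag.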
@@ -0,0 +1,38 @@
"""Final `LitterBoxClient` — composes the per-domain mixins onto
`_BaseClient`. Public callers import this name from `litterbox_client`
(re-exported in __init__) and treat it as a single class.
"""

from .analysis import AnalysisMixin
from .base import _BaseClient
from .doppelganger import DoppelgangerMixin
from .edr import EdrMixin
from .files import FilesMixin
from .reports import ReportsMixin
from .results import ResultsMixin
from .system import SystemMixin


class LitterBoxClient(
    FilesMixin,
    AnalysisMixin,
    DoppelgangerMixin,
    ResultsMixin,
    EdrMixin,
    ReportsMixin,
    SystemMixin,
    _BaseClient,
):
    """Python client for the LitterBox payload-analysis sandbox API.

    The class itself is a thin composition: each domain (files,
    analysis, doppelganger, results, EDR, reports, system) lives in its
    own mixin module under `litterbox_client/`, and they all share the
    `_BaseClient`'s requests Session + `_make_request` helpers.

    Usage stays the same as the pre-split single-file client:

        with LitterBoxClient("http://localhost:1337") as c:
            result = c.upload_file("malware.exe")
            ...
    """
@@ -0,0 +1,85 @@
"""Doppelganger operations — Blender system snapshot + FuzzyHash similarity.

The single `/doppelganger` server endpoint multiplexes both engines and
the four operation kinds (`scan` / `compare` / `analyze` / `create_db`).
We expose one unified entry point plus four convenience wrappers.
"""

from typing import Dict, List, Optional


class DoppelgangerMixin:
    def doppelganger_operation(
        self,
        analysis_type: str,
        operation: str,
        file_hash: Optional[str] = None,
        folder_path: Optional[str] = None,
        extensions: Optional[List[str]] = None,
        threshold: int = 1,
    ) -> Dict:
        """Unified Doppelganger entry point. The four convenience wrappers
        below cover the operations that actually have CLI shape."""
        self._validate_doppelganger_params(analysis_type, operation, file_hash, folder_path)

        # Comparison reads use GET so they're cheap to retry.
        if file_hash and operation in ["compare", "analyze"]:
            params = {"type": analysis_type, "hash": file_hash}
            if operation == "analyze" and analysis_type == "fuzzy":
                params["threshold"] = threshold
            response = self._make_request("GET", "/doppelganger", params=params)
            return response.json()

        # Mutating ops (scan / create_db / analyze with extra args) go via POST.
        data = {"type": analysis_type, "operation": operation}

        if operation == "create_db":
            data["folder_path"] = folder_path
            if extensions:
                data["extensions"] = extensions
        elif operation == "analyze":
            data["hash"] = file_hash
            data["threshold"] = threshold

        response = self._make_request("POST", "/doppelganger", json=data)
        return response.json()

    def run_blender_scan(self) -> Dict:
        """Run a system-wide Blender host snapshot."""
        return self.doppelganger_operation("blender", "scan")

    def compare_with_blender(self, file_hash: str) -> Dict:
        """Compare a file against the latest Blender host snapshot."""
        return self.doppelganger_operation("blender", "compare", file_hash=file_hash)

    def create_fuzzy_database(
        self, folder_path: str, extensions: Optional[List[str]] = None,
    ) -> Dict:
        """(Re)build the FuzzyHash baseline DB from a folder of references."""
        return self.doppelganger_operation(
            "fuzzy", "create_db", folder_path=folder_path, extensions=extensions,
        )

    def analyze_with_fuzzy(self, file_hash: str, threshold: int = 1) -> Dict:
        """Score a payload's similarity to the baseline via fuzzy hashing."""
        return self.doppelganger_operation(
            "fuzzy", "analyze", file_hash=file_hash, threshold=threshold,
        )

    # ---- internal --------------------------------------------------------

    @staticmethod
    def _validate_doppelganger_params(
        analysis_type: str,
        operation: str,
        file_hash: Optional[str],
        folder_path: Optional[str],
    ):
        if analysis_type not in ["blender", "fuzzy"]:
            raise ValueError("analysis_type must be either 'blender' or 'fuzzy'")
        if operation == "scan" and analysis_type != "blender":
            raise ValueError("scan operation is only available for blender analysis")
        if operation == "create_db" and not folder_path:
            raise ValueError("folder_path is required for create_db operation")
        if operation == "analyze" and not file_hash:
            raise ValueError("file_hash is required for analyze operation")
@@ -0,0 +1,119 @@
"""EDR operations — Whiskers + Elastic Defend + Fibratus profiles.

Two split-phase analyzer flavours live behind one set of endpoints:
* `kind: elastic` — LitterBox queries an Elastic stack for alerts.
* `kind: fibratus` — LitterBox polls Whiskers's event-log endpoint for
  Fibratus rule matches (DetonatorAgent shape).

The CLI / MCP helpers don't need to care about the kind for dispatch
(`analyze_edr` works for both); they only diverge for the
`fibratus_alerts_since` test helper.
"""

import time
from typing import Dict, List, Optional

from .exceptions import LitterBoxAPIError


class EdrMixin:
    def list_edr_profiles(self) -> Dict:
        """List EDR profiles registered under Config/edr_profiles/."""
        response = self._make_request("GET", "/api/edr/profiles")
        return response.json()

    def get_edr_agents_status(self) -> Dict:
        """Latest reachability snapshot for every registered EDR profile.

        Server-side TTL-cached + pre-warmed by a background poller, so
        this is effectively instant under steady-state operation."""
        response = self._make_request("GET", "/api/edr/agents/status")
        return response.json()

    def analyze_edr(
        self,
        file_hash: str,
        profile: str,
        cmd_args: Optional[List[str]] = None,
        xor_key: Optional[int] = None,
    ) -> Dict:
        """Dispatch a payload to a registered EDR profile.

        Returns the Phase-1 result immediately (status='polling_alerts'
        on a successful exec; 'blocked_by_av' / 'agent_unreachable' /
        'busy' / 'error' otherwise). Phase-2 (alert correlation) runs in
        a server-side daemon thread; poll
        `get_edr_results(file_hash, profile)` until status is no longer
        'polling_alerts', or use `wait_for_edr_completion` below.
        """
        data: Dict = {}
        if cmd_args:
            data.update(self._validate_command_args(cmd_args))
        if xor_key is not None:
            if not 0 <= xor_key <= 255:
                raise ValueError(f"xor_key must be 0-255, got {xor_key}")
            data["xor_key"] = xor_key

        response = self._make_request(
            "POST", f"/analyze/edr/{profile}/{file_hash}", json=data,
        )
        return response.json()

    def get_edr_results(self, file_hash: str, profile: str) -> Dict:
        """Fetch the saved findings for a specific EDR profile run."""
        response = self._make_request(
            "GET", f"/api/results/edr/{profile}/{file_hash}",
        )
        return response.json()

    def get_edr_index(self, file_hash: str) -> Dict:
        """Fetch every saved EDR run for a target (one entry per profile)."""
        response = self._make_request("GET", f"/api/results/edr/{file_hash}")
        return response.json()

    def wait_for_edr_completion(
        self,
        file_hash: str,
        profile: str,
        interval: float = 3.0,
        timeout: float = 180.0,
    ) -> Dict:
        """Block until Phase-2 settles, the saved JSON appears for the
        first time, or `timeout` elapses. Returns the last-seen findings
        dict (may still be 'polling_alerts' on timeout — caller decides)."""
        deadline = time.monotonic() + timeout
        last: Optional[Dict] = None
        while time.monotonic() < deadline:
            try:
                last = self.get_edr_results(file_hash, profile)
                if (last or {}).get("status") and last.get("status") != "polling_alerts":
                    return last
            except LitterBoxAPIError as e:
                # 404 just means Phase-1 hasn't kicked off yet — keep polling.
                if e.status_code != 404:
                    raise
            time.sleep(interval)
        return last or {"status": "timeout", "error": f"Phase-2 timeout after {timeout}s"}

    def fibratus_alerts_since(
        self,
        profile: str,
        since_iso: str,
        until_iso: Optional[str] = None,
    ) -> Dict:
        """Test/debug helper: ask LitterBox to passthrough-query the
        Whiskers agent's `/api/alerts/fibratus/since` for `profile`.

        Useful right after Fibratus is installed on a new VM — you can
        verify the Fibratus → Application event log → Whiskers wire end
        to end without dispatching a payload. Returns the agent's raw
        `{supported, events: [...]}` shape; `data` strings inside each
        event are unparsed JSON the caller can deserialize.
        """
        params = {"from": since_iso}
        if until_iso:
            params["until"] = until_iso
        response = self._make_request(
            "GET", f"/api/edr/fibratus/{profile}/alerts/since", params=params,
        )
        return response.json()
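The loop shape of `wait_for_edr_completion` is a generic poll-until-settled pattern; a self-contained sketch, with a stub `fetch` standing in for `get_edr_results`:

```python
import time

def wait_until_settled(fetch, interval=0.01, timeout=1.0):
    """Poll `fetch` until its status leaves 'polling_alerts' or the
    deadline passes -- the same loop shape as wait_for_edr_completion."""
    deadline = time.monotonic() + timeout
    last = None
    while time.monotonic() < deadline:
        last = fetch()
        status = (last or {}).get("status")
        if status and status != "polling_alerts":
            return last
        time.sleep(interval)
    return last or {"status": "timeout"}

# Stub fetcher: Phase-2 "settles" on the third poll.
calls = {"n": 0}
def fake_fetch():
    calls["n"] += 1
    return ({"status": "completed"} if calls["n"] >= 3
            else {"status": "polling_alerts"})
```

Returning the last-seen dict on timeout (rather than raising) leaves the "still polling vs. genuinely stuck" decision to the caller, which matches the mixin's contract.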
@@ -0,0 +1,31 @@
"""Exception types raised by LitterBoxClient.

Two layers:
* `LitterBoxError` — base class. Thrown for transport-level failures
  (DNS, connection refused, TLS, file-not-found on upload, etc).
* `LitterBoxAPIError` — a subclass for HTTP-error responses where
  LitterBox itself returned a structured error. Carries the parsed
  body and status code so callers can branch on 404 / 409 / etc.
"""

from typing import Dict, Optional


class LitterBoxError(Exception):
    """Base exception for LitterBox client errors."""


class LitterBoxAPIError(LitterBoxError):
    """Exception for API-level errors (HTTP 4xx / 5xx with a parsed body)."""

    def __init__(
        self,
        message: str,
        status_code: Optional[int] = None,
        response: Optional[Dict] = None,
    ):
        super().__init__(message)
        self.status_code = status_code
        self.response = response
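The `status_code` attribute is what makes the two-layer split useful: callers can branch on specific HTTP codes without string-matching. A sketch with the two classes re-declared inline so it stands alone (`fetch_or_none` is a hypothetical helper, mirroring how the EDR mixin treats 404 as "not yet"):

```python
class LitterBoxError(Exception):
    """Base: transport-level failures."""

class LitterBoxAPIError(LitterBoxError):
    """HTTP-level failures carrying the parsed body and status code."""
    def __init__(self, message, status_code=None, response=None):
        super().__init__(message)
        self.status_code = status_code
        self.response = response

def fetch_or_none(fetch):
    """Treat 404 as 'no result yet'; re-raise every other API error."""
    try:
        return fetch()
    except LitterBoxAPIError as e:
        if e.status_code == 404:
            return None
        raise
```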
@@ -0,0 +1,37 @@
"""File-management operations: upload, PID validation, delete."""

from pathlib import Path
from typing import BinaryIO, Dict, Optional, Union


class FilesMixin:
    """Upload a file (or validate / delete one)."""

    def upload_file(
        self,
        file_path: Union[str, Path, BinaryIO],
        file_name: Optional[str] = None,
    ) -> Dict:
        """Upload a file for analysis. Returns the server's `file_info`
        block including the canonical MD5 hash you'll feed into other
        endpoints.
        """
        files = self._prepare_file_upload(file_path, file_name)
        try:
            response = self._make_request("POST", "/upload", files=files)
            return response.json()
        finally:
            # If we opened the file ourselves (path-based), close it. For
            # caller-provided BinaryIO objects, the caller owns the handle.
            if isinstance(file_path, (str, Path)):
                files["file"][1].close()

    def validate_process(self, pid) -> Dict:
        """Validate that a PID exists and is accessible for dynamic analysis."""
        response = self._make_request("POST", f"/validate/{pid}")
        return response.json()

    def delete_file(self, file_hash: str) -> Dict:
        """Delete a file and all of its analysis results."""
        response = self._make_request("DELETE", f"/file/{file_hash}")
        return response.json()
@@ -0,0 +1,87 @@
"""Report retrieval — inline HTML, save-to-disk, open-in-browser."""

import os
import re
import tempfile
import webbrowser
from datetime import datetime
from typing import Optional, Union

import requests

from .exceptions import LitterBoxError


class ReportsMixin:
    def get_report(self, target: str, download: bool = False) -> Union[str, bytes]:
        """Fetch the analysis report. Returns the HTML string by default,
        or raw bytes when `download=True` (useful for piping)."""
        params = {"download": "true" if download else "false"}
        response = self._make_request("GET", f"/api/report/{target}", params=params)
        return response.content if download else response.text

    def download_report(self, target: str, output_path: Optional[str] = None) -> str:
        """Download the report and save to disk. Returns the path written.

        Streams chunks to disk so multi-MB reports don't sit in memory.
        """
        response = self._make_request(
            "GET", f"/api/report/{target}",
            params={"download": "true"}, stream=True,
        )

        filename = self._extract_filename_from_response(response, target)

        # If `output_path` is a directory, save inside it; else use it as-is.
        if output_path:
            save_path = (
                os.path.join(output_path, filename)
                if os.path.isdir(output_path)
                else output_path
            )
        else:
            save_path = filename

        try:
            with open(save_path, "wb") as f:
                for chunk in response.iter_content(chunk_size=8192):
                    if chunk:  # filter out keep-alive chunks
                        f.write(chunk)
            self.logger.info(f"Report saved to {save_path}")
            return save_path
        except Exception as e:
            raise LitterBoxError(f"Failed to save report: {str(e)}")

    def open_report_in_browser(self, target: str) -> bool:
        """Render the report and open it in the default browser via a
        temp file. Returns False on any failure (logged)."""
        try:
            report_content = self.get_report(target, download=False)

            fd, path = tempfile.mkstemp(suffix=".html", prefix="litterbox_report_")
            try:
                with os.fdopen(fd, "w", encoding="utf-8") as tmp:
                    tmp.write(report_content)
                webbrowser.open("file://" + path)
                self.logger.info(f"Report opened in browser from {path}")
                return True
            except Exception as e:
                self.logger.error(f"Failed to open report in browser: {str(e)}")
                return False
        except Exception as e:
            self.logger.error(f"Failed to generate report: {str(e)}")
            return False

    # ---- internal -------------------------------------------------------

    @staticmethod
    def _extract_filename_from_response(response: requests.Response, target: str) -> str:
        """Pull the filename from the Content-Disposition header, or
        synthesize one with a timestamp for fallback."""
        content_disposition = response.headers.get("Content-Disposition", "")
        if "filename=" in content_disposition:
            match = re.search(r'filename="([^"]+)"', content_disposition)
            if match:
                return match.group(1)
        timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
        return f"LitterBox_Report_{target[:8]}_{timestamp}.html"
@@ -0,0 +1,41 @@
"""Read saved analysis results.

Per-tool getters (file_info / static / dynamic / holygrail / risk) plus
`get_results`, the legacy multi-purpose handle that backs the
`results --type` CLI.
"""

from typing import Dict


class ResultsMixin:
    def get_results(self, target: str, analysis_type: str) -> Dict:
        """Get results for a specific analysis type via the page route."""
        self._validate_analysis_type(analysis_type, ["static", "dynamic", "info"])
        response = self._make_request("GET", f"/results/{analysis_type}/{target}")
        return response.json()

    def get_file_info(self, target: str) -> Dict:
        """File metadata: type, size, hashes, entropy, PE structure, etc."""
        response = self._make_request("GET", f"/api/results/info/{target}")
        return response.json()

    def get_static_results(self, target: str) -> Dict:
        """Static analysis output (YARA / CheckPlz / Stringnalyzer)."""
        response = self._make_request("GET", f"/api/results/static/{target}")
        return response.json()

    def get_dynamic_results(self, target: str) -> Dict:
        """Dynamic analysis output (memory scanners + behavioral telemetry)."""
        response = self._make_request("GET", f"/api/results/dynamic/{target}")
        return response.json()

    def get_holygrail_results(self, target: str) -> Dict:
        """HolyGrail BYOVD output for a driver."""
        response = self._make_request("GET", f"/api/results/holygrail/{target}")
        return response.json()

    def get_risk_assessment(self, target: str) -> Dict:
        """Computed detection assessment: score, level, triggering indicators."""
        response = self._make_request("GET", f"/api/results/risk/{target}")
        return response.json()
@@ -0,0 +1,113 @@
"""System health, fleet summary, cleanup, and the parallel
`get_comprehensive_results` aggregator."""

from concurrent.futures import ThreadPoolExecutor
from typing import Dict

import requests
from urllib.parse import urljoin

from .exceptions import LitterBoxAPIError, LitterBoxError


class SystemMixin:
    # ---- health & inventory --------------------------------------------

    def check_health(self) -> Dict:
        """Lightweight liveness probe of the LitterBox service.

        Bypasses the Session's retry adapter — we want a fast yes/no,
        not a probe that retries through transient failures."""
        url = urljoin(self.base_url, "/health")
        try:
            response = requests.get(url, timeout=self.timeout, verify=self.verify_ssl)
            if response.status_code in (200, 503):  # OK and degraded both valid
                return response.json()
            response.raise_for_status()
        except requests.exceptions.RequestException as e:
            self.logger.error(f"Health check failed: {e}")
            return {
                "status": "error",
                "message": "Unable to connect to service",
                "details": str(e),
            }

    def get_files_summary(self) -> Dict:
        """Get summary of all analyzed files and processes."""
        response = self._make_request("GET", "/files")
        return response.json()

    def get_system_status(self) -> Dict:
        """Combined health + files-summary snapshot for `status --full`."""
        try:
            health = self.check_health()
            files_summary = self.get_files_summary()
            return {
                "health": health,
                "files_summary": files_summary,
                "status": "healthy" if health.get("status") == "ok" else "degraded",
            }
        except Exception as e:
            return {
                "health": {"status": "error", "error": str(e)},
                "files_summary": None,
                "status": "error",
            }

    def get_scanners_status(self) -> Dict:
        """Inventory of configured analyzers and whether their binaries
        exist on disk (drives the dashboard scanner panel)."""
        response = self._make_request("GET", "/api/system/scanners")
        return response.json()

    # ---- destructive ---------------------------------------------------

    def cleanup(
        self,
        include_uploads: bool = True,
        include_results: bool = True,
        include_analysis: bool = True,
    ) -> Dict:
        """Wipe analysis artifacts. Destructive — confirm with the user
        before calling unless they explicitly asked."""
        data = {
            "cleanup_uploads": include_uploads,
            "cleanup_results": include_results,
            "cleanup_analysis": include_analysis,
        }
        response = self._make_request("POST", "/cleanup", json=data)
        return response.json()

    # ---- multi-fetch convenience --------------------------------------

    def get_comprehensive_results(self, target: str) -> Dict:
        """Get all available results for a target in one call.

        The five GETs are independent, so they fan out across a small
        thread pool — wall time on a populated target drops from
        sequential (~5×) to roughly the slowest single response.
        Includes EDR runs (across every profile) when present.
        """
        fetchers = [
            ("file_info", self.get_file_info),
            ("static_results", self.get_static_results),
            ("dynamic_results", self.get_dynamic_results),
            ("holygrail_results", self.get_holygrail_results),
            ("edr_index", self.get_edr_index),
        ]

        def _safe_fetch(method):
            try:
                return method(target)
            except LitterBoxAPIError as e:
                # 404 means "this analysis type wasn't run for this target",
                # which is normal and shouldn't bubble up as an error.
                return None if e.status_code == 404 else {"error": str(e)}
            except LitterBoxError as e:
                return {"error": str(e)}

        with ThreadPoolExecutor(max_workers=len(fetchers)) as executor:
            futures = {key: executor.submit(_safe_fetch, fn) for key, fn in fetchers}
            results = {key: fut.result() for key, fut in futures.items()}

        results["target"] = target
        return results
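The fan-out in `get_comprehensive_results` is plain `ThreadPoolExecutor` usage; a self-contained sketch with stub fetchers in place of the HTTP getters:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_all(fetchers, target):
    """Run independent (key, fetcher) pairs concurrently; wall time
    tracks the slowest fetcher instead of the sum of all of them."""
    with ThreadPoolExecutor(max_workers=len(fetchers)) as executor:
        futures = {key: executor.submit(fn, target) for key, fn in fetchers}
        # fut.result() re-raises any exception from the worker thread,
        # which is why the real code wraps each fetcher in _safe_fetch.
        return {key: fut.result() for key, fut in futures.items()}

# Stub fetchers standing in for the per-tool HTTP getters.
stub_fetchers = [
    ("file_info", lambda t: {"target": t, "kind": "info"}),
    ("static_results", lambda t: {"target": t, "kind": "static"}),
]
```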
@@ -0,0 +1,320 @@
//! `GET /api/alerts/fibratus/since?from=<ISO8601>&until=<ISO8601>`
//!
//! Pulls Fibratus rule-match alerts from the Windows Application event
//! log within the requested UTC window. Fibratus, when configured with
//! `alertsenders.eventlog: {enabled: true, format: json}`, writes each
//! rule match as a record under `Provider Name="Fibratus"` whose
//! `<EventData><Data>...</Data></EventData>` field carries the alert as
//! a JSON string.
//!
//! We shell out to the built-in `wevtutil` CLI rather than linking
//! against the Win32 Event Log API:
//! - no Rust dependency on the heavyweight `windows` crate
//! - wevtutil ships on every supported Windows version
//! - the XPath query Fibratus needs is well-supported and trivial
//!
//! Whiskers does NO parsing of the JSON `<Data>` payload. We just
//! extract the `<TimeCreated>`, `<EventID>`, and `<Data>` text per
//! event and ship them back. LitterBox parses on its side, matching
//! DetonatorAgent's split.

use std::process::Stdio;

use axum::extract::Query;
use axum::http::StatusCode;
use axum::Json;
use serde::{Deserialize, Serialize};
use tokio::process::Command;

use crate::api::info::has_fibratus;

#[derive(Debug, Deserialize)]
pub struct FibratusAlertsQuery {
    /// Inclusive lower bound on `TimeCreated/@SystemTime`, ISO8601 / RFC3339.
    pub from: String,
    /// Inclusive upper bound. Optional; we substitute "now" if missing.
    pub until: Option<String>,
}

#[derive(Debug, Serialize)]
pub struct FibratusAlert {
    pub time_created: String,
    pub event_id: u32,
    /// Raw JSON string from the event's `<Data>` field. Whiskers does not
    /// parse this; LitterBox does.
    pub data: String,
}

#[derive(Debug, Serialize)]
pub struct FibratusAlertsResponse {
    pub supported: bool,
    pub events: Vec<FibratusAlert>,
}

pub async fn since(
    Query(q): Query<FibratusAlertsQuery>,
) -> Result<Json<FibratusAlertsResponse>, (StatusCode, Json<serde_json::Value>)> {
    if !has_fibratus() {
        return Ok(Json(FibratusAlertsResponse {
            supported: false,
            events: Vec::new(),
        }));
    }

    // Match DetonatorAgent's FibratusEdrPlugin.cs format exactly:
    //   start: yyyy-MM-ddTHH:mm:ss.000000000Z (rounded DOWN to the second
    //          so we catch events that fired in the same second as run_start)
    //   end:   yyyy-MM-ddTHH:mm:ss.fffffffffZ (full 9-digit precision)
    let from_xpath = parse_iso_round_down(&q.from)
        .ok_or_else(|| bad_request("invalid `from` timestamp (expected ISO8601)"))?;
    let until_xpath = match q.until.as_deref() {
        Some(u) => parse_iso_full_precision(u)
            .ok_or_else(|| bad_request("invalid `until` timestamp (expected ISO8601)"))?,
        // No explicit upper bound → use "now". Format with full precision.
        None => format_full_precision(chrono::Utc::now()),
    };

    // XPath shape (incl. spaces around operators) mirrors DetonatorAgent's
    // FibratusEdrPlugin.cs verbatim — that exact query is known to work
    // against the Application log on recent Windows builds.
    let xpath = format!(
        "*[System[Provider[@Name='Fibratus'] and TimeCreated[@SystemTime >= '{from_xpath}' and @SystemTime <= '{until_xpath}']]]"
    );
    let from = from_xpath.clone();
    let until = until_xpath.clone();

    tracing::info!(from, until, "Querying Fibratus alerts from Application log");

    let output = Command::new("wevtutil.exe")
        .arg("qe")
        .arg("Application")
        .arg(format!("/q:{xpath}"))
        .arg("/f:xml")
        .arg("/e:Events")
        .stdin(Stdio::null())
        .stdout(Stdio::piped())
        .stderr(Stdio::piped())
        .output()
        .await;

    let output = match output {
        Ok(o) => o,
        Err(e) => {
            tracing::error!(error = %e, "wevtutil spawn failed");
            return Err((
                StatusCode::INTERNAL_SERVER_ERROR,
                Json(serde_json::json!({
                    "error": format!("wevtutil spawn failed: {e}"),
                })),
            ));
        }
    };

    if !output.status.success() {
        let stderr = String::from_utf8_lossy(&output.stderr).into_owned();
        tracing::error!(stderr, "wevtutil exited non-zero");
        return Err((
            StatusCode::INTERNAL_SERVER_ERROR,
            Json(serde_json::json!({
                "error": format!("wevtutil exited {:?}", output.status.code()),
                "stderr": stderr,
            })),
        ));
    }

    let xml = String::from_utf8_lossy(&output.stdout);
    let events = parse_events(&xml);
    tracing::info!(count = events.len(), "Fibratus alerts returned");
    Ok(Json(FibratusAlertsResponse { supported: true, events }))
}
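The XPath window string is easy to reproduce off-box when hand-testing `wevtutil qe Application /q:...` against a VM; a sketch in Python (the handler's language is Rust, but the string shape is identical):

```python
def fibratus_xpath(from_ts: str, until_ts: str) -> str:
    """Application-log query selecting Fibratus-provider events inside
    an inclusive SystemTime window (same shape as the handler above)."""
    return (
        "*[System[Provider[@Name='Fibratus'] and "
        f"TimeCreated[@SystemTime >= '{from_ts}' and "
        f"@SystemTime <= '{until_ts}']]]"
    )
```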
/// Parse an inbound RFC3339 timestamp, then format as the round-DOWN
/// (zero-fractional) UTC form DetonatorAgent uses for `from`:
/// `yyyy-MM-ddTHH:mm:ss.000000000Z`
fn parse_iso_round_down(raw: &str) -> Option<String> {
    let dt = chrono::DateTime::parse_from_rfc3339(raw).ok()?;
    let utc = dt.with_timezone(&chrono::Utc);
    Some(utc.format("%Y-%m-%dT%H:%M:%S.000000000Z").to_string())
}

/// Parse an inbound RFC3339 timestamp, format with full 9-digit
/// fractional-second precision in UTC.
fn parse_iso_full_precision(raw: &str) -> Option<String> {
    let dt = chrono::DateTime::parse_from_rfc3339(raw).ok()?;
    Some(format_full_precision(dt.with_timezone(&chrono::Utc)))
}

fn format_full_precision(dt: chrono::DateTime<chrono::Utc>) -> String {
    dt.format("%Y-%m-%dT%H:%M:%S%.9fZ").to_string()
}
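The round-down rule (zero the fractional seconds on the lower bound so events fired in the same second as run_start are not missed) can be checked with a quick Python equivalent:

```python
from datetime import datetime, timezone

def round_down_to_second(iso_ts: str) -> str:
    """Zero the fractional seconds and emit the 9-digit zero-padded
    form used for the XPath lower bound."""
    # fromisoformat() in older Pythons rejects a bare 'Z' suffix.
    dt = datetime.fromisoformat(iso_ts.replace("Z", "+00:00"))
    dt = dt.astimezone(timezone.utc).replace(microsecond=0)
    return dt.strftime("%Y-%m-%dT%H:%M:%S") + ".000000000Z"
```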
/// Extract per-event `(time_created, event_id, data)` tuples from a
/// `<Events>...<Event>...</Event>...</Events>` XML blob. We don't need a
/// full XML parser for this — the format wevtutil emits is deterministic,
/// and we want exactly three fields per event. Quick string-scanning
/// keeps the binary lean.
fn parse_events(xml: &str) -> Vec<FibratusAlert> {
    let mut out = Vec::new();
    let mut cursor = 0usize;
    while let Some(start_rel) = xml[cursor..].find("<Event ") {
        let event_start = cursor + start_rel;
        let event_end_rel = match xml[event_start..].find("</Event>") {
            Some(p) => p,
            None => break,
        };
        let event_end = event_start + event_end_rel + "</Event>".len();
        let event_block = &xml[event_start..event_end];
        cursor = event_end;

        let time_created = extract_attr(event_block, "<TimeCreated", "SystemTime")
            .unwrap_or_default();
        let event_id = extract_text_between(event_block, "<EventID", ">", "</EventID>")
            .and_then(|s| s.trim().parse::<u32>().ok())
            .unwrap_or(0);
        // Fibratus's eventlog sender writes the JSON as ONE `<Data>` element
        // with no Name attribute. Be permissive about the open-tag attrs
        // since some Windows builds add them; we just want the inner text.
        let data = extract_text_between(event_block, "<Data", ">", "</Data>")
            .unwrap_or_default();
        if data.is_empty() {
            continue;
        }
        out.push(FibratusAlert {
            time_created,
            event_id,
            data: xml_unescape(&data),
        });
    }
    out
}
/// Find an attribute value inside an element's open tag, accepting both
/// single- and double-quoted forms. wevtutil's XML output uses single
/// quotes (e.g. `<TimeCreated SystemTime='2026-04-30T...'/>`); our test
/// fixtures used double; we tolerate either.
fn extract_attr(haystack: &str, open_marker: &str, attr: &str) -> Option<String> {
    let tag_start = haystack.find(open_marker)?;
    let tag_end = haystack[tag_start..].find('>')?;
    let tag = &haystack[tag_start..tag_start + tag_end];
    for quote in &['"', '\''] {
        let attr_marker = format!("{attr}={quote}");
        if let Some(rel) = tag.find(&attr_marker) {
            let aval_start = rel + attr_marker.len();
            if let Some(end_rel) = tag[aval_start..].find(*quote) {
                return Some(tag[aval_start..aval_start + end_rel].to_string());
            }
        }
    }
    None
}

/// Pull the text between an open tag and its close tag.
/// `extract_text_between(haystack, "<EventID", ">", "</EventID>")` returns
/// the content between the first `<EventID...>` open and `</EventID>`.
fn extract_text_between(
    haystack: &str,
    open_marker: &str,
    open_end: &str,
    close_marker: &str,
) -> Option<String> {
    let mstart = haystack.find(open_marker)?;
    let after_marker = &haystack[mstart..];
    let open_end_rel = after_marker.find(open_end)?;
    let body_start = mstart + open_end_rel + open_end.len();
    let body_end_rel = haystack[body_start..].find(close_marker)?;
    Some(haystack[body_start..body_start + body_end_rel].to_string())
}
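The quote-tolerance rule (wevtutil emits single-quoted attributes; hand-written fixtures often use double) is the part worth a quick cross-check; a hypothetical regex-based equivalent in Python:

```python
import re

def extract_attr(block: str, tag: str, attr: str):
    """Return attr's value from tag's open tag, accepting either quote
    style: the backreference \\1 forces the close quote to match the
    open quote."""
    m = re.search(rf"<{tag}\b[^>]*\b{attr}=(['\"])(.*?)\1", block)
    return m.group(2) if m else None
```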
/// Decode the XML escape sequences wevtutil emits inside `<Data>` blobs.
/// Fibratus's JSON commonly contains `&quot;`, `&amp;`, `&lt;`, `&gt;`
/// after passing through the event log writer; LitterBox needs a clean
/// JSON string to parse.
fn xml_unescape(s: &str) -> String {
    s.replace("&quot;", "\"")
        .replace("&apos;", "'")
        .replace("&lt;", "<")
        .replace("&gt;", ">")
        .replace("&amp;", "&")
}
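One subtlety of hand-rolled entity decoding: `&amp;` must be decoded last, otherwise a double-escaped sequence like `&amp;lt;` would collapse all the way to `<`. A Python sketch of the same five-entity decode:

```python
def xml_unescape(s: str) -> str:
    """Decode the five predefined XML entities. `&amp;` is decoded
    LAST so that double-escaped text stays single-escaped."""
    for entity, char in (("&quot;", '"'), ("&apos;", "'"),
                         ("&lt;", "<"), ("&gt;", ">"), ("&amp;", "&")):
        s = s.replace(entity, char)
    return s
```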
fn bad_request(msg: &str) -> (StatusCode, Json<serde_json::Value>) {
    (
        StatusCode::BAD_REQUEST,
        Json(serde_json::json!({ "error": msg })),
    )
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn parses_a_minimal_event_block() {
        let xml = r#"<Events>
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <System>
    <Provider Name="Fibratus" />
    <EventID Qualifiers="8192">62873</EventID>
    <TimeCreated SystemTime="2026-04-30T12:34:56.789Z" />
  </System>
  <EventData>
    <Data>{"title": "rule X", "severity": "high"}</Data>
  </EventData>
</Event>
</Events>"#;
        let events = parse_events(xml);
        assert_eq!(events.len(), 1);
        assert_eq!(events[0].time_created, "2026-04-30T12:34:56.789Z");
        assert_eq!(events[0].event_id, 62873);
        assert!(events[0].data.contains("rule X"));
        assert!(events[0].data.contains("\"severity\""));
    }

    #[test]
    fn skips_events_with_empty_data() {
        let xml = r#"<Events>
<Event><System><EventID>1</EventID></System><EventData></EventData></Event>
</Events>"#;
        assert!(parse_events(xml).is_empty());
    }

    #[test]
    fn parses_wevtutil_style_single_quotes() {
        // wevtutil's actual output uses single-quoted attributes.
        let xml = r#"<Events>
<Event xmlns='http://schemas.microsoft.com/win/2004/08/events/event'>
  <System>
    <Provider Name='Fibratus'/>
    <EventID Qualifiers='8192'>18100</EventID>
    <TimeCreated SystemTime='2026-04-30T11:34:56.7890123Z'/>
  </System>
  <EventData>
    <Data>{"title": "Suspicious DLL load"}</Data>
  </EventData>
</Event>
</Events>"#;
        let events = parse_events(xml);
        assert_eq!(events.len(), 1);
        assert_eq!(events[0].time_created, "2026-04-30T11:34:56.7890123Z");
        assert_eq!(events[0].event_id, 18100);
        assert!(events[0].data.contains("Suspicious DLL load"));
    }

    #[test]
    fn iso_round_down_zeroes_fractional() {
        let s = parse_iso_round_down("2026-04-30T12:34:56.789012+00:00").unwrap();
        assert_eq!(s, "2026-04-30T12:34:56.000000000Z");
    }

    #[test]
    fn iso_full_precision_keeps_nanos() {
        let s = parse_iso_full_precision("2026-04-30T12:34:56.789012+00:00").unwrap();
        assert!(s.starts_with("2026-04-30T12:34:56."));
        assert!(s.ends_with("Z"));
        assert!(s.contains("789012"));
    }
}
@@ -1,17 +1,29 @@
//! `GET /api/info` — agent self-reports its identity so LitterBox can
//! filter Elastic alerts by the host's actual hostname without the operator
//! configuring it manually.
//! configuring it manually. `telemetry_sources` lets the orchestrator
//! preflight which alert pipelines this VM can serve (currently: Fibratus
//! installed-or-not).

use std::path::Path;

use axum::Json;
use serde::Serialize;

const AGENT_VERSION: &str = env!("CARGO_PKG_VERSION");

/// Default Fibratus install path. LitterBox treats the presence of this
/// binary as the cheap probe for "this VM can serve Fibratus alerts."
const FIBRATUS_EXE: &str = r"C:\Program Files\Fibratus\Bin\fibratus.exe";

#[derive(Serialize)]
pub struct AgentInfo {
    pub hostname: String,
    pub os_version: String,
    pub agent_version: &'static str,
    /// Names of additional telemetry pipelines this VM can serve. The
    /// orchestrator dispatches a profile of `kind: fibratus` only when
    /// `"fibratus"` appears in this list.
    pub telemetry_sources: Vec<&'static str>,
}

pub async fn get_info() -> Json<AgentInfo> {
@@ -20,9 +32,21 @@ pub async fn get_info() -> Json<AgentInfo> {
|
||||
.unwrap_or_else(|_| "unknown".to_string());
|
||||
let os_version = os_info::get().to_string();
|
||||
|
||||
let mut telemetry_sources: Vec<&'static str> = Vec::new();
|
||||
if has_fibratus() {
|
||||
telemetry_sources.push("fibratus");
|
||||
}
|
||||
|
||||
Json(AgentInfo {
|
||||
hostname,
|
||||
os_version,
|
||||
agent_version: AGENT_VERSION,
|
||||
telemetry_sources,
|
||||
})
|
||||
}
|
||||
|
||||
/// Cheap "is Fibratus installed here?" check. Just a stat — we do not
|
||||
/// run the binary or query the event log on every /api/info hit.
|
||||
pub fn has_fibratus() -> bool {
|
||||
Path::new(FIBRATUS_EXE).exists()
|
||||
}
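On the LitterBox side, the preflight this field enables reduces to a membership check. A minimal sketch (the helper name `can_serve_fibratus` is hypothetical; the real dispatch logic lives in the orchestrator):

```python
def can_serve_fibratus(info: dict) -> bool:
    """True when an /api/info payload advertises the Fibratus pipeline,
    i.e. the agent found fibratus.exe at its default install path."""
    return "fibratus" in (info.get("telemetry_sources") or [])

# A `kind: fibratus` profile should only be dispatched when this holds.
print(can_serve_fibratus({"telemetry_sources": ["fibratus"]}))  # True
print(can_serve_fibratus({"telemetry_sources": []}))            # False
print(can_serve_fibratus({}))                                   # False
```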

@@ -1,6 +1,7 @@
//! REST endpoints, grouped by controller. Mirrors DetonatorAgent's
//! `Controllers/` layout.

pub mod alerts;
pub mod execute;
pub mod info;
pub mod lock;

@@ -129,6 +129,7 @@ async fn main() {
        .route("/api/logs/execution", get(api::logs::execution))
        .route("/api/logs/agent", get(api::logs::agent_logs))
        .route("/api/logs/agent", delete(api::logs::clear_agent_logs))
        .route("/api/alerts/fibratus/since", get(api::alerts::since))
        .with_state(state);

    let listener = tokio::net::TcpListener::bind(addr)

@@ -59,6 +59,11 @@ def create_app():
        config=app.config,
    )

    # Pre-warm the EDR-agent reachability cache so the dashboard never
    # waits for a fresh probe cycle. Idempotent — safe across reloads.
    from .services.edr_health import start_poller
    start_poller(app.extensions['litterbox'])

    # Register blueprints
    from .blueprints import (
        analysis_bp,

@@ -155,6 +155,47 @@ class AgentClient:
    def clear_agent_logs(self) -> dict:
        return self._delete("/api/logs/agent")

    # ---- alerts ---------------------------------------------------------

    def get_fibratus_alerts(self, since_iso: str, until_iso: str) -> dict:
        """Pull Fibratus rule-match alerts from the EDR VM's Application
        event log via Whiskers, filtered to the given UTC time window.

        Whiskers wevtutil-queries the Application log for events whose
        Provider name is `Fibratus` (Fibratus's eventlog alert sender
        writes them there when configured with `format: json`). The agent
        does no parsing — it returns the raw `<Data>` JSON strings from
        each matched event record.

        Returns a dict of shape:
            {
                "supported": bool,  # false on a non-Fibratus VM
                "events": [
                    {"time_created": "...", "event_id": int, "data": "<json>"},
                    ...
                ]
            }

        `supported: false` is a terminal state — Fibratus isn't installed
        on this VM, so the analyzer should give up and report the run as
        partial. `events: []` with `supported: true` just means no rule
        matches in the window yet (the analyzer keeps polling).
        """
        # `params=` URL-encodes properly. We can NOT inline the timestamps
        # into the URL string — an unencoded `+` in `+00:00` decodes to a
        # space on the server side, and Whiskers rejects the malformed
        # timestamp with HTTP 400.
        url = f"{self.agent_url}/api/alerts/fibratus/since"
        try:
            resp = self.session.get(
                url,
                params={"from": since_iso, "until": until_iso},
                timeout=self.timeout,
            )
        except requests.RequestException as exc:
            raise AgentUnreachable(f"GET {url}: {exc}") from exc
        return self._handle(resp, url)
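The encoding concern in that comment can be reproduced with the stdlib alone (requests performs equivalent escaping when given `params=`): `urlencode` escapes the UTC offset's `+` as `%2B`, while a raw `+` in a query string decodes back to a space.

```python
from urllib.parse import urlencode, parse_qs

# Proper encoding: the '+' in the UTC offset becomes %2B on the wire.
qs = urlencode({"from": "2026-04-30T12:00:00+00:00"})
print(qs)  # from=2026-04-30T12%3A00%3A00%2B00%3A00

# The failure mode: a raw '+' in a query string decodes to a space,
# which is no longer a valid ISO timestamp.
decoded = parse_qs("from=2026-04-30T12:00:00+00:00")["from"][0]
print(decoded)  # 2026-04-30T12:00:00 00:00
```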

    # ---- internals ------------------------------------------------------

    def _get(self, path: str) -> dict:

@@ -0,0 +1,874 @@
"""End-to-end orchestrator for one Fibratus-EDR run.

Pull-from-event-log model (same shape DetonatorAgent's FibratusEdrPlugin
uses):

* Whiskers stays a near-pure exec runner — its only Fibratus-aware bit is
  GET /api/alerts/fibratus/since which wevtutil-queries the Windows
  Application event log for `Provider=Fibratus` alert records.
* Fibratus on the EDR VM is configured with
  `alertsenders.eventlog: {enabled: true, format: json}`. Rule matches
  land in the Application log; the kernel event stream is unaffected.
* This analyzer: lock + exec + wait + kill + log-fetch (Phase 1), then
  polls Whiskers for new Fibratus alerts in the run's `[run_start, now]`
  window (Phase 2), filters by filename, normalizes into the same dict
  shape the saved-view renderer expects.

Per-payload flow:

1. AgentClient.get_info -> hostname (matches what Fibratus tags onto alerts)
2. AgentClient.lock_acquire
3. AgentClient.exec(payload, args) -- XOR-encoded on the wire
4. wait for exec to exit OR exec_timeout_seconds
5. AgentClient.kill (idempotent)
6. AgentClient.get_execution_logs
7. AgentClient.lock_release
8. Phase 2: poll AgentClient.get_fibratus_alerts(run_start, now) until
   the alert count is stable for SETTLE_SECONDS or wait budget elapses.
9. Build findings dict — same shape Elastic returns.

Shape compatibility: the alert dicts we emit follow the same keys the
ElasticEdrAnalyzer's Alert.to_dict() produces (title, severity,
detected_at, details, raw) so the existing UI renderer doesn't fork.
"""

import json as _json
import logging
import os
import secrets
import time
from datetime import datetime, timezone
from typing import List, Optional

from app.analyzers.base import BaseAnalyzer

from .agent_client import AgentBusy, AgentClient, AgentError, AgentUnreachable
from .profile import EdrProfile


logger = logging.getLogger(__name__)


HIGH_SEVERITY = {"high", "critical"}


class FibratusEdrAnalyzer(BaseAnalyzer):
    """Per-profile Fibratus analyzer. One instance per dispatch.

    Mirrors ElasticEdrAnalyzer's split-phase API (run_exec / run_correlation)
    so registry.dispatch_split() can drive either kind through one code path.
    """

    POLL_INTERVAL_SECONDS = 2.0
    SETTLE_SECONDS = 8.0

    def __init__(self, config: dict, profile: EdrProfile):
        super().__init__(config)
        self.profile = profile
        self.agent = AgentClient(profile.agent_url)

    # ---- BaseAnalyzer ---------------------------------------------------

    def analyze(self, target):
        """Synchronous full pipeline. CLI / tests use this; the HTTP route
        uses the split-phase API instead."""
        try:
            self.results = self._run(target, executable_args=None)
        except Exception as exc:
            logger.exception("Fibratus EDR run failed")
            self.results = {
                "status": "error",
                "error": str(exc),
                "profile": self.profile.name,
            }

    def cleanup(self):
        # No long-lived resources — the agent's monitor task auto-deletes
        # the dropped payload after the spawned process exits, and the lock
        # is released inside the exec phase's finally block.
        pass

    # ---- Public split-phase API ----------------------------------------

    def run_exec(self, payload_path: str, executable_args: Optional[str] = None):
        """Phase 1. Returns (phase_1_dict, continuation)."""
        try:
            return self._run_exec_phase(payload_path, executable_args)
        except Exception as exc:
            logger.exception("Fibratus EDR Phase 1 failed")
            return {
                "status": "error",
                "error": str(exc),
                "profile": self.profile.name,
            }, None

    def run_correlation(self, continuation: dict) -> dict:
        """Phase 2. Polls Whiskers for new Fibratus alerts and returns the
        final findings dict."""
        try:
            return self._correlate_and_finalize(
                continuation["outcome"],
                continuation["agent_info"],
                continuation["hostname"],
                continuation["run_start"],
                continuation.get("file_name"),
            )
        except Exception as exc:
            logger.exception("Fibratus EDR Phase 2 failed")
            return {
                **(continuation.get("phase_1", {})),
                "status": "error",
                "error": f"alert correlation failed: {exc}",
            }

    # ---- core flow ------------------------------------------------------

    def _run(self, payload_path: str, executable_args: Optional[str]) -> dict:
        phase_1, continuation = self._run_exec_phase(payload_path, executable_args)
        if continuation is None:
            return phase_1
        return self._correlate_and_finalize(
            continuation["outcome"],
            continuation["agent_info"],
            continuation["hostname"],
            continuation["run_start"],
            continuation.get("file_name"),
        )

    def _run_exec_phase(self, payload_path: str, executable_args: Optional[str]):
        if not os.path.isfile(payload_path):
            return {
                "status": "error",
                "error": f"payload not found: {payload_path}",
                "profile": self.profile.name,
            }, None

        with open(payload_path, "rb") as f:
            file_bytes = f.read()
        filename = os.path.basename(payload_path)

        # Step 1 — discover hostname.
        try:
            info = self.agent.get_info()
        except AgentUnreachable as exc:
            return self._unreachable_result(exc), None
        except AgentError as exc:
            return {
                "status": "error",
                "error": f"agent /api/info failed: {exc}",
                "profile": self.profile.name,
            }, None

        hostname = info.get("hostname")
        if not hostname:
            return {
                "status": "error",
                "error": "agent did not report a hostname",
                "profile": self.profile.name,
                "agent_info": info,
            }, None
        logger.info(
            "Fibratus EDR run on %s (agent %s, OS %s)",
            hostname,
            info.get("agent_version"),
            info.get("os_version"),
        )

        # Step 2 — single-occupancy lock on the agent.
        try:
            self.agent.lock_acquire()
        except AgentBusy as exc:
            return {
                "status": "busy",
                "error": (
                    "the agent is currently running another payload; retry once it "
                    "finishes (or call /api/lock/release if it appears stuck)"
                ),
                "profile": self.profile.name,
                "detail": str(exc),
            }, None
        except (AgentUnreachable, AgentError) as exc:
            return {
                "status": "error",
                "error": f"lock_acquire failed: {exc}",
                "profile": self.profile.name,
            }, None

        run_start = datetime.now(timezone.utc)
        try:
            exec_outcome = self._run_locked(
                file_bytes, filename, executable_args, hostname, run_start, info
            )
        finally:
            try:
                self.agent.lock_release()
            except (AgentUnreachable, AgentError) as exc:
                logger.error("lock_release failed for profile %s: %s", self.profile.name, exc)

        if exec_outcome.get("_final"):
            terminal = dict(exec_outcome)
            terminal.pop("_final", None)
            return terminal, None

        kind = exec_outcome["kind"]
        is_blocked = (kind == "virus")
        max_wait = (
            self.profile.av_block_wait_seconds if is_blocked
            else self.profile.wait_seconds_for_alerts
        )
        exec_logs = exec_outcome.get("exec_logs", {})
        killed_by_edr = (
            False if is_blocked
            else self._classify_kill(exec_logs, filename=filename)
        )
        raw_exec_status = exec_logs.get("status")
        exec_status_label = (
            "virus" if is_blocked
            else ("killed_by_edr" if killed_by_edr else raw_exec_status)
        )
        phase_1 = {
            "status": "polling_alerts",
            "profile": self.profile.name,
            "display_name": self.profile.display_name,
            "agent_info": info,
            "hostname": hostname,
            "execution": {
                "pid": exec_outcome.get("pid"),
                "stdout": exec_logs.get("stdout", ""),
                "stderr": exec_logs.get("stderr", ""),
                "exit_code": exec_logs.get("exit_code"),
                "exec_status": exec_status_label,
                "agent_exec_status": raw_exec_status,
                "killed_by_edr": killed_by_edr,
                "message": exec_outcome.get("exec_resp", {}).get("message"),
            },
            "alerts": [],
            "summary": {
                "total_alerts": 0,
                "high_severity_alerts": 0,
                "killed_by_edr": killed_by_edr,
                "run_start": run_start.isoformat(),
                "run_end": None,
                "wait_seconds_for_alerts": max_wait,
                "blocked_by_av": is_blocked,
            },
        }
        continuation = {
            "outcome": exec_outcome,
            "agent_info": info,
            "hostname": hostname,
            "run_start": run_start,
            "phase_1": phase_1,
            "file_name": filename,
        }
        return phase_1, continuation

    def _run_locked(
        self,
        file_bytes: bytes,
        filename: str,
        executable_args: Optional[str],
        hostname: str,
        run_start: datetime,
        agent_info: dict,
    ) -> dict:
        """Agent-touching half — XOR-on-the-wire exec + wait + log fetch.
        Identical mechanics to the Elastic flow; only the surrounding
        correlation step differs."""
        xor_key = secrets.randbelow(256)
        xor_table = bytes(b ^ xor_key for b in range(256))
        xored = file_bytes.translate(xor_table)
        try:
            exec_resp = self.agent.exec(
                file_bytes=xored,
                filename=filename,
                drop_path=self.profile.drop_path,
                executable_args=executable_args,
                xor_key=xor_key,
            )
        except AgentUnreachable as exc:
            return {**self._unreachable_result(exc), "_final": True}
        except AgentError as exc:
            return {
                "status": "error",
                "error": f"exec failed: {exc}",
                "profile": self.profile.name,
                "agent_info": agent_info,
                "_final": True,
            }

        exec_status = exec_resp.get("status")
        pid = exec_resp.get("pid")

        if exec_status == "virus":
            return {
                "kind": "virus",
                "exec_resp": exec_resp,
                "exec_logs": {},
                "pid": None,
                "exec_end": datetime.now(timezone.utc),
            }

        if exec_status != "ok" or pid is None:
            return {
                "status": "error",
                "error": f"unexpected exec response: {exec_resp}",
                "profile": self.profile.name,
                "agent_info": agent_info,
                "_final": True,
            }

        self._wait_for_exit(self.profile.exec_timeout_seconds)
        exec_end = datetime.now(timezone.utc)

        try:
            self.agent.kill()
        except (AgentUnreachable, AgentError) as exc:
            logger.warning("kill request failed (non-fatal): %s", exc)

        try:
            exec_logs = self.agent.get_execution_logs()
        except (AgentUnreachable, AgentError) as exc:
            logger.warning("get_execution_logs failed: %s", exc)
            exec_logs = {}

        return {
            "kind": "exec_completed",
            "exec_resp": exec_resp,
            "exec_logs": exec_logs,
            "pid": pid,
            "exec_end": exec_end,
        }
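The single-byte XOR wrapping above builds a 256-entry table once and applies it with `bytes.translate`, which is its own inverse. A standalone round-trip check (sample bytes, not the agent code) makes that concrete:

```python
import secrets

key = secrets.randbelow(256)
table = bytes(b ^ key for b in range(256))  # maps each byte value to value ^ key

payload = b"MZ\x90\x00payload bytes"
wire = payload.translate(table)    # what goes over the wire
restored = wire.translate(table)   # applying the same table decodes it

print(restored == payload)  # True: XOR with the same key is an involution
```

`translate` runs the per-byte mapping in C, so the encode costs one pass over the buffer rather than a Python-level loop.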

    def _correlate_and_finalize(
        self,
        outcome: dict,
        agent_info: dict,
        hostname: str,
        run_start: datetime,
        file_name: Optional[str] = None,
    ) -> dict:
        kind = outcome["kind"]
        max_wait = (
            self.profile.av_block_wait_seconds if kind == "virus"
            else self.profile.wait_seconds_for_alerts
        )
        label = "Fibratus AV-block alert" if kind == "virus" else "Fibratus alerts"
        logger.info(
            "Polling Whiskers Fibratus event log for %s on %s (file=%s, max %ds, settle %.0fs)",
            label, hostname, file_name or "*", max_wait, self.SETTLE_SECONDS,
        )
        poll_result = self._poll_alerts(run_start, max_wait, file_name)
        alerts = poll_result["alerts"]
        run_end = poll_result["run_end"]

        if poll_result.get("error") and not alerts:
            return self._partial_result(
                poll_result["error"]["sub_status"],
                poll_result["error"]["message"],
                agent_info, outcome.get("exec_logs", {}), outcome.get("pid"),
                run_start, run_end, hostname, alerts=[],
            )

        if kind == "virus":
            return self._virus_blocked_result(
                outcome["exec_resp"], agent_info, run_start, run_end, alerts, hostname,
            )
        return self._success_result(
            agent_info, outcome["exec_logs"], outcome["pid"],
            run_start, run_end, hostname, alerts, file_name=file_name,
        )

    def _poll_alerts(
        self,
        run_start: datetime,
        max_wait_seconds: int,
        file_name: Optional[str] = None,
    ) -> dict:
        """Two-stage poll against Whiskers:
        1. Wait-for-first-hit — query GET /api/alerts/fibratus/since
           every POLL_INTERVAL_SECONDS until at least one rule match
           lands or the wait budget elapses.
        2. Settle phase — once the first hit lands, keep polling for
           SETTLE_SECONDS to catch the burst of related alerts.

        Filename narrowing happens client-side: the agent returns every
        event-log record with `Provider=Fibratus` in the window; we drop
        the ones whose process info doesn't reference our payload filename.

        Terminal `supported: false` (Fibratus not installed on the VM)
        short-circuits with `error` set so the caller can render a partial
        result.

        Returns `{alerts: [normalized...], run_end, error|None}`.
        """
        deadline = time.monotonic() + max_wait_seconds
        run_end = datetime.now(timezone.utc)
        latest_normalized: List[dict] = []
        settle_deadline = None
        last_seen_count = 0
        last_error = None

        while time.monotonic() < deadline:
            run_end = datetime.now(timezone.utc)
            try:
                resp = self.agent.get_fibratus_alerts(
                    run_start.isoformat(), run_end.isoformat(),
                )
                last_error = None
            except (AgentUnreachable, AgentError) as exc:
                last_error = {
                    "sub_status": "agent_error", "message": str(exc),
                }
                time.sleep(self.POLL_INTERVAL_SECONDS)
                continue

            if not resp.get("supported", True):
                # Fibratus isn't installed on this VM — terminal, no point
                # polling further. The caller turns this into a partial
                # result with a clear "Fibratus not installed" message.
                return {
                    "alerts": [],
                    "run_end": run_end,
                    "error": {
                        "sub_status": "not_supported",
                        "message": "Whiskers reports Fibratus is not installed on this VM",
                    },
                }

            raw_events = resp.get("events") or []
            latest_normalized = self._normalize_and_filter(raw_events, file_name)
            if latest_normalized:
                if settle_deadline is None:
                    settle_deadline = time.monotonic() + self.SETTLE_SECONDS
                    logger.info(
                        "First Fibratus alert(s) landed (count=%d); settling for %.0fs",
                        len(latest_normalized), self.SETTLE_SECONDS,
                    )
                elif len(latest_normalized) > last_seen_count:
                    settle_deadline = time.monotonic() + self.SETTLE_SECONDS
                last_seen_count = len(latest_normalized)
                if time.monotonic() >= settle_deadline:
                    return {"alerts": latest_normalized, "run_end": run_end, "error": None}
            time.sleep(self.POLL_INTERVAL_SECONDS)

        return {"alerts": latest_normalized, "run_end": run_end, "error": last_error}

    def _normalize_and_filter(
        self, raw_events: list, file_name: Optional[str],
    ) -> List[dict]:
        """Parse the agent's event-log records (each a dict with
        time_created / event_id / data) into renderer-shaped alert dicts,
        dropping any whose process info doesn't reference `file_name`."""
        out: List[dict] = []
        for ev in raw_events:
            if not isinstance(ev, dict):
                continue
            data_str = ev.get("data")
            if not isinstance(data_str, str):
                continue
            try:
                payload = _json.loads(data_str)
            except ValueError:
                logger.debug("Fibratus alert with non-JSON Data field; skipping")
                continue
            if not isinstance(payload, dict):
                continue
            if file_name and not _payload_mentions_filename(payload, file_name):
                continue
            entry = {
                "received_at": ev.get("time_created") or "",
                "payload": payload,
            }
            out.append(self._normalize_alert(entry))
        return out

    # ---- helpers --------------------------------------------------------

    def _wait_for_exit(self, timeout_seconds: int) -> None:
        deadline = time.monotonic() + timeout_seconds
        poll_interval = 1.0
        while time.monotonic() < deadline:
            try:
                logs = self.agent.get_execution_logs()
            except (AgentUnreachable, AgentError):
                time.sleep(poll_interval)
                continue
            status = (logs.get("status") or "").lower()
            if status and status != "running":
                return
            time.sleep(poll_interval)

    @classmethod
    def _classify_kill(
        cls,
        exec_logs: dict,
        *,
        filename: Optional[str] = None,
        alerts: Optional[list] = None,
    ) -> bool:
        """Same heuristic the Elastic analyzer uses — see the docstring
        there. For .dll payloads we require alert evidence to flag
        killed_by_edr because rundll32 exits non-zero for benign reasons."""
        raw_status = (exec_logs.get("status") or "").lower()
        if raw_status == "killed":
            return False
        exit_code = exec_logs.get("exit_code")
        if exit_code in (0, None):
            return False
        is_dll = bool(filename and filename.lower().endswith(".dll"))
        if is_dll:
            return bool(alerts)
        return True

    # ---- result builders -----------------------------------------------

    def _normalize_alert(self, entry: dict) -> dict:
        """Convert a Fibratus event-log alert into the dict shape the
        existing tools/edr.js renderer expects (title / severity /
        detected_at / details / raw).

        Real Fibratus alert JSON (alertsenders.eventlog format=json,
        Fibratus 2.4+ schema):
            {
              "id": "<uuid>",
              "title": "LSASS access from unsigned executable",
              "text": "<one-line description, populated into details.reason>",
              "description": "<long rule explanation, populated into details.rule_description>",
              "severity": "high",
              "events": [
                {
                  "name": "OpenProcess", "category": "process",
                  "timestamp": "2026-04-30T04:53:45.359-07:00",
                  "params": {...},
                  "callstack": ["addr module!symbol", ...],
                  "proc": {
                    pid, ppid, name, exe, cmdline, cwd, sid, username, domain,
                    integrity_level, parent_name, parent_cmdline, ancestors[]
                  }
                },
                ...
              ],
              "labels": {  # bare keys, no mitre.* prefix
                "tactic.id": "TA0006", "tactic.name": "Credential Access",
                "technique.id": "T1003", "technique.name": "OS Credential Dumping",
                "subtechnique.id": "T1003.001", ...
              },
              "tags": [...]
            }

        Renderer's process card reads Elastic-Defend-flavored keys (name,
        pid, executable, command_line, ...). We map Fibratus's `proc` block
        accordingly. Parent process is embedded as `parent_name` /
        `parent_cmdline` keys inside the same `proc` dict (NOT a separate
        block) — we project those into a renderer-shaped parent dict.
        """
        payload = entry.get("payload") or {}
        title = payload.get("title") or "Fibratus alert"
        severity = (payload.get("severity") or "").lower() or "unknown"
        rule_id = payload.get("id")
        reason = payload.get("text")
        rule_description = payload.get("description")

        events = payload.get("events") if isinstance(payload.get("events"), list) else []
        first_event = events[0] if events else {}
        detected_at = (
            first_event.get("timestamp") if isinstance(first_event, dict) else None
        ) or entry.get("received_at")

        proc_dict = first_event.get("proc") if isinstance(first_event, dict) else None
        process = self._fibratus_proc_to_process(proc_dict)
        parent = self._fibratus_proc_to_parent(proc_dict)

        labels = payload.get("labels") if isinstance(payload.get("labels"), dict) else {}
        mitre = self._fibratus_labels_to_mitre(labels)

        rule_tags = payload.get("tags") if isinstance(payload.get("tags"), list) else []
        category = first_event.get("category") if isinstance(first_event, dict) else None
        if category and category not in rule_tags:
            rule_tags = list(rule_tags) + [category]

        details = {
            "reason": reason,
            "rule_description": rule_description,
            "rule_id": rule_id,
            "rule_tags": rule_tags,
            "process": process,
            "parent": parent,
            "mitre": mitre,
            # Keep the raw events list around for any future renderer that
            # wants to surface the full kernel-event chain (callstacks,
            # ancestors, params).
            "fibratus_events": events,
        }

        return {
            "title": title,
            "severity": severity,
            "rule_id": rule_id,
            "rule_uuid": rule_id,
            "detected_at": detected_at,
            "details": details,
            "raw": payload,
        }

    @staticmethod
    def _fibratus_proc_to_process(proc) -> Optional[dict]:
        """Map Fibratus's `events[].proc` dict to the Elastic-Defend-flavored
        keys the renderer's Process card reads."""
        if not isinstance(proc, dict):
            return None
        return {
            "name": proc.get("name"),
            "pid": proc.get("pid"),
            "executable": proc.get("exe"),
            "command_line": proc.get("cmdline"),
            "working_directory": proc.get("cwd"),
            "integrity_level": proc.get("integrity_level"),
            "entity_id": None,
        }

    @staticmethod
    def _fibratus_proc_to_parent(proc) -> Optional[dict]:
        """Fibratus embeds parent info as flat `parent_name` / `parent_cmdline`
        keys inside the child's `proc` dict (and the parent PID as `ppid`)
        rather than emitting a separate parent block. Project those into
        the renderer's Parent card shape."""
        if not isinstance(proc, dict):
            return None
        if not (proc.get("parent_name") or proc.get("parent_cmdline")):
            return None
        return {
            "name": proc.get("parent_name"),
            "pid": proc.get("ppid"),
            "executable": None,  # Fibratus doesn't ship the parent's exe path
            "command_line": proc.get("parent_cmdline"),
        }
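Fed a representative `proc` block, the two mappers project Fibratus's flat child-plus-parent fields into the renderer's separate Process and Parent cards. A standalone sketch of the same projection (mirrored free functions and invented sample data, for illustration only):

```python
from typing import Optional

def proc_to_process(proc) -> Optional[dict]:
    # Mirror of FibratusEdrAnalyzer._fibratus_proc_to_process.
    if not isinstance(proc, dict):
        return None
    return {
        "name": proc.get("name"),
        "pid": proc.get("pid"),
        "executable": proc.get("exe"),
        "command_line": proc.get("cmdline"),
        "working_directory": proc.get("cwd"),
        "integrity_level": proc.get("integrity_level"),
        "entity_id": None,
    }

def proc_to_parent(proc) -> Optional[dict]:
    # Mirror of FibratusEdrAnalyzer._fibratus_proc_to_parent.
    if not isinstance(proc, dict):
        return None
    if not (proc.get("parent_name") or proc.get("parent_cmdline")):
        return None
    return {
        "name": proc.get("parent_name"),
        "pid": proc.get("ppid"),
        "executable": None,
        "command_line": proc.get("parent_cmdline"),
    }

sample = {
    "pid": 4711, "ppid": 612, "name": "payload.exe",
    "exe": r"C:\Users\victim\payload.exe", "cmdline": "payload.exe -run",
    "parent_name": "explorer.exe", "parent_cmdline": r"C:\Windows\explorer.exe",
}
print(proc_to_process(sample)["executable"])  # C:\Users\victim\payload.exe
print(proc_to_parent(sample)["pid"])          # 612
```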

    @staticmethod
    def _fibratus_labels_to_mitre(labels: dict) -> list:
        """Group Fibratus's flat MITRE label map into the renderer's
        per-technique chip dicts. Fibratus's actual rule pack emits BARE
        keys (no `mitre.` prefix):
            tactic.id        -> "TA0006"
            tactic.name      -> "Credential Access"
            tactic.ref       -> "https://attack.mitre.org/tactics/TA0006/"
            technique.id     -> "T1003"
            technique.name   -> "OS Credential Dumping"
            technique.ref    -> "https://attack.mitre.org/techniques/T1003/"
            subtechnique.id  -> "T1003.001"
            ...

        We also accept the older `mitre.*`-prefixed forms in case some rule
        packs ship them, since the cost of the extra fallback is one dict
        lookup. Reference URLs come straight from Fibratus when present;
        otherwise we synthesize the canonical attack.mitre.org URL.
        """
        if not isinstance(labels, dict):
            return []
        flat = {k.lower(): v for k, v in labels.items() if isinstance(v, str)}

        def _pick(*candidates):
            for c in candidates:
                v = flat.get(c)
                if v:
                    return v
            return None

        tactic_id = _pick("tactic.id", "mitre.tactic.id", "mitre.tactics.id")
        tactic_name = _pick("tactic.name", "mitre.tactic.name", "mitre.tactics.name")
        tactic_ref = _pick("tactic.ref", "tactic.reference")
        technique_id = _pick("technique.id", "mitre.technique.id", "mitre.techniques.id")
        technique_name = _pick("technique.name", "mitre.technique.name", "mitre.techniques.name")
        technique_ref = _pick("technique.ref", "technique.reference")
        sub_id = _pick("subtechnique.id", "mitre.subtechnique.id", "mitre.subtechniques.id")
        sub_name = _pick("subtechnique.name", "mitre.subtechnique.name", "mitre.subtechniques.name")
        sub_ref = _pick("subtechnique.ref", "subtechnique.reference")

        if not (tactic_id or tactic_name or technique_id or technique_name):
            return []

        chip = {
            "tactic_id": tactic_id,
            "tactic_name": tactic_name,
            "tactic_reference": (
                tactic_ref
                or (f"https://attack.mitre.org/tactics/{tactic_id}/" if tactic_id else None)
            ),
            "technique_id": technique_id,
            "technique_name": technique_name,
            "technique_reference": (
                technique_ref
                or (f"https://attack.mitre.org/techniques/{technique_id}/" if technique_id else None)
            ),
        }
        if sub_id or sub_name:
            chip["subtechnique_id"] = sub_id
            chip["subtechnique_name"] = sub_name
            chip["subtechnique_reference"] = sub_ref or (
                "https://attack.mitre.org/techniques/{}/{}/".format(*sub_id.split(".", 1))
                if sub_id and "." in sub_id
                else None
            )
        return [chip]
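The synthesized subtechnique URL follows the attack.mitre.org path convention, where `T1003.001` lives under `techniques/T1003/001/`. Isolated from the chip-building code, the fallback transform is just:

```python
from typing import Optional

def subtechnique_url(sub_id: Optional[str]) -> Optional[str]:
    """Canonical attack.mitre.org URL for a dotted subtechnique id,
    mirroring the fallback in _fibratus_labels_to_mitre."""
    if not sub_id or "." not in sub_id:
        return None
    base, suffix = sub_id.split(".", 1)
    return f"https://attack.mitre.org/techniques/{base}/{suffix}/"

print(subtechnique_url("T1003.001"))  # https://attack.mitre.org/techniques/T1003/001/
print(subtechnique_url("T1059"))      # None (no subtechnique part to split on)
```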

    def _success_result(
        self,
        agent_info: dict,
        exec_logs: dict,
        pid: int,
        run_start: datetime,
        run_end: datetime,
        hostname: str,
        alerts: List[dict],  # already normalized by _normalize_and_filter
        file_name: Optional[str] = None,
    ) -> dict:
        high_severity_count = sum(1 for a in alerts if a["severity"] in HIGH_SEVERITY)
        killed_by_edr = self._classify_kill(exec_logs, filename=file_name, alerts=alerts)
        raw_exec_status = exec_logs.get("status")
        exec_status_label = "killed_by_edr" if killed_by_edr else raw_exec_status

        return {
            "status": "completed",
            "profile": self.profile.name,
            "display_name": self.profile.display_name,
            "agent_info": agent_info,
            "hostname": hostname,
            "execution": {
                "pid": pid,
                "stdout": exec_logs.get("stdout", ""),
                "stderr": exec_logs.get("stderr", ""),
                "exit_code": exec_logs.get("exit_code"),
                "exec_status": exec_status_label,
                "agent_exec_status": raw_exec_status,
                "killed_by_edr": killed_by_edr,
            },
            "alerts": alerts,
            "summary": {
                "total_alerts": len(alerts),
                "high_severity_alerts": high_severity_count,
                "killed_by_edr": killed_by_edr,
                "blocked_by_av": False,
                "run_start": run_start.isoformat(),
                "run_end": run_end.isoformat(),
                "wait_seconds_for_alerts": self.profile.wait_seconds_for_alerts,
            },
        }

    def _partial_result(
        self,
        sub_status: str,
        error: str,
        agent_info: dict,
        exec_logs: dict,
        pid: Optional[int],
        run_start: datetime,
        run_end: datetime,
        hostname: str,
        alerts: List[dict],
    ) -> dict:
        """Execution succeeded but the agent / event-log query failed —
        surface what we have. Mirrors ElasticEdrAnalyzer._partial_result."""
        return {
            "status": "partial",
            "sub_status": sub_status,
            "error": error,
            "profile": self.profile.name,
            "display_name": self.profile.display_name,
            "agent_info": agent_info,
            "hostname": hostname,
            "execution": {
                "pid": pid,
                "stdout": exec_logs.get("stdout", ""),
                "stderr": exec_logs.get("stderr", ""),
                "exit_code": exec_logs.get("exit_code"),
                "exec_status": exec_logs.get("status"),
            },
            "alerts": alerts,
            "summary": {
                "total_alerts": 0,
                "high_severity_alerts": 0,
                "run_start": run_start.isoformat(),
                "run_end": run_end.isoformat(),
            },
        }

    def _virus_blocked_result(
        self,
        exec_resp: dict,
        agent_info: dict,
        run_start: datetime,
        run_end: datetime,
        alerts: List[dict],  # already normalized
        hostname: str,
    ) -> dict:
        return {
            "status": "blocked_by_av",
            "profile": self.profile.name,
            "display_name": self.profile.display_name,
            "agent_info": agent_info,
            "hostname": hostname,
            "execution": {
                "pid": None,
                "stdout": "",
                "stderr": "",
                "exit_code": None,
                "exec_status": "virus",
                "message": exec_resp.get("message"),
            },
            "alerts": alerts,
            "summary": {
                "total_alerts": len(alerts),
                "high_severity_alerts": sum(
                    1 for a in alerts if a["severity"] in HIGH_SEVERITY
                ),
                "run_start": run_start.isoformat(),
                "run_end": run_end.isoformat(),
                "wait_seconds_for_alerts": self.profile.av_block_wait_seconds,
                "blocked_by_av": True,
            },
        }
|
||||
|
||||
def _unreachable_result(self, exc: Exception) -> dict:
|
||||
return {
|
||||
"status": "agent_unreachable",
|
||||
"error": str(exc),
|
||||
"profile": self.profile.name,
|
||||
"display_name": self.profile.display_name,
|
||||
"agent_url": self.profile.agent_url,
|
||||
}
|
||||
|
||||
|
||||
def _payload_mentions_filename(payload, file_name: str) -> bool:
    """Check if `file_name` appears in any of the alert payload's
    process-related fields. Fibratus's current schema carries process
    info under `events[].proc` (name, exe, cmdline, parent fields);
    legacy rule packs tag detections with `ps` (and `pps` for the
    parent) instead. Forgiving / depth-limited scan — the alert format
    isn't strictly typed across rule kinds.
    """
    if not isinstance(payload, dict):
        return False
    needle = file_name.lower()

    def _scan(obj, depth: int = 0) -> bool:
        if depth > 6:
            return False
        if isinstance(obj, str):
            return needle in obj.lower()
        if isinstance(obj, dict):
            return any(_scan(v, depth + 1) for v in obj.values())
        if isinstance(obj, list):
            return any(_scan(v, depth + 1) for v in obj)
        return False

    # Fibratus's actual schema puts process info under `events[].proc`
    # (with `parent_name`, `parent_cmdline`, `ancestors[]` flat inside).
    # We also tolerate older field names (`ps` / `pps`) as a fallback for
    # any rule pack that uses the legacy shape.
    process_blobs = []
    for key in ("events", "proc", "ps", "pps", "process"):
        v = payload.get(key)
        if v is not None:
            process_blobs.append(v)
    if process_blobs:
        return any(_scan(b) for b in process_blobs)
    return _scan(payload)
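The matching logic above is small enough to exercise on its own. Here is the same scan as a standalone function, run against an illustrative `events[].proc`-shaped alert (the payload values are made up for the example):

```python
def payload_mentions_filename(payload, file_name: str) -> bool:
    """Depth-limited, case-insensitive search for file_name in an alert payload."""
    if not isinstance(payload, dict):
        return False
    needle = file_name.lower()

    def _scan(obj, depth: int = 0) -> bool:
        if depth > 6:                      # cap recursion on pathological payloads
            return False
        if isinstance(obj, str):
            return needle in obj.lower()
        if isinstance(obj, dict):
            return any(_scan(v, depth + 1) for v in obj.values())
        if isinstance(obj, list):
            return any(_scan(v, depth + 1) for v in obj)
        return False

    # Prefer the known process-bearing keys; fall back to a whole-payload scan.
    blobs = [payload[k] for k in ("events", "proc", "ps", "pps", "process")
             if payload.get(k) is not None]
    if blobs:
        return any(_scan(b) for b in blobs)
    return _scan(payload)

# Illustrative alert mimicking the events[].proc shape described above.
alert = {"events": [{"proc": {
    "name": "dropper.exe",
    "cmdline": "C:\\Users\\op\\dropper.exe --stage2",
    "parent_name": "explorer.exe",
}}]}
```

Because the needle is lowercased once and every string is lowercased at comparison time, the match is case-insensitive in both directions.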

@@ -26,14 +26,28 @@ class EdrProfileError(ValueError):
    """Profile YAML failed validation. Message names the field at fault."""


# Recognized profile kinds.
# `elastic` — LitterBox queries an Elastic Defend / Detection-Engine cluster.
# `fibratus` — LitterBox polls Whiskers's GET /api/alerts/fibratus/since,
#     which wevtutil-queries the EDR VM's Application event log
#     for `Provider=Fibratus` alert records (DetonatorAgent shape).
_PROFILE_KINDS = {"elastic", "fibratus"}


@dataclass
class EdrProfile:
    name: str
    display_name: str
    agent_url: str
    elastic_url: str
    elastic_apikey: str
    # `kind` discriminates analyzer behavior. Defaults to "elastic" so older
    # profile YAMLs (no `kind` key) keep working unchanged.
    kind: str = "elastic"

    # Elastic-only fields. Required when kind=elastic, ignored when kind=fibratus.
    elastic_url: Optional[str] = None
    elastic_apikey: Optional[str] = None
    elastic_verify_tls: bool = False

    wait_seconds_for_alerts: int = 90
    # Max polling window for the AV-block path. The orchestrator polls
    # Elastic every 2s and early-returns as soon as the prevention alert
@@ -53,12 +67,22 @@ class EdrProfile:
                f"profile must be a YAML mapping, got {type(data).__name__}"
            )

        required = ("name", "display_name", "agent_url", "elastic_url", "elastic_apikey")
        kind = (data.get("kind") or "elastic").strip().lower()
        if kind not in _PROFILE_KINDS:
            raise EdrProfileError(
                f"unknown profile kind {kind!r} — must be one of {sorted(_PROFILE_KINDS)}"
            )

        common_required = ("name", "display_name", "agent_url")
        if kind == "elastic":
            required = common_required + ("elastic_url", "elastic_apikey")
        else:  # fibratus — no extra fields, just the Whiskers agent URL
            required = common_required
        missing = [k for k in required if not data.get(k)]
        if missing:
            raise EdrProfileError(f"missing required field(s): {', '.join(missing)}")

        if data["elastic_apikey"].startswith("REPLACE_ME"):
        if kind == "elastic" and data["elastic_apikey"].startswith("REPLACE_ME"):
            raise EdrProfileError(
                "elastic_apikey is still the example placeholder — fill it in"
            )
@@ -67,8 +91,9 @@ class EdrProfile:
            name=data["name"],
            display_name=data["display_name"],
            agent_url=data["agent_url"].rstrip("/"),
            elastic_url=data["elastic_url"].rstrip("/"),
            elastic_apikey=data["elastic_apikey"],
            kind=kind,
            elastic_url=(data.get("elastic_url") or "").rstrip("/") or None,
            elastic_apikey=data.get("elastic_apikey"),
            elastic_verify_tls=bool(data.get("elastic_verify_tls", False)),
            wait_seconds_for_alerts=int(data.get("wait_seconds_for_alerts", 90)),
            av_block_wait_seconds=int(data.get("av_block_wait_seconds", 60)),
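The kind-aware validation above reduces to choosing a required-key tuple per kind. A minimal standalone sketch (field names taken from the diff; a plain `ValueError` stands in for `EdrProfileError`):

```python
PROFILE_KINDS = {"elastic", "fibratus"}
COMMON_REQUIRED = ("name", "display_name", "agent_url")

def validate_profile(data: dict) -> str:
    """Normalize/validate `kind`, then check the kind-specific required keys."""
    kind = (data.get("kind") or "elastic").strip().lower()  # missing key => legacy elastic
    if kind not in PROFILE_KINDS:
        raise ValueError(f"unknown profile kind {kind!r}")
    required = COMMON_REQUIRED
    if kind == "elastic":
        required += ("elastic_url", "elastic_apikey")       # elastic needs backend creds
    missing = [k for k in required if not data.get(k)]
    if missing:
        raise ValueError(f"missing required field(s): {', '.join(missing)}")
    return kind
```

A fibratus profile only needs the Whiskers agent URL, so the same mapping that fails elastic validation passes as fibratus.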

@@ -20,6 +20,7 @@ import threading
from typing import Callable, Dict, List, Optional, Tuple

from .elastic_edr_analyzer import ElasticEdrAnalyzer
from .fibratus_edr_analyzer import FibratusEdrAnalyzer
from .profile import EdrProfile, load_profiles


@@ -30,6 +31,14 @@ _PROFILES: Dict[str, EdrProfile] = {}
_LOADED = False


def _make_analyzer(profile: EdrProfile, config: dict):
    """Pick the right analyzer for a profile's `kind`. New analyzer types
    plug in here — keep this the single dispatch site."""
    if profile.kind == "fibratus":
        return FibratusEdrAnalyzer(config, profile)
    return ElasticEdrAnalyzer(config, profile)


def init(config: dict, profiles_dir: Optional[str] = None) -> None:
    """Load profiles from disk. Called once at app startup. Idempotent —
    re-calling reloads the registry, which is useful for tests but
@@ -49,7 +58,9 @@ def init(config: dict, profiles_dir: Optional[str] = None) -> None:

def list_profiles() -> List[dict]:
    """Public-facing profile list for the UI. Intentionally omits secrets
    (apikey) — only the operator-facing identity + agent URL is returned.
    (apikey, ingest_token) — only the operator-facing identity + agent URL
    is returned. `kind` is included so the UI can render kind-specific
    affordances when needed.
    """
    return [
        {
@@ -57,6 +68,7 @@ def list_profiles() -> List[dict]:
            "display_name": p.display_name,
            "agent_url": p.agent_url,
            "elastic_url": p.elastic_url,
            "kind": p.kind,
        }
        for p in _PROFILES.values()
    ]
@@ -80,7 +92,7 @@ def dispatch(profile_name: str, payload_path: str, config: dict) -> dict:
    if profile is None:
        raise KeyError(f"unknown EDR profile: {profile_name!r}")

    analyzer = ElasticEdrAnalyzer(config, profile)
    analyzer = _make_analyzer(profile, config)
    analyzer.analyze(payload_path)
    try:
        return analyzer.get_results()
@@ -115,7 +127,7 @@ def dispatch_split(
    if profile is None:
        raise KeyError(f"unknown EDR profile: {profile_name!r}")

    analyzer = ElasticEdrAnalyzer(config, profile)
    analyzer = _make_analyzer(profile, config)
    phase_1, continuation = analyzer.run_exec(payload_path, executable_args)

    if continuation is None:
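The split dispatch's contract, as the last hunk suggests: `run_exec` returns Phase 1 output plus a continuation that performs the later alert poll, with `None` meaning there is nothing to poll. A hypothetical minimal shape (all names here are illustrative, not the project's API):

```python
from typing import Callable, Optional, Tuple

def run_exec_stub(payload_path: str) -> Tuple[dict, Optional[Callable[[], dict]]]:
    """Phase 1 'executes' immediately; Phase 2 is deferred behind a callable."""
    phase_1 = {"status": "executing", "payload": payload_path}

    def phase_2() -> dict:
        # A real analyzer would poll the agent / event log here.
        return {"status": "completed", "alerts": []}

    return phase_1, phase_2

phase_1, continuation = run_exec_stub("payload.exe")
# Caller pattern mirrored from the hunk: no continuation => Phase 1 is final.
result = continuation() if continuation is not None else phase_1
```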


+48
-64
@@ -158,78 +158,62 @@ def api_system_scanners():
    return jsonify({'scanners': rows, 'counts': counts})


@api_bp.route('/api/edr/fibratus/<profile>/alerts/since', methods=['GET'])
@error_handler
def api_fibratus_alerts_passthrough(profile):
    """Test/debug passthrough — query the Whiskers agent's
    `/api/alerts/fibratus/since` endpoint for a registered profile.

    Lets operators verify the Fibratus-on-VM → event-log → Whiskers wire
    is healthy without dispatching a payload. Same query params (`from`,
    `until` — ISO8601). Returns whatever the agent returned, or 404 if
    the named profile isn't registered or isn't kind=fibratus.
    """
    from ..analyzers.edr.agent_client import AgentClient, AgentError, AgentUnreachable

    deps = _deps()
    p = deps.edr_registry.get_profile(profile)
    if p is None:
        return jsonify({'error': f'Unknown EDR profile: {profile}'}), 404
    if p.kind != 'fibratus':
        return jsonify({
            'error': f'Profile {profile!r} is kind={p.kind!r}, not fibratus',
        }), 400

    since = request.args.get('from')
    if not since:
        return jsonify({'error': 'missing required `from` query param (ISO8601)'}), 400
    from datetime import datetime, timezone
    until = request.args.get('until') or datetime.now(timezone.utc).isoformat()

    agent = AgentClient(p.agent_url)
    try:
        return jsonify(agent.get_fibratus_alerts(since, until))
    except AgentUnreachable as exc:
        return jsonify({'error': f'agent unreachable: {exc}'}), 502
    except AgentError as exc:
        return jsonify({'error': f'agent error: {exc}'}), 502
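Wire-checking the passthrough from a script reduces to building the `from`/`until` query the endpoint expects. A sketch of the URL construction only (host, port, and profile name are examples; `until` defaults to "now" like the endpoint does):

```python
from datetime import datetime, timezone
from typing import Optional
from urllib.parse import urlencode

def fibratus_alerts_url(base: str, profile: str, since: str,
                        until: Optional[str] = None) -> str:
    """Build the LitterBox passthrough URL for a Fibratus profile's alerts."""
    until = until or datetime.now(timezone.utc).isoformat()
    qs = urlencode({"from": since, "until": until})  # ISO8601 timestamps, URL-encoded
    return f"{base.rstrip('/')}/api/edr/fibratus/{profile}/alerts/since?{qs}"
```

The returned URL can then be fetched with any HTTP client; a 502 means Whiskers itself was unreachable or errored.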


@api_bp.route('/api/edr/agents/status', methods=['GET'])
@error_handler
def api_edr_agents_status():
    """Live probe of every registered EDR profile.
    """Latest reachability snapshot for every registered EDR profile.

    For each profile we hit the agent's /api/info + /api/lock/status and
    the Elastic stack's `GET /` ping in parallel. Each probe is bounded
    by a tight timeout so an unreachable agent doesn't stall the whole
    page. Used by /whiskers to render the inventory + live status.
    Reads the cached probe result (refreshed in the background by
    `services.edr_health`'s poller, with a 30s TTL on the cache itself).
    Cold-start cache misses fall through to a synchronous probe — first
    request after boot waits one probe cycle, every subsequent dashboard
    fetch is instant.

    Pass `?refresh=1` to force a synchronous re-probe (debug aid).
    """
    from concurrent.futures import ThreadPoolExecutor
    from ..analyzers.edr.agent_client import AgentClient, AgentError, AgentUnreachable
    from ..analyzers.edr.elastic_client import ElasticClient, ElasticError, ElasticUnreachable
    from ..services import edr_health

    deps = _deps()
    profiles = list(deps.edr_registry._PROFILES.values())  # internal accessor — same module

    def probe(p):
        agent = AgentClient(p.agent_url, timeout=4)
        elastic = ElasticClient(
            p.elastic_url, p.elastic_apikey,
            verify_tls=p.elastic_verify_tls, timeout=5,
        )
        agent_info, agent_err, lock = None, None, None
        try:
            agent_info = agent.get_info()
            try:
                lock = agent.lock_status()
            except (AgentUnreachable, AgentError):
                pass
        except AgentUnreachable as e:
            agent_err = f"unreachable: {e}"
        except AgentError as e:
            agent_err = f"error: {e}"

        elastic_info, elastic_err = None, None
        try:
            elastic_info = elastic.ping()
        except ElasticUnreachable as e:
            elastic_err = f"unreachable: {e}"
        except ElasticError as e:
            elastic_err = f"error: {e}"

        return {
            "name": p.name,
            "display_name": p.display_name,
            "type": "elastic-defend",
            "agent_url": p.agent_url,
            "elastic_url": p.elastic_url,
            "agent": {
                "reachable": agent_info is not None,
                "error": agent_err,
                "hostname": (agent_info or {}).get("hostname"),
                "os_version": (agent_info or {}).get("os_version"),
                "agent_version": (agent_info or {}).get("agent_version"),
            },
            "lock": lock,
            "elastic": {
                "reachable": elastic_info is not None,
                "error": elastic_err,
                "cluster_name": (elastic_info or {}).get("cluster_name"),
                "version": ((elastic_info or {}).get("version") or {}).get("number"),
            },
        }

    if not profiles:
        return jsonify({"agents": []})

    # Probe in parallel — total wall time is the slowest probe, not sum of all.
    with ThreadPoolExecutor(max_workers=min(8, len(profiles))) as pool:
        results = list(pool.map(probe, profiles))
    return jsonify({"agents": results})
    force = request.args.get('refresh') == '1'
    return jsonify(edr_health.get_status_snapshot(profiles, force_refresh=force))


@api_bp.route('/api/results/edr/<profile>/<target>', methods=['GET'])

@@ -0,0 +1,202 @@
"""Cached EDR-profile reachability probe.

The dashboard renders one card per profile and shows agent + backend
reachability. Probing every profile on every page load makes the
dashboard slow — especially when one of the targets is offline and
we wait the full timeout. This module fixes that with two layers:

1. A short TTL cache keyed by the registered-profiles set. Repeat reads
   within the TTL window return the cached snapshot instantly.

2. A background daemon thread that pre-warms the cache every
   `REFRESH_INTERVAL_S` seconds, so even the first dashboard load
   after app boot lands on a warm cache (after ~one initial probe
   cycle).

The endpoint in app/blueprints/api.py reads `get_status_snapshot()`,
which serves the cache or triggers a synchronous probe on cold start.
"""

import logging
import threading
import time
from concurrent.futures import ThreadPoolExecutor
from typing import List, Optional


logger = logging.getLogger(__name__)


# Per-call timeouts. Tighter than the previous defaults (agent=4s,
# elastic=5s): when a service is reachable it answers in milliseconds,
# so 2s is plenty of grace; when it's unreachable we fail fast and the
# poller's next cycle picks up recovery within REFRESH_INTERVAL_S.
AGENT_TIMEOUT_S = 2.0
ELASTIC_TIMEOUT_S = 2.0

# How long a cached snapshot is considered fresh. Background poller
# refreshes more often than this, so reads under healthy operation
# always hit a fresh cache.
CACHE_TTL_S = 30.0

# Background poller cadence. Faster than CACHE_TTL_S so the cache is
# perpetually warm; slower than the dashboard's auto-refresh (60s) so
# we don't probe more than necessary.
REFRESH_INTERVAL_S = 15.0


_lock = threading.Lock()
_cached: Optional[dict] = None
_cached_at: float = 0.0
_poller_started = False


def get_status_snapshot(profiles: List, *, force_refresh: bool = False) -> dict:
    """Return the latest agent-status snapshot.

    Cached for `CACHE_TTL_S` seconds against the current registered-
    profiles tuple. Cold reads (no cache, or expired) trigger a
    synchronous probe; warm reads return the cache instantly.
    """
    global _cached, _cached_at

    if not profiles:
        return {"agents": [], "cache_age_seconds": 0}

    profile_key = _profile_key(profiles)

    with _lock:
        cached = _cached
        cached_at = _cached_at
    cache_valid = (
        cached is not None
        and not force_refresh
        and (time.monotonic() - cached_at) < CACHE_TTL_S
        and cached.get("_profile_key") == profile_key
    )

    if cache_valid:
        age = time.monotonic() - cached_at
        return _public_snapshot(cached, age)

    snapshot = _probe_all(profiles)
    snapshot["_profile_key"] = profile_key
    with _lock:
        _cached = snapshot
        _cached_at = time.monotonic()
    return _public_snapshot(snapshot, 0.0)


def start_poller(deps) -> None:
    """Kick off the background pre-warming thread. Idempotent — second
    call is a no-op so reload-aware test setups can't spawn duplicates.
    `deps` is the litterbox extension namespace; we pull profiles off
    `deps.edr_registry._PROFILES` on each tick so YAML-edit-then-restart
    flows pick up new profiles automatically (a restart re-creates deps,
    which re-runs start_poller)."""
    global _poller_started
    with _lock:
        if _poller_started:
            return
        _poller_started = True

    def _loop():
        # Initial delay so app startup isn't gated on probe latency.
        time.sleep(0.5)
        while True:
            try:
                profiles = list(deps.edr_registry._PROFILES.values())
                if profiles:
                    get_status_snapshot(profiles, force_refresh=True)
            except Exception:
                logger.exception("EDR health poller tick failed")
            time.sleep(REFRESH_INTERVAL_S)

    t = threading.Thread(target=_loop, name="edr-health-poller", daemon=True)
    t.start()
    logger.info("EDR health poller started (interval=%ss)", REFRESH_INTERVAL_S)


# ---- internals ---------------------------------------------------------


def _profile_key(profiles: List) -> tuple:
    """Stable key for the cache so a profile add/remove forces a refresh
    rather than serving stale entries."""
    return tuple(sorted((p.name, p.kind, p.agent_url) for p in profiles))


def _public_snapshot(snapshot: dict, age: float) -> dict:
    """Strip internal cache-bookkeeping and stamp `cache_age_seconds`."""
    out = {k: v for k, v in snapshot.items() if not k.startswith("_")}
    out["cache_age_seconds"] = round(age, 1)
    return out


def _probe_all(profiles: List) -> dict:
    """Probe every profile in parallel. Wall time is dominated by the
    slowest single probe (per-probe timeout is bounded above)."""
    with ThreadPoolExecutor(max_workers=min(8, len(profiles))) as pool:
        results = list(pool.map(_probe_one, profiles))
    return {"agents": results}


def _probe_one(p) -> dict:
    """One profile's reachability check. Lazy-imports the EDR clients
    to keep top-level import cost off the request hot path on cold
    boots that don't touch this module."""
    from ..analyzers.edr.agent_client import AgentClient, AgentError, AgentUnreachable
    from ..analyzers.edr.elastic_client import ElasticClient, ElasticError, ElasticUnreachable

    agent = AgentClient(p.agent_url, timeout=AGENT_TIMEOUT_S)
    agent_info, agent_err, lock = None, None, None
    try:
        agent_info = agent.get_info()
        try:
            lock = agent.lock_status()
        except (AgentUnreachable, AgentError):
            pass
    except AgentUnreachable as e:
        agent_err = f"unreachable: {e}"
    except AgentError as e:
        agent_err = f"error: {e}"

    elastic_info, elastic_err = None, None
    type_label = "elastic-defend"
    if p.kind == "elastic":
        elastic = ElasticClient(
            p.elastic_url, p.elastic_apikey,
            verify_tls=p.elastic_verify_tls, timeout=ELASTIC_TIMEOUT_S,
        )
        try:
            elastic_info = elastic.ping()
        except ElasticUnreachable as e:
            elastic_err = f"unreachable: {e}"
        except ElasticError as e:
            elastic_err = f"error: {e}"
    elif p.kind == "fibratus":
        type_label = "fibratus"

    return {
        "name": p.name,
        "display_name": p.display_name,
        "type": type_label,
        "kind": p.kind,
        "agent_url": p.agent_url,
        "elastic_url": p.elastic_url,
        "agent": {
            "reachable": agent_info is not None,
            "error": agent_err,
            "hostname": (agent_info or {}).get("hostname"),
            "os_version": (agent_info or {}).get("os_version"),
            "agent_version": (agent_info or {}).get("agent_version"),
            "telemetry_sources": (agent_info or {}).get("telemetry_sources") or [],
        },
        "lock": lock,
        "elastic": {
            "reachable": elastic_info is not None if p.kind == "elastic" else None,
            "error": elastic_err,
            "cluster_name": (elastic_info or {}).get("cluster_name"),
            "version": ((elastic_info or {}).get("version") or {}).get("number"),
        },
    }
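The module's caching discipline (a TTL read-through with a force-refresh escape hatch, as the poller uses) in miniature, with illustrative names:

```python
import time

class TTLCache:
    """Serve the cached value while fresh; recompute when stale or forced."""

    def __init__(self, ttl_s: float):
        self.ttl_s = ttl_s
        self._value = None
        self._at = 0.0

    def get(self, compute, *, force_refresh: bool = False):
        now = time.monotonic()
        if (self._value is not None and not force_refresh
                and (now - self._at) < self.ttl_s):
            return self._value, True          # warm hit: no probe happens
        self._value = compute()               # cold or forced: recompute
        self._at = time.monotonic()
        return self._value, False

calls = []
cache = TTLCache(ttl_s=30.0)
snap1, hit1 = cache.get(lambda: calls.append(1) or {"agents": []})
snap2, hit2 = cache.get(lambda: calls.append(1) or {"agents": []})
```

A background thread calling `get(..., force_refresh=True)` on a cadence shorter than the TTL keeps every foreground read a warm hit, which is exactly the poller-plus-cache arrangement above.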

@@ -35,13 +35,21 @@ function applyStatus(agent) {

  const a = agent.agent || {};
  const e = agent.elastic || {};
  // Fibratus profiles have no Elastic backend — alerts are pulled from
  // the EDR VM's event log via Whiskers. Health = agent reachable.
  const isFibratus = agent.kind === 'fibratus';

  // Aggregate dot: green if both sides reachable, yellow if only one,
  // red if neither.
  const reachable = (a.reachable ? 1 : 0) + (e.reachable ? 1 : 0);
  if (reachable === 2) setDot(p, 'ok', 'Agent + Elastic reachable');
  else if (reachable === 1) setDot(p, 'partial', 'Partial — see status fields');
  else setDot(p, 'down', 'Agent + Elastic unreachable');
  if (isFibratus) {
    setDot(p, a.reachable ? 'ok' : 'down',
           a.reachable ? 'Agent reachable (Fibratus pull model)' : 'Agent unreachable');
  } else {
    // Aggregate dot: green if both sides reachable, yellow if only one,
    // red if neither.
    const reachable = (a.reachable ? 1 : 0) + (e.reachable ? 1 : 0);
    if (reachable === 2) setDot(p, 'ok', 'Agent + Elastic reachable');
    else if (reachable === 1) setDot(p, 'partial', 'Partial — see status fields');
    else setDot(p, 'down', 'Agent + Elastic unreachable');
  }

  // Agent side
  if (a.reachable) {
@@ -68,8 +76,13 @@ function applyStatus(agent) {
    setColor(`agentLock-${p}`, 'var(--lb-text-mute)');
  }

  // Elastic side
  if (e.reachable) {
  // Backend side. Elastic profiles: probe the cluster. Fibratus
  // profiles: no remote backend — show the event-log pull label.
  if (isFibratus) {
    setText(`agentElastic-${p}`, 'Pull-mode (no remote backend)');
    setColor(`agentElastic-${p}`, 'var(--lb-text-mute)');
    setText(`agentCluster-${p}`, '—');
  } else if (e.reachable) {
    const v = e.version ? ` v${e.version}` : '';
    setText(`agentElastic-${p}`, `Reachable${v}`);
    setColor(`agentElastic-${p}`, 'var(--lb-sev-low)');
@@ -80,11 +80,19 @@ function applyAgentRow(agent) {
  const p = agent.name;
  const a = agent.agent || {};
  const e = agent.elastic || {};
  // Fibratus profiles have no Elastic backend — alerts are pulled from
  // the EDR VM's event log via Whiskers. Treat agent-up alone as healthy.
  const isFibratus = agent.kind === 'fibratus';

  const reachable = (a.reachable ? 1 : 0) + (e.reachable ? 1 : 0);
  if (reachable === 2) setDot(p, 'ok', 'Agent + Elastic reachable');
  else if (reachable === 1) setDot(p, 'partial', 'Partial — see status fields');
  else setDot(p, 'down', 'Agent + Elastic unreachable');
  if (isFibratus) {
    setDot(p, a.reachable ? 'ok' : 'down',
           a.reachable ? 'Agent reachable (Fibratus pull)' : 'Agent unreachable');
  } else {
    const reachable = (a.reachable ? 1 : 0) + (e.reachable ? 1 : 0);
    if (reachable === 2) setDot(p, 'ok', 'Agent + Elastic reachable');
    else if (reachable === 1) setDot(p, 'partial', 'Partial — see status fields');
    else setDot(p, 'down', 'Agent + Elastic unreachable');
  }

  if (a.reachable) {
    const v = a.agent_version ? ` v${a.agent_version}` : '';
@@ -93,7 +101,9 @@ function applyAgentRow(agent) {
    setTag(`dashAgentSide-${p}`, 'high', 'agent down');
  }

  if (e.reachable) {
  if (isFibratus) {
    setTag(`dashElasticSide-${p}`, 'info', 'fibratus · pull');
  } else if (e.reachable) {
    const v = e.version ? ` v${e.version}` : '';
    setTag(`dashElasticSide-${p}`, 'low', `elastic${v}`);
  } else {

@@ -86,17 +86,19 @@
        </div>
        <div class="lb-agent-divider"></div>
        <div class="lb-agent-row">
          <span class="lb-eyebrow">Elastic</span>
          <span class="lb-eyebrow">Backend</span>
          <span id="agentElastic-{{ profile.name }}" class="lb-muted">Probing…</span>
        </div>
        <div class="lb-agent-row">
          <span class="lb-eyebrow">Cluster</span>
          <span id="agentCluster-{{ profile.name }}" class="lb-mono lb-muted">—</span>
        </div>
        {% if profile.elastic_url %}
        <div class="lb-agent-row">
          <span class="lb-eyebrow">Elastic URL</span>
          <span class="lb-mono" style="font-size: 11px; word-break: break-all;">{{ profile.elastic_url }}</span>
        </div>
        {% endif %}
        <div id="agentError-{{ profile.name }}" class="lb-agent-error hidden"></div>
      </div>
    </div>