BlackSnufkin fb52b1432e Add Fibratus EDR profile + dashboard cache + GrumpyCats package split
Fibratus EDR profile (kind: fibratus). Pull-from-event-log model, same
shape DetonatorAgent's FibratusEdrPlugin.cs uses: operator configures
Fibratus on the EDR VM with alertsenders.eventlog: {enabled: true,
format: json}; rule matches land in the Application log. Whiskers gains
GET /api/alerts/fibratus/since which wevtutil-queries the log,
extracts <TimeCreated SystemTime> + <EventID> + <Data>, ships the raw
JSON blobs back. The new FibratusEdrAnalyzer mirrors Elastic's
two-phase shape — Phase 1 exec, Phase 2 polls Whiskers — and normalizes
Fibratus's actual schema (events[].proc.{name,exe,cmdline,parent_name,
parent_cmdline,ancestors} + bare tactic.id/technique.id/subtechnique.id
labels) into the saved-view renderer's dict.
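The Phase-2 normalization can be sketched as follows. This is an illustrative flattening, not the actual FibratusEdrAnalyzer code; the function name `normalize_fibratus_alert` and the output key names are assumptions, and it assumes the bare MITRE labels sit at the alert's top level as described above.

```python
def normalize_fibratus_alert(alert: dict) -> list[dict]:
    """Flatten one Fibratus alert (events[].proc.* plus bare
    tactic.id/technique.id/subtechnique.id labels) into rows a
    saved-view renderer can consume. Sketch only."""
    rows = []
    for event in alert.get("events", []):
        proc = event.get("proc", {})
        rows.append({
            "process_name": proc.get("name"),
            "process_path": proc.get("exe"),
            "command_line": proc.get("cmdline"),
            "parent_name": proc.get("parent_name"),
            "parent_command_line": proc.get("parent_cmdline"),
            "ancestors": proc.get("ancestors"),
            # Bare labels live on the alert itself, not per-event.
            "tactic_id": alert.get("tactic.id"),
            "technique_id": alert.get("technique.id"),
            "subtechnique_id": alert.get("subtechnique.id"),
        })
    return rows
```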

Whiskers /api/info now reports telemetry_sources: ['fibratus'] when
fibratus.exe is at C:\Program Files\Fibratus\Bin\, so the
orchestrator can preflight before dispatching. wevtutil's single-quoted
attribute output is parsed correctly.
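The quote-handling fix matters because wevtutil can emit attributes single-quoted (`SystemTime='…'`), so a pattern pinned to double quotes silently drops every event. A minimal quote-tolerant extraction might look like this; the real Whiskers parser may differ, and `extract_system_time` is an illustrative name.

```python
import re
from typing import Optional

# Accept either quote style around the SystemTime attribute value.
SYSTEM_TIME = re.compile(r"SystemTime=['\"]([^'\"]+)['\"]")

def extract_system_time(event_xml: str) -> Optional[str]:
    """Pull the SystemTime attribute out of a wevtutil event
    fragment, regardless of quoting. Returns None if absent."""
    match = SYSTEM_TIME.search(event_xml)
    return match.group(1) if match else None
```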

Dashboard reachability cache (services.edr_health). 30s TTL +
background poller every 15s. Per-probe timeouts dropped 4s/5s -> 2s.
First load post-boot waits at most one probe cycle; every subsequent
load <5ms (cache hit).
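The cache shape described above can be sketched as a TTL-guarded value with an optional background refresher. This is a sketch of the pattern, not the actual services.edr_health module; class and method names are assumptions.

```python
import threading
import time

class EdrHealthCache:
    """Reads within `ttl` seconds of the last probe are cache hits;
    a daemon poller keeps the cache warm so first-load stalls are
    bounded by one probe cycle. Sketch only."""

    def __init__(self, probe, ttl=30.0):
        self._probe = probe              # callable returning probe results
        self._ttl = ttl
        self._lock = threading.Lock()
        self._value = None
        self._stamp = float("-inf")      # forces a probe on first get()

    def refresh(self):
        value = self._probe()            # run outside the lock: probes may block
        with self._lock:
            self._value = value
            self._stamp = time.monotonic()
        return value

    def get(self):
        with self._lock:
            if (time.monotonic() - self._stamp) < self._ttl:
                return self._value       # cache hit: no network I/O
        return self.refresh()

    def start_poller(self, interval=15.0):
        def loop():
            while True:
                self.refresh()
                time.sleep(interval)
        threading.Thread(target=loop, daemon=True).start()
```

Polling at half the TTL (15s vs 30s) means a healthy poller keeps every foreground read on the cached path.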

GrumpyCats package split: 1085-line monolith into:
  grumpycat.py      — orchestrator (14 lines)
  cli/              — parser, handlers, runner
  litterbox_client/ — base + per-domain mixins (files, analysis,
                       doppelganger, results, edr, reports, system)
                       composed into LitterBoxClient.
LitterBoxMCP.py rewires its one import. New CLI subcommand
fibratus-alerts and matching MCP tool fibratus_alerts_since pull
Fibratus alerts via a LitterBox passthrough endpoint
(/api/edr/fibratus/<profile>/alerts/since) for wire-checking the agent
without dispatching a payload.
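The mixin composition can be sketched like this. Only the passthrough endpoint path comes from this commit; `BaseClient`, `FilesMixin`, and the method names are illustrative stand-ins for the real litterbox_client package.

```python
class BaseClient:
    """Shared HTTP plumbing; every mixin calls self._make_request."""

    def __init__(self, base_url: str):
        self.base_url = base_url

    def _make_request(self, method: str, path: str, **kwargs):
        # The real client issues the HTTP call here; returning the
        # wire shape keeps this sketch self-contained.
        return (method, path)


class FilesMixin:
    def upload_file(self, path: str):
        return self._make_request("POST", "/api/files/upload")  # hypothetical route


class EdrMixin:
    def fibratus_alerts(self, profile: str):
        # Passthrough endpoint from this commit.
        return self._make_request(
            "GET", f"/api/edr/fibratus/{profile}/alerts/since"
        )


class LitterBoxClient(FilesMixin, EdrMixin, BaseClient):
    """Composition only: _make_request resolves via the MRO to
    BaseClient, so each per-domain mixin stays HTTP-agnostic."""
```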

CHANGELOG updated.
2026-04-30 05:28:54 -07:00


"""Report retrieval — inline HTML, save-to-disk, open-in-browser."""
import os
import re
import tempfile
import webbrowser
from datetime import datetime
from typing import Optional, Union
import requests
from .exceptions import LitterBoxError
class ReportsMixin:
def get_report(self, target: str, download: bool = False) -> Union[str, bytes]:
"""Fetch the analysis report. Returns the HTML string by default,
or raw bytes when `download=True` (useful for piping)."""
params = {"download": "true" if download else "false"}
response = self._make_request("GET", f"/api/report/{target}", params=params)
return response.content if download else response.text
def download_report(self, target: str, output_path: Optional[str] = None) -> str:
"""Download the report and save to disk. Returns the path written.
Streams chunks to disk so multi-MB reports don't sit in memory.
"""
response = self._make_request(
"GET", f"/api/report/{target}",
params={"download": "true"}, stream=True,
)
filename = self._extract_filename_from_response(response, target)
# If `output_path` is a directory, save inside it; else use it as-is.
if output_path:
save_path = (
os.path.join(output_path, filename)
if os.path.isdir(output_path)
else output_path
)
else:
save_path = filename
try:
with open(save_path, "wb") as f:
for chunk in response.iter_content(chunk_size=8192):
if chunk: # filter out keep-alive chunks
f.write(chunk)
self.logger.info(f"Report saved to {save_path}")
return save_path
except Exception as e:
raise LitterBoxError(f"Failed to save report: {str(e)}")
def open_report_in_browser(self, target: str) -> bool:
"""Render the report and open it in the default browser via a
temp file. Returns False on any failure (logged)."""
try:
report_content = self.get_report(target, download=False)
fd, path = tempfile.mkstemp(suffix=".html", prefix="litterbox_report_")
try:
with os.fdopen(fd, "w", encoding="utf-8") as tmp:
tmp.write(report_content)
webbrowser.open("file://" + path)
self.logger.info(f"Report opened in browser from {path}")
return True
except Exception as e:
self.logger.error(f"Failed to open report in browser: {str(e)}")
return False
except Exception as e:
self.logger.error(f"Failed to generate report: {str(e)}")
return False
# ---- internal -------------------------------------------------------
@staticmethod
def _extract_filename_from_response(response: requests.Response, target: str) -> str:
"""Pull the filename from the Content-Disposition header, or
synthesize one with a timestamp for fallback."""
content_disposition = response.headers.get("Content-Disposition", "")
if "filename=" in content_disposition:
match = re.search(r'filename="([^"]+)"', content_disposition)
if match:
return match.group(1)
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
return f"LitterBox_Report_{target[:8]}_{timestamp}.html"