# Audit export to Elastic
Pipe Syncropel's security-relevant record stream into Elasticsearch or another SIEM. JSONL in, indexed and searchable out.
## Problem
Security-relevant events — auth failures, AITL decisions, dispatch outcomes, governance denials, permission-rule refusals — need to land in your SIEM so they're searchable alongside everything else: firewall logs, auth provider logs, Kubernetes audit. You don't want to write a custom integration. You want to take what Syncropel already exports and pipe it.
## Recipe
`spl audit export` emits a JSONL stream on stdout. Point it at your SIEM's ingestion endpoint.
```bash
# Hourly export via cron. Run it every hour; --since 65m pulls the last
# 65 minutes, so consecutive runs overlap by a 5-minute safety margin.
spl audit export --since 65m | \
  curl -s -X POST "https://elastic.example.com:9200/syncropel-audit/_bulk" \
    -H "Content-Type: application/x-ndjson" \
    -H "Authorization: ApiKey $ELASTIC_API_KEY" \
    --data-binary @-
```

Elasticsearch's bulk API accepts JSONL natively — no format translation needed. Each line in the export becomes one indexed document. The `_bulk` endpoint expects alternating action and document lines, and `spl audit export` emits in that shape when the `--bulk-format elastic` flag is set.
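For illustration, the action/document framing looks roughly like this — the field names in the document line are invented for the example, not a guaranteed schema:

```json
{"index": {"_index": "syncropel-audit"}}
{"id": "rec_01HXAMPLE", "category": "governance", "actor": "did:sync:system:engine", "outcome": "denied", "ts": "2025-06-01T08:14:02Z"}
```

With the flag set, the pipeline becomes: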
```bash
spl audit export --since 65m --bulk-format elastic | \
  curl -s -X POST "https://elastic.example.com:9200/syncropel-audit/_bulk" \
    -H "Content-Type: application/x-ndjson" \
    -H "Authorization: ApiKey $ELASTIC_API_KEY" \
    --data-binary @-
```

For Splunk HEC, Datadog, or a generic webhook, replace the curl target — the JSONL stream is the same.
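As a sketch, the Splunk variant might look like this, assuming a standard HEC setup — the host, port, sourcetype, and token variable are placeholders. The raw collector endpoint is used because it accepts newline-delimited events without per-event JSON wrapping:

```bash
spl audit export --since 65m | \
  curl -s "https://splunk.example.com:8088/services/collector/raw?sourcetype=syncropel:audit" \
    -H "Authorization: Splunk $SPLUNK_HEC_TOKEN" \
    --data-binary @-
```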
Drop it in cron:
```
# /etc/cron.d/syncropel-audit-export
0 * * * * syncropeluser /usr/local/bin/export-audit.sh
```

Where export-audit.sh wraps the command above with error handling and logging.
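A minimal sketch of that wrapper, assuming the Elastic pipeline above — the log path, env-file location, and failure handling are illustrative choices, not fixed conventions:

```bash
#!/usr/bin/env bash
# export-audit.sh — hourly audit export with basic error handling and logging.
set -euo pipefail

LOG=/var/log/syncropel/audit-export.log
# Assumes ELASTIC_API_KEY is provided, e.g. via a root-readable env file.
source /etc/syncropel/audit-export.env

{
  echo "[$(date -u +%FT%TZ)] starting export"
  # curl -f makes HTTP errors (4xx/5xx) exit nonzero so they hit the else branch.
  if spl audit export --since 65m --bulk-format elastic | \
      curl -sf -X POST "https://elastic.example.com:9200/syncropel-audit/_bulk" \
        -H "Content-Type: application/x-ndjson" \
        -H "Authorization: ApiKey $ELASTIC_API_KEY" \
        --data-binary @- > /dev/null; then
    echo "[$(date -u +%FT%TZ)] export ok"
  else
    echo "[$(date -u +%FT%TZ)] export FAILED"
  fi
} >> "$LOG" 2>&1
```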
## What's in the stream
The audit export filters the record log to security-relevant records. By default it includes:
- `system` — records from `did:sync:system:engine` and related system actors (config reloads, notifications, governance events).
- `aitl` — pending and decided AITL records.
- `dispatch` — dispatch outcomes, budget exhaustion, timeouts.
- `governance` — namespace narrowing rejections, permission rule denials, kill-switch records.
Narrow further if your SIEM ingestion has cost implications:
```bash
# Only AITL + governance — the decisions-that-matter slice.
spl audit export --since 65m --categories aitl,governance
```

## The trade-off
Export is pull-based, not streaming. You run `spl audit export` on your cron schedule; the command queries the record log and emits everything matching since the watermark. If your SIEM needs real-time alerting (sub-minute latency from event to alert), an hourly cron is too slow. For that, tail the daemon log or subscribe via SSE to the broadcast channel — but you lose the structured audit categorisation and have to filter in your pipeline.
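If you take the log-tailing route, the shape is roughly the following — the daemon log path, its JSON-lines format, the category field, and the webhook URL are all assumptions about the deployment, not documented interfaces:

```bash
# Assumes the daemon writes JSON lines with a "category" field.
tail -F /var/log/syncropel/daemon.log \
  | jq -c --unbuffered 'select(.category == "governance" or .category == "aitl")' \
  | while read -r event; do
      # Forward each matching event to an alerting webhook (placeholder URL).
      curl -s -X POST "https://alerts.example.com/hook" \
        -H "Content-Type: application/json" \
        -d "$event"
    done
```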
`--since` is wall-clock, not per-record. Records ingested in a brief burst and then quiet for an hour can straddle the window boundary. The 5-minute overlap (`--since 65m` on an hourly cron) gives the SIEM's dedupe (by record ID) a chance to drop duplicates. Every record's ID is stable and included in the export — dedupe is safe.
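On the Elastic side you can make that dedupe concrete by using the record ID as the document `_id`, so a re-ingested duplicate overwrites rather than duplicates. A sketch assuming the ID is exported in an `id` field — verify the field name against your own export:

```bash
spl audit export --since 65m \
  | jq -c '{index: {_index: "syncropel-audit", _id: .id}}, .' \
  | curl -s -X POST "https://elastic.example.com:9200/_bulk" \
      -H "Content-Type: application/x-ndjson" \
      -H "Authorization: ApiKey $ELASTIC_API_KEY" \
      --data-binary @-
```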
Permission-rule denials are currently tracing events, not records. They appear in the daemon log but don't (yet) land in the record log, so `spl audit export` won't emit them — your SIEM sees them only if your pipeline is also tailing the daemon log. This gap is tracked; making denials first-class audit records is a polish task on the roadmap.
## See also
- SIEM integration guide — full pipeline shapes for Elastic, Splunk, Datadog, and generic webhooks.
- CLI reference — audit export — every flag the command takes.
- Governance — what the kernel considers security-relevant.