Syncropel Docs

CLI Reference

Complete reference for the spl command-line interface.

Global Options

All commands support:

  • -o json or --json — Output in JSON format
  • -o stream-json — Output as newline-delimited JSON stream
  • -h or --help — Show help

Server

spl serve [--daemon] [--port 9100] [--memory] [--echo] [--store sqlite:///path]
spl serve --stop
spl serve --restart
spl serve --status
spl serve --logs [-n 100] [--no-follow]
spl serve --insecure-localhost
spl status
spl version

  • serve — Start the Syncropel server
  • serve --daemon — Start in background mode
  • serve --memory — Use ephemeral in-memory store (for development)
  • serve --echo — Echo mode — no AI provider needed
  • serve --stop — Stop the background daemon
  • serve --restart — Stop the daemon if running, then start in background mode
  • serve --status — Print daemon status without starting the server
  • serve --logs — Tail the daemon log file. -n controls the initial line count; --no-follow prints once and exits
  • serve --insecure-localhost — Dev escape hatch: skip bearer-token authentication. Forces the daemon to bind to 127.0.0.1 regardless of --host and emits a WARN log on startup. Intended for local dev daemons and E2E tests — never pass in production
  • status — Show server health, uptime, record count
  • version — Show version

Run as a system service

On Windows, register the daemon with the Service Control Manager so it survives logoff and starts at boot. See the Windows service guide for the full runbook.

spl serve --install-service [--service-name <name>]
spl serve --uninstall-service [--service-name <name>]

  • --install-service — Register spl serve with the Windows Service Control Manager. Must be run from an elevated shell. Idempotent: if a service with this name already points at the current binary, succeeds without changes
  • --uninstall-service — Remove the SCM registration. Stops the service first if running. Safe on a host that does not have the service installed
  • --service-name <name> — Override the default service name (SyncropelDaemon). Useful on multi-instance hosts
  • --service — Internal — invoked by the SCM when the service starts. Operators should not pass this directly

On Linux and macOS these flags return a clear error and point at systemd / launchd. See the operator runbook for the systemd unit template.

Records

spl intend "goal" [--thread ID] [--actor DID]
spl do "action" [--thread ID]
spl know "observation" [--thread ID]
spl learn "insight" [--thread ID]

  • intend — INTEND record — State a goal or plan
  • do — DO record — Record an action
  • know — KNOW record — Record an observation
  • learn — LEARN record — Record an insight or decision

Search

Semantic search over the record log. Embeds the query through a configured provider and returns the top K records ranked by cosine similarity. See the semantic search guide for setup and patterns.

spl search "authentication failure logs" -k 10
spl search "query" --thread th_abc --actor did:sync:agent:dev --kind core.task.record --after-clock 100
spl search "query" --json | jq '.hits[].record.id'

Requires an embedding provider configured first — see the Configuration → Embedding provider section below.
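
As a mental model, the ranking step can be sketched in a few lines of Python. This is an illustration of cosine-similarity top-K selection only, not the engine's actual implementation — the real embedder and storage live behind the configured provider:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query_vec, records, k):
    # records: list of (record_id, embedding) pairs.
    scored = [(cosine(query_vec, vec), rid) for rid, vec in records]
    scored.sort(reverse=True)
    return [rid for _, rid in scored[:k]]

hits = top_k([1.0, 0.0],
             [("r1", [1.0, 0.1]), ("r2", [0.0, 1.0]), ("r3", [0.9, 0.2])],
             k=2)
```

The `-k` flag above corresponds to the `k` parameter here: only the K highest-scoring records come back.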

Threads

spl thread list
spl thread show <id>
spl thread records <id>

Tasks

spl task add "goal" [--priority P] [--label L] [--assign @actor] [--alias]
spl task list [--all]
spl task show <id|alias>
spl task start <id|alias>
spl task done <id|alias> --summary "..." [--domain D]
spl task fail <id|alias> --reason "..."
spl task approve <id|alias> [--domain D] [--notes "..."] [--verify]
spl task reject <id|alias> --reason "..." [--domain D]
spl task block <id|alias> --reason "..."
spl task defer <id|alias> --reason "..."
spl task cancel <id|alias> [--reason "..."] [--superseded-by ID]    # --reason optional (default "removed by user")
spl task reopen <id|alias> --reason "..."
spl task comment <id|alias> "text"
spl task edit <id|alias> --set field=value [--reason "..."]
spl task plan <parent> "sub-goal"
spl task handoff <id|alias>
spl task dispatch <id|alias> [--worktree]
spl task tree <id|alias>

Workspaces

A workspace is a content-addressed bundle of fold, projection, and policy components that you author, test, publish, and share. See the workspace concept for the full picture.

spl workspace init <name> [--template <id>] [--target <path>]
spl workspace test [--fixture <name>] [--update] [--watch] [-o json]
spl workspace publish [--draft | --release] [--catalog <thread-ref>] [--version <label>] [--skip-tests]
spl workspace subscribe <workspace-ref> [--version-pin <sha-or-label>] [--grant <scope>]
spl workspace unsubscribe <workspace-ref> [--reason <text>]
spl workspace migrate <workspace.json> --using <migration.json> [--output <path>]

  • workspace init <name> — Scaffold a starter workspace from a template. Default template is tracker. Available: tracker, multi-page, newsletter, course, recipe-collection, solo-tracker, catalog
  • workspace test — Run the native test runner over tests/fixtures/ and compare against tests/expected/. Deterministic — same input bytes yield same output bytes
  • workspace test --update — Write expected files from current fold output. Use after intentional fixture changes
  • workspace test --watch — Re-run on file save (manual verification mode)
  • workspace publish — Default: emit as draft. Drafts are namespace-local and not federated
  • workspace publish --release — Emit as published — federates per workspace policy
  • workspace publish --release --catalog <ref> — Also emits a catalog listing record signed by your publisher_did
  • workspace publish --version <label> — Writes a human-readable semver label into the manifest before emit
  • workspace publish --skip-tests — Bypass the structural test phase (schema validation still runs)
  • workspace subscribe <ref> — Record your subscription to a workspace, optionally pinning a publisher and version
  • workspace unsubscribe <ref> — Re-emit subscription with active: false (records are immutable; no delete)
  • workspace migrate — Apply a declared upgrade-path record to transform a workspace from one major version to the next

Build your first workspace walks through authoring a workspace end to end.

Share

One-shot bug-repro bundles. Share a thread (with explicit consent) so a developer can replay the exact records that triggered an issue.

spl share send <thread-id> --to <recipient-did> [--include-deps] [--depth <n>] [--expires <duration>] [--out <path>]
spl share receive <bundle-path> --from <sender-did> [--dry-run]

  • share send <thread> --to <did> — Bundle the thread's records as JSON Lines, emit a consent record granting the recipient read access, write a single-file bundle
  • share send --include-deps — Walk parent edges recursively so cross-thread parent chains land in the same bundle. Bounded by --depth (default 8, hard cap 10K records)
  • share send --expires <duration> — Time-bound the consent. Format: 7d, 24h, 45m, 30s
  • share send --out <path> — Write the bundle to a specific path. Default: <thread>.bundle.jsonl
  • share receive <bundle> --from <did> — Verify signatures and content hashes, replay records into the local kernel
  • share receive --dry-run — Verify the bundle without ingesting (good for sanity-checking before replay)

The bundle format is just records. The receiver kernel ingests them through the same path it uses for any record. Tampering is detectable — re-hashing the seven-field record envelope catches any mutation.
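
A sketch of the tamper check. The field names here are hypothetical — the real envelope's seven fields may be named differently — but the shape (canonicalize the envelope, hash it, compare) is what re-hashing implies:

```python
import hashlib, json

# Hypothetical seven-field envelope; actual field names may differ.
ENVELOPE_FIELDS = ["id", "thread", "actor", "act", "clock", "parent", "body"]

def envelope_hash(record):
    # Canonical JSON (sorted keys, no whitespace) over the envelope
    # fields, then SHA-256 — any byte-level mutation changes the digest.
    canonical = json.dumps({f: record.get(f) for f in ENVELOPE_FIELDS},
                           sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

def verify(record, expected_hash):
    return envelope_hash(record) == expected_hash

rec = {"id": "r1", "thread": "th_a", "actor": "did:sync:user:alice",
       "act": "KNOW", "clock": 7, "parent": None, "body": {"note": "ok"}}
h = envelope_hash(rec)
tampered = dict(rec, body={"note": "changed"})
```

Flipping any field — here the body — makes `verify` fail, which is how the receiver detects mutation.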

Secrets

Operator-facing surface for managing credential handles. The substrate stores the handle + audit metadata; the value lives in your OS keychain (or the backend you reference). See Secrets concepts for the four-layer enforcement design.

spl secret set <handle>          # interactive — TTY echo disabled, or piped stdin
spl secret get <handle>          # returns metadata only (no value)
spl secret get <handle> --reveal # prints value (warns to stderr)
spl secret list                  # tabular: handle, backend, lifecycle, last_accessed
spl secret promote <ENV_VAR>     # graduate env var to OS keychain after confirmation
spl secret delete <handle>       # tombstone descriptor + audit record

  • secret set <handle> — Reads value from a TTY (tcsetattr echo disabled) or piped stdin. Never accepts a --value flag. Writes a core.credential.v1 descriptor + sets the value at the configured backend
  • secret get <handle> — Returns the descriptor metadata. Audit record emitted before the response returns (record-before-result invariant)
  • secret get <handle> --reveal — Prints the value to stdout. Audit record marks outcome: allowed with purpose: revealed_to_operator
  • secret list — Lists every credential descriptor visible to the caller. Backend resolution is lazy — listing does not unlock values
  • secret promote <ENV_VAR> — Reads the named environment variable, prompts confirmation, writes the value to the OS keychain, emits the descriptor + a secret.access.v1 audit record
  • secret delete <handle> — Marks the descriptor lifecycle: erased and removes the value from the backend. The descriptor record stays in the audit log

Backend URI schemes shipped today:

  • keychain://syncropel/<handle> — platform-native (macOS Keychain, Linux Secret Service, Windows DPAPI/Credential Manager)
  • env://VAR_NAME — read-only env-var resolver (compiles + tests on every platform)

Future schemes: 1password://, vault://, kms://.

Trust

spl trust

Shows trust scores per actor per domain with success count, total observations, and current zone.

Actors

spl actor list
spl actor show <did|name>
spl actor register <did> [--display-name NAME] [--category agent]
spl actor use <did|name>
spl actor export <did|name>
spl actor import <path> [--trust-discount 0.5]

Memory

spl memory list [--actor DID]
spl memory add "name" --type TYPE --description "..."
spl memory remove "name"
spl memory search "keyword"

Memory types: user, feedback, project, reference, skill, insight.

Adapters

spl adapter list
spl adapter add <did> --family cli --binary <path> [--timeout 600]
spl adapter remove <name>

Configuration

spl config show
spl config list-rules
spl config add-rule --name N --domain D --act A --target DID
spl config add-trigger --name N --match "expr" --target DID [--cooldown 60]
spl config list-triggers
spl config set-key <provider> <key>
spl config model <name>
spl config path

Indexed field registry

Declare which body.<field> paths for a given body.kind should be backed by SQLite expression indexes. See the body-kind manifest guide for when and why.

spl config add-body-kind-manifest \
  --kind music.catalog.track \
  --indexed-field body.title \
  --indexed-field body.artist_id \
  [--description "..."] [--disable]

spl config list-body-kind-manifests [-o json]

The daemon applies manifests at config-reload time: each declared field becomes a CREATE INDEX ... ON records(json_extract(body, '$.<field>')) so rich-query filter predicates on that path use the index transparently. Re-adding the same --kind replaces the manifest (latest LEARN wins).
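
The effect is plain SQLite. A standalone sketch (table and column names simplified) showing that an expression index over json_extract is picked up by an equality predicate on the same expression:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, body TEXT)")
# Expression index mirroring a manifest-declared field, body.title.
conn.execute("CREATE INDEX idx_title ON records(json_extract(body, '$.title'))")
conn.execute("INSERT INTO records (body) VALUES (?)",
             ('{"title": "Blue Train", "artist_id": "a1"}',))
conn.execute("INSERT INTO records (body) VALUES (?)",
             ('{"title": "So What", "artist_id": "a2"}',))

# A filter predicate on the same expression uses the index transparently.
row = conn.execute(
    "SELECT id FROM records WHERE json_extract(body, '$.title') = ?",
    ("So What",)).fetchone()
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM records "
    "WHERE json_extract(body, '$.title') = 'So What'").fetchall()
```

EXPLAIN QUERY PLAN reports a search using idx_title rather than a full table scan, which is the win the manifest buys you.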

Embedding provider

Configure the provider used for semantic search. Writes an embedding_provider LEARN record on th_engine_config; the daemon rebuilds its embedder on broadcast — no restart. The ollama provider ships today; hosted adapters land in a subsequent release.

# Enable semantic search against a local Ollama (nomic-embed-text is the default model).
spl config embedding-provider set ollama \
  [--endpoint http://localhost:11434] \
  [--model nomic-embed-text]

spl config embedding-provider show
spl config embedding-provider clear             # disable semantic search

Without a configured provider, spl search and POST /v1/records/search return 503 and both SDKs surface a disabled: true flag instead of raising.

Async federation relay

Configure relay mailboxes for peers that aren't always online. See the async federation guide and the relay runbook for setup and operation.

spl config relay set <url> [--for-pair <did|all>]
spl config relay show
spl config relay status [--for-pair <did>]
spl config relay clear [--for-pair <did|all>]
spl config relay push --peer <did> --thread <thread_id>

  • relay set — Configure a relay URL for a peer. --for-pair defaults to all — the catch-all applied to any peer without a more specific mapping
  • relay show — List configured relay entries
  • relay status — Probe configured relays against their /health endpoint. With --for-pair <did>, probes only the relay that would be used for that peer (specific match or catch-all fallback)
  • relay clear — Remove a relay entry. --for-pair defaults to all to drop the catch-all; pass a DID to clear a specific mapping
  • relay push — Deposit one thread's records into the configured peer's relay mailbox. Used when a direct pair is known to be offline. Returns an envelope ID on success

Relay entries are stored as federation.relay LEARN records on th_engine_config. Outbound set / clear / push take effect immediately; enabling a new relay URL for inbound receive requires a daemon restart, since the receive loops are spawned at startup.

# Catch-all: route to the hosted relay for any peer without a specific mapping
spl config relay set https://relay.syncropel.com

# Override for one peer
spl config relay set https://my-relay.example.com --for-pair did:sync:peer:alice

# Deposit a thread now
spl config relay push --peer did:sync:peer:alice --thread th_abc123...

Responder manifest overrides

Curate the responder list surfaced on /v1/capabilities and /.well-known/syncropel. The manifest auto-populates from actor/adapter records; these commands let operators add custom entries, override auto-populated values, or suppress specific DIDs.

The responder manifest is dormant in this release — the schema is persisted and surfaced on discovery endpoints so peers can already read it, but the daemon does not yet consume these fields for its own routing decisions. A subsequent release will make them load-bearing. Configure now if you're already coordinating cross-peer discovery; otherwise there's no rush.

spl config responders-add --did <did> [--kind <kind>] [--capability <tag>]... \
  [--trust-floor <json>] [--cost-model <json>] [--availability <json>] [--metadata <json>]
spl config responders-list [--json]
spl config responders-remove --did <did>

  • --did <did> (add, remove) — Target responder DID (actor or synthetic). Required
  • --kind <kind> (add) — actor, llm, pattern, or system. Omitted defers to the auto-populated value; when none exists, defaults to actor
  • --capability <tag> (add) — Repeated. Passing this replaces the entire capability list for this DID
  • --trust-floor <json> (add) — JSON trust-floor object, e.g. {"code": 0.75}
  • --cost-model <json> (add) — JSON cost-model object, e.g. {"per_query_usd": 0.04}
  • --availability <json> (add) — JSON availability object, e.g. {"timezone": "PST"}
  • --metadata <json> (add) — Free-form, non-normative JSON object

responders-add overrides matched auto-populated entries; unmatched DIDs are appended as operator-authored. responders-remove tombstones a DID so it is suppressed from the manifest even if an auto-populated entry exists.

# Advertise a custom responder with an explicit cost model
spl config responders-add \
  --did did:sync:agent:reviewer \
  --kind actor \
  --capability code \
  --capability review \
  --cost-model '{"per_query_usd": 0.02}'

# Hide an auto-populated entry
spl config responders-remove --did did:sync:agent:deprecated

Dispatch

spl dispatch <actor> "goal" [--thread ID] [--budget 1.0] [--timeout 600]

Inference — spl infer

Emit an infer.query.v1 INTEND and (by default) wait for the KNOW. See the Inference guide for what every field means.

spl infer <goal> --responder <KIND:SELECTOR> [--responder ...] \
  [--kind <body.kind>] \
  [--fold consensus|best_of|waterfall_first|ensemble_weighted|expression] \
  [--fold-expression <CEL>] \
  [--orchestration single_shot|verify|waterfall|retry_on_low_confidence|escalate|ensemble_with_audit] \
  [--dial <0..1>] \
  [--budget <USD>] [--timeout <seconds>] \
  [--min-quorum <int>] [--top-k <int>] [--relevance-threshold <0..1>] \
  [--answer-schema <URI>] \
  [--reversible] \
  [--obligation-resolution lwd|fulfills_wins|validation_error] \
  [--metadata k=v] [--thread <id>] [--actor <did>] \
  [--wait | --no-wait] [--poll-interval-ms 500] [--poll-timeout-secs 300]

Responder selector grammar

  • llm:<model> → { kind: "llm", model: "<model>" }
  • llm:<model>:<did> → { kind: "llm", model: "<model>", did: "<did>" }
  • pattern:<capability> → { kind: "pattern", capability: "<capability>" }
  • actor:<did> → { kind: "actor", did: "<did>" }
  • system:<domain> → { kind: "system", domain: "<domain>" }
  • capability:<domain>:<kind> → { kind: "any", domain: "<domain>", capability: "<kind>" }
  • cel:<expression> → { kind: "any", expression: "<expression>" }
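
A sketch of how that grammar might be parsed — hypothetical code, not the CLI's implementation. Splitting only on the first colon per form lets DIDs and CEL expressions keep their embedded colons:

```python
def parse_selector(s):
    # Map a CLI selector string onto a responder spec per the grammar.
    head, _, rest = s.partition(":")
    if head == "llm":
        model, _, did = rest.partition(":")
        spec = {"kind": "llm", "model": model}
        if did:
            spec["did"] = did            # llm:<model>:<did> form
        return spec
    if head == "pattern":
        return {"kind": "pattern", "capability": rest}
    if head == "actor":
        return {"kind": "actor", "did": rest}
    if head == "system":
        return {"kind": "system", "domain": rest}
    if head == "capability":
        domain, _, kind = rest.partition(":")
        return {"kind": "any", "domain": domain, "capability": kind}
    if head == "cel":
        return {"kind": "any", "expression": rest}
    raise ValueError(f"unknown selector form: {s}")
```

For example, `actor:did:sync:user:alice` keeps the full DID intact because only the first colon separates the form from its selector.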

Three flag combinations

Simple — two responders, default consensus fold, budget ceiling:

spl infer "summarise this paper" \
  --responder llm:sonnet \
  --responder pattern:summary-v2 \
  --fold consensus \
  --budget 0.50 \
  --wait

Verify orchestration with human tiebreaker:

spl infer "review this diff" \
  --responder llm:sonnet-4-6 \
  --responder actor:did:sync:user:alice \
  --orchestration verify \
  --budget 0.40 \
  --timeout 1800

No-wait fire-and-forget for long-running queries:

spl infer "long research task" \
  --responder llm:opus \
  --budget 2.00 \
  --timeout 3600 \
  --no-wait
# prints: correlation_id: <sha256>; thread_id: th_<sha256>
# poll: spl thread show <thread>

For queries more complex than the CLI flags can express, pass a JSON file:

spl infer --query-file query.json --wait

Namespaces

5-level hierarchy for tenancy, governance, and policy composition. See spec §08-governance/04.

spl namespace create <id> [--description X] [--policy X]
spl namespace list [--json]
spl namespace show <id> [--json]
spl namespace archive <id>
spl namespace delete <id>

Namespace IDs are slash-separated, at most 4 segments under DEFAULT, using only lowercase ASCII letters, digits, -, and _:

spl namespace create acme-corp                              # ORG
spl namespace create acme-corp/payments                     # PROJECT
spl namespace create acme-corp/payments/staging             # ENV
spl namespace create acme-corp/payments/staging/job-42      # JOB

Records that set body.namespace to a namespace that doesn't exist (or whose ancestor chain contains an archived/deleted entry) are rejected with 403 NAMESPACE_REJECTED. Records without body.namespace fall through to the implicit DEFAULT and are always accepted.
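
The ID rule can be sketched as a validator. This is illustrative only — the daemon's actual checks also walk the ancestor chain for archived or deleted entries, which this snippet does not model:

```python
import re

# Lowercase ASCII letters, digits, '-' and '_' per segment.
SEGMENT = re.compile(r"^[a-z0-9_-]+$")

def validate_namespace(ns_id, max_segments=4):
    # Slash-separated, at most four segments under DEFAULT.
    segments = ns_id.split("/")
    if not 1 <= len(segments) <= max_segments:
        return False
    return all(bool(SEGMENT.match(seg)) for seg in segments)
```

So `acme-corp/payments/staging/job-42` passes, while a fifth segment, an uppercase letter, or a space fails.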

Developer Tools

spl doctor [--json]                         # Top-down diagnostic (7 checks)
spl debug replay <task-or-thread> [--json]  # Walk records, show status transitions
spl debug thread-diff <a> <b> [--json]      # Structural diff of two threads
spl audit export [--since 24h]              # JSONL export for SIEM
                 [--categories system,aitl,dispatch,governance]
                 [--actor X] [--thread X]
spl completion bash|zsh|fish|powershell|elvish   # Shell completion script

spl doctor runs 7 read-only checks and prints PASS/WARN/FAIL with a short reason for each. Exit code 0 = all pass, 1 = any warn, 2 = any fail. Suitable for cron/monitoring.

spl debug replay walks every record on a task thread in clock order and runs the same derive_status fold that spl task show uses. Status transitions are marked with an arrow and highlighted in yellow, so you can see exactly which record moved the task from inbox → active → review → approved.

spl debug thread-diff prints a side-by-side structural comparison of two threads: record count, participants, fold status, act distribution.

spl audit export emits one JSON object per line for security-relevant records (system actor writes, AITL decisions, dispatch outcomes, governance events). --since accepts relative durations (24h, 7d), ISO 8601 timestamps, or unix seconds. Output is directly ingestable into Splunk, Elastic, Loki, or Datadog.
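
One way to sketch the three accepted --since forms in Python — illustrative, not the CLI's actual parser, which may accept more formats:

```python
from datetime import datetime, timedelta, timezone

def parse_since(value, now=None):
    # Relative durations (24h, 7d), unix seconds, or ISO 8601 timestamps.
    now = now or datetime.now(timezone.utc)
    units = {"s": "seconds", "m": "minutes", "h": "hours", "d": "days"}
    if value and value[-1] in units and value[:-1].isdigit():
        return now - timedelta(**{units[value[-1]]: int(value[:-1])})
    if value.isdigit():
        return datetime.fromtimestamp(int(value), tz=timezone.utc)
    return datetime.fromisoformat(value)

since = parse_since("24h", now=datetime(2025, 1, 2, tzinfo=timezone.utc))
epoch = parse_since("1700000000")
iso = parse_since("2024-06-01T12:00:00+00:00")
```

The ordering matters: a bare digit string is treated as unix seconds, so only values with a trailing unit letter parse as relative durations.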

spl completion generates shell completion scripts via clap_complete. Pipe to the shell's completion directory:

spl completion bash > ~/.local/share/bash-completion/completions/spl
spl completion zsh  > ~/.zfunc/_spl
spl completion fish > ~/.config/fish/completions/spl.fish

Fleet

Multi-instance coordination primitives (instance registry + kill switch) and the task fan-out + barrier. See the Parallel Dev Tutorial for a hands-on walkthrough and Operator Runbook: Fleet Operations for day-2 procedures.

Fleet lifecycle

spl fleet start [--workers N]        # Boot coordinator + N worker instances
spl fleet stop [--workers | --all]   # Stop workers only, or everything

spl fleet start --workers 2 boots two worker daemons on auto-assigned SYNCROPEL_HOME paths (~/.syncro-worker-{a,b}) and auto-assigned ports (9201, 9202), with the current ~/.syncro prod daemon acting as coordinator. Each worker inherits SPL_FLEET_COORDINATOR_URL pointing at the coordinator and begins emitting heartbeats within ~15 seconds.

Worker state roots are persistent across fleet restarts. Next spl fleet start --workers 2 reuses them. To start fresh, delete ~/.syncro-worker-* before booting.

Fleet observability

spl fleet list                       # Snapshot: DID, endpoint, status, role, uptime
spl fleet show <did>                 # Detailed view of one instance
spl fleet ping <did>                 # HTTP /health reachability + latency check
spl fleet status [--live]            # Aggregate: live/stale/archived counts + frozen ns

spl fleet list reads the coordinator's th_instance_registry fold. Each instance is classified as live (heartbeat within the last N seconds), stale (heartbeat between 2N and 3N seconds old), or archived (heartbeat older than 3N seconds).

spl fleet status --live opens a continuously-refreshing view showing active dispatches per instance, pending fan-outs, and fleet-wide frozen namespaces.

Kill switch

spl kill --namespace <ns> --level soft [--reason "..."]
spl kill --namespace <ns> --level hard [--grace <secs>] [--reason "..."]
spl kill --emergency [--reason "..."]

spl unkill --namespace <ns>
spl unkill --emergency

Soft (level 1, reversible, preferred): denies new INTEND/DO/CALL records in the namespace; allows KNOW/LEARN so in-flight dispatches drain cleanly. Common use: release-window lockdown, one-namespace debugging.

Hard (level 2, reversible with data loss): same as soft plus after --grace <secs> (default 60) the engine refuses all record ingest in that namespace. Use when the drain-cleanly semantics of soft are not safe.

Emergency (level 3, reversible, fleet-wide): every namespace's effective dial drops to 0. No new CALL or DO records land with non-trivial effects. GETs and KNOWs/LEARNs still pass so state remains inspectable. Use when you don't know what's wrong and need everything to stop immediately.

All kill records are LEARN records on th_fleet_control with full audit trail in spl audit export.

Task fan-out

spl task fan-out <task-alias> \
  --subtask 'goal=...,target=worker-a,budget=2.00,timeout=1800' \
  --subtask 'goal=...,target=worker-b,budget=2.00,timeout=1800' \
  --subtask 'goal=...,target=least-loaded-worker,depends=0+1' \
  --join all

spl task join-status <parent-alias>             # Current join state
spl task retry <parent-alias> --subtask <index> # Re-dispatch one failed child

spl task fan-out creates a parent INTEND on the coordinator with the subtask list derived from the --subtask flags. The fan-out reconciler spawns one child INTEND per descriptor and POSTs each to its target worker's /v1/records endpoint via the instance registry.

Each --subtask flag is a comma-separated list of key=value pairs. Supported keys:

  • goal (required) — the child subtask's INTEND goal, passed verbatim to the worker
  • target (required) — one of worker-<alias>, specific:<full-did>, least-loaded-worker, round-robin (see §"Routing strategies" below)
  • budget (optional) — advisory per-subtask USD budget
  • timeout (optional) — worker-side timeout in seconds, default 600
  • depends (optional) — plus-separated list of 0-based subtask indices this child depends on. depends=0+1 means "wait for subtasks 0 and 1 to accept before spawning this one." The reconciler validates depends indices for cycles at parse time.

Declaring subtasks via a task body file isn't supported yet. The --subtask CLI flag form is the only shape shipped. A future release may add body-file parsing as an alternative form.
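
A sketch of the descriptor grammar. It is illustrative only: it assumes goal values contain no commas, and it models the parse-time cycle check as "depends may only reference earlier indices" — one simple rule under which the no-cycles guarantee holds, not necessarily the reconciler's actual rule:

```python
def parse_subtask(flag):
    # e.g. 'goal=...,target=worker-a,budget=2.00,timeout=1800,depends=0+1'
    desc = {}
    for pair in flag.split(","):           # assumes no commas inside values
        key, _, value = pair.partition("=")
        desc[key.strip()] = value.strip()
    if "goal" not in desc or "target" not in desc:
        raise ValueError("goal and target are required")
    if "depends" in desc:
        # plus-separated 0-based indices
        desc["depends"] = [int(i) for i in desc["depends"].split("+")]
    return desc

def check_depends(descs):
    # Forward-only references: each child may depend only on
    # lower-index subtasks, which rules out cycles by construction.
    for idx, d in enumerate(descs):
        for dep in d.get("depends", []):
            if not 0 <= dep < idx:
                raise ValueError(f"subtask {idx}: bad depends index {dep}")

descs = [
    parse_subtask("goal=build backend,target=worker-a,budget=2.00"),
    parse_subtask("goal=build frontend,target=worker-b,timeout=1800"),
    parse_subtask("goal=integrate,target=least-loaded-worker,depends=0+1"),
]
check_depends(descs)
```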

The --join flag accepts shorthands (all, any, k_of_n:K) or a custom CEL expression evaluating against the children binding. Examples:

all                                               # all subtasks must accept
any                                               # any one subtask accepting is enough
k_of_n:3                                          # at least 3 of N subtasks accept
children.all(c, c.verdict == "accept" && c.cost_usd < 5.00)   # custom CEL
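
The shorthand forms all reduce to a count over child verdicts. A sketch of that reduction (custom CEL expressions are out of scope here — they would need a real CEL evaluator):

```python
def join_met(shorthand, children):
    # children: list of dicts with at least a "verdict" key.
    accepted = sum(1 for c in children if c["verdict"] == "accept")
    if shorthand == "all":
        return accepted == len(children)
    if shorthand == "any":
        return accepted >= 1
    if shorthand.startswith("k_of_n:"):
        return accepted >= int(shorthand.split(":", 1)[1])
    raise ValueError("custom expressions need a CEL evaluator")

kids = [{"verdict": "accept"}, {"verdict": "accept"}, {"verdict": "fail"}]
```

With two of three children accepting, `any` and `k_of_n:2` are satisfied while `all` and `k_of_n:3` are not.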

spl task join-status prints which children have reported, which are pending, the current verdict-so-far, and whether the join predicate currently evaluates true.

spl task retry --subtask <index> re-dispatches a single failed subtask (e.g. to a different worker after the original went stale) without disturbing the rest of the fan-out.

Routing strategies

Fan-out descriptors set target to one of:

  • worker-<alias> — direct lookup by instance DID last-segment alias
  • specific:<full-did> — exact DID match
  • least-loaded-worker — live worker with lowest active_dispatches, tie-break on longest uptime
  • round-robin — deterministic pick based on hash(parent_id + descriptor_index) mod live_worker_count

Custom strategies (e.g. trust-weighted selection) are not currently supported.
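
The round-robin strategy is deterministic by construction. A sketch of the idea — the concrete hash function and byte handling here are assumptions; only the hash-mod-count shape comes from the description above:

```python
import hashlib

def round_robin_pick(parent_id, descriptor_index, live_workers):
    # hash(parent_id + descriptor_index) mod live_worker_count:
    # the same fan-out always routes the same descriptor the same way.
    digest = hashlib.sha256(f"{parent_id}:{descriptor_index}".encode()).digest()
    return live_workers[int.from_bytes(digest[:8], "big") % len(live_workers)]
```

Because the pick depends only on the parent ID and descriptor index, retrying a fan-out against the same live-worker set reproduces the original routing.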

Federation — Pair Sync

Continuous record sync between two daemons. See the federation guide for walk-through and semantics.

spl fleet sync add [--peer-url URL] <PEER_DID> <THREAD_ID>   # Create a pair
spl fleet sync list [--json]                                 # List all pairs
spl fleet sync status <PAIR_ID> [--json]                     # Detail on one pair
spl fleet sync trace <PAIR_ID>                               # Recent sync events
spl fleet sync pause <PAIR_ID>                               # Stop pulling
spl fleet sync resume <PAIR_ID>                              # Resume a paused pair
spl fleet sync kick <PAIR_ID>                                # Force immediate poll
spl fleet sync remove <PAIR_ID>                              # Delete pair

Pair direction: a pair on X with peer_did=Y means X pulls from Y (records flow Y→X). For bidirectional sync, create one pair on each side. --peer-url overrides DID resolution — useful when the peer's DID isn't in a shared directory.

One-shot sync (bulk-fetch a thread from a peer without a continuous pair):

spl sync <THREAD_ID> from <PEER>   # PEER is URL or DID

Uses thread snapshot/restore under the hood. Useful for onboarding a new instance with existing history.

Federation — Peer Discovery

Find other daemons without hand-configured peer URLs. See the federation guide for when each method fits.

spl fleet sync peers discover                       # Try all available methods
spl fleet sync peers discover --method mdns         # LAN broadcast (mDNS)
spl fleet sync peers discover --method did-web --domain alice.dev
spl fleet sync peers discover --method did-sync --namespace acme-corp
spl fleet sync peers discover --method did-sync --directory https://directory.myorg.com
spl fleet sync peers discover --method transitive --via <peer-did>
spl fleet sync peers discover --method static       # Peers from local config

Output groups results by method and deduplicates peers seen via multiple methods. Use --json for programmatic consumption.

Discovery never creates pairs automatically. Review the output, then run spl fleet sync add for the peers you want.

Manifest-based discovery

Query a peer's signed federation manifest directly. spl discover fetches /.well-known/syncropel from the given URL, extracts the federation manifest, verifies its Ed25519 signature against the advertised daemon DID, and prints a ranked result. See the federation discovery guide for the manifest format and how it fits with the other discovery methods.

spl discover --peer-url <url> [--capability <tag>] [--kind <body.kind>] [--strict] [--json]

  • --peer-url <url> — Peer URL to query. Accepts a bare host (https://alice.dev) or a full /.well-known/syncropel URL
  • --capability <tag> — Filter the manifest by an advertised capability tag. Matches against advertises.kinds
  • --kind <body.kind> — Filter by an advertised body.kind
  • --strict — Reject stale (expired) manifests entirely. Without this flag, expired manifests are surfaced with a warning
  • --json — Emit the verified manifest as JSON for programmatic consumption

# Fetch and verify a peer's manifest
spl discover --peer-url https://alice.dev

# Only return a result if the peer advertises a matching capability
spl discover --peer-url https://alice.dev --capability code

# Filter by record kind and fail on stale manifests
spl discover --peer-url https://alice.dev --kind music.catalog.track --strict --json

Direct-peer fetch with did:key verification is supported today. Directory aggregation and did:web / did:sync resolution arrive in a follow-up release.

Identity — publish a did:web document

Generate a W3C-conformant DID document for operators who control a domain. Upload the resulting file to https://<domain>/.well-known/did.json via your existing web hosting.

spl identity publish-did-web --domain alice.dev [--output did.json]

The document includes the daemon's current public key and a service entry of type syncropel pointing at the daemon's advertised endpoint. Anyone who knows alice.dev can then resolve did:web:alice.dev without a directory service.

Consent

Grant and manage cross-namespace record-sharing. See the consent guide.

spl consent grant --to-namespace <NS> [--hash-levels L0,L1,L2,L3] \
                  [--threads IDS] [--purpose TEXT] [--expires ISO8601]
spl consent list [--namespace NS]
spl consent revoke <GRANT_ID>

The source namespace is default unless the grant is written directly to another namespace. Without L0 in --hash-levels, records are downgraded to {"redacted": true} when they cross the boundary.
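
The boundary downgrade can be pictured as follows — illustrative only; the hash-level names simply follow the --hash-levels flag:

```python
def cross_boundary(record, hash_levels):
    # Without L0 in the grant's hash levels, the record body is
    # downgraded to {"redacted": true} at the namespace boundary.
    if "L0" in hash_levels:
        return record
    return {**record, "body": {"redacted": True}}

rec = {"id": "r1", "body": {"note": "secret"}}
```

A grant carrying L0 lets the full body through; any grant without it delivers only the redaction marker.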

Emergency Recovery

spl config permissions-unlock [--force]    # Unlock from permission lockout

permissions-unlock writes a permissions_enabled=false LEARN record directly to the SQLite store, bypassing the HTTP middleware that would otherwise deny the disable (the classic lockout trap). Refuses to run while the daemon is up unless --force is passed — you still need to restart the daemon for the change to take effect. Only sqlite stores are supported.

See Permission Enforcement for the pre-flight check that prevents the trap from firing in the first place.

Authentication

Service accounts, bearer tokens, and device pairing. See the Authentication & Service Accounts guide for the full story — when to use each, security model, federation composition.

Service accounts

spl service-account create --name "<label>" [--scopes <csv>] [--actors <csv>] [--with-token] [--env-tag <tag>] [--namespace <ns>] [--bootstrap]
spl service-account list [--namespace <ns>]
spl service-account describe <sa_id> [--namespace <ns>]
spl service-account revoke <sa_id> [--reason "<text>"] [--namespace <ns>]

  • --name <label> (create) — Human-readable label (required)
  • --scopes <csv> (create) — Capability scopes (default records:read,records:write). Valid: records:read, records:write, threads:write, federation:manage, config:read, config:write, admin
  • --actors <csv> (create) — DIDs allowed to claim this SA via X-Syncropel-Actor. Empty ⇒ SA derives its own DID
  • --with-token (create) — Mint an initial bearer token atomically with the SA
  • --env-tag <tag> (create) — Env tag on the minted token (default prod). Only meaningful with --with-token
  • --bootstrap (create) — Use the privileged bootstrap endpoint (no auth required). One-shot per namespace — closes after first SA exists
  • --reason "<text>" (revoke) — Revocation reason recorded in the audit trail
  • --namespace <ns> (all) — Namespace scope (default default)

Plaintext bearer tokens are printed once to stdout only. They never appear in --json output. Save the token immediately on creation; you cannot recover it later (rotate to replace).

Tokens

spl token create --sa <sa_id> [--env-tag <tag>] [--namespace <ns>]
spl token list [--sa <sa_id>] [--namespace <ns>]
spl token info <bearer_token>
spl token rotate <sa_id> [--env-tag <tag>] [--reason "<text>"] [--namespace <ns>]
spl token revoke --sa <sa_id> --token-id <token_id> [--reason "<text>"] [--namespace <ns>]

  • create — Mint a new bearer token for an existing SA. Prints plaintext token once
  • list — List tokens (all, or filtered by --sa). Shows status active or REVOKED
  • info — Parse a token client-side — no network. Reports env_tag, sa_id, token_id (SHA-256 of secret)
  • rotate — Mint a fresh token FIRST, then revoke old ones. Safe against partial failure
  • revoke — Revoke a single token. SA and its other tokens keep working
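
The token_id relationship means a leaked token ID alone reveals nothing. A sketch of the stated SHA-256 derivation — the token's wire format is not specified here, so this models only the secret-to-ID step:

```python
import hashlib

def token_id(secret):
    # token_id is the SHA-256 of the token secret, so the server can
    # index and revoke tokens without ever storing the plaintext secret.
    return hashlib.sha256(secret.encode()).hexdigest()

tid = token_id("example-secret")  # hypothetical secret for illustration
```

This is also why spl token info can run entirely client-side: the ID is recomputable from the secret you already hold.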

Device pairing

spl pair --device "<label>" --url "<remote_url>" [--scopes <csv>] [--namespace <ns>] [--env-tag <tag>] [--actors <csv>]

Creates a dedicated service account, mints a token, and renders pairing payload three ways:

  1. One-click URL — first thing printed, of the form https://syncropel.com/local/pair#<encoded-payload>. Click it (or copy into a browser) to auto-pair without typing or scanning. The token rides in the URL fragment (after the #), which never reaches any server.
  2. QR code — for phones / camera-scan flows.
  3. Plain-text payload — for paste flows when QR scanning isn't an option.

Used to connect phones, browsers, and other devices to a spl serve instance.

  • --device "<label>" — Device name (e.g. "iPhone 17", "browser-chrome") — required. The browser-* prefix defaults to scope admin so the dashboard's privileged panels render; viewer-* defaults to records:read; emitter-* defaults to records:read,records:write
  • --url "<remote_url>" — URL the device will call — required. Must be reachable from the device
  • --scopes <csv> — Token scopes (default depends on device prefix — see --device above)
  • --env-tag <tag> — Env tag on the token (default prod)
  • --namespace <ns> — Namespace (default default)
  • --actors <csv> — Allowed claimable DIDs (empty ⇒ SA derives its own DID)

Examples:

# Bootstrap the first service account on a fresh install
spl service-account create --bootstrap --name "First admin" --scopes admin --with-token

# Mint a read-only token for a dashboard
spl service-account create --name "Grafana" --scopes records:read --with-token

# Pair a phone
spl pair --device "iPhone" --url "https://your-host.example.com:9100" --scopes records:write

# Rotate a token (device stays connected; old token killed last)
spl token rotate sa_abc123def456ghi7

# Revoke just the lost-phone token
spl token revoke --sa sa_abc123def456ghi7 --token-id 7a2936eccf...

# Nuke the whole service account
spl service-account revoke sa_abc123def456ghi7 --reason "laptop stolen"

Other

spl init [--force]
spl run "goal" [--timeout 600]
spl aitl list
spl aitl approve <id>
spl aitl reject <id>
