Federation
How two or more Syncropel instances share records while keeping trust boundaries intact — the model, the pieces, and the choices behind them.
What federation is
Federation is how two or more Syncropel instances — your laptop, a shared server, a teammate's laptop, a hosted service — share records without any of them becoming a single point of failure.
If you've only ever run a single spl serve daemon, federation is optional. You get all the records, threads, trust, and task tracking without it. If you want more than one instance — for collaboration, for redundancy, for the hybrid hosted+local pattern — federation is the layer that ties them together.
The mental model
Three ideas shape how federation works in Syncropel. Everything else follows from these.
1. Records are immutable and content-addressed
Every record has a content hash derived from its canonical JSON form. The same record on your laptop and on a teammate's server has the same hash. You can't edit a record — only emit new ones that reference it. This turns sync into a much simpler problem: do these two instances have the same set of hashes on this thread?
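The hashing idea can be sketched in a few lines. This is a sketch only: Syncropel's actual canonical form and digest algorithm are internals, and SHA-256 over sorted-key, compact JSON is an assumption made here for illustration.

```python
import hashlib, json

def record_hash(record: dict) -> str:
    # Canonical JSON form: sorted keys, no whitespace, so the same
    # record hashes identically on every instance regardless of how
    # its fields were ordered when it was emitted.
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def missing_hashes(local: set, remote: set) -> set:
    # Sync reduces to set difference: which hashes does the peer
    # have on this thread that we don't?
    return remote - local

h = record_hash({"thread": "th_demo", "body": "review the design"})
```

Because records are immutable, there is no merge logic: an instance only ever fetches hashes it is missing.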
2. Federation is pull-based
Each instance decides what to fetch, from which peer, at what cadence. Nobody pushes into you uninvited. The "pair" relationship is literally "instance B will pull changes for thread T from instance A" — unidirectional and thread-scoped. For bidirectional sync you create two pairs.
This choice is deliberate. Push-based federation requires the source to know its subscribers; pull-based means each instance is responsible for its own state. The source just responds to GET /v1/sync/changes?thread=T&since=<cursor> queries.
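A single iteration of such a pull loop might look like the sketch below, with a fake fetch function standing in for the HTTP GET against a peer. The response shape (a list of records plus a cursor) is assumed for illustration; the real wire format is the substrate's.

```python
def pull_once(fetch, thread: str, cursor):
    # One iteration of the pull loop: ask the peer for changes on a
    # thread since our cursor, return the records plus the new cursor.
    # `fetch` stands in for a GET against /v1/sync/changes.
    params = {"thread": thread}
    if cursor:
        params["since"] = cursor
    resp = fetch("/v1/sync/changes", params)
    return resp["records"], resp["cursor"]

# A fake peer holding three records; the cursor is just an offset here.
log = [{"hash": h} for h in ("a1", "b2", "c3")]

def fake_fetch(path, params):
    start = int(params.get("since", 0))
    return {"records": log[start:], "cursor": str(len(log))}

records, cursor = pull_once(fake_fetch, "th_demo", None)
```

Note that the puller owns the cursor: the source keeps no subscriber state at all, which is exactly the property the pull-based design buys.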
3. Identity and consent are explicit
Every record is signed by the actor who emitted it. The signature is verified on ingest using the actor's Decentralized Identifier (DID). Cross-namespace record sharing requires an explicit consent grant — without it, records stay in their namespace even when peers are paired.
This means: a peer finding you on the network doesn't grant them any access. Pairing with a peer doesn't leak records across namespaces. The security model is layered; discovery and pairing are separate from authorization.
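The layering can be sketched as an ingest pipeline: signature first, consent second, only then ingest. One loud hedge: the real substrate verifies Ed25519 signatures resolved from the actor's DID; HMAC-SHA256 stands in below purely to keep the sketch self-contained and runnable.

```python
import hashlib, hmac

def ingest(record, body, key_for_actor, consent_ok):
    # Layer 1: the signature must verify against the emitting
    # actor's key (HMAC here as a stand-in for Ed25519).
    key = key_for_actor(record["actor_did"])
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, record["sig"]):
        return "rejected: bad signature"
    # Layer 2: the consent filter must allow the record across
    # the namespace boundary.
    if not consent_ok(record):
        return "rejected: no consent grant"
    return "ingested"

key = b"alice-signing-key"  # stand-in for Alice's keypair
body = b'{"thread":"th_demo","text":"review the design"}'
rec = {"actor_did": "did:key:z6MkAlice",
       "sig": hmac.new(key, body, hashlib.sha256).hexdigest()}
```

The point the sketch makes: pairing never appears in the checks. A record from a paired peer fails ingest exactly as fast as one from a stranger if its signature or consent check fails.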
The pieces
Identity — who a peer is
Each Syncropel instance has an identity: an Ed25519 keypair and a DID. Three DID methods are supported, each suiting a different audience:
- did:key — the DID is the public key itself. Zero infrastructure, zero signup. Generated by spl init in under a second. Ideal for first-time users, ephemeral agents, or anyone who doesn't need a human-readable identifier.
- did:web — the DID points to a domain you control (e.g., did:web:alice.dev). Resolution is a standard HTTPS GET to https://alice.dev/.well-known/did.json. No directory service required; if you own a domain, you own your identity.
- did:sync — the DID points to a directory service that stores the public key and service endpoint. Supports key rotation and handle lookup. The default directory is operated by Syncropic; you can also self-host.
All three methods produce the same downstream primitive: a public key that verifies signatures, plus an endpoint that serves the Syncropel API. The difference is in how the DID is resolved and whether it supports rotation.
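For did:web, that resolution step is a fixed URL transformation, per the W3C did:web method for a bare domain. The helper name below is ours, not the CLI's.

```python
def did_web_to_url(did: str) -> str:
    # A bare-domain did:web identifier resolves to the DID document
    # at /.well-known/did.json on that domain, over HTTPS.
    prefix = "did:web:"
    if not did.startswith(prefix):
        raise ValueError("not a did:web identifier")
    domain = did[len(prefix):]
    return f"https://{domain}/.well-known/did.json"
```

did:key skips this entirely (the key is decoded from the identifier itself), and did:sync replaces it with a directory query; all three paths end in the same (public key, endpoint) pair.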
Pairs — how instances subscribe to each other
A pair is a persistent, record-backed relationship between two daemons. Pairs are first-class records on the reserved thread th_federation_pairs — not config files, not session state, not ambient daemon settings. They survive snapshot and restore, they are auditable, and their state is fold-derived from the record log.
A pair carries:
- peer_did — the peer's identity (the load-bearing handle; URLs change, DIDs persist)
- peer_url — where to reach them on the network (cached, refreshed)
- state — establishing, active, paused, degraded, revoked
- tokens — reciprocal bearer tokens minted during handshake (each side holds the other's; bound to the peer DID)
- cursors — per-thread sync position (one per thread the pair syncs)
State machine transitions are themselves records. Pause emits an update record; revoke emits a terminal revoke record. Operator-facing audit lives on a sibling thread th_audit_federation_pairs so subscribing to lifecycle events is cheap and decoupled from the substrate fold.
Establishing a pair is a single command: spl federation pair <peer-url>. Behind the scenes it discovers the peer manifest, runs a 3-roundtrip nonce-protected handshake, mints reciprocal service accounts + bearer tokens (90-day TTL with auto-renew at 83 days), and persists the genesis record on both sides. spl sync then consults the pair store automatically — no manual token plumbing.
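The pair lifecycle can be sketched as a transition table. The states come from the pair record described above; the exact set of legal transitions is our assumption for illustration, not the substrate's published spec.

```python
# Legal state transitions; "revoked" is terminal because revocation
# emits a terminal revoke record on th_federation_pairs.
ALLOWED = {
    "establishing": {"active", "revoked"},
    "active": {"paused", "degraded", "revoked"},
    "paused": {"active", "revoked"},
    "degraded": {"active", "revoked"},
    "revoked": set(),
}

def transition(state: str, new: str) -> str:
    # In the real substrate each transition is itself a record;
    # this just enforces the legal moves.
    if new not in ALLOWED[state]:
        raise ValueError(f"illegal transition {state} -> {new}")
    return new
```

Because each transition is a record, "what state is this pair in?" is a fold over the log rather than a mutable field, which is what makes pair state survive snapshot and restore.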
Stewards as peers
Federation in Syncropel is not a fleet (intra-org coordination under one operator) and not a hub (centralized broker). It is a network of peers, where every spl serve instance can be a citizen with one command.
This matters because the hybrid hosted+local model is foundational: a user signs up at syncropel.com, gets a hosted steward in under 30 seconds, and can leave any time to run spl serve on their own laptop or server with all their records intact. That only works if every steward — hosted or local, ephemeral or long-lived — is a real peer in the same protocol. The pair primitive is what makes "real peer" mean something operationally: any two stewards can establish trust, exchange records, and dissolve the relationship via the same primitive, with no central coordinator.
Topology is emergent, not configured:
- Pair (2-node) — the base case. One pair record on each side; transport handles the wire.
- Star (1+N) — most common shape at small scale. Hub pairs with N peers individually; each pair is independent.
- Mesh (full) — practical at N<10. Above that, pair count + revocation cascades become prohibitive.
- Hierarchical — rarely the right shape. Delegate by namespace, not by federation topology.
Operators reason about pairs, not graph shapes. The protocol does not impose a topology.
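The mesh caveat is simple arithmetic. Under the two-pairs-per-edge convention stated earlier (each direction is its own pair), a full mesh of N nodes needs N*(N-1) pair records, and every revocation has to cascade across that set.

```python
def mesh_pair_count(n: int) -> int:
    # Every node pulls from every other node, and each direction
    # is an independent pair record.
    return n * (n - 1)
```

At N=10 that is already 90 pairs, which is why the doc draws the practicality line there.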
Legacy sync-only pairs
Before the pair primitive, "pair" referred narrowly to a unidirectional thread-scoped subscription configured ad-hoc via spl sync with --peer-token. That mechanism still works for backward compatibility, but it is no longer the recommended path. The legacy fields:
- peer_did — the peer's identity
- peer_url — where to reach them on the network
- thread_id — which thread to sync
- cursor — how far we've read (an opaque token the peer provides)
Pairs are created via POST /v1/sync/pairs or spl fleet sync add. A pull loop runs in the background: every few seconds, ask the peer for changes since the last cursor, fetch them, verify signatures, pass through the consent filter, ingest. If the peer is unreachable, exponential backoff kicks in; the pair shows state: failing with a classified error code (CONNECT_REFUSED, TIMEOUT, DNS_FAILURE, HTTP_4XX, HTTP_5XX) so the operator knows what to fix.
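The backoff and error-classification behaviour can be sketched as below. The error codes are the ones listed above; the base delay, cap, and classification inputs are assumptions for the sketch.

```python
def backoff_delay(attempt: int, base: float = 2.0, cap: float = 300.0) -> float:
    # Exponential backoff with a cap: attempt 0 waits `base` seconds,
    # doubling on each consecutive failure.
    return min(base * (2 ** attempt), cap)

def classify(status=None, exc=None) -> str:
    # Map a failed pull to one of the operator-facing error codes.
    if exc == "refused": return "CONNECT_REFUSED"
    if exc == "timeout": return "TIMEOUT"
    if exc == "dns":     return "DNS_FAILURE"
    if status and 400 <= status < 500: return "HTTP_4XX"
    if status and status >= 500:       return "HTTP_5XX"
    return "UNKNOWN"
```

The classification is the operator-facing part: a DNS_FAILURE points at naming, a CONNECT_REFUSED at the peer's daemon, an HTTP_4XX at tokens or consent, without anyone reading raw logs.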
Consent — what's allowed to cross namespaces
Syncropel has a 5-level namespace hierarchy for tenancy and policy composition. When federation crosses namespace boundaries — your default namespace syncing to a partner team's partner-team namespace — the consent filter decides what passes.
The default is strict: no consent grant, no records cross the boundary. Grants are records on a reserved thread th_consent and specify:
- source namespace (where records originate)
- target namespace (who's allowed to receive)
- hash levels (L0 = full body, L1/L2/L3 = progressively redacted structural information)
A grant without L0 means records still arrive on the other side but their bodies are replaced with {"redacted": true} and their signatures stripped. Metadata shapes propagate; payload does not. Useful for telemetry collection, audit archives, or any case where you want presence signals without content.
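A consent filter along these lines would implement the rules above. This is a sketch; the field name hash_levels and the grant shape are assumptions.

```python
def apply_consent(record: dict, grant):
    # No grant: nothing crosses the namespace boundary.
    if grant is None:
        return None
    # Grant includes L0: the full record passes intact.
    if "L0" in grant["hash_levels"]:
        return record
    # Grant without L0: the record arrives with its body replaced
    # and its signature stripped; metadata shape still propagates.
    redacted = dict(record, body={"redacted": True})
    redacted.pop("sig", None)
    return redacted

rec = {"thread": "th_demo", "body": {"text": "review"}, "sig": "ed25519:..."}
```

The strict default falls out of the first branch: absent an explicit grant record on th_consent, the filter returns nothing at all.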
Discovery — how peers find each other
You don't have to hand-configure every peer URL. Four ways to find peers (all optional, each suiting different deployments):
- LAN broadcast (mDNS) — daemons on the same local network announce themselves; other daemons discover them in under a second. Magic for home or office setups.
- did:web domain lookup — if you know a peer's domain, resolve their DID over HTTPS. Works globally, no directory required.
- did:sync directory query — ask a directory (either the default Syncropic-hosted one at discovery.syncropel.com or a self-hosted one) for peers in a namespace. Works across networks.
- Transitive introduction — ask a peer you're already paired with for the peers they know. Opt-in per operator.
Discovery is purely "here's a DID and where to reach them." It never automatically creates pairs. The operator always confirms before pairing.
What federation is not
Three common misconceptions worth heading off.
Federation is not a mesh overlay. There's no distributed hash table, no routing, no multi-hop delivery. Every pair is a direct HTTPS connection between two daemons. Discovery helps them find each other; data flows point-to-point.
Federation is not full replication. A pair syncs a specific thread, not everything on the peer. Want to sync five threads? Five pairs. Want a "replicate everything" mode? That's a deliberate non-goal for now — it encourages the wrong mental model.
Federation is not a trust mechanism. Pairing with a peer doesn't mean you trust them. Every record they send is signed, verified, consent-filtered, and trust-weighted independently. The pair just says "I'm willing to receive records from this source."
How the pieces fit together
A typical end-to-end flow:
- Alice runs spl init on her laptop. A did:key identity is generated. She starts spl serve --daemon.
- Bob runs spl init on his workstation, also on the same network. Same thing — his daemon is up.
- Alice runs spl fleet sync peers discover. mDNS finds Bob's daemon on the LAN.
- Alice runs spl fleet sync add <bob's-did> th_shared_workspace, confirming she wants to pull records from Bob's shared workspace thread.
- Bob does the same in reverse, creating a pair pulling from Alice.
- Alice emits a record: spl intend --thread th_shared_workspace "review the design".
- Within ~5 seconds Bob's pair has pulled it, signature-verified it, and ingested it into his local store.
- If Alice and Bob are in different namespaces, the consent filter runs. If there's no matching grant, nothing propagates. If there's a grant with L0, records arrive intact. If the grant excludes L0, records arrive with {"redacted": true} bodies.
The whole flow is about two minutes of setup. Once a pair exists, propagation is automatic and continuous.
What's enforced today vs the spec
The federation substrate covers pair creation, signature verification, consent filtering, discovery (mDNS, DNS-based federation manifests, and did:web + did:sync actor identity), async relay for offline peers, and the pull loop with classified error codes. That's enough for "run two daemons, pair them, share a thread" as a real workflow.
One piece is designed but not yet shipped:
- MLS end-to-end encryption for pairs — today, pair traffic is TLS-secured point-to-point; MLS would add forward secrecy and post-compromise security to the record bodies themselves. Opt-in per pair; on the near-term roadmap pending upstream library stabilization.
What's enforced today is the strict default: no consent, no crossing. An operator who pairs two instances and emits records on a shared thread sees them propagate; an operator who wants to sync across namespaces without an explicit consent grant sees nothing propagate. That's the intended behaviour, and it holds.
What's next
- Federation — pairing two instances — hands-on quickstart with two local daemons
- Consent management — the full consent grant flow for cross-namespace sharing
- Records, Threads, Actors — the primitives federation composes over
- Namespaces — the unit of tenancy + governance that consent grants operate on