The Engine

Four loops at four timescales — from sub-millisecond record ingest through minute-level drift detection to daily maintenance. This is the kernel that turns records into action, action into evidence, and evidence into learning.

Overview

The Syncropel engine is not a monolithic processor. It's four loops, each running at a different timescale, each solving a different kind of problem, all cooperating through the record log as a shared data structure.

This separation isn't incidental. Coordination work spans timescales that range over roughly eight orders of magnitude: an INGEST decision takes under 100µs, a daily pattern-crystallization sweep takes hours. Trying to handle all of it in one loop either slows the fast path to match the slow one, or loses the slow work in the fast one's noise. Four loops running independently on their natural tempos avoids both.

The kernel itself is small — roughly 200 lines per loop, with the rest being data (records on th_engine_config, CEL expressions, CRUD over the store). Most of what looks like kernel "logic" is actually config. Change a routing rule, a trigger, a fold rule, a permission predicate — you emit a record, the engine reloads, the new behaviour takes effect on the next ingest. No restart, no redeploy.

The four loops

INGEST (microseconds)

The hot path. One record comes in, the loop runs to completion, the next record is ready to be received. Budget: under 100 microseconds in the steady state.

INGEST:  receive → validate → canonical_json → SHA-256 → store → broadcast

Steps:

  • receive the record from HTTP, Unix socket, or internal emit.
  • validate the 8-field shape and namespace narrowing. Narrowing is the only semantic check in the hot path; everything else is deferred.
  • canonical_json serialize for hashing.
  • SHA-256 compute the content-addressed ID.
  • store insert into SQLite. Duplicate IDs are silent no-ops (idempotent retry).
  • broadcast publish the record to in-process subscribers (RECONCILE, TICK, CRON, federation peers, SSE listeners).

INGEST does not reason. It does not route. It does not call out to anything. It validates, hashes, stores, and publishes. This is what keeps p99 ingest latency flat regardless of downstream load — a blocked reconciler cannot back-pressure into ingest.
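
The whole hot path fits in a few lines. A minimal sketch in Python, assuming a SQLite records table keyed by the content hash; the table name, column names, and the validate stub are illustrative, not the real schema:

  import hashlib
  import json
  import sqlite3

  def validate(record: dict) -> None:
      # Stand-in for the real check: 8-field shape plus namespace narrowing.
      if len(record) != 8:
          raise ValueError("bad record shape")

  def canonical_json(record: dict) -> bytes:
      # Stable serialization: sorted keys, no extra whitespace, UTF-8.
      return json.dumps(record, sort_keys=True, separators=(",", ":")).encode("utf-8")

  def ingest(db: sqlite3.Connection, subscribers: list, record: dict) -> str:
      validate(record)                                 # the only semantic check here
      payload = canonical_json(record)
      record_id = hashlib.sha256(payload).hexdigest()  # content-addressed ID
      # INSERT OR IGNORE makes a duplicate ID a silent no-op (idempotent retry).
      db.execute("INSERT OR IGNORE INTO records (id, body) VALUES (?, ?)",
                 (record_id, payload))
      db.commit()
      for publish in subscribers:                      # in-process broadcast; must not block
          publish(record_id, record)
      return record_id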

RECONCILE (seconds)

The work loop. When a record broadcast arrives, the reconciler decides what should happen next. If it needs to dispatch work — to a human, an agent, a CI pipeline, a service — this is the loop that does it.

RECONCILE:  for each broadcast record:
              match_rules → intelligence (if unmatched) → execute → ingest results

The reconciler is a fixed-point runner. It keeps running until every record either matches a rule or is explicitly left for human review. Rules are checked in priority order; intelligence (the kernel's reasoning path) only runs when no rule matches.

Most records don't trigger reconcile work. System records skip it entirely to avoid cascades; already-handled records (body.dispatch_handled: true) skip it; so do AITL responses (records with body.fulfills or body.cancels). What does trigger reconcile is typically a user record, or a triggered record that asks for something.
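
Those skip conditions reduce to one small predicate. A sketch; the marker for system records is an assumption, the body fields are the ones named above:

  def needs_reconcile(record: dict) -> bool:
      body = record.get("body", {})
      if record.get("origin") == "system":          # assumed marker for system records
          return False                              # skipped entirely to avoid cascades
      if body.get("dispatch_handled") is True:      # already handled
          return False
      if "fulfills" in body or "cancels" in body:   # AITL responses
          return False
      return True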

The reconciler uses a semaphore to cap concurrent dispatches (default 4). This is what prevents a burst of incoming records from spawning a corresponding burst of subprocesses. Back-pressure in the reconciler is a queue wait, not a rejection.
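
With asyncio, that cap is one semaphore around the dispatch call; a burst of broadcasts queues on the semaphore instead of fanning out into subprocesses. A sketch, with a hypothetical adapter command line:

  import asyncio

  DISPATCH_CAP = 4                      # default concurrent-dispatch limit
  dispatch_sem = asyncio.Semaphore(DISPATCH_CAP)

  async def dispatch(record_id: str) -> None:
      async with dispatch_sem:          # back-pressure is a queue wait, never a rejection
          proc = await asyncio.create_subprocess_exec("adapter", "--record", record_id)
          await proc.wait()             # results come back later as new records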

TICK (minutes)

The drift-detection loop. Runs on a fixed interval (default 60 seconds) and handles work that needs periodic attention but doesn't need to run per-record:

  • Process trust outcomes. Verdict records (KNOW with body.verdict) that arrived during the previous tick are folded into the (actor, domain, judged_by) evidence table.
  • CUSUM drift detection. Check whether an actor's success rate has changed meaningfully from its baseline. Spikes or drops emit alerts.
  • Stall detection. Identify threads that have an open INTEND but no new records for longer than the stall window. These become candidates for an operator alert or automatic escalation.

TICK isn't latency-sensitive. A 30-second delay on trust updates is fine. What matters is that it runs reliably and the work it does can catch up if the host is busy — all of TICK's work is idempotent over its window.
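
The CUSUM check itself is the standard two-sided cumulative-sum test over per-tick success rates. A minimal sketch; baseline, slack k, and threshold h are tuning assumptions, not documented defaults:

  def cusum_step(state: tuple, rate: float, baseline: float,
                 k: float = 0.05, h: float = 0.5) -> tuple:
      # state = (pos, neg): accumulated evidence of upward / downward drift.
      pos, neg = state
      pos = max(0.0, pos + (rate - baseline) - k)   # spikes above baseline
      neg = max(0.0, neg - (rate - baseline) - k)   # drops below baseline
      alarm = pos > h or neg > h                    # crossing h on either side raises an alarm
      return (pos, neg), alarm

Each tick folds one new rate into the two-number state; an alarm becomes an alert record like any other output.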

CRON (hours / daily)

The maintenance loop. Runs on a cron schedule (configurable per job) and handles things that compound slowly:

  • Trust decay. Apply the 90-day half-life to the evidence table. Old approvals count less.
  • Pattern crystallization sweep. Check whether patterns that were close to the C1-C4 criteria have now met them. Promote qualifying patterns.
  • De-crystallization check. Demote crystallized patterns that have lost quality (failing replays, trust drops, compression regression).
  • Snapshots. Produce daily thread snapshots for backup / federation seeding.
  • Cost and health reports. Aggregate the day's dispatch costs and publish the summary.

CRON work is typically offloaded to spawned tasks rather than running inline in the daemon process, so a long cron job doesn't block ingest or reconcile. The daemon acts as the scheduler; the actual work runs in short-lived subprocesses that emit their results back as records.
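
The trust-decay job, for instance, is a plain exponential weight on evidence age. A sketch of the per-row decay; the evidence-table shape is assumed:

  HALF_LIFE_DAYS = 90.0

  def decayed_weight(weight: float, age_days: float) -> float:
      # Halves every 90 days: a 90-day-old approval counts half,
      # a 180-day-old one counts a quarter.
      return weight * 0.5 ** (age_days / HALF_LIFE_DAYS)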

Config is data

The engine's behaviour is configured through records on a reserved thread, th_engine_config. Every routing rule, every trigger, every fold rule, every CEL permission predicate, every auth toggle — all of them are LEARN records on this thread. Three properties follow:

  1. Hot reload. When a new LEARN lands on th_engine_config, the engine's broadcast handler immediately reloads the relevant config slice. Changes take effect on the next record, no restart.
  2. Version control is free. Config history is the thread's record log. You can answer "when was this rule added?" by reading the records. You can answer "what rules were active on 2025-12-15?" by folding the thread with a reference clock.
  3. Config mistakes are records. A bad rule doesn't delete the good one — it adds a new record that the loader rejects. The warning appears in logs, but the system keeps running with the previous working config. Remove the bad record or supersede it with a correct one.

spl config show                       # the current loaded config
spl config list-rules                 # routing rules
spl config list-triggers              # event + schedule triggers
spl config list-fold-rules            # task/thread fold customisations
spl config list-health                # health check predicates
spl config list-permission-rules      # permission middleware rules
spl expr eval '<CEL>' --context <ctx> # preview an expression before storing

Most "change the engine's behaviour" requests turn out to be "author a CEL config record". See CEL expressions for the authoring workflow and Routing rules for the routing-specific form.

Routing cascade

When reconcile decides a record needs to be acted on, the routing cascade runs in priority order, cheapest first:

Record needs routing
  │
  ▼ 1. Explicit target (body.to)          → dispatch there, done.
  │
  ▼ 2. Pattern match (L1 → L2 → L3)       → REPLAY the matched outcome.
  │
  ▼ 3. Routing rule match                 → dispatch to rule's target.
  │
  ▼ 4. Trust-based fallback               → pick the highest-trust actor
  │                                          in the record's domain.
  ▼ 5. Intelligence reasoning             → propose a routing decision,
  │                                          emit as AITL for approval.
  ▼ 6. Default handler                    → notify, don't dispatch.

Each step is faster and cheaper than the next. Explicit targets are a constant-time dereference. Pattern matches are a hash lookup. Rule matches are a compiled-CEL eval (~10µs for a warm cache, compile costs amortise). Trust fallback is a registry lookup. Intelligence is a full LLM call and costs accordingly.

The cascade falls through: when step 2 misses, step 3 runs; when 3 misses, step 4. A high-traffic deployment with crystallized patterns should see most records resolved by step 2, with 3 and 4 covering the long tail and 5 reserved for genuine novelty.
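
As code, the cascade is a first-match-wins chain. A sketch; the step functions are stand-ins, wired in the priority order shown above:

  from typing import Any, Callable, Optional

  Step = Callable[[dict], Optional[Any]]

  def route(record: dict, steps: list[Step], default: Step) -> Any:
      # steps, cheapest first: explicit_target, pattern_match, rule_match,
      # trust_fallback, intelligence.
      for step in steps:
          decision = step(record)
          if decision is not None:      # first hit wins; costlier steps never run
              return decision
      return default(record)            # step 6: notify, don't dispatch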

Self-improvement

The engine gets better as records accumulate, by design:

  • Intelligence decisions become AITL proposals. The kernel's reasoning doesn't silently execute — it proposes a routing rule and asks an operator to approve. Approved proposals become rules.
  • Rules that work repeatedly become patterns. Every successful dispatch via a rule is evidence that the rule's shape is a pattern worth crystallizing.
  • Patterns that meet C1-C4 crystallize into REPLAY. Once crystallized, the same shape of work no longer needs rule matching — it replays the outcome directly.
  • Meta-trust tracks the reasoning path. The intelligence actor itself has a trust profile. Proposals it makes that get approved contribute to its trust; rejected ones count against it.

Over time, the fraction of records handled by steps 1-2 (fastest, cheapest) grows, and the fraction needing step 5 (slowest, most expensive) shrinks. This is the compounding property — nothing in the system fights against it; every successful completion is evidence that next time can be cheaper.

What the engine doesn't do

Worth enumerating because the loop structure is narrow on purpose.

The engine doesn't own the adapter subprocess. Dispatch starts a process; the reconciler's role is to emit the INTEND and register a callback. The subprocess runs independently, emits records back via HTTP, and the reconciler picks those up on the broadcast channel like any other record.

The engine doesn't centralise state. Every piece of derived state — trust, patterns, configs, namespace registry, instance registry — is a fold of a thread's records. Lose the derived caches, the engine rebuilds from the record log on the next reload. The derived state isn't a separate database.
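
"A fold of a thread's records" is meant literally. A sketch; apply stands for whichever fold rule the thread defines:

  from functools import reduce
  from typing import Any, Callable, Iterable

  def fold_thread(records: Iterable[dict],
                  apply: Callable[[Any, dict], Any],
                  initial: Any) -> Any:
      # Derived state (trust, patterns, config, registries) is a pure
      # function of the log: drop the cache, re-run the fold, get it back.
      return reduce(apply, records, initial)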

The engine doesn't gate federation. Federation is a peer of the engine, not a consumer. The engine broadcasts records; the federation loop pulls from peers, runs its signature and consent checks, and emits into INGEST like any other writer. From the engine's perspective, a federated record is just another record.
