Training Signals

Every memory lifecycle action can generate training data for the SIU v2 pipeline. Store, delete, reclassify, pin, boost — each action teaches the quality gate and type classifier to improve over time. The pipeline has four subsystems: SIVU (utility scoring), SICU (type classification), SILU (entity extraction via GPT-5.4-nano), and SITU (trigger evaluation). Your agents get smarter by using their memory.

Sulcus v2.9 · SIU v2 Training Reference · 2026

Overview

A continuous feedback loop between your agents and the intelligence unit

Most memory systems are write-and-forget. You store data, you query it, and the system never learns whether what it stored was useful or what it surfaced was relevant. Sulcus is different.

Every lifecycle action — storing a memory, deleting junk, correcting a misclassified type, pinning something important, boosting a critical fact — can generate a training signal. These signals accumulate in the training_signals table and feed back into the SIU models during retraining.

The result is a self-improving memory system. The more your agents use their memory — and especially the more they correct it — the better the quality gate and type classifier become for all memories in that namespace.

SIVU — Quality Gate

Scores base_utility (0–1) on every store. Learns to accept good memories and reject noise from store/delete/pin signals.

SICU — Type Classifier

Classifies memory type (episodic, semantic, procedural, fact, preference). Respects explicit types; acts as fallback. Learns from reclassify signals.

SILU — Entity Extractor

Extracts entities and relationships via GPT-5.4-nano on every store. Builds triples for the AGE knowledge graph. Fires automatically.

SITU — Trigger Unit

Evaluates reactive triggers server-side on every memory event. Fires actions based on event type, memory type, namespace, and heat.

Signal Sources

Six lifecycle actions that generate training data

| Action | Signal | Source Tag | Confidence | Requires |
| --- | --- | --- | --- | --- |
| Store + train_on_this=true | accept | train_on_this | Explicit | Plugin ≥ 3.9.0 |
| Delete + train=true | reject | agent_delete | High | Plugin ≥ 3.11.0 |
| Reclassify + train_on_this=true | reclassify | train_on_this | Explicit | Any version |
| Pin (is_pinned=true) | accept | pin | High | Server-side only |
| Manual Boost (heat change) | accept | boost | Medium | Server-side only |
| Update + train_on_this=true | accept | train_on_this | Reinforcement | Any version |

Note: Auto recall boost (heat increase on search hit) intentionally does not generate training signals — it would flood the table with low-value data.

How It Works

Two models, two jobs — quality gate and type classifier

SIVU — Store Intelligence Validator Unit

SIVU scores base_utility (0–1) on every store. This score determines how "useful" a memory is, which influences its effective starting heat in the thermodynamic engine. It learns from two signal types:

  • accept Content like this should be stored (high utility)
  • reject Content like this should not be stored (low utility/noise)

Higher-confidence signals (pin, explicit delete) are weighted more heavily during retraining. A pinned memory is a strong "yes, this matters" signal. A deleted memory with train=true is a strong "no, this was junk."
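The confidence-tier weighting can be sketched as sample weights over exported signal rows. This is an illustrative sketch, not the real retraining code: the CONFIDENCE_WEIGHTS values and the sivu_samples helper are assumptions about how such weighting might look.

```python
# Hypothetical sketch: map signal confidence tiers to sample weights for
# SIVU retraining. Tiers mirror the Signal Sources table; weights are guesses.
CONFIDENCE_WEIGHTS = {
    "explicit": 1.0,   # train_on_this: direct opt-in
    "high": 0.9,       # pin, agent_delete
    "medium": 0.6,     # manual boost
}

def sivu_samples(signals):
    """Turn raw signal rows into (text, label, weight) training samples."""
    samples = []
    for s in signals:
        if s["signal_type"] not in ("accept", "reject"):
            continue  # reclassify signals belong to SICU, not SIVU
        label = 1 if s["signal_type"] == "accept" else 0
        weight = CONFIDENCE_WEIGHTS.get(s["confidence"], 0.5)
        samples.append((s["content_snapshot"], label, weight))
    return samples
```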

SICU — Store Intelligence Classifier Unit

SICU classifies each memory into its correct type (episodic, semantic, procedural, preference, fact). It respects explicit agent types and acts as a fallback classifier when no type is provided. It learns from reclassify signals — explicit corrections where an agent or user says "this was labeled episodic but should be procedural."

These are the highest-value signals in the entire training pipeline because they represent direct human/agent corrections to the model's output. SICU intentionally does not reclassify preference-like content stored with explicit types; this conservative behavior is by design.

SILU — Store Intelligence Labeling Unit

SILU runs entity extraction via GPT-5.4-nano on every store. It extracts entities and relationships from memory content, building entity–relation–entity triples that populate the Apache AGE knowledge graph. This happens automatically — no configuration required.

The AGE graph is self-healing: every store, recall, and entity extraction writes to AGE automatically. SILU is the bridge between raw text memories and structured graph relationships.
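A rough illustration of the triple structure and how one triple might render as an openCypher MERGE statement (AGE accepts openCypher). The Triple class, its field names, and the build_cypher helper are hypothetical; the real SILU write path is not documented here.

```python
from dataclasses import dataclass

@dataclass
class Triple:
    """One entity–relation–entity triple extracted from memory content."""
    subject: str
    relation: str
    obj: str

def build_cypher(t: Triple) -> str:
    """Render a triple as an openCypher MERGE statement (illustrative only)."""
    return (
        f"MERGE (a:Entity {{name: '{t.subject}'}}) "
        f"MERGE (b:Entity {{name: '{t.obj}'}}) "
        f"MERGE (a)-[:{t.relation}]->(b)"
    )

# "Deploy: push to ACR then az containerapp update" might yield, e.g.:
statement = build_cypher(Triple("deploy", "USES", "ACR"))
```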

Automatic vs Explicit Signals

Automatic (no flag needed)

Pin — always generates signal
Boost — always generates signal

Explicit (opt-in required)

Store — train_on_this=true
Delete — train=true
Reclassify — train_on_this=true
Update — train_on_this=true

Code Examples

Python and Node.js SDK usage for every training action

Store with Training

Teaches SIVU that content like this should be accepted.

Python

from sulcus import Sulcus
client = Sulcus(api_key="sk-...")

# Store a memory AND generate a training signal
client.remember("Deploy: push to ACR then az containerapp update",
    memory_type="procedural",
    train=True)          # ← generates an 'accept' signal for SIVU

Node.js

import { Sulcus } from "@digitalforgestudios/sulcus";
const client = new Sulcus({ apiKey: "sk-..." });

// Store with training signal
await client.remember("Deploy: push to ACR then az containerapp update", {
  memoryType: "procedural",
  train: true,          // generates 'accept' signal for SIVU
});

Delete with Training

Snapshots the content before deletion. Teaches SIVU to reject similar content in future.

Python

# Delete a memory AND teach SIVU to reject similar content
client.delete("node_01J...", train=True)
# Snapshots content before deletion, records a 'reject' signal

Node.js

// Delete with training — teaches SIVU to reject similar content
await client.delete("node_01J...", { train: true });

Reclassify with Training

The highest-value signal — corrects the type classifier with explicit human/agent feedback.

Python

# Correct a misclassified memory — highest-value signal for SICU
client.update("node_01J...",
    memory_type="procedural",    # was 'episodic', should be 'procedural'
    train=True)                  # ← generates a 'reclassify' signal

Node.js

// Correct a type — highest-value signal for SICU
await client.update("node_01J...", {
  memoryType: "procedural",    // correction
  train: true,                 // generates 'reclassify' signal
});

Pin (Auto-Trains)

No flag needed. Pinning always generates a high-confidence accept signal.

Python

# Pin a memory — auto-generates a high-confidence 'accept' signal
client.pin("node_01J...")
# No train flag needed — pinning always trains

Node.js

// Pin — auto-generates high-confidence 'accept' signal
await client.pin("node_01J...");
// No train flag needed

Boost (Auto-Trains)

No flag needed. Manual heat changes always generate a medium-confidence accept signal.

Python

# Manually boost heat — auto-generates a medium-confidence 'accept' signal
client.boost("node_01J...", heat=0.95)
# No train flag needed — manual heat changes always train

Node.js

// Manual boost — auto-generates medium-confidence 'accept' signal
await client.boost("node_01J...", { heat: 0.95 });
// No train flag needed

REST API Reference

Raw HTTP calls for every training action

POST /api/v1/agent/nodes
DELETE /api/v1/agent/nodes/:id?train=true
PATCH /api/v1/agent/nodes/:id
PATCH /api/v1/agent/nodes/:id (pin)
PATCH /api/v1/agent/nodes/:id (heat)
GET /api/v2/siu/training-data

Store with training signal

# Store with training signal
curl -X POST https://api.sulcus.ca/api/v1/agent/nodes \
  -H "Authorization: Bearer sk-..." \
  -H "Content-Type: application/json" \
  -d '{
    "label": "Deploy: push to ACR then az containerapp update",
    "memory_type": "procedural",
    "train_on_this": true
  }'

Delete with training signal

# Delete with training signal (snapshots content, records 'reject')
curl -X DELETE "https://api.sulcus.ca/api/v1/agent/nodes/node_01J...?train=true" \
  -H "Authorization: Bearer sk-..."

Reclassify with training signal

# Reclassify with training signal
curl -X PATCH https://api.sulcus.ca/api/v1/agent/nodes/node_01J... \
  -H "Authorization: Bearer sk-..." \
  -H "Content-Type: application/json" \
  -d '{
    "memory_type": "procedural",
    "train_on_this": true
  }'

Pin (auto-generates signal)

# Pin (auto-generates training signal — no train flag needed)
curl -X PATCH https://api.sulcus.ca/api/v1/agent/nodes/node_01J... \
  -H "Authorization: Bearer sk-..." \
  -H "Content-Type: application/json" \
  -d '{"is_pinned": true}'

Manual boost (auto-generates signal)

# Manual heat boost (auto-generates training signal)
curl -X PATCH https://api.sulcus.ca/api/v1/agent/nodes/node_01J... \
  -H "Authorization: Bearer sk-..." \
  -H "Content-Type: application/json" \
  -d '{"current_heat": 0.95}'

Signal Table Schema

Where training signals accumulate before retraining

All training signals land in the training_signals table. Each row captures the memory content at time of signal, the signal type, the source action, and — for reclassify signals — both the predicted and corrected types.

CREATE TABLE training_signals (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    memory_id UUID,
    tenant_id TEXT NOT NULL,
    signal_type TEXT NOT NULL,        -- 'accept', 'reject', 'reclassify'
    corrected_store BOOLEAN,          -- true=should store, false=should reject
    corrected_type TEXT,              -- for reclassify: the correct type
    predicted_type TEXT,              -- what the model predicted (if available)
    content_snapshot TEXT,            -- content at time of signal
    source TEXT NOT NULL,             -- 'train_on_this', 'agent_delete', 'pin', 'boost'
    created_at TIMESTAMPTZ DEFAULT NOW()
);
| Column | Purpose |
| --- | --- |
| signal_type | accept, reject, or reclassify — determines which model consumes it |
| corrected_store | For SIVU: true = should store, false = should reject |
| corrected_type | For SICU: the correct memory type (set during reclassify) |
| predicted_type | What the model originally predicted (if available) |
| content_snapshot | Full content at time of signal — survives deletion |
| source | What generated this signal: train_on_this, agent_delete, pin, boost |
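A hedged sketch of how exported rows could be partitioned for the two training scripts. The field names follow the schema above, but the split_signals helper and the assumption that the export is a JSON array of rows are mine, not the documented tooling.

```python
import json

def split_signals(rows):
    """Partition exported training_signals rows into SIVU and SICU datasets."""
    sivu, sicu = [], []
    for row in rows:
        if row["signal_type"] in ("accept", "reject"):
            # SIVU learns a binary store/reject decision
            sivu.append({"text": row["content_snapshot"],
                         "store": row["signal_type"] == "accept"})
        elif row["signal_type"] == "reclassify":
            # SICU learns from predicted-vs-corrected type pairs
            sicu.append({"text": row["content_snapshot"],
                         "predicted": row.get("predicted_type"),
                         "corrected": row["corrected_type"]})
    return sivu, sicu

# Usage (assuming signals.json is a JSON array exported from the API):
# with open("signals.json") as f:
#     sivu_rows, sicu_rows = split_signals(json.load(f))
```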

Retraining Pipeline

From accumulated signals to improved models

Training signals accumulate in the database. When enough have built up, the SIU models can be retrained to incorporate the new corrections. The pipeline is currently manual — automation is planned.

# 1. Export accumulated signals
curl https://api.sulcus.ca/api/v2/siu/training-data \
  -H "Authorization: Bearer sk-..." > signals.json

# 2. Train the quality gate (SIVU)
python scripts/train_sivu.py --data signals.json

# 3. Train the type classifier (SICU)
python scripts/train_sicu.py --data signals.json

# 4. Deploy new ONNX models
cp models/*.onnx /opt/sulcus/models/siu-v2/

# 5. Server picks up new models on restart
Signals Accumulate

Every store, delete, reclassify, pin, and boost adds a row to the training table. No action needed — they build up naturally as agents use their memory.

Manual Retrain

Export signals via the API, run training scripts for SIVU and SICU, deploy new ONNX models. Server picks them up on restart. Automated retraining is on the roadmap.

Version History

When each training capability was introduced

| Version | Capability |
| --- | --- |
| 3.9.0 | train_on_this on store, update, and reclassify |
| 3.10.0 | SIU v2 junk filter, autoCapture quality gate |
| 3.11.1 (current) | memory_delete tool with SIVU reject training; openclaw-sulcus v3.11.1 |
| Server v2.0.0 | Pin and boost auto-generate training signals (no plugin update needed) |
| Server v2.9 | SILU entity extraction via GPT-5.4-nano, Apache AGE graph, SITU trigger evaluation, age_graph capability |

OpenClaw Plugin Tools

Which tools generate training signals and which don't

| Tool | Parameters | Training |
| --- | --- | --- |
| memory_store | content, memory_type, train | train=true → accept signal |
| memory_delete | id, train | train=true (default) → reject signal |
| memory_recall | query, limit, namespace | No training signal |
| consolidate | min_heat | No training signal |
| evaluate_triggers | event, context_json | No training signal |

Your memory, self-improving.

Training signals are available in the Sulcus SDK, OpenClaw plugin, and REST API. Start with train=true on your next store call — one flag, and your quality gate starts learning from your agents.