Training Signals
Every memory lifecycle action can generate training data for the SIU v2 pipeline. Store, delete, reclassify, pin, boost — each action teaches the quality gate and type classifier to improve over time. The pipeline has four subsystems: SIVU (utility scoring), SICU (type classification), SILU (entity extraction via GPT-5.4-nano), and SITU (trigger evaluation). Your agents get smarter by using their memory.
Sulcus v2.9 · SIU v2 Training Reference · 2026
Overview
A continuous feedback loop between your agents and the intelligence unit
Most memory systems are write-and-forget. You store data, you query it, and the system never learns whether what it stored was useful or what it surfaced was relevant. Sulcus is different.
Every lifecycle action — storing a memory, deleting junk, correcting a misclassified type, pinning something important, boosting a critical fact — can generate a training signal. These signals accumulate in the training_signals table and feed back into the SIU models during retraining.
The result is a self-improving memory system. The more your agents use their memory — and especially the more they correct it — the better the quality gate and type classifier become for all memories in that namespace.
- SIVU: scores base_utility (0–1) on every store. Learns to accept good memories and reject noise from store/delete/pin signals.
- SICU: classifies memory type (episodic, semantic, procedural, fact, preference). Respects explicit types; acts as a fallback. Learns from reclassify signals.
- SILU: extracts entities and relationships via GPT-5.4-nano on every store. Builds triples for the AGE knowledge graph. Fires automatically.
- SITU: evaluates reactive triggers server-side on every memory event. Fires actions based on event type, memory type, namespace, and heat.
Signal Sources
Six lifecycle actions that generate training data
| Action | Signal | Source Tag | Confidence | Requires |
|---|---|---|---|---|
| Store + train_on_this=true | accept | train_on_this | Explicit | Plugin ≥ 3.9.0 |
| Delete + train=true | reject | agent_delete | High | Plugin ≥ 3.11.0 |
| Reclassify + train_on_this=true | reclassify | train_on_this | Explicit | Any version |
| Pin (is_pinned=true) | accept | pin | High | Server-side only |
| Manual Boost (heat change) | accept | boost | Medium | Server-side only |
| Update + train_on_this=true | accept | train_on_this | Reinforcement | Any version |
Note: Auto recall boost (heat increase on search hit) intentionally does not generate training signals — it would flood the table with low-value data.
How It Works
Two models, two jobs — quality gate and type classifier
SIVU — Store Intelligence Validator Unit
SIVU scores base_utility (0–1) on every store. This determines how "useful" a memory is, which influences its effective starting heat in the thermodynamic engine. It learns from two signal types:
- accept — content like this should be stored (high utility)
- reject — content like this should not be stored (low utility/noise)
Higher-confidence signals (pin, explicit delete) are weighted more heavily during retraining. A pinned memory is a strong "yes, this matters" signal. A deleted memory with train=true is a strong "no, this was junk."
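The confidence weighting described above can be sketched as follows. The weight values and the helper name are illustrative assumptions, not the pipeline's actual internals:

```python
# Hypothetical per-source weights -- the real retraining weights
# are internal to the SIU pipeline.
SOURCE_WEIGHTS = {
    "pin": 1.0,            # high confidence: explicit "this matters"
    "agent_delete": 1.0,   # high confidence: explicit "this was junk"
    "train_on_this": 0.8,  # explicit opt-in on store/update
    "boost": 0.5,          # medium confidence: manual heat change
}

def weighted_examples(signals):
    """Turn training_signals rows into (content, label, weight) examples for SIVU."""
    examples = []
    for s in signals:
        if s["signal_type"] not in ("accept", "reject"):
            continue  # reclassify signals train SICU, not SIVU
        label = 1.0 if s["signal_type"] == "accept" else 0.0
        examples.append((s["content_snapshot"], label,
                         SOURCE_WEIGHTS.get(s["source"], 0.5)))
    return examples
```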
SICU — Store Intelligence Classifier Unit
SICU classifies each memory into its correct type (episodic, semantic, procedural, preference, fact). It respects explicit agent types and acts as a fallback classifier when no type is provided. It learns from reclassify signals — explicit corrections where an agent or user says "this was labeled episodic but should be procedural."
These are the highest-value signals in the entire training pipeline because they represent direct human/agent corrections to the model's output. SICU intentionally does not reclassify preference-like content stored with explicit types — conservative and correct.
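Assembling reclassify corrections into supervised pairs for the classifier might look like this sketch; the field names follow the training_signals schema, but the helper itself is an assumption:

```python
def reclassify_pairs(signals):
    """Extract (content, predicted_type, corrected_type) tuples for SICU retraining.

    predicted_type may be None when the original prediction was not recorded.
    """
    pairs = []
    for s in signals:
        if s["signal_type"] != "reclassify" or not s.get("corrected_type"):
            continue
        pairs.append((s["content_snapshot"],
                      s.get("predicted_type"),
                      s["corrected_type"]))
    return pairs
```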
SILU — Store Intelligence Labeling Unit
SILU runs entity extraction via GPT-5.4-nano on every store. It extracts entities and relationships from memory content, building entity–relation–entity triples that populate the Apache AGE knowledge graph. This happens automatically — no configuration required.
The AGE graph is self-healing: every store, recall, and entity extraction writes to AGE automatically. SILU is the bridge between raw text memories and structured graph relationships.
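A minimal sketch of the triple shape that entity extraction produces. The Triple type and the normalization step are illustrative; SILU's actual output format is not documented here:

```python
from typing import NamedTuple

class Triple(NamedTuple):
    """One entity-relation-entity edge destined for the knowledge graph."""
    subject: str
    relation: str
    obj: str

def dedupe_triples(raw):
    """Normalize whitespace/case and drop duplicate triples before a graph merge."""
    seen = set()
    out = []
    for s, r, o in raw:
        key = (s.strip().lower(), r.strip().lower(), o.strip().lower())
        if key not in seen:
            seen.add(key)
            out.append(Triple(s.strip(), r.strip(), o.strip()))
    return out
```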
Automatic vs Explicit Signals
Automatic (no flag needed): pin and boost. These server-side actions always generate signals.
Explicit (opt-in required): train_on_this=true on store, train=true on delete, train_on_this=true on reclassify, and train_on_this=true on update.
Code Examples
Python and Node.js SDK usage for every training action
Store with Training
Teaches SIVU that content like this should be accepted.
Python
from sulcus import Sulcus
client = Sulcus(api_key="sk-...")
# Store a memory AND generate a training signal
client.remember("Deploy: push to ACR then az containerapp update",
                memory_type="procedural",
                train=True)  # ← generates an 'accept' signal for SIVU
Node.js
import { Sulcus } from "@digitalforgestudios/sulcus";
const client = new Sulcus({ apiKey: "sk-..." });
// Store with training signal
await client.remember("Deploy: push to ACR then az containerapp update", {
memoryType: "procedural",
train: true, // generates 'accept' signal for SIVU
});
Delete with Training
Snapshots the content before deletion. Teaches SIVU to reject similar content in future.
Python
# Delete a memory AND teach SIVU to reject similar content
client.delete("node_01J...", train=True)
# Snapshots content before deletion, records a 'reject' signal
Node.js
// Delete with training — teaches SIVU to reject similar content
await client.delete("node_01J...", { train: true });
Reclassify with Training
The highest-value signal — corrects the type classifier with explicit human/agent feedback.
Python
# Correct a misclassified memory — highest-value signal for SICU
client.update("node_01J...",
              memory_type="procedural",  # was 'episodic', should be 'procedural'
              train=True)  # ← generates a 'reclassify' signal
Node.js
// Correct a type — highest-value signal for SICU
await client.update("node_01J...", {
memoryType: "procedural", // correction
train: true, // generates 'reclassify' signal
});
Pin (Auto-Trains)
No flag needed. Pinning always generates a high-confidence accept signal.
Python
# Pin a memory — auto-generates a high-confidence 'accept' signal
client.pin("node_01J...")
# No train flag needed — pinning always trains
Node.js
// Pin — auto-generates high-confidence 'accept' signal
await client.pin("node_01J...");
// No train flag needed
Boost (Auto-Trains)
No flag needed. Manual heat changes always generate a medium-confidence accept signal.
Python
# Manually boost heat — auto-generates a medium-confidence 'accept' signal
client.boost("node_01J...", heat=0.95)
# No train flag needed — manual heat changes always train
Node.js
// Manual boost — auto-generates medium-confidence 'accept' signal
await client.boost("node_01J...", { heat: 0.95 });
// No train flag needed
REST API Reference
Raw HTTP calls for every training action
POST /api/v1/agent/nodes · DELETE /api/v1/agent/nodes/:id?train=true · PATCH /api/v1/agent/nodes/:id (reclassify) · PATCH /api/v1/agent/nodes/:id (pin) · PATCH /api/v1/agent/nodes/:id (heat) · GET /api/v2/siu/training-data
Store with training signal
# Store with training signal
curl -X POST https://api.sulcus.ca/api/v1/agent/nodes \
-H "Authorization: Bearer sk-..." \
-H "Content-Type: application/json" \
-d '{
"label": "Deploy: push to ACR then az containerapp update",
"memory_type": "procedural",
"train_on_this": true
}'
Delete with training signal
# Delete with training signal (snapshots content, records 'reject')
curl -X DELETE "https://api.sulcus.ca/api/v1/agent/nodes/node_01J...?train=true" \
-H "Authorization: Bearer sk-..."
Reclassify with training signal
# Reclassify with training signal
curl -X PATCH https://api.sulcus.ca/api/v1/agent/nodes/node_01J... \
-H "Authorization: Bearer sk-..." \
-H "Content-Type: application/json" \
-d '{
"memory_type": "procedural",
"train_on_this": true
}'
Pin (auto-generates signal)
# Pin (auto-generates training signal — no train flag needed)
curl -X PATCH https://api.sulcus.ca/api/v1/agent/nodes/node_01J... \
-H "Authorization: Bearer sk-..." \
-H "Content-Type: application/json" \
-d '{"is_pinned": true}'
Manual boost (auto-generates signal)
# Manual heat boost (auto-generates training signal)
curl -X PATCH https://api.sulcus.ca/api/v1/agent/nodes/node_01J... \
-H "Authorization: Bearer sk-..." \
-H "Content-Type: application/json" \
-d '{"current_heat": 0.95}'
Signal Table Schema
Where training signals accumulate before retraining
All training signals land in the training_signals table. Each row captures the memory content at time of signal, the signal type, the source action, and — for reclassify signals — both the predicted and corrected types.
CREATE TABLE training_signals (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
memory_id UUID,
tenant_id TEXT NOT NULL,
signal_type TEXT NOT NULL, -- 'accept', 'reject', 'reclassify'
corrected_store BOOLEAN, -- true=should store, false=should reject
corrected_type TEXT, -- for reclassify: the correct type
predicted_type TEXT, -- what the model predicted (if available)
content_snapshot TEXT, -- content at time of signal
source TEXT NOT NULL, -- 'train_on_this', 'agent_delete', 'pin', 'boost'
created_at TIMESTAMPTZ DEFAULT NOW()
);
| Column | Purpose |
|---|---|
| signal_type | accept, reject, or reclassify — determines which model consumes it |
| corrected_store | For SIVU: true = should store, false = should reject |
| corrected_type | For SICU: the correct memory type (set during reclassify) |
| predicted_type | What the model originally predicted (if available) |
| content_snapshot | Full content at time of signal — survives deletion |
| source | What generated this signal: train_on_this, agent_delete, pin, boost |
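The routing implied by the signal_type column can be sketched in a few lines; the helper name is illustrative, but the split itself follows the table above:

```python
def route_signals(signals):
    """Split exported signals by consuming model: accept/reject rows go to
    SIVU (the quality gate), reclassify rows go to SICU (the type classifier)."""
    sivu = [s for s in signals if s["signal_type"] in ("accept", "reject")]
    sicu = [s for s in signals if s["signal_type"] == "reclassify"]
    return sivu, sicu
```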
Retraining Pipeline
From accumulated signals to improved models
Training signals accumulate in the database. When enough have built up, the SIU models can be retrained to incorporate the new corrections. The pipeline is currently manual — automation is planned.
# 1. Export accumulated signals
curl https://api.sulcus.ca/api/v2/siu/training-data \
-H "Authorization: Bearer sk-..." > signals.json
# 2. Train the quality gate (SIVU)
python scripts/train_sivu.py --data signals.json
# 3. Train the type classifier (SICU)
python scripts/train_sicu.py --data signals.json
# 4. Deploy new ONNX models
cp models/*.onnx /opt/sulcus/models/siu-v2/
# 5. Server picks up new models on restart
Every store, delete, reclassify, pin, and boost adds a row to the training table. No action needed — they build up naturally as agents use their memory.
Export signals via the API, run training scripts for SIVU and SICU, deploy new ONNX models. Server picks them up on restart. Automated retraining is on the roadmap.
Version History
When each training capability was introduced
| Version | Training capability introduced |
|---|---|
| 3.9.0 | train_on_this on store, update, and reclassify |
| 3.10.0 | SIU v2 junk filter, autoCapture quality gate |
| 3.11.1 (current) | memory_delete tool with SIVU reject training; openclaw-sulcus v3.11.1 |
| Server v2.0.0 | Pin and boost auto-generate training signals (no plugin update needed) |
| Server v2.9 | SILU entity extraction via GPT-5.4-nano, Apache AGE graph, SITU trigger evaluation, age_graph capability |
OpenClaw Plugin Tools
Which tools generate training signals and which don't
| Tool | Parameters | Training |
|---|---|---|
| memory_store | content, memory_type, train | train=true → accept signal |
| memory_delete | id, train | train=true (default) → reject signal |
| memory_recall | query, limit, namespace | No training signal |
| consolidate | min_heat | No training signal |
| evaluate_triggers | event, context_json | No training signal |
Your memory, self-improving.
Training signals are available in the Sulcus SDK, OpenClaw plugin, and REST API. Start with train=true on your next store call — one flag, and your quality gate starts learning from your agents.