Local Control Panel
When sulcus-local runs in server mode it starts a web control panel at http://localhost:4203. The panel gives you a real-time view into your local memory graph — browse nodes, inspect context, manage triggers, and tune thermodynamic settings — without writing any code.
Sulcus · Local Control Panel Reference · 2026
Launching the Panel
The panel is only available in server mode — not stdio mode
sulcus-local has two run modes. In server mode (sulcus-local serve), it starts the MCP server, the embedded Postgres instance, and the web control panel on port 4203. In stdio mode — used by the OpenClaw plugin — it communicates over stdin/stdout only. No HTTP server. No control panel.
# Server mode — panel available at http://localhost:4203
sulcus-local serve
# stdio mode (used by OpenClaw plugin) — panel NOT available
sulcus-local stdio

If you are using the OpenClaw plugin (stdio mode), the panel will not start. Use the cloud dashboard at sulcus.ca/dashboard instead — it has the same five tabs and feature parity with the local panel.
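A quick way to confirm which mode you are in is to probe port 4203. The sketch below is not part of the sulcus-local CLI; it is a hypothetical check using only the Python standard library:

```python
import urllib.request
import urllib.error

def panel_available(url: str = "http://localhost:4203", timeout: float = 2.0) -> bool:
    """Return True if something answers HTTP on the given URL."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except (urllib.error.URLError, OSError):
        return False
```

Note that this returns False both when sulcus-local is running in stdio mode and when it is not running at all; it only tells you whether the panel is reachable.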
Database Backend
Embedded PostgreSQL — no external database required
Sulcus Local uses pg-embed — an embedded PostgreSQL 17 instance managed by the sulcus-local binary. When you run sulcus-local serve, it automatically downloads Postgres (first run only) and starts a local process. Your data is stored in ~/.sulcus/data/ and persists across restarts.
Why Postgres (not SQLite)? Sulcus Local uses the exact same schema as the cloud server. Every query, every migration, every column type is identical. This gives you:
- Schema parity — Same tables, same columns, same SQL — no translation layer between local and cloud.
- pgvector support — Vector similarity search with HNSW indexing works identically in both environments.
- Full SQL — CTEs, window functions, JSONB, TIMESTAMPTZ, UUID — no SQLite workarounds.
- Seamless sync — The CRDT sync protocol expects Postgres-native types. No type coercion needed.
Bring your own Postgres: Set SULCUS_DATABASE_URL to connect to an external Postgres instance instead. Sulcus Local will skip the embedded instance entirely. Migrations run automatically.
Overview Tab
System-wide health metrics and memory graph at a glance
The Overview tab is the landing page. It answers: what is the current state of my memory graph? No individual memory details — just the shape and health of the whole system.
Six Stat Cards
Each card refreshes on page load. Together they give a top-level health snapshot.
Total Nodes
Count of all memory nodes in the database regardless of heat or type. The raw size of the graph.
Edges
Number of explicit relationships created via memory_relate. A higher edge count means a more interconnected graph with stronger associative links.
Avg Heat
Mean heat across all non-pinned nodes. A falling average means decay is outpacing recall and boost activity. A stable or rising average is healthy.
Pinned
Count of nodes with is_pinned=true. Pinned nodes never decay below min_heat. Watch this: too many pins crowd context; too few risk losing critical procedures.
Operations
Total MCP tool calls processed since the process started (store, recall, boost, relate, etc.). A proxy for overall agent activity in this session.
Storage
Disk used by the embedded Postgres data directory, with a capacity bar. If the bar turns yellow or red, consider pruning cold memories via memory_deprecate.
Charts
Memory Types Distribution
Breakdown of nodes by type. If episodic dominates, the graph may be noisy. If procedures are absent, the agent may have no how-to knowledge. Use this to spot imbalance and decide what to store more deliberately.
Heat Distribution
Histogram bucketing all nodes by heat range. Healthy graphs tend to be bimodal: a cluster of hot active memories and a tail of cooling episodic noise. A completely flat distribution suggests the decay system may be misconfigured.
Memory Tables
Recent Memories
The most recently created or updated nodes. Columns: Content (truncated ~60 chars), Type, Namespace, Heat, Updated. Use this to confirm that a just-called memory_store landed correctly and to see what type and heat it was assigned.
Hottest Nodes
Top nodes ranked by heat descending. Same columns as Recent Memories. These are the memories that will appear first in context — what the agent effectively "knows" right now. If something important is missing here it may need a boost or a pin.
Browse Tab
Explore, create, search, edit, and delete individual memory nodes
The Browse tab is the primary memory management interface. It combines a creation form, filter controls, full-text search, and a paginated table with inline editing. Everything you can do with MCP tools you can also do here manually.
Create Memory
A collapsible form at the top of the tab. Expand it to add a memory node without calling the MCP tool — useful for manually injecting knowledge or testing decay configurations.
Content
textarea
The memory text. This is what the agent sees in context. Be concise — long memories consume context budget faster.
Memory Type
dropdown
episodic · semantic · procedural · preference · fact · moment. Controls the decay half-life. Procedural decays slowest; episodic fastest.
Namespace
text
Agent identifier (e.g. "icarus"). Namespaces isolate memories between agents sharing the same store. Defaults to the configured plugin namespace.
Initial Heat
0.0 – 1.0
Starting heat value. New memories typically start at 0.9. Lower values simulate partially-faded older knowledge.
Pin
checkbox
Sets is_pinned=true immediately on creation. The memory will never decay below min_heat.
Filters
Two rows of pills narrow the table immediately — no submit button needed.
Namespace Pills
Auto-detected from the data. One pill per unique namespace in the store. Click to show only that namespace. Select multiple to combine. In single-agent setups this will be a single pill; in shared stores (e.g., icarus + daedalus) it lets you isolate each agent's memories.
Type Pills
All · Episodic · Semantic · Procedural · Preference · Fact · Moment. Useful for auditing: "show me all procedures" or "how many preferences exist?" Selecting a type filters the table and updates the count shown above pagination.
Search
Full-text search across memory content. Results update as you type (debounced ~300 ms). This is a lexical substring match — not semantic search. For semantic / vector search (finding conceptually related memories), use memory_recall via the MCP tool or CLI. The panel search is for quick manual lookup when you know a keyword in the content.
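As a sketch of the distinction, the panel's lexical search behaves roughly like a substring scan over content. Field names here are illustrative, and case-insensitivity is an assumption:

```python
def lexical_search(memories: list[dict], query: str) -> list[dict]:
    """Plain substring match on content - no embeddings or vectors involved."""
    q = query.lower()
    return [m for m in memories if q in m["content"].lower()]

memories = [
    {"content": "Deploy procedure: build locally on M4"},
    {"content": "Dooley prefers dark mode"},
]
lexical_search(memories, "deploy")  # matches only the first memory
```

A query like "deployment pipeline" would miss a memory phrased as "release runbook" even though they are conceptually related; that is what memory_recall's vector search is for.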
Memory Table
| Column | What it shows |
|---|---|
| Content | Truncated to ~80 chars. Hover or click Edit to see the full text. |
| Type | Colored badge: episodic (dim), semantic (cyan), procedural (gold), preference (green), fact (blue), moment (purple). |
| Namespace | The owning agent namespace. Distinguishes memories from different agents in a shared store. |
| Heat | Color-coded bar with exact float value. Cyan = hot (0.8+), yellow-green = warm (0.5–0.79), amber = cool (0.3–0.49), red = cold (<0.3). |
| Created | Timestamp when the node was first stored. Hover to see the full ISO-8601 datetime. |
Heat colour reference

| Range | Label | Bar colour | Meaning |
|---|---|---|---|
| 0.8 – 1.0 | hot | bright cyan | recently active / pinned |
| 0.5 – 0.79 | warm | yellow-green | moderately active |
| 0.3 – 0.49 | cool | amber | fading from context |
| 0.0 – 0.29 | cold | red/dim | near expiry |

Edit / Delete
Edit
Opens an inline form pre-filled with current values. You can update content, type, heat, pin status, or namespace. Changes are persisted immediately and the HNSW index is updated — the edited memory is semantically searchable without restarting the process.
Delete
Permanently removes the node and all its edges from Postgres and the HNSW index. A confirmation dialog is shown first. Deletion is irreversible — the panel has no undo. For bulk pruning of cold memories, prefer the MCP memory_deprecate tool.
Pagination
Results are paginated at 25 nodes per page. Controls appear at the bottom of the table with total count and current page. Filters and search are applied before pagination — the count reflects filtered results, not total nodes.
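The filter-then-paginate order described above can be sketched as follows (function and field names are hypothetical, not the panel's actual implementation):

```python
def page_results(nodes, page=1, per_page=25,
                 namespace=None, mem_type=None, query=None):
    """Apply filters and search first; the reported count is the filtered total."""
    rows = nodes
    if namespace:
        rows = [n for n in rows if n["namespace"] == namespace]
    if mem_type:
        rows = [n for n in rows if n["type"] == mem_type]
    if query:
        q = query.lower()
        rows = [n for n in rows if q in n["content"].lower()]
    total = len(rows)                      # count shown above pagination
    start = (page - 1) * per_page
    return rows[start:start + per_page], total
```

With 30 matching nodes, page 1 shows 25 rows and page 2 shows the remaining 5, while the count stays at the filtered total of 30.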
Context Tab
Live preview of the XML block injected into the LLM system prompt
The Context tab renders exactly what build_context returns — the XML block that Sulcus prepends to every LLM system prompt. This is the ground truth of what the agent currently "knows" through its memory system. The preview updates on each page load; refreshing the tab always shows the current state.
Use this tab to answer: "Is my preference actually being injected?", "Why does the agent keep referring to that old procedure?", or "How much context budget is Sulcus using?"
<cheatsheet> — Short instructional text for the agent on how to use Sulcus tools. Rendered once, at the top. Comes from the plugin configuration — not editable from the panel.
<preferences> — All preference-type nodes ordered by heat descending. The agent reads these to recall user-stated preferences. Each item includes its ID so the agent can reference or update it.
<facts> — Fact-type nodes. Stable knowledge points that don't change often — dates, constants, known truths. Ordered by heat.
<procedures> — Procedural memories — how-to guides, deploy instructions, runbooks. Typically the most verbose section. Ordered by heat. These are what the agent reaches for when it needs to know how to do something.
<active_triggers> — All enabled triggers with their event, action, fire count, and active filters. The agent reads this to understand what reactive rules are in play without calling list_triggers.
<recent_trigger_fires> — Log of the most recent trigger firings — which trigger, on what node, and when. Gives the agent real-time awareness of what the memory system just did automatically.
Example output
<sulcus_context>
<cheatsheet>
You have Sulcus — persistent memory with reactive triggers.
STORE: record_memory | FIND: search_memory | RECALL: page_in
...
</cheatsheet>
<preferences>
<item id="...">Dooley prefers local builds on M4 — no remote builds.</item>
</preferences>
<facts>
<item id="...">Survival clock: ~150K Azure credits expire April 2026.</item>
</facts>
<procedures>
<item id="...">## Deploy procedure (local build, 2026-03-16) ...</item>
</procedures>
<active_triggers>
<trigger name="auto-pin-preferences" event="on_store" action="pin" fires="4" />
<trigger name="notify-on-recall" event="on_recall" action="notify" fires="423" />
</active_triggers>
<recent_trigger_fires>
<fire event="on_threshold" action="boost" node="Strategy: icarus" at="2026-03-19T09:56:32Z" />
</recent_trigger_fires>
</sulcus_context>

If context output looks empty or shorter than expected, check the Browse tab to confirm memories actually exist. The context block only includes memories above the minimum heat threshold (default 0.1) — cold nodes are excluded to keep context size manageable.
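The inclusion rule can be sketched as below, using the documented 0.1 default. Whether the comparison is strict is an implementation detail; a strict comparison is assumed here:

```python
def context_nodes(nodes: list[dict], min_heat: float = 0.1) -> list[dict]:
    """Keep only nodes above the heat threshold, hottest first,
    mirroring the heat-descending ordering of the context sections."""
    eligible = [n for n in nodes if n["heat"] > min_heat]
    return sorted(eligible, key=lambda n: n["heat"], reverse=True)
```

A node at heat 0.05 simply never appears in the preview, which is the most common reason a memory "exists in Browse but not in Context".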
Triggers Tab
Create and manage reactive rules that fire on memory events
Triggers are rules that fire automatically when a memory event occurs — a node is stored, recalled, boosted, linked, or its heat crosses a boundary. This tab lets you manage triggers visually without code. For full documentation on events, actions, and filters, see the Reactive Triggers docs.
Create Trigger
A form at the top of the tab. Required fields are Name, Event, and Action. All others are optional and scope or modify the trigger's behaviour.
Name
text
Human-readable identifier. Shown in the active triggers list, notifications, and trigger history.
Event
dropdown
on_store · on_recall · on_boost · on_decay · on_threshold · on_relate. When to fire.
Action
dropdown
notify · boost · pin · tag · deprecate · webhook. What to do when the trigger fires.
Filter: memory_type
dropdown
Restrict to a specific type: episodic, semantic, procedural, preference, fact, moment.
Filter: namespace
text
Only fire for memories in this namespace. Leave blank to match all namespaces.
Filter: label_pattern
text
Case-insensitive substring match on the memory label. "deploy" matches any label containing "deploy".
Filter: heat_above
0.0 – 1.0
Only fire when the memory heat is strictly above this value at the time of the event.
Filter: heat_below
0.0 – 1.0
Only fire when heat is strictly below this value. Combine with on_decay or on_threshold for cooling alerts.
cooldown_seconds
number
Minimum seconds between consecutive firings. Prevents high-frequency events (e.g., on_recall) from flooding notifications.
max_fires
number / blank
Maximum total firings allowed. Leave blank for unlimited. Useful for one-shot triggers.
Enabled
toggle
Whether the trigger is active. Disabled triggers are kept in the list but never fire. Use this to pause without deleting.
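Taken together, the form fields above map onto a trigger definition shaped roughly like this. The dictionary layout and field names are illustrative, not the actual wire format:

```python
# Hypothetical trigger definition mirroring the Create Trigger form.
trigger = {
    "name": "pin-deploy-procedures",   # human-readable identifier
    "event": "on_store",               # when to fire
    "action": "pin",                   # what to do when it fires
    "filters": {
        "memory_type": "procedural",   # restrict to one type
        "label_pattern": "deploy",     # case-insensitive substring on label
        "heat_above": 0.5,             # strictly above at event time
    },
    "cooldown_seconds": 60,            # rate-limit consecutive firings
    "max_fires": None,                 # blank / None = unlimited
    "enabled": True,
}
```

This particular combination would pin any procedural memory whose label mentions "deploy" the moment it is stored hot, at most once per minute.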
Active Triggers List
A table showing all triggers in the store, whether enabled or disabled. Columns: Name, Event, Action, Filter (summary), Fires (total count), Enabled toggle, and Edit / Delete actions. The fire count increments in real time as the agent uses memory. A count of zero usually means the filter conditions have never been met.
Fires counter
Cumulative count since trigger creation. Not reset on process restart unless you call update_trigger with reset_fire_count=true.
Enabled toggle
Click to pause or resume the trigger instantly. The trigger stays in the list; it just stops firing until re-enabled.
Edit
Opens the create form pre-filled with current values. Save overwrites in place.
Delete
Removes the trigger and its full history. Confirmed by dialog. Irreversible.
Trigger History
A chronological log of recent trigger firings. Each entry shows: which trigger fired, the event type, the action taken, the node it fired on (truncated label), and the timestamp. Useful for debugging — if a trigger isn't firing when you expect it to, check here to see the last time it actually ran and on what node. The history is paginated and persisted in Postgres; it survives process restarts.
Settings Tab
Tune the thermodynamic engine that governs memory decay
The Settings tab exposes the thermodynamic configuration that controls how fast memories decay, when the decay tick runs, and how heat spreads between connected nodes. All settings are adjustable via form inputs and saved with a single Save button. Changes take effect on the next decay tick.
Per-type Half-lives
Each memory type has an independent half-life — the time it takes for heat to drop by 50% from its current value (assuming no recall or boost activity). Shorter half-lives = faster forgetting. Defaults are calibrated for typical agent workloads but can be tuned per deployment.
| Type | Default half-life | Notes |
|---|---|---|
| episodic | ~6 hours | Events, conversations — fast fade |
| semantic | ~7 days | Concepts, relationships — slow fade |
| procedural | ~30 days | How-tos, runbooks — very slow fade |
| preference | ~90 days | User settings — near-permanent |
| fact | ~14 days | Data points — moderate fade |

Half-lives are in wall-clock time and depend on the tick interval. A procedural memory at heat 0.9 with a 30-day half-life will reach heat ~0.45 after 30 days of no activity, assuming hourly ticks.
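The worked example above can be checked with a small helper, assuming pure exponential decay between recalls (which is what a fixed half-life implies):

```python
def heat_after(heat: float, idle_hours: float, half_life_hours: float) -> float:
    """Heat halves every half_life_hours of inactivity: h * 0.5^(t / t_half)."""
    return heat * 0.5 ** (idle_hours / half_life_hours)

# Procedural memory (30-day half-life) at heat 0.9, idle for 30 days:
heat_after(0.9, 30 * 24, 30 * 24)  # -> 0.45, matching the note above
```

After a second idle half-life the same memory would sit at ~0.225, below the warm band and approaching the default cold threshold.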
Tick Mode
The tick mode determines when the decay engine runs. Three modes are supported:
| Mode | Behaviour |
|---|---|
| fixed | Decay runs on a fixed wall-clock interval (e.g., every 60 s) |
| activity | Decay tick fires on memory operations (store / recall / boost) |
| hybrid | Both: fixed interval + activity trigger, whichever fires first |

Recommendation: Use hybrid for agents that have bursty activity with long idle gaps. Use fixed for predictable, clock-aligned decay. Use activity for minimal background CPU usage.
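The three modes reduce to a simple decision at each tick opportunity. A minimal sketch, with the mode semantics as described above (function and parameter names are hypothetical):

```python
def should_tick(seconds_since_tick: float, interval: float,
                op_occurred: bool, mode: str = "hybrid") -> bool:
    """Decide whether the decay engine runs now, per tick mode."""
    interval_due = seconds_since_tick >= interval
    if mode == "fixed":
        return interval_due
    if mode == "activity":
        return op_occurred
    # hybrid: whichever condition is met first
    return interval_due or op_occurred
```

Under hybrid, a burst of recalls ticks decay immediately, while a long idle gap still gets swept once the fixed interval elapses.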
Other Configuration
Base interval
The fixed tick interval in seconds (used in fixed and hybrid modes). Default: 3600 s (1 hour). Lower values = more frequent decay at higher CPU cost.
Resonance / heat spread factor
How much heat propagates to neighbour nodes via edges when a node is recalled or boosted. 0.0 = no spread. 0.1–0.2 = gentle resonance. Higher values cause rapid co-activation of related memories.
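A minimal sketch of one-hop spread, assuming a linear fraction of the boost reaches each neighbour. The actual propagation rule may differ; node names and the dict-based graph are illustrative:

```python
def apply_boost(heats: dict, edges: dict, node: str,
                boost: float, spread: float = 0.15) -> None:
    """Raise a node's heat, then give each neighbour spread * boost.
    Heats are clamped to the [0.0, 1.0] range."""
    heats[node] = min(1.0, heats[node] + boost)
    for neighbour in edges.get(node, []):
        heats[neighbour] = min(1.0, heats[neighbour] + spread * boost)

heats = {"deploy-runbook": 0.6, "ssh-keys": 0.4}
edges = {"deploy-runbook": ["ssh-keys"]}
apply_boost(heats, edges, "deploy-runbook", 0.3)
# deploy-runbook rises to 0.9; ssh-keys gains 0.15 * 0.3 = 0.045
```

At spread = 0.0 neighbours are untouched; at higher values a single recall can re-warm an entire cluster of related memories, which is the rapid co-activation the docs warn about.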
Cold threshold
Heat value below which a node is considered "cold" for consolidation purposes. Default: 0.15. Nodes below this threshold become candidates for automatic pruning.
Cold count trigger
Number of cold nodes required to trigger a consolidation pass. When this count is reached the engine runs a cleanup sweep, removing or merging cold nodes to keep the graph size manageable.
Changes to thermodynamic settings affect all memories in the store regardless of when they were created. If you lower half-lives dramatically on a populated store, expect a large drop in average heat on the next tick. Test changes on a staging store before applying to production.
Local vs Cloud
Same five tabs, different deployment contexts
Both the local panel and the cloud dashboard at sulcus.ca/dashboard expose the same five tabs with the same features. The differences are operational, not functional.
| | Local Panel | Cloud Dashboard |
|---|---|---|
| URL | http://localhost:4203 | https://sulcus.ca/dashboard |
| Auth | None — local access only | Keycloak SSO |
| Agents | Single-agent (one local process) | Multi-agent (team shared) |
| Data | Embedded Postgres on disk | Managed cloud Postgres |
| Real-time | Yes — same process | Yes — WebSocket sync |
| Availability | Only while sulcus-local is running | Always on |
| Mode required | sulcus-local serve | Any mode (cloud account) |
| Use case | Local dev, single-agent, no auth overhead | Team collaboration, cross-agent shared memory |
When to use Local
You are running a single agent on your machine and want instant feedback with no auth setup. Local panel is ideal during development, prompt engineering, and debugging trigger configurations. No account required.
When to use Cloud
You have multiple agents sharing a memory store, or need team members to inspect agent memory without SSH access. Cloud dashboard adds Keycloak auth and multi-namespace visibility across all connected agents.
Memory you can see.
The local panel is a debug and inspection tool, not a production dashboard. If you find yourself using it constantly, consider what trigger or MCP tool call would give your agent the same visibility automatically.