# Signals: Sentiment and Intent
Signals are a typed side-channel that flows through the same ChatMessage.parts[] array as text. They let the host app react to emotional state and inferred intent without inventing a parallel event stream or hand-rolling classifiers.
The two part variants are SentimentSignalPart and IntentSignalPart. Both carry score, confidence, source (which adapter produced the signal), polarity, end-to-end latencyMs, and an attributedMessageId pointing back at the message that caused the shift. The default renderer is sr-only — signals are infrastructure, not presentation. Hosts override the renderer to drive overlays, banners, or sentiment meters.
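A minimal sketch of the two variants, assuming a discriminated-union shape — the field list follows the description above, but the union tags and extra fields (`type`, `category`, `intent`) are illustrative, not the SDK's literal type definitions:

```ts
// Hypothetical sketch of the two signal part variants.
interface SignalPartBase {
  id: string;                  // unique per emission, used for reconciliation
  score: number;               // 0..1 strength of the signal
  confidence: number;          // 0..1 adapter confidence
  source: string;              // which adapter produced it, e.g. "rules"
  polarity: "positive" | "negative" | "neutral";
  latencyMs: number;           // end-to-end inference latency
  attributedMessageId: string; // message that caused the shift
}

interface SentimentSignalPart extends SignalPartBase {
  type: "sentiment";
  category: string;            // e.g. "frustration"
}

interface IntentSignalPart extends SignalPartBase {
  type: "intent";
  intent: string;              // e.g. "cancel-subscription"
}

// Example emission from a rules-tier adapter:
const part: SentimentSignalPart = {
  type: "sentiment",
  id: "sig_01",
  category: "frustration",
  score: 0.82,
  confidence: 0.6,
  source: "rules",
  polarity: "negative",
  latencyMs: 0,
  attributedMessageId: "msg_42",
};
```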
For the integration walkthrough — wiring adapters, escalation rules, and React hooks — see Sentiment and Intent Signals.
## Why signals are message parts
Putting signals in message.parts keeps one normalization pipeline. Every consumer of the chat (renderer, debug bundle, governance export, analytics, recording) already knows how to walk parts. A separate "events" channel would have required every consumer to subscribe twice and would have made replay-from-history lossy.
Each emission also gets a unique id, so renderers can reconcile updates and the SDK can replace an interim sentiment estimate when a more confident one arrives.
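That reconciliation can be sketched as an upsert keyed on `id` (`upsertPart` is a hypothetical helper for illustration, not an SDK export):

```ts
type Part = { id: string; confidence: number; [k: string]: unknown };

// Replace an interim emission with a later, more confident one that
// shares the same id; otherwise append a new part.
function upsertPart(parts: Part[], incoming: Part): Part[] {
  const i = parts.findIndex((p) => p.id === incoming.id);
  if (i === -1) return [...parts, incoming];
  const next = parts.slice();
  next[i] = incoming;
  return next;
}

const interim = { id: "sig_01", confidence: 0.4 };
const refined = { id: "sig_01", confidence: 0.9 };
const parts = upsertPart(upsertPart([], interim), refined);
// parts holds a single entry with confidence 0.9
```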
## Adapter tiers
Signals are produced by adapters that implement the SignalAdapter interface. The SDK ships several with deliberate latency/accuracy trade-offs:
| Tier | Adapter | Inference site | Latency | Strengths | Trade-offs |
|---|---|---|---|---|---|
| Rules | ruleAdapter (+ defaultRulePatterns) | In-process, deterministic | <1 ms | Zero dependencies, fully offline, predictable | Brittle on long-form prose, English-leaning |
| Browser ML | tfjsToxicityAdapter | Browser (TF.js) | 30-200 ms | Local, no API key, decent quality | Bundle weight, incompatible with strict CSP (needs `unsafe-eval`), English only |
| Model side-channel | modelToolAdapter | Backend LLM, structured output | 100-400 ms | Reuses the live model's understanding | Couples to model availability |
| LLM judge | geminiAdapter, claudeAdapter, openaiAdapter | Backend LLM call | 400-1500 ms | High quality across languages | Cost, latency, requires API key |
Adapters are pluggable. Hosts pick one or several based on the surface — a kiosk might use rules only, a high-touch support copilot might combine browser ML with an LLM judge for high-confidence escalations.
Vendor SDKs (@google/generative-ai, @anthropic-ai/sdk, openai, @tensorflow-models/toxicity, @tensorflow/tfjs) are optional peer dependencies. Adapters dynamic-import them and degrade gracefully when absent — a host that never installs @tensorflow/tfjs simply does not get tfjsToxicityAdapter capabilities.
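The adapter contract can be sketched as follows; the `analyze` method name and the emission shape are assumptions for illustration, not the SDK's literal `SignalAdapter` interface:

```ts
interface SignalEmission {
  type: "sentiment" | "intent";
  category: string;
  score: number;
  confidence: number;
}

interface SignalAdapter {
  name: string;
  analyze(text: string): Promise<SignalEmission[]>;
}

// A tiny rules-tier adapter: deterministic, in-process, sub-millisecond.
// The pattern list is a toy stand-in for defaultRulePatterns.
const angryWords = /\b(terrible|useless|angry|refund)\b/i;

const toyRuleAdapter: SignalAdapter = {
  name: "toy-rules",
  async analyze(text) {
    if (!angryWords.test(text)) return [];
    return [
      { type: "sentiment", category: "frustration", score: 0.8, confidence: 0.5 },
    ];
  },
};
```

Higher tiers implement the same contract; the host only decides which adapters to register, not how their emissions are merged.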
## SignalRunner vs SignalEscalator
Two pure objects with disjoint responsibilities:
- `SignalRunner` owns adapter dispatch. Given a new message, it calls every configured adapter, collects emissions, and synthesizes `signal.update` transport events. These flow through the same `processTransportEvent` loop as server-emitted parts.
- `SignalEscalator` owns rule evaluation. It receives the running signal history and matches against declarative rules like *frustration > 0.7 across two turns*. When a rule fires, the escalator calls the existing `HandoffController.requestTransfer()` with a stable idempotency key and a per-rule cooldown.
The split keeps detection separate from action. A host that wants signal-aware UI but does not want automatic escalation can configure the runner without the escalator.
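The "frustration > 0.7 across two turns" rule can be evaluated by a pure function over the running history. This is a sketch; the SDK's declarative rule engine also handles cooldowns and idempotency keys:

```ts
interface Emission {
  turn: number;
  category: string;
  score: number;
}

interface Rule {
  category: string;
  threshold: number;
  windowTurns: number;
}

// Fire when every one of the last `windowTurns` turns contains an
// emission in the rule's category exceeding the threshold.
function ruleFires(history: Emission[], rule: Rule, currentTurn: number): boolean {
  for (let t = currentTurn - rule.windowTurns + 1; t <= currentTurn; t++) {
    const hit = history.some(
      (e) => e.turn === t && e.category === rule.category && e.score > rule.threshold,
    );
    if (!hit) return false;
  }
  return true;
}

const history: Emission[] = [
  { turn: 1, category: "frustration", score: 0.75 },
  { turn: 2, category: "frustration", score: 0.9 },
];
const rule: Rule = { category: "frustration", threshold: 0.7, windowTurns: 2 };
// ruleFires(history, rule, 2) === true: both recent turns exceed 0.7
```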
## Why escalation reuses HandoffController
Sentiment-triggered handoff is just another caller of the existing handoff API. The state machine in HandoffController stays a pure FSM. No new transfer types, no parallel queues, no separate audit pipeline — escalations show up in the same HandoffStatus history, with the same requestTransfer() semantics, and the same TransferContextBundle carrying conversation context to the human agent.
This is also why escalation rules know nothing about how the handoff happens. They emit a request and let the existing FSM enforce throttling, deduplication, and acceptance.
## How it fits together
```
              ┌──────────────────┐
 user/agent → │   ChatSession    │ ──► message_added / message_updated events
              └────────┬─────────┘
                       │
            ┌──────────┴──────────┐
            ▼                     ▼
     ┌─────────────┐      ┌────────────────┐
     │ SignalRunner│      │ SignalEscalator│
     │  (adapters) │      │    (rules)     │
     └──────┬──────┘      └────────┬───────┘
            │                      │
   signal.update events    requestTransfer()
            │                      │
            ▼                      ▼
  processTransportEvent    HandoffController (pure FSM)
            │
            ▼
   ChatMessage.parts[]
            │
            ▼
  useSentiment() / useIntent()
```
Adapters synthesize signal.update events; they do not call into the normalizer directly. This preserves the normalizer's invariant: it is a pure reducer over TransportEvent, indifferent to whether the event came from the wire or from a local adapter.
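That invariant can be illustrated with a toy reducer; the event and message shapes here are simplified stand-ins for the SDK's `TransportEvent` and `ChatMessage` types:

```ts
type TransportEvent =
  | { kind: "message_added"; messageId: string; text: string }
  | { kind: "signal.update"; messageId: string; part: { id: string; score: number } };

interface Message {
  id: string;
  parts: Array<{ id: string; score?: number; text?: string }>;
}

// Pure reducer: the same code path runs whether the event arrived over
// the wire or was synthesized locally by SignalRunner.
function reduce(messages: Message[], ev: TransportEvent): Message[] {
  switch (ev.kind) {
    case "message_added":
      return [
        ...messages,
        { id: ev.messageId, parts: [{ id: `${ev.messageId}:text`, text: ev.text }] },
      ];
    case "signal.update":
      return messages.map((m) =>
        m.id === ev.messageId ? { ...m, parts: [...m.parts, ev.part] } : m,
      );
  }
}

const s1 = reduce([], { kind: "message_added", messageId: "m1", text: "hi" });
const s2 = reduce(s1, { kind: "signal.update", messageId: "m1", part: { id: "sig1", score: 0.3 } });
// s2[0].parts now holds the text part plus the appended signal part
```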
## Quick example
```ts
import { createChatClient, ruleAdapter, defaultRulePatterns } from 'gecx-chat';

const client = createChatClient({
  // ...auth, transport
  signals: {
    adapters: [ruleAdapter({ patterns: defaultRulePatterns })],
    escalation: [
      {
        id: 'frustration-2-turns',
        signal: 'sentiment',
        match: { category: 'frustration' },
        operator: '>',
        threshold: 0.7,
        windowTurns: 2,
        cooldownMs: 60_000,
      },
    ],
  },
});
```
In React, useSentiment(messages) and useIntent(messages) return memoized snapshots of the latest emission, the recent history, and per-category/per-intent rollups.
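What such a hook derives can be sketched as a pure snapshot function over the messages (illustrative only; the real hooks memoize and may expose a different return shape):

```ts
interface SentimentPart {
  type: "sentiment";
  category: string;
  score: number;
}

interface Msg {
  parts: Array<SentimentPart | { type: string }>;
}

// What a useSentiment-style hook derives from the message list:
// latest emission, full history, and a per-category rollup.
function sentimentSnapshot(messages: Msg[]) {
  const emissions = messages.flatMap((m) =>
    m.parts.filter((p): p is SentimentPart => p.type === "sentiment"),
  );
  const latest = emissions[emissions.length - 1] ?? null;
  const byCategory = new Map<string, number>();
  for (const e of emissions) {
    byCategory.set(e.category, Math.max(byCategory.get(e.category) ?? 0, e.score));
  }
  return { latest, history: emissions, byCategory };
}

const snap = sentimentSnapshot([
  { parts: [{ type: "text" }] },
  {
    parts: [
      { type: "sentiment", category: "frustration", score: 0.5 },
      { type: "sentiment", category: "frustration", score: 0.9 },
    ],
  },
]);
// snap.latest.score === 0.9; snap.byCategory.get("frustration") === 0.9
```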
## Where to go next
- Sentiment and Intent Signals — integration guide, adapter setup, escalation patterns.
- Messages and Parts — the part-types catalog. Signals live alongside text, citations, tools, and rich content.
- Handoff — the controller that escalation rules call into.
- Analytics — signal-derived analytics events flow through the same `ProductAnalyticsEvent` stream.