Applied AI Retail Demo

Applied AI Retail is the polished commerce demo for gecx-chat: a design-forward storefront where the entire retail journey lives inside the chat. A shopper can browse, learn about a product, add to cart, build an A2UI gift bundle, check out, track an order, and start a return without ever leaving the chat panel.

Use this demo when you want to show what the SDK looks like as a headless runtime, not a hosted widget — the app owns the UI, the SDK owns the chat/session/tool plumbing, and an SDK Inspector drawer makes the internals visible while you talk.

Run it

From the repo root:

pnpm install --frozen-lockfile
pnpm build:packages          # builds the SDK from source
pnpm dev:applied             # starts the storefront at http://localhost:3002

By default the demo runs in mock mode. No network calls, API keys, service accounts, or environment variables are required. The scripted mock transport is rich enough to exercise the entire retail flow end-to-end.
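The scripted-transport idea is easy to hold in your head: each user turn replays the next batch of canned assistant events, so the whole flow runs offline. A minimal sketch, assuming nothing about the SDK's actual transport interface (the class and event shape here are illustrative, not the real retailTransport API):

```typescript
// Minimal scripted mock transport: each user message pops the next
// canned assistant turn from a script, so no network is ever touched.
type ChatEvent =
  | { type: "text"; text: string }
  | { type: "rich-content"; kind: string; payload: unknown };

class ScriptedMockTransport {
  private turn = 0;
  constructor(private script: ChatEvent[][]) {}

  // Returns the scripted events for the next turn, or a fallback
  // message once the script is exhausted.
  send(_userMessage: string): ChatEvent[] {
    const events = this.script[this.turn] ?? [
      { type: "text", text: "That's the end of the script." },
    ];
    this.turn += 1;
    return events;
  }
}

const transport = new ScriptedMockTransport([
  [
    { type: "text", text: "Here's the Gradient Descent Hoodie:" },
    { type: "rich-content", kind: "product-detail", payload: { sku: "hoodie-01" } },
  ],
]);
```

The real transport layers cart-store reads and approval events on top, but the replay mechanism is the same shape.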

What to try

If you only have five minutes, walk through steps 1–7 in order; they cover the in-chat journey. Step 8 surfaces the analytics layer those events feed, and steps 9–12 tour the adjacent surfaces: the /lab playground, the /support agent graph, /computer-use, and the memory drawer.

1. Land on the storefront

Open http://localhost:3002. You see a premium storefront shell with a bottom-right chat launcher. The launcher restores focus correctly and the chat composer auto-focuses on open.

2. Open the chat

Four starter chips appear in the empty state. Pick "Tell me about the Gradient Descent Hoodie." A product-detail rich card streams in with image, copy, colorway, size picker, and an in-chat Add to cart button.

3. Approve an add-to-cart

Click Add to cart. An approval surface appears with a shopper-readable summary on top and the raw tool-call JSON behind expandable details. Approve it. Variant choices (size, colorway) flow through the approval into cart state and persist into checkout and order summaries.
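The gating logic is worth spelling out: a state-changing tool call only runs after an explicit approval, and a denial leaves state untouched (fail closed). A sketch under assumed names — `executeWithApproval` and the tool-call shape are illustrative, not the SDK's API:

```typescript
// Fail-closed approval gate: the tool runs only on explicit approval;
// denial (or no decision) leaves cart state untouched.
type Decision = "approved" | "denied";

interface ToolCall {
  name: string;
  input: Record<string, unknown>;
}

function executeWithApproval(
  call: ToolCall,
  decision: Decision,
  run: (call: ToolCall) => void,
): { status: "executed" | "rejected" } {
  if (decision !== "approved") {
    return { status: "rejected" }; // fail closed: nothing mutates
  }
  run(call);
  return { status: "executed" };
}

// Variant choices ride along in the tool input, so an approved
// add_to_cart carries size/colorway into cart state.
const cart: Record<string, unknown>[] = [];
executeWithApproval(
  { name: "add_to_cart", input: { sku: "hoodie-01", size: "M", colorway: "indigo" } },
  "approved",
  (c) => cart.push(c.input),
);
```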

4. Build a gift bundle (A2UI)

Ask the agent "build a gift bundle around the Embedding Throw Blanket." The agent emits an A2UI surface with a Card, two sliders, and a generated Add button. Move a slider — the host preference readout updates live. Click Add and approve the resulting add_to_cart tool call. The giftBundle metadata flows through with the approval.
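Under the hood, the host applies a stream of A2UI frames to a per-surface data model, which is why the preference readout can update live as the slider moves. A sketch assuming simplified frame payloads (the real A2UI frame shapes may differ):

```typescript
// Host-side handling of A2UI frames: the agent creates a surface,
// then streams data-model patches as the shopper moves the sliders.
// Frame payload shapes here are illustrative simplifications.
type A2UIFrame =
  | { type: "createSurface"; surfaceId: string }
  | { type: "updateDataModel"; surfaceId: string; patch: Record<string, unknown> };

type Surfaces = Map<string, Record<string, unknown>>;

function applyFrame(surfaces: Surfaces, frame: A2UIFrame): Surfaces {
  if (frame.type === "createSurface") {
    surfaces.set(frame.surfaceId, {});
  } else {
    const model = surfaces.get(frame.surfaceId) ?? {};
    surfaces.set(frame.surfaceId, { ...model, ...frame.patch });
  }
  return surfaces;
}

const surfaces: Surfaces = new Map();
applyFrame(surfaces, { type: "createSurface", surfaceId: "gift-bundle" });
applyFrame(surfaces, {
  type: "updateDataModel",
  surfaceId: "gift-bundle",
  patch: { budget: 120, warmth: 0.8 },
});
// The host preference readout renders from surfaces.get("gift-bundle").
```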

5. Check out from chat

Type "show my cart", then click Checkout inside the cart-summary message. Approve the checkout to render the order summary and tracking timeline directly in the chat panel.

6. Start a return

Type "start a return on ORD-10042", pick a reason chip, approve the return, and click the local return-label link. The label opens at /returns/[rmaId]/label inside the demo — no external placeholders.

7. Open the SDK Inspector

Click the bug icon in the chat header. The chat slides left and the Inspector drawer exposes four tabs:

  • Parts — normalized message parts (text, product-detail, cart-summary, tracking-timeline, rma-summary, a2ui-surface, tool-call, suggestion-chips).
  • Tools — approval-gated tool calls with inputs, status, and results.
  • Transport — the active mock transport, event counts, session id, and trace events including A2UI frames.
  • Errors — simulated SDK errors. Send "broken tool" to populate TOOL_TIMEOUT, then click the pill to open the canonical error code reference.

This is the fastest way to explain that the SDK stays observable even when the UI is fully custom.

8. Open /analytics

Click Analytics in the site nav (or visit /analytics directly). The route is the Conversational Commerce Intelligence dashboard — every chart and table on the page is wired to the SDK's real ProductAnalyticsEvent stream from useChatSession. On first load it seeds a deterministic ~7-day backlog so charts are populated for screenshots and walk-throughs; live events from the chat dock then trickle in on top.

Key things to point out:

  • Hero strip — Sessions, Conversion rate, Avg CSAT, p50 TTFT in serif numerals with per-bucket sparklines and a delta-vs-prior-period chip.
  • Funnel — session_started → user_message_sent → tool_approval_resolved → tool_executed:checkout → resolution_without_escalation with step-over-step conversion %.
  • Activity — stacked area of session_started, user_message_sent, tool_executed per bucket.
  • Tools table — per-tool approval rate (inline cell bar) and median runtime.
  • Latency ridgeline — p50/p90/p99 for assistant_response_first_token, tool_executed.durationMs, file_upload_completed.durationMs.
  • Engagement bars — impressions vs clicks per rich-content type with CTR set in serif italic.
  • Errors — sorted by frequency, each row deep-links to the canonical error code reference.
  • Transport health — disconnect timeline + uptime ring.
  • Live stream — the last 14 events with category-colored type pills; toggle Stream to pause for inspection.
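The latency ridgeline reduces raw duration samples to p50/p90/p99. As a minimal sketch of that reduction (the dashboard's actual aggregation method is an assumption; this uses the simple nearest-rank definition):

```typescript
// Nearest-rank percentile over a latency sample, as used conceptually
// for the p50/p90/p99 rows of the ridgeline.
function percentile(samples: number[], p: number): number {
  if (samples.length === 0) return NaN;
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// e.g. time-to-first-token samples in milliseconds
const ttftMs = [120, 95, 340, 180, 210, 150, 990, 130];
const [p50, p90, p99] = [50, 90, 99].map((p) => percentile(ttftMs, p));
```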

Every section header carries a small monospace "built from:" caption naming the event type(s) it consumes, so the page doubles as living documentation. Use the 24h / 7d / 30d / 90d chips and the Segment dropdown (All / Converted / Abandoned / Escalated) to slice the same in-memory buffer — no refetch. Click Reset to live only to clear the demo seed and watch the page redraw from purely live events as you chat.

This is the visual counterpart to the SDK Inspector: the Inspector is the raw observability surface, /analytics is the aggregated, outcome-oriented one.
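The funnel's step-over-step conversion is straightforward to compute from the in-memory event buffer: for each step, count the sessions from the previous step that also reached it. A sketch under an assumed minimal event shape (`funnel` and the event interface are illustrative, not the dashboard's internals):

```typescript
// Step-over-step funnel: for each step, the share of sessions from
// the previous step that reached it.
interface AnalyticsEvent {
  type: string;
  sessionId: string;
}

function funnel(events: AnalyticsEvent[], steps: string[]) {
  const sessionsAt = steps.map(
    (step) => new Set(events.filter((e) => e.type === step).map((e) => e.sessionId)),
  );
  return steps.map((step, i) => ({
    step,
    sessions: sessionsAt[i].size,
    conversion:
      i === 0 || sessionsAt[i - 1].size === 0
        ? 1
        : [...sessionsAt[i]].filter((s) => sessionsAt[i - 1].has(s)).length /
          sessionsAt[i - 1].size,
  }));
}

const rows = funnel(
  [
    { type: "session_started", sessionId: "a" },
    { type: "session_started", sessionId: "b" },
    { type: "user_message_sent", sessionId: "a" },
  ],
  ["session_started", "user_message_sent"],
);
// One of two started sessions sent a message: 50% step conversion.
```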

9. Visit /lab

Open /lab for the agentic-iteration playground: curated prompt cards plus a recipe registry generated from recipes/registry.json. Copy a prompt into Claude Code, Codex, or Antigravity and let the agent extend the demo. See the Vibe Coding Guide for the full workflow.

10. Open /support

Open /support for a production-style triage agent graph. The page uses createAgentGraphTransport({ graph: supportGraph, ... }) with three mock A2A specialists: returns, billing, and order. A heuristic intent classifier picks the right specialist for each turn. The page renders the live graph topology with active-node highlighting and a scrolling event feed. The implementation lives at apps/applied-ai-retail/src/lib/supportGraph.ts.
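A heuristic classifier of this kind can be as simple as keyword matching per specialist, with a fallback when nothing matches. A sketch in that spirit — the keyword sets and function names are illustrative, not what supportGraph.ts actually does:

```typescript
// Keyword-based intent classifier routing each turn to a specialist.
type Specialist = "returns" | "billing" | "order";

const KEYWORDS: Record<Specialist, RegExp> = {
  returns: /\b(returns?|refund|rma|exchange)\b/i,
  billing: /\b(charged?|invoice|payment|billing|card)\b/i,
  order: /\b(order|tracking|shipped|delivery)\b/i,
};

function classify(message: string, fallback: Specialist = "order"): Specialist {
  // First matching specialist wins; insertion order sets priority.
  for (const [specialist, pattern] of Object.entries(KEYWORDS) as [Specialist, RegExp][]) {
    if (pattern.test(message)) return specialist;
  }
  return fallback;
}
```

In a graph transport, the classifier's output selects which A2A specialist node receives the turn; everything else (streaming, approvals) flows through unchanged.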

11. Open /computer-use

Open /computer-use for the retail-context computer-use flow. The session boots in mock provider mode (COMPUTER_USE_PROVIDER=mock) so it runs end-to-end without BROWSERBASE_API_KEY. The flow shows the consent UX, the signed SSE screenshot stream, the action log, and an Abort button. Set the environment variables documented in Computer-use to switch to a real Browserbase session.

12. Memory drawer

The chat surface integrates a server-side memory store (src/lib/serverMemoryStore.ts) and an in-chat memory drawer (src/components/chat/MemoryDrawer.tsx). The model can save, update, recall, and delete user facts across sessions; the drawer is the user-facing surface for inspecting and editing what's remembered.
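The store's contract is small: per-user facts with save, update, recall, and delete. A minimal in-memory sketch of that contract (the real serverMemoryStore.ts persists server-side and its shapes may differ):

```typescript
// In-memory sketch of a per-user fact store with the four operations
// the memory drawer exposes. Save and update share one code path.
class MemoryStore {
  private facts = new Map<string, Map<string, string>>();

  save(userId: string, key: string, value: string): void {
    const user = this.facts.get(userId) ?? new Map<string, string>();
    user.set(key, value); // overwrites on update
    this.facts.set(userId, user);
  }

  recall(userId: string): Record<string, string> {
    return Object.fromEntries(this.facts.get(userId) ?? []);
  }

  delete(userId: string, key: string): boolean {
    return this.facts.get(userId)?.delete(key) ?? false;
  }
}

const store = new MemoryStore();
store.save("u1", "favorite-colorway", "indigo");
store.save("u1", "favorite-colorway", "sage"); // update overwrites
```

The drawer then just renders `recall(userId)` and wires its edit/delete buttons to the same operations the model's tools call.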

What this demo proves

  • Chat as the primary commerce surface — Browse → detail → add → cart → checkout → track → return all happen inside one chat panel.
  • Custom ChatTransport — src/lib/retailTransport.ts reads from a shared cart store and emits scripted rich-content events.
  • Approval-gated tool execution — State-changing tools (add_to_cart, checkout, initiate_return) require approval and fail closed when denied.
  • Variant persistence — Size, colorway, and A2UI gift-bundle preferences flow through approval, cart state, checkout, and order summaries.
  • A2UI generative UI — The gift-bundle moment uses the basic A2UI catalog (createSurface, updateComponents, updateDataModel), with actions routed back through normal chat.
  • Agent graph routing — /support runs three A2A specialists (returns / billing / order) behind a heuristic intent classifier, with a live inspector.
  • Computer-use — /computer-use exercises the sandboxed browsing flow with a mock-by-default provider and Browserbase opt-in.
  • Long-term memory — A server-side MemoryStore plus an in-chat memory drawer let the assistant remember user facts across sessions.
  • Live-backend connect, no env vars — Click the gear icon → Connect GECX, paste credentials, and the server encrypts them into an httpOnly session cookie. No .env.local needed.
  • SDK Inspector — A debug drawer wrapper around createDebugBundle() that demonstrates parts, tools, transport, and error introspection.
  • Conversational Commerce Intelligence — The /analytics route renders the SDK's real ProductAnalyticsEvent stream as a polished dashboard: funnel, latency percentiles, tool approval rates, rich-content engagement, error breakdown, transport uptime, and a live event feed. See Analytics.

Optional: connect a live GECX backend

Click the gear icon in the chat header, then choose Connect GECX. Paste:

  • Company ID
  • Tenant
  • Host
  • Menu ID
  • Company secret

The company secret never lands in browser storage. The modal posts once to the server, where the app encrypts the details into an httpOnly session cookie. Subsequent token and proxy calls read credentials server-side per request.
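Sealing credentials into an opaque cookie value can be done with an authenticated cipher so tampering is detectable. A sketch of that pattern using Node's crypto module — the demo's actual scheme, key management, and cookie name are assumptions:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Seal a credentials payload into an opaque token with AES-256-GCM.
// The key lives only on the server; the browser carries ciphertext.
function seal(payload: object, key: Buffer): string {
  const iv = randomBytes(12); // fresh nonce per seal
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ct = Buffer.concat([cipher.update(JSON.stringify(payload), "utf8"), cipher.final()]);
  return Buffer.concat([iv, cipher.getAuthTag(), ct]).toString("base64url");
}

function unseal(token: string, key: Buffer): object {
  const raw = Buffer.from(token, "base64url");
  const decipher = createDecipheriv("aes-256-gcm", key, raw.subarray(0, 12));
  decipher.setAuthTag(raw.subarray(12, 28)); // 16-byte GCM tag
  const pt = Buffer.concat([decipher.update(raw.subarray(28)), decipher.final()]);
  return JSON.parse(pt.toString("utf8"));
}

const key = randomBytes(32);
const cookieValue = seal({ companyId: "acme", tenant: "prod" }, key);
// Sent as e.g.: Set-Cookie: session=<cookieValue>; HttpOnly; Secure; SameSite=Lax
```

With HttpOnly set, browser JavaScript never sees the value; each server-side proxy call unseals it per request.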

There is intentionally no .env.local switch for live mode — this keeps the demo portable for customer calls and prevents credentials from leaking into browser code, logs, or git.

Verification

pnpm --filter applied-ai-retail-demo typecheck
pnpm --filter applied-ai-retail-demo e2e --reporter=list
pnpm e2e:applied

The Playwright suite covers storefront rendering, product detail, the full in-chat journey, A2UI surfaces, approval denial, chat focus behavior, Inspector error links, and the /lab page.

Source: docs/demos/applied-ai-retail.md