Built-in Chat Engine
The Built-in engine is Conduit's default AI chat mode. It sends requests to AI providers (Anthropic, OpenAI) through the Conduit backend, which acts as a proxy.
Features
- Streaming responses with stop/cancel
- Full access to all 42 MCP tools
- Model selection and switching per conversation
- Persistent default model preference
- Markdown rendering in responses
How It Works
Messages are sent to the Conduit backend, which proxies them to the selected AI provider. Responses stream back in real time, and token usage is tracked per conversation.
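Streamed responses typically arrive as Server-Sent Events. The sketch below shows how a client might extract text deltas from a raw SSE chunk; the `delta` field name and `[DONE]` sentinel are assumptions, not Conduit's documented wire format.

```typescript
// Parse one Server-Sent Events chunk into text deltas.
// Event shape ({ delta: string }) is a hypothetical example.
function parseSSEChunk(chunk: string): string[] {
  const deltas: string[] = [];
  for (const line of chunk.split("\n")) {
    if (!line.startsWith("data: ")) continue;
    const payload = line.slice("data: ".length).trim();
    if (payload === "[DONE]") continue; // assumed end-of-stream sentinel
    try {
      const event = JSON.parse(payload);
      if (typeof event.delta === "string") deltas.push(event.delta);
    } catch {
      // Ignore malformed or partial lines.
    }
  }
  return deltas;
}
```

In practice the client would append each delta to the visible message as it arrives, which is what makes stop/cancel mid-response possible.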
Tool Calls
When the AI uses a tool, it's displayed as a compact styled chip with live status — a spinner while running, a checkmark on success, or an error icon on failure. Tool calls are visually separated from AI text.
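The three chip states described above map naturally onto a small discriminated type. This is an illustrative sketch, not Conduit's actual UI code; the type and function names are invented.

```typescript
// The three visual states a tool-call chip can be in.
type ToolCallStatus = "running" | "success" | "error";

interface ToolCallChip {
  toolName: string;
  status: ToolCallStatus;
}

// Map a status to the icon described in the docs.
function chipIcon(status: ToolCallStatus): string {
  switch (status) {
    case "running":
      return "spinner";
    case "success":
      return "checkmark";
    case "error":
      return "error-icon";
  }
}
```

Keeping the status as a closed union means the renderer cannot silently skip a state the backend reports.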
Context Management
Conversations persist in the encrypted vault. Cloud sync is available on paid tiers. Auto-compaction manages the context window on Pro and Team tiers, summarizing older messages to keep conversations flowing without hitting context limits.
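Auto-compaction can be sketched as: when the estimated token count exceeds a budget, replace the older half of the conversation with a summary message. The token heuristic (~4 characters per token) and the split strategy are assumptions for illustration; Conduit's actual compaction logic is not specified here.

```typescript
interface Message {
  role: string;
  content: string;
}

// Rough heuristic: ~4 characters per token (assumption).
const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

// If the conversation exceeds the token budget, summarize the older
// half into a single system message and keep the recent half verbatim.
function compact(
  messages: Message[],
  budget: number,
  summarize: (msgs: Message[]) => string,
): Message[] {
  const total = messages.reduce((n, m) => n + estimateTokens(m.content), 0);
  if (total <= budget) return messages;
  const split = Math.floor(messages.length / 2);
  const summary: Message = {
    role: "system",
    content: summarize(messages.slice(0, split)),
  };
  return [summary, ...messages.slice(split)];
}
```

In a real system `summarize` would itself be an AI call; here it is left as a parameter so the control flow stays visible.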
Available Models
The models available to you depend on your subscription tier:
- Free — Haiku 4.5, GPT-5 Mini
- Pro — adds Sonnet 4.6, Opus 4.5, Opus 4.6, GPT-5.2
- Team — all models including beta releases
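The tier table above amounts to a cumulative mapping, which could be expressed like this. The model identifier strings are placeholders, not Conduit's real model IDs, and the beta list is passed in because those models are not enumerated in the docs.

```typescript
type Tier = "free" | "pro" | "team";

// Placeholder identifiers; actual model IDs may differ.
const FREE_MODELS = ["haiku-4.5", "gpt-5-mini"];
const PRO_MODELS = [
  ...FREE_MODELS,
  "sonnet-4.6",
  "opus-4.5",
  "opus-4.6",
  "gpt-5.2",
];

// Each tier adds to the one below it; Team additionally gets betas.
function modelsForTier(tier: Tier, betaModels: string[] = []): string[] {
  switch (tier) {
    case "free":
      return FREE_MODELS;
    case "pro":
      return PRO_MODELS;
    case "team":
      return [...PRO_MODELS, ...betaModels];
  }
}
```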