User documentation

LocalLightChat

“Click here, paste that, press save.” From first login to ComfyUI image generation, web tools, templates, and exporting chats.

Scope: UI usage · Format: single HTML file · Goal: first successful chat → advanced features

1) Quick start (first successful chat)

Goal: model selected · message sent · answer received
1. Sign in and open the main chat UI (sidebar left, messages center). An empty state is normal if you have no chats yet.
2. Open the top-right menu → Settings.
3. Go to Connections → + New Connection. Add a provider (OpenAI-compatible endpoint), then Test → Save.
4. Close settings. In the header click Select model and pick a model.
5. Create a chat: sidebar → + New. Type a message and press Enter. New line: Shift+Enter.
Reality check: If the model list is empty, you don’t have a working Connection yet. Fix that first.

2) UI tour (what is where)

mental map in 60 seconds

Left sidebar

Chats + folders. Create folders, search chats, drag & drop chats into folders. Each chat has a menu (rename, clone, export…).

Top header

Model picker + menus. The button opens Chat Controls. The menu has Settings, Import/Export, Docs, Sign out.

Chat area

Your conversation. Assistant messages can show “Reasoning” (collapsible), attachments (files/images/text), and action buttons (copy, regen…).

Composer

Where you type. Attach files/links/text, toggle web tools (🌐), use voice (if enabled), stop generation, send.

| Action | How |
| --- | --- |
| Send message | Enter |
| New line | Shift + Enter |
| Close menus/modals | Esc |

3) Settings (core setup)

top-right → Settings

General

Max tool-call iterations controls how many tool calls the AI may chain in one answer before the backend stops it. (Usually only admins change this.)
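For the curious, the cap works roughly like this sketch. The function names (`run_model`, `execute_tool`) and the message shapes are illustrative, not LocalLightChat's internals:

```python
def answer(prompt, run_model, execute_tool, max_tool_iterations=4):
    """Let the model chain tool calls, but stop after max_tool_iterations."""
    history = [{"role": "user", "content": prompt}]
    for _ in range(max_tool_iterations):
        reply = run_model(history)
        if reply.get("tool_call") is None:   # model answered directly
            return reply["content"]
        # Model asked for a tool (e.g. web_read); run it and feed back the result.
        history.append({"role": "tool",
                        "content": execute_tool(reply["tool_call"])})
    # Cap reached: the backend cuts the chain here.
    return "[stopped: tool-call limit reached]"
```

A higher cap lets the model do more research per answer, at the cost of latency and tokens.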

Users

Update your email, change password. Admins may see user management (create users, permissions).

Good default: Don’t touch the “limit knobs” unless you know why. Most users only need Connections, Web/Search, and Image.

4) Connections & models

Connections define where requests go

A Connection is where LocalLightChat sends requests (OpenAI-compatible endpoints). Models are loaded from your connections.

1. Settings → Connections → + New Connection.
2. Fill in Name, Base URL, API Key. Optional: extra headers as JSON.
3. Click Test. If it succeeds, click Save.
4. Back in the header, open Select model and pick your model.
Pricing (optional): If you fill token prices (Settings → Pricing), the status bar and message stats can show $ cost.
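If you want to verify a connection outside the UI, an OpenAI-compatible endpoint answers `GET {base_url}/v1/models` with a Bearer token. A minimal sketch (the base URL and key below are placeholders):

```python
def models_request(base_url, api_key, extra_headers=None):
    """Build the URL + headers for listing models from an
    OpenAI-compatible endpoint (GET {base_url}/v1/models)."""
    url = base_url.rstrip("/") + "/v1/models"
    headers = {"Authorization": f"Bearer {api_key}"}
    headers.update(extra_headers or {})  # same idea as the "extra headers as JSON" field
    return url, headers

# To actually call it (requires network access to your endpoint):
# import urllib.request, json
# url, headers = models_request("http://localhost:8000", "sk-placeholder")
# req = urllib.request.Request(url, headers=headers)
# print(json.load(urllib.request.urlopen(req))["data"])
```

If that request fails in a terminal, the Test button in Settings will fail too: fix reachability and the key first.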

5) Chats, folders, templates

organize + reuse

Create chats

Sidebar → + New. A fresh chat starts with your selected model and defaults.

Create folders

Sidebar → + Folder. You can nest folders and move them around (drag & drop or menu).

Move chats

Drag a chat onto a folder, or open the chat menu → Move to….

Chat menu actions

Each chat has a menu: Rename, Clone, Save as template, Compress & Clone, Export, Delete.

Templates: Save a chat as a template (chat menu → “Save as template”). Then create a new chat from it via + New ▾ → pick a template.

6) Files, links, and text sources

composer “+”

In the composer, click the + button to attach sources. Attachments appear as chips above the input. You can remove them before sending.

1. Click + → choose Add files, Add link, or Add text.
2. Attachments show up as chips. Click a chip to remove it if you changed your mind.
3. Press Enter to send. The model can use those sources in the response.
| Type | What it’s for | Notes |
| --- | --- | --- |
| Files | PDFs, images, docs, sheets, text | Large docs may be truncated to stay within limits. |
| Link | Provide a URL as a source | You can also enable Web tools to let the model fetch pages directly. |
| Text | Paste raw text as a “document” | Great for snippets you don’t want as an upload. |
Pro tip: For long web pages, enable “Prefer saving web sources as attachments” (Settings → Web) so fetched URLs are stored as attachments automatically.
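Why “may be truncated”: the chat has a context budget, so very large documents get cut before they reach the model. The exact limits and strategy are LocalLightChat internals; this sketch only illustrates the head-keeping idea:

```python
def clip_source(text, max_chars=12000):
    """Illustrative truncation: keep the beginning, mark the cut.
    The real limit is token-based and configured server-side."""
    if len(text) <= max_chars:
        return text
    return text[:max_chars] + "\n[... truncated ...]"
```

Practical consequence: if the answer you need lives at the end of a huge PDF, paste that part as a Text source instead of attaching the whole file.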

7) Web tools (Web + Web search)

🌐 per-chat toggle

Web

Lets the model open URLs and read page content (tool: web_read). Useful when you want grounded answers from specific pages.

Web search

Lets the model search via your configured provider (Serper/SearchNGX/Custom). Good for “find sources” tasks.

1. Settings → Web: optionally enable “Activate web automatically” so new chats start with Web on.
2. Settings → Search: pick a provider and add the API key / base URL.
3. In any chat: click 🌐 → toggle Web and/or Web search.
Local network safety switch: “Allow local/private IPs” exists for Web and for Search custom endpoints. Turn it on only if you intentionally use LAN endpoints.
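What that switch guards against: a fetch tool that follows arbitrary URLs can be pointed at loopback or LAN addresses (a classic SSRF risk). LocalLightChat's actual check is internal; this sketch shows the kind of test the switch bypasses, using Python's standard `ipaddress` module:

```python
import ipaddress

def is_private_target(ip_text):
    """True for addresses a 'block local/private IPs' guard would refuse:
    loopback (127.0.0.1), RFC 1918 LAN ranges, link-local, etc."""
    ip = ipaddress.ip_address(ip_text)
    return ip.is_private or ip.is_loopback or ip.is_link_local
```

So with the switch off, a URL like http://192.168.0.50/... is refused; turn it on only when you deliberately point Web or a custom Search endpoint at your own LAN.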

8) Image generation (OpenAI + ComfyUI)

🌐 → Image generation

OpenAI Image API

Fast setup: enter API key + pick model/size/quality. Best if you want “prompt → image” with minimal fuss.

ComfyUI

Power setup: you control the full workflow JSON, and LLC injects prompt/seed/steps/size via bindings. Ideal for custom pipelines.

Important: If you enable “Always attach last image to messages”, every new message includes the last generated image — meaning you must use a model that can see images.

9) Configure ComfyUI (step-by-step)

zero theory · just working config
1. Run ComfyUI on a machine reachable by your LocalLightChat server. Note the URL (e.g. http://192.168.0.50:8188).
2. Settings → Image. Set Provider to ComfyUI.
3. Paste your ComfyUI server URL into ComfyUI server URL.
4. In ComfyUI: build (or load) the workflow. Export API JSON: File → Export (API).
5. Back in Settings → Image: paste the exported JSON into Workflow JSON (Template).
6. Click Auto-detect next to Bindings JSON (it finds the prompt + parameter paths).
7. Set defaults (optional): steps, guidance, width/height, seed, batch size, Max wait.
8. Decide precedence: enable Defaults override workflow values if LLC should overwrite JSON values. Leave it off if the workflow stays authoritative.
9. In a chat: click 🌐 → enable Image generation. Ask for an image.
If it fails: usually (a) wrong URL/reachability, (b) wrong bindings, or (c) ComfyUI queue/timeouts.
Minimal sanity test prompt (copy/paste):
Generate an image: a clean studio photo of a black sneaker on a reflective surface, soft rim light, 3:2 composition.
If you use ComfyUI: keep it simple and don’t stack 5 different LoRAs in the first test.
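What bindings actually do, conceptually: a ComfyUI API-format workflow is a JSON object of numbered nodes, and each binding is a path into one node's inputs where LLC writes the prompt, seed, steps, etc. The binding format and node ids below are illustrative (use what Auto-detect produces), but the mechanism looks like this:

```python
import json

def apply_bindings(workflow, bindings, values):
    """Write values into an API-format workflow via dotted paths.

    bindings example (hypothetical): {"prompt": "6.inputs.text",
                                      "seed":   "3.inputs.seed"}
    """
    wf = json.loads(json.dumps(workflow))  # deep copy; keep the template intact
    for name, path in bindings.items():
        if name not in values:
            continue
        *parents, leaf = path.split(".")
        node = wf
        for key in parents:
            node = node[key]
        node[leaf] = values[name]          # e.g. inject the chat's prompt text
    return wf
```

This also explains troubleshooting case D below: if a binding path points at the wrong node, your prompt never lands in the workflow and ComfyUI keeps rendering its hardcoded values.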

10) Voice (dictation + read aloud)

speech→text + text→speech

Voice features depend on your settings. There are two separate things: dictation (speech → text) and read aloud (text → speech).

1. Settings → Voice → paste your Voice API Key (OpenAI).
2. Turn on Activate voice in every chat if you want the voice button visible by default.
3. For dictation: toggle Send automatically after transcription.
4. For read aloud: choose Browser (built-in TTS) or OpenAI (requires key). Then pick the voice.
Where to use it: On any assistant message, use the “speaker” action button to read it aloud.

11) Chat Controls (system prompt + advanced params)

header

System prompt

Defines the “rules” for the assistant in this chat. Think: role, tone, constraints, formatting preferences.

Advanced params

These parameters shape generation behavior (sampling, repetition control, token limits, etc.). For a first setup, you can leave them all at their defaults.

Reset: Use “Reset All” to return everything to defaults if responses got weird.
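For context, most of these knobs map onto standard fields of an OpenAI-compatible chat request; whether your backend honors each one is provider-specific. A minimal sketch of how such a request body is assembled (field names are the standard API's, not a statement about LLC internals):

```python
def chat_payload(model, messages, **sampling):
    """Assemble an OpenAI-compatible /v1/chat/completions body.

    Typical sampling knobs (all optional): temperature, top_p,
    frequency_penalty, presence_penalty, max_tokens."""
    payload = {"model": model, "messages": messages}
    # Only include knobs the user actually set; None means "use server default".
    payload.update({k: v for k, v in sampling.items() if v is not None})
    return payload
```

That "only send what was set" behavior is why resetting a param in Chat Controls returns you to the backend's own default rather than to some hardcoded number.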

12) Nice extras (Mermaid, HTML preview, message actions)

quality-of-life stuff

Message actions

On each message: Edit, Copy, Read aloud, view Stats, Regenerate, Resume (if stopped), and Delete.

Reasoning panel

If a model returns reasoning, you’ll see a “🧠 Reasoning” box. Click the header to expand/collapse.

Mermaid diagrams

If the assistant outputs a ```mermaid block, it renders as a diagram automatically (and you can still copy/download).

HTML/SVG previews

For ```html or SVG code blocks, you can open a preview directly from the code block controls.

Example Mermaid you can try:
```mermaid
graph TD
  A[User] -->|sends message| B(LocalLightChat)
  B --> C{Tools enabled?}
  C -->|Web/Search/Image| D[Tool calls]
  C -->|No tools| E[Model only]
  D --> F[Answer]
  E --> F[Answer]
```

13) Import / Export / Backup

backups matter

Export all chats (ZIP)

Top-right menu → Export. Downloads all chats as a ZIP (JSON inside).

Export one chat

Chat menu → Export → choose OpenAI JSON, JSON, or TXT.

Import (file picker)

Top-right menu → Import, then select one or multiple .json exports.

Import (drag & drop)

Drag exported JSON files onto the chat list. Drop onto a folder to import directly into that folder.

Best practice: Use “Export all chats” regularly. Fastest way to migrate or recover after an update.
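The ZIP is just JSON files, so you can inspect a backup with standard tools before relying on it. A quick sketch (it assumes the documented layout of one JSON file per chat; exact file naming may differ):

```python
import io, json, zipfile

def list_exported_chats(zip_bytes):
    """Names of chat JSON files inside an 'Export all chats' ZIP."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        return sorted(n for n in zf.namelist() if n.endswith(".json"))
```

If a freshly downloaded backup lists zero JSON files, don't trust it: re-export before upgrading or migrating.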

Troubleshooting

common failures, blunt fixes
A. Model list is empty
No working connection (or it can’t list models). Settings → Connections → Test.
B. Web search toggle exists, but searches do nothing
Configure Settings → Search (provider + key/base URL). Then enable 🌐 → Web search in the chat.
C. ComfyUI: timeout / no image returned
Increase “Max wait (seconds)”. Also ensure the ComfyUI URL is reachable from the server (not just your laptop).
D. ComfyUI: images generate, but prompt/size/steps don’t change
Bindings are wrong, or “Defaults override workflow values” is off while the workflow hardcodes values. Run Auto-detect again, then re-test.
E. Send button disabled
You need either text in the input or at least one attachment. Also: while a request is generating, send is disabled until it finishes or you press Stop.
F. Voice button missing
Enable it in Settings → Voice (“Activate voice in every chat”) and ensure a Voice API key is set.
When in doubt, go minimal: one Connection, one model, no tools, no templates. Then add features back one by one.