Troubleshooting
Common issues and solutions for AgentiBridge.
Postgres password authentication failed
FATAL: password authentication failed for user "agentibridge"
Cause: The POSTGRES_PASSWORD env var only sets the password when the Postgres data volume is first initialized. If you change the password in agentibridge.env after the volume already exists, the running Postgres still uses the old password.
Fix (preserve data): Update the password inside Postgres to match your agentibridge.env:
docker exec agentibridge-postgres psql -U agentibridge \
-c "ALTER USER agentibridge PASSWORD 'your-new-password';"
Fix (fresh start): Delete the volume and recreate (loses all stored data):
agentibridge stop
docker volume rm sb_postgres_data
agentibridge install
Semantic search returns “not available”
{"success": false, "error": "Embedding backend not available..."}
Cause: Semantic search is opt-in. Three things must all be configured:
- AGENTIBRIDGE_EMBEDDING_ENABLED=true — the feature flag (defaults to false)
- LLM embedding config — LLM_API_BASE, LLM_API_KEY, LLM_EMBED_MODEL
- Postgres with pgvector — POSTGRES_URL pointing to a pgvector-enabled database
Fix: Add all three to your agentibridge.env (or .env for native mode):
AGENTIBRIDGE_EMBEDDING_ENABLED=true
LLM_API_BASE=https://your-llm-endpoint/v1
LLM_API_KEY=your-key
LLM_EMBED_MODEL=text-embedding-3-small
Then recreate the container (see next section).
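Before recreating, you can sanity-check the environment. A minimal pre-flight sketch (a hypothetical helper, not an AgentiBridge command; the variable names match the list above, with POSTGRES_URL included as the third prerequisite):

```shell
# Pre-flight sketch (not an AgentiBridge command): report which of the
# semantic-search prerequisites are missing from the current environment.
check_embedding_env() {
  missing=""
  [ "${AGENTIBRIDGE_EMBEDDING_ENABLED:-}" = "true" ] || missing="$missing AGENTIBRIDGE_EMBEDDING_ENABLED"
  for v in LLM_API_BASE LLM_API_KEY LLM_EMBED_MODEL POSTGRES_URL; do
    eval "val=\${$v:-}"
    [ -n "$val" ] || missing="$missing $v"
  done
  if [ -n "$missing" ]; then
    echo "missing:$missing"
    return 1
  fi
  echo "ok"
}
```

Run it in a shell that has sourced your agentibridge.env; anything it prints after "missing:" still needs to be set.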
Config changes not taking effect after editing agentibridge.env
Cause: agentibridge restart restarts the same containers with the same environment. It does not reload agentibridge.env.
Fix: Stop and recreate the containers:
agentibridge stop && agentibridge install
Or with Docker Compose directly:
docker compose -f ~/.agentibridge/docker-compose.yml up -d --force-recreate
LLM endpoint unreachable from Docker
httpx.ConnectError: [Errno -2] Name or address not found
Cause: The container cannot reach your LLM endpoint. Common reasons:
- DNS resolution — Docker containers use their own DNS. localhost inside the container refers to the container itself, not the host.
- Cloudflare Access — If your LLM proxy is behind Cloudflare Access, the container needs service-token headers.
Fix (host services): Use host.docker.internal instead of localhost:
LLM_API_BASE=http://host.docker.internal:4000/v1
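The substitution itself is mechanical; a small sketch (hypothetical helper, not part of the CLI) that rewrites a host-local API base for use inside a container:

```shell
# Sketch: rewrite a host-local URL so it resolves from inside a container.
# localhost / 127.0.0.1 refer to the container itself, so swap in the
# special host alias instead.
to_docker_host() {
  printf '%s\n' "$1" | sed \
    -e 's|//localhost|//host.docker.internal|' \
    -e 's|//127\.0\.0\.1|//host.docker.internal|'
}
```

Note that on Linux, host.docker.internal is not defined by default; the container needs --add-host=host.docker.internal:host-gateway (Docker Engine 20.10+) or the equivalent extra_hosts entry in Compose.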
Fix (Cloudflare Access): Add service-token credentials to agentibridge.env:
CF_ACCESS_CLIENT_ID=your-client-id.access
CF_ACCESS_CLIENT_SECRET=your-client-secret
The LLM client sends these as CF-Access-Client-Id / CF-Access-Client-Secret headers automatically when set.
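To see what the client will send, a sketch that derives those headers from the env vars (a hypothetical helper, not part of the CLI; the header names are the standard Cloudflare Access service-token headers):

```shell
# Sketch: print the Cloudflare Access service-token headers the LLM
# client adds when both env vars are set; prints nothing otherwise.
cf_access_headers() {
  if [ -n "${CF_ACCESS_CLIENT_ID:-}" ]; then
    echo "CF-Access-Client-Id: ${CF_ACCESS_CLIENT_ID}"
  fi
  if [ -n "${CF_ACCESS_CLIENT_SECRET:-}" ]; then
    echo "CF-Access-Client-Secret: ${CF_ACCESS_CLIENT_SECRET}"
  fi
}
```

Each printed line can be passed to curl with -H to probe the proxy the same way the client does.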
Connection refused / 401 Unauthorized
See Connecting Clients — Troubleshooting for connection and authentication issues.
No semantic search results (0 chunks)
{"success": true, "results": [], "total": 0}
Cause: Sessions haven’t been embedded yet. Embedding happens automatically during each collector poll cycle when AGENTIBRIDGE_EMBEDDING_ENABLED=true is set.
Diagnose with the CLI:
agentibridge embeddings # shows config, chunk counts, coverage
agentibridge embeddings --check-llm # also tests LLM endpoint connectivity
Common reasons for 0 chunks:
- Collector hasn’t completed its first cycle yet (wait ~60s after startup)
- AGENTIBRIDGE_EMBEDDING_ENABLED is not set to true
- LLM endpoint is unreachable from the container (see “LLM endpoint unreachable” above)
- Using agentibridge install --test but embedding config is only in agentibridge.env (not the repo root .env) — see the env file table
Fix: Trigger an immediate collection:
# Via MCP tool
collect_now
# Or wait for the next poll cycle (default: 60 seconds)
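Rather than sleeping a fixed 60 seconds, you can poll until the check passes. A generic retry sketch (hypothetical helper, not part of the CLI):

```shell
# Sketch: run a command until it succeeds or attempts run out, sleeping
# between tries — useful for waiting out the collector's first cycle.
retry() {
  attempts=$1; delay=$2; shift 2
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if "$@"; then return 0; fi
    i=$((i + 1))
    sleep "$delay"
  done
  return 1
}
```

For example, retry 5 15 followed by the chunk-count command below (wrapped so a zero count exits nonzero) waits up to ~75 seconds for the first chunks to appear.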
Check the chunk count directly:
docker exec agentibridge-postgres psql -U agentibridge \
-c "SELECT count(*) FROM transcript_chunks;"
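The coverage figure that agentibridge embeddings reports is just embedded chunks over total chunks. A sketch of that arithmetic only (how the CLI counts embedded rows is an assumption about the schema, so this takes both counts as arguments):

```shell
# Sketch: integer percentage of chunks that have an embedding, given the
# total chunk count and the embedded chunk count.
coverage_pct() {
  total=$1; embedded=$2
  if [ "$total" -eq 0 ]; then echo 0; return 0; fi
  echo $(( embedded * 100 / total ))
}
```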