Open WebUI Deployment & Usage Guide
1. Prerequisites
Required
- Docker & Docker Compose (recommended) or Kubernetes cluster (kubectl, kustomize, or helm)
- LLM Backend (choose one):
  - Ollama running locally or remotely
  - OpenAI-compatible API endpoint (e.g., OpenAI, GroqCloud, Mistral, OpenRouter, LM Studio)
- Hardware: Minimum 4GB RAM (more recommended for RAG/vector databases)
Optional
- PostgreSQL database (for production scaling instead of SQLite)
- Vector database (Chroma, Qdrant, Weaviate, etc.) if using advanced RAG features
- Reverse proxy (nginx, Traefik) for SSL termination and production routing
2. Installation
Docker (Recommended)

```bash
# Create a dedicated directory
mkdir -p ~/open-webui && cd ~/open-webui

# Create docker-compose.yml with minimal configuration
cat > docker-compose.yml << 'EOF'
version: '3.8'
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main  # :cuda for GPU support, :ollama to bundle Ollama
    container_name: open-webui
    restart: unless-stopped
    ports:
      - "3000:8080"
    volumes:
      - ./data:/app/backend/data
    environment:
      - OLLAMA_BASE_URL=http://host.docker.internal:11434  # adjust if Ollama runs elsewhere
      # - OPENAI_API_BASE_URL=https://api.openai.com/v1    # uncomment for OpenAI
      # - OPENAI_API_KEY=sk-...                            # uncomment and set your key
    # extra_hosts:
    #   - "host.docker.internal:host-gateway"  # needed for host.docker.internal on Linux
    networks:
      - open-webui-network

networks:
  open-webui-network:
    driver: bridge
EOF

# Start the service
docker-compose up -d

# Access at http://localhost:3000
```
Kubernetes

```bash
# Using kubectl (basic manifest example)
kubectl create namespace open-webui

kubectl apply -f - << 'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: open-webui
  namespace: open-webui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: open-webui
  template:
    metadata:
      labels:
        app: open-webui
    spec:
      containers:
        - name: open-webui
          image: ghcr.io/open-webui/open-webui:main
          ports:
            - containerPort: 8080
          env:
            - name: OLLAMA_BASE_URL
              value: "http://ollama-service:11434"  # adjust to your Ollama service
          volumeMounts:
            - name: data
              mountPath: /app/backend/data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: open-webui-pvc  # create this PVC beforehand
EOF

# Expose via Service and Ingress as needed
```

Helm Chart (if available)

```bash
helm repo add open-webui https://helm.openwebui.com
helm install open-webui open-webui/open-webui -n open-webui --create-namespace
```
3. Configuration
Environment Variables
Set these in your docker-compose.yml or Kubernetes manifest:
| Variable | Description | Default | Required |
|---|---|---|---|
| `OLLAMA_BASE_URL` | URL of the Ollama API | `http://localhost:11434` | Yes (if using Ollama) |
| `OPENAI_API_BASE_URL` | OpenAI-compatible API endpoint | - | No |
| `OPENAI_API_KEY` | API key for OpenAI-compatible service | - | No |
| `WEBUI_AUTH` | Enable authentication (`true`/`false`) | `true` | No |
| `WEBUI_SECRET_KEY` | Secret key for session/token signing | randomly generated | Recommended for auth |
| `ENABLE_OAUTH_SIGNUP` | Enable OAuth signup (`true`/`false`) | `false` | No |
| `OPENID_PROVIDER_URL` | OpenID Connect provider URL | - | No |
| `ENABLE_LDAP` | Enable LDAP authentication | `false` | No |
| `DATABASE_URL` | Database connection string | `sqlite:///./data/webui.db` | No |
| `VECTOR_DB` | Vector database for RAG (`chroma`, `qdrant`, etc.) | `chroma` | No |
| `RAG_EMBEDDING_MODEL` | Embedding model for RAG | `all-MiniLM-L6-v2` | No |
| `AIOHTTP_CLIENT_TIMEOUT` | Timeout for external API calls (seconds) | `300` | No |
| `ENABLE_FORWARD_USER_INFO_HEADERS` | Forward user info headers to the LLM backend | `false` | No |
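The resolution logic is simple: an environment variable, if set, overrides the documented default. A minimal sketch of that behavior (the defaults come from the table above; the `resolve` helper itself is hypothetical, not Open WebUI's actual configuration code):

```python
import os

# Documented defaults from the table above (illustrative subset).
DEFAULTS = {
    "OLLAMA_BASE_URL": "http://localhost:11434",
    "VECTOR_DB": "chroma",
    "AIOHTTP_CLIENT_TIMEOUT": "300",
}

def resolve(name, env=None):
    """Return the environment override if set, otherwise the documented default."""
    env = os.environ if env is None else env
    return env.get(name, DEFAULTS[name])
```

For example, `resolve("VECTOR_DB", {"VECTOR_DB": "qdrant"})` yields `"qdrant"`, while an empty environment falls back to `"chroma"`.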
Configuration File

Settings changed in the admin UI are persisted to a config.json inside the data directory (with the Docker setup above, ./data/config.json); environment variables take precedence on startup. Example (keys are illustrative):

```json
{
  "ui": {
    "theme": "light",
    "default_model": "llama2"
  },
  "features": {
    "enable_rag": true,
    "enable_web_search": true,
    "enable_image_generation": true
  }
}
```
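A minimal sketch of how such a file could be merged over built-in defaults at startup (illustrative only; `DEFAULT_CONFIG` and `load_config` are not Open WebUI internals):

```python
import json
from pathlib import Path

# Illustrative defaults; not Open WebUI's actual internal schema.
DEFAULT_CONFIG = {
    "ui": {"theme": "dark", "default_model": None},
    "features": {"enable_rag": True, "enable_web_search": False},
}

def load_config(path):
    """Shallow-merge per-section overrides from a config.json over the defaults."""
    merged = {section: dict(values) for section, values in DEFAULT_CONFIG.items()}
    p = Path(path)
    if p.exists():
        for section, values in json.loads(p.read_text()).items():
            merged.setdefault(section, {}).update(values)
    return merged
```

Keys absent from the file keep their default values, so a config.json only needs to list the settings you actually want to change.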
4. Build & Run
Using Docker (Production)

```bash
# Pull the latest image
docker pull ghcr.io/open-webui/open-webui:main

# Start with docker-compose (as shown in Installation)
docker-compose up -d

# View logs
docker-compose logs -f open-webui

# Stop
docker-compose down
```

Local Development (Python)

```bash
# Clone repository
git clone https://github.com/open-webui/open-webui.git
cd open-webui

# Build the frontend (SvelteKit)
npm install
npm run build

# Install backend dependencies
cd backend
pip install -r requirements.txt -U

# Run the backend, which serves the built frontend (default port 8080)
bash start.sh

# Access at http://localhost:8080
```

Note: The project uses a monorepo structure: the SvelteKit frontend lives at the repository root and the backend (backend/) is a FastAPI application that serves the built frontend from the same port.
5. Deployment
Recommended Platforms
- Docker Compose – Single-server deployments (VPS, bare metal)
- Kubernetes – Scalable cluster deployments (EKS, GKE, AKS, self-managed)
- Cloud Container Services:
  - AWS ECS/Fargate
  - Google Cloud Run
  - Azure Container Instances
- Railway/Render – Simplified PaaS for containers
Production Checklist
- ✅ Use the `:cuda` tag for GPU acceleration (NVIDIA GPU + CUDA drivers)
- ✅ Mount a persistent volume for `./data` (chat history, user data, uploaded files)
- ✅ Set `WEBUI_SECRET_KEY` to a strong random string
- ✅ Enable authentication (`WEBUI_AUTH=true`) for public-facing instances
- ✅ Configure a reverse proxy (nginx/Traefik) with SSL termination
- ✅ Set resource limits (CPU/RAM) in Docker/K8s manifests
- ✅ Back up the SQLite database (`data/webui.db`) or use PostgreSQL
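A strong random value for `WEBUI_SECRET_KEY` can be generated with Python's standard `secrets` module, for example:

```python
import secrets

# 32 random bytes rendered as 64 hex characters; pass the printed value to
# WEBUI_SECRET_KEY in your compose file or Kubernetes manifest.
secret_key = secrets.token_hex(32)
print(secret_key)
```

`openssl rand -hex 32` produces an equivalent value if Python is not at hand.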
Example: Nginx Reverse Proxy

```nginx
server {
    listen 80;
    server_name your-domain.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    server_name your-domain.com;

    ssl_certificate /path/to/cert.pem;
    ssl_certificate_key /path/to/key.pem;

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        # WebSocket support (needed for streaming chat responses)
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```
6. Troubleshooting
Common Issues & Solutions
| Issue | Cause | Solution |
|---|---|---|
| Cannot connect to Ollama | Wrong OLLAMA_BASE_URL or Ollama not running | Verify Ollama is running (curl http://localhost:11434/api/tags) and set the correct URL in the environment. In Docker, use host.docker.internal on Mac/Windows; on Linux, add --add-host=host.docker.internal:host-gateway or use host networking. |
| Models not appearing | Ollama API inaccessible or model not pulled | Check Ollama logs. Pull model manually: ollama pull llama2. Ensure network connectivity between Open WebUI and Ollama. |
| Authentication fails | Missing/invalid WEBUI_SECRET_KEY or session issues | Set a strong WEBUI_SECRET_KEY. Clear browser cookies. Check container logs for auth errors. |
| RAG/vector DB errors | Missing vector DB or embedding model | Install required vector DB (e.g., chromadb). Set VECTOR_DB correctly. Ensure embedding model downloads on first run (requires internet). |
| File upload fails | Volume permission issues | Ensure Docker volume ./data is writable by container user (UID 1000). On Linux: sudo chown -R 1000:1000 ./data. |
| Slow responses/timeouts | Default timeouts too short for large models | Increase AIOHTTP_CLIENT_TIMEOUT (e.g., 600). For OpenAI API, check rate limits. |
| Web search not working | Missing API keys for search providers | Configure provider-specific keys (e.g., GOOGLE_PSE_API_KEY, SEARXNG_URL) in environment or UI settings. |
| Image generation fails | ComfyUI/Automatic1111 not reachable | Set IMAGE_GENERATION_ENGINE and corresponding COMFYUI_BASE_URL or AUTOMATIC1111_BASE_URL. Ensure those services are running and accessible. |
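For the "models not appearing" case, it can help to inspect Ollama's /api/tags response directly. A small sketch of extracting model names from that payload (`installed_models` is a hypothetical helper; the commented URL is the default local endpoint):

```python
def installed_models(tags_payload):
    """Return model names from an Ollama /api/tags response, which has the
    shape {"models": [{"name": "llama2:latest", ...}, ...]}."""
    return [m["name"] for m in tags_payload.get("models", [])]

# Live check against a running Ollama instance:
#   import json
#   from urllib.request import urlopen
#   payload = json.load(urlopen("http://localhost:11434/api/tags"))
#   print(installed_models(payload))
```

An empty list from a reachable endpoint means no model has been pulled yet; a connection error means the URL or network path is wrong.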
Log Inspection
```bash
# Docker
docker-compose logs open-webui
docker-compose logs --tail=100 open-webui

# Kubernetes
kubectl logs -n open-webui deployment/open-webui
```
Database Issues
- SQLite locked/readonly: Check volume permissions. Ensure no other process is accessing the DB file.
- PostgreSQL connection refused: Verify the `DATABASE_URL` format (`postgresql://user:password@host:port/dbname`). Ensure network connectivity and correct credentials.
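To rule out a malformed connection string before chasing network or credential problems, the URL can be split into its parts with the standard library (a debugging sketch; `check_pg_url` is not part of Open WebUI):

```python
from urllib.parse import urlsplit

def check_pg_url(url):
    """Split a postgresql:// URL into its components for inspection."""
    parts = urlsplit(url)
    if parts.scheme not in ("postgresql", "postgres"):
        raise ValueError(f"unexpected scheme: {parts.scheme!r}")
    return {
        "user": parts.username,
        "host": parts.hostname,
        "port": parts.port or 5432,  # PostgreSQL default port
        "dbname": parts.path.lstrip("/"),
    }
```

A wrong scheme, a missing port, or an unexpected database name shows up immediately in the returned dict.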
Reset/Recover
- To reset the admin password: use the `reset-admin` CLI command inside the container (if available) or manually update the database.
- To clear all data: stop the container, delete the `./data` volume, and restart (⚠️ irreversible data loss).
For more details, visit the [Open WebUI Documentation](https://docs.openwebui.com).