
Open WebUI Deployment & Usage Guide

1. Prerequisites

Required

  • Docker & Docker Compose (recommended) or Kubernetes cluster (kubectl, kustomize, or helm)
  • LLM Backend (choose one):
    • Ollama running locally or remotely
    • OpenAI-compatible API endpoint (e.g., OpenAI, GroqCloud, Mistral, OpenRouter, LMStudio)
  • Hardware: Minimum 4GB RAM (more recommended for RAG/vector databases)

Optional

  • PostgreSQL database (for production scaling instead of SQLite)
  • Vector database (Chroma, Qdrant, Weaviate, etc.) if using advanced RAG features
  • Reverse proxy (nginx, Traefik) for SSL termination and production routing
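Before deploying, it can save a failed first run to verify the required tooling is on PATH. A minimal sketch (adjust the tool list to your setup):

```shell
#!/bin/sh
# Check that each required CLI tool is installed before deploying.
require() {
  command -v "$1" >/dev/null 2>&1 || { echo "missing: $1" >&2; return 1; }
}

missing=0
for tool in docker curl; do
  require "$tool" || missing=1
done
[ "$missing" -eq 0 ] && echo "all prerequisites found"
```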

2. Installation

Docker (Recommended)

# Create a dedicated directory
mkdir -p ~/open-webui && cd ~/open-webui

# Create docker-compose.yml with minimal configuration
cat > docker-compose.yml << 'EOF'
version: '3.8'
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main  # :ollama bundles Ollama in the image; :cuda for NVIDIA GPU support
    container_name: open-webui
    restart: unless-stopped
    ports:
      - "3000:8080"
    volumes:
      - ./data:/app/backend/data
    environment:
      - OLLAMA_BASE_URL=http://host.docker.internal:11434  # adjust if Ollama runs elsewhere
      # - OPENAI_API_BASE_URL=https://api.openai.com/v1  # uncomment for OpenAI
      # - OPENAI_API_KEY=sk-...  # uncomment and set your key
    extra_hosts:
      - "host.docker.internal:host-gateway"  # lets host.docker.internal resolve on Linux (Docker 20.10+)
    networks:
      - open-webui-network

networks:
  open-webui-network:
    driver: bridge
EOF

# Start the service
docker-compose up -d

# Access at http://localhost:3000
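`docker-compose up -d` returns before the app is actually ready to serve requests. A small retry helper can gate follow-up steps such as smoke tests (a sketch; the `/health` endpoint path is an assumption — adjust as needed):

```shell
#!/bin/sh
# Retry a command up to N times, one second apart; fails if it never succeeds.
wait_for() {
  tries="$1"; shift
  i=0
  while [ "$i" -lt "$tries" ]; do
    "$@" && return 0
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Example: block until the UI answers (health path assumed)
# wait_for 30 curl -fsS http://localhost:3000/health
```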

Kubernetes

# Using kubectl (basic manifest example)
kubectl create namespace open-webui
kubectl apply -f - << 'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: open-webui
  namespace: open-webui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: open-webui
  template:
    metadata:
      labels:
        app: open-webui
    spec:
      containers:
      - name: open-webui
        image: ghcr.io/open-webui/open-webui:ollama
        ports:
        - containerPort: 8080
        env:
        - name: OLLAMA_BASE_URL
          value: "http://ollama-service:11434"  # adjust to your Ollama service
        volumeMounts:
        - name: data
          mountPath: /app/backend/data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: open-webui-pvc
EOF

# Expose via Service and Ingress as needed
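The manifest above only creates the Deployment; a matching PVC, Service, and Ingress might look like the following sketch (storage size, host name, and ingress class are placeholders — adjust to your cluster):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: open-webui-pvc
  namespace: open-webui
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 2Gi
---
apiVersion: v1
kind: Service
metadata:
  name: open-webui
  namespace: open-webui
spec:
  selector:
    app: open-webui
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: open-webui
  namespace: open-webui
spec:
  ingressClassName: nginx        # placeholder
  rules:
  - host: webui.example.com      # placeholder
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: open-webui
            port:
              number: 80
```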

Helm Chart (if available)

helm repo add open-webui https://helm.openwebui.com
helm install open-webui open-webui/open-webui -n open-webui --create-namespace

3. Configuration

Environment Variables

Set these in your docker-compose.yml or Kubernetes manifest:

| Variable | Description | Default | Required |
|---|---|---|---|
| `OLLAMA_BASE_URL` | URL of the Ollama API | `http://localhost:11434` | Yes (if using Ollama) |
| `OPENAI_API_BASE_URL` | OpenAI-compatible API endpoint | – | No |
| `OPENAI_API_KEY` | API key for OpenAI-compatible service | – | No |
| `WEBUI_AUTH` | Enable authentication (`true`/`false`) | `true` | No |
| `WEBUI_SECRET_KEY` | Secret key for session encryption | Randomly generated | Recommended for auth |
| `ENABLE_OAUTH_SIGNUP` | Enable OAuth signup (`true`/`false`) | `false` | No |
| `OPENID_PROVIDER_URL` | OpenID Connect provider URL | – | No |
| `ENABLE_LDAP` | Enable LDAP authentication | `false` | No |
| `DATABASE_URL` | Database connection string | SQLite file in the data directory | No |
| `VECTOR_DB` | Vector database for RAG (`chroma`, `qdrant`, etc.) | `chroma` | No |
| `RAG_EMBEDDING_MODEL` | Embedding model for RAG | `all-MiniLM-L6-v2` | No |
| `AIOHTTP_CLIENT_TIMEOUT` | Timeout for external API calls (seconds) | `300` | No |
| `ENABLE_FORWARD_USER_INFO_HEADERS` | Forward user headers to the LLM | `false` | No |
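For `WEBUI_SECRET_KEY`, generate the value rather than inventing one (a sketch assuming `openssl` is installed):

```shell
# 32 random bytes, hex-encoded: a 64-character secret suitable for session encryption
WEBUI_SECRET_KEY="$(openssl rand -hex 32)"
echo "WEBUI_SECRET_KEY=$WEBUI_SECRET_KEY" >> .env  # docker-compose reads .env automatically
```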

Configuration File

Advanced settings can be adjusted via ~/.open-webui/config.json (mounted volume) or environment variables. Example:

{
  "ui": {
    "theme": "light",
    "default_model": "llama2"
  },
  "features": {
    "enable_rag": true,
    "enable_web_search": true,
    "enable_image_generation": true
  }
}

4. Build & Run

Using Docker (Production)

# Pull the latest image
docker pull ghcr.io/open-webui/open-webui:ollama

# Start with docker-compose (as shown in Installation)
docker-compose up -d

# View logs
docker-compose logs -f open-webui

# Stop
docker-compose down

Local Development (Python)

# Clone repository
git clone https://github.com/open-webui/open-webui.git
cd open-webui

# Install backend dependencies
cd backend
pip install -r requirements.txt -U

# Run the backend dev server (default port 8080)
bash start.sh  # or dev.sh for auto-reload

# Access at http://localhost:8080

Note: The project uses a monorepo layout: the backend (backend/) is a FastAPI application, and the frontend is a SvelteKit app in the repository root. The Docker image ships a pre-built frontend; for local frontend development, run npm install followed by npm run dev from the repository root.


5. Deployment

Recommended Platforms

  1. Docker Compose – Single-server deployments (VPS, bare metal)
  2. Kubernetes – Scalable cluster deployments (EKS, GKE, AKS, self-managed)
  3. Cloud Container Services:
    • AWS ECS/Fargate
    • Google Cloud Run
    • Azure Container Instances
  4. Railway/Render – Simplified PaaS for containers

Production Checklist

  • ✅ Use :cuda tag for GPU acceleration (NVIDIA GPU + CUDA drivers)
  • ✅ Mount persistent volume for ./data (chat history, user data, uploaded files)
  • ✅ Set WEBUI_SECRET_KEY to a strong random string
  • ✅ Enable authentication (WEBUI_AUTH=true) for public-facing instances
  • ✅ Configure reverse proxy (nginx/Traefik) with SSL termination
  • ✅ Set resource limits (CPU/RAM) in Docker/K8s manifests
  • ✅ Backup the SQLite database (data/webui.db) or use PostgreSQL
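The backup item on the checklist can be a small cron-able script. A sketch assuming the default SQLite file location (stop the container first, or use `sqlite3 .backup`, to avoid copying a mid-write database):

```shell
#!/bin/sh
# Copy the SQLite database to a timestamped file under a backup directory.
backup_webui_db() {
  src="$1"
  dest_dir="$2"
  mkdir -p "$dest_dir"
  cp "$src" "$dest_dir/webui-$(date +%Y%m%d-%H%M%S).db"
}

# Example: backup_webui_db ./data/webui.db ./backups
```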

Example: Nginx Reverse Proxy

server {
    listen 80;
    server_name your-domain.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    server_name your-domain.com;

    ssl_certificate /path/to/cert.pem;
    ssl_certificate_key /path/to/key.pem;

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        # WebSocket support, needed for streaming responses
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}

6. Troubleshooting

Common Issues & Solutions

| Issue | Cause | Solution |
|---|---|---|
| Cannot connect to Ollama | Wrong `OLLAMA_BASE_URL` or Ollama not running | Verify Ollama is running (`curl http://localhost:11434/api/tags`). Set the correct URL in the environment. In Docker, use `host.docker.internal` on Mac/Windows or host networking on Linux. |
| Models not appearing | Ollama API inaccessible or model not pulled | Check Ollama logs. Pull the model manually: `ollama pull llama2`. Ensure network connectivity between Open WebUI and Ollama. |
| Authentication fails | Missing/invalid `WEBUI_SECRET_KEY` or session issues | Set a strong `WEBUI_SECRET_KEY`. Clear browser cookies. Check container logs for auth errors. |
| RAG/vector DB errors | Missing vector DB or embedding model | Install the required vector DB (e.g., `chromadb`). Set `VECTOR_DB` correctly. Ensure the embedding model downloads on first run (requires internet). |
| File upload fails | Volume permission issues | Ensure the Docker volume `./data` is writable by the container user (UID 1000). On Linux: `sudo chown -R 1000:1000 ./data`. |
| Slow responses/timeouts | Default timeouts too short for large models | Increase `AIOHTTP_CLIENT_TIMEOUT` (e.g., `600`). For the OpenAI API, check rate limits. |
| Web search not working | Missing API keys for search providers | Configure provider-specific keys (e.g., `GOOGLE_PSE_API_KEY`, `SEARXNG_QUERY_URL`) in the environment or UI settings. |
| Image generation fails | ComfyUI/Automatic1111 not reachable | Set `IMAGE_GENERATION_ENGINE` and the corresponding `COMFYUI_BASE_URL` or `AUTOMATIC1111_BASE_URL`. Ensure those services are running and accessible. |
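Most of the connectivity rows above reduce to one question: can the relevant host reach the backend URL? A small probe helper makes that explicit (a sketch; assumes `curl` is installed):

```shell
#!/bin/sh
# Probe a URL and report OK/FAIL without aborting the script.
probe() {
  if curl -fsS --max-time 3 "$1" >/dev/null 2>&1; then
    echo "OK   $1"
  else
    echo "FAIL $1"
  fi
}

# probe http://localhost:11434/api/tags  # Ollama, from the host
# docker exec open-webui curl -fsS http://host.docker.internal:11434/api/tags  # from inside the container
```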

Log Inspection

# Docker
docker-compose logs open-webui
docker-compose logs --tail=100 open-webui

# Kubernetes
kubectl logs -n open-webui deployment/open-webui

Database Issues

  • SQLite locked/readonly: Check volume permissions. Ensure no other process is accessing the DB file.
  • PostgreSQL connection refused: Verify DATABASE_URL format: postgresql://user:password@host:port/dbname. Ensure network connectivity and credentials.
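A quick shape check on `DATABASE_URL` catches the most common typos before the container even starts (a sketch; it validates format only, not connectivity or credentials):

```shell
#!/bin/sh
# Accept postgresql://user:pass@host:port/db or a sqlite:/// path; reject anything else.
valid_db_url() {
  case "$1" in
    postgresql://*@*/*) return 0 ;;
    sqlite:///*)        return 0 ;;
    *)                  return 1 ;;
  esac
}

# Example:
# valid_db_url "postgresql://webui:secret@db:5432/openwebui" && echo "looks sane"
```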

Reset/Recover

  • To reset admin password: Use the reset-admin CLI command inside the container (if available) or manually update the database.
  • To clear all data: Stop container, delete ./data volume, restart (⚠️ data loss).

For more details, visit the Open WebUI Documentation at https://docs.openwebui.com.