
# Redash Deployment & Usage Guide

## 1. Prerequisites

### Docker Deployment (Recommended)
- **Docker Engine** 20.10+
- **Docker Compose** 2.0+
- **Git**

### Manual Installation
- **Python** 3.8+
- **Node.js** 16+ and **npm** 8+
- **PostgreSQL** 12+ (Redash stores queries, results, metadata here)
- **Redis** 5+ (Used for RQ job queues and caching)
- **Git**

### External Dependencies
- **SMTP Server** (optional; needed only for email alerts and invites)
- **Data Source Endpoints** (PostgreSQL, MySQL, Elasticsearch, etc. based on your needs)
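
A quick way to check installed tool versions against these minimums — a small sketch that relies on `sort -V` (GNU coreutils); the example version strings are placeholders:

```shell
# Succeed if version $1 >= minimum $2 (dotted numeric versions)
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n 1)" = "$2" ]
}

# Example checks against the minimums listed above
version_ge "20.10.24" "20.10" && echo "Docker OK"
version_ge "16.20.0" "16" && echo "Node OK"
```

Feed it real values in practice, e.g. the number extracted from `docker --version`.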

## 2. Installation

### Option A: Docker Production Setup

```bash
# Clone repository
git clone https://github.com/getredash/redash.git
cd redash

# Create environment file
cp .env.example .env

# Edit .env with your production settings (see Configuration section)
nano .env

# Build and start services
docker-compose -f docker-compose.yml up -d

# Initialize database (run once)
docker-compose run --rm server create_db

# Create admin user
docker-compose run --rm server manage users create \
 --admin \
 --password "ADMIN_PASSWORD" \
 "admin@example.com" \
 "Admin User"
```

### Option B: Local Development Setup

```bash
# Clone repository
git clone https://github.com/getredash/redash.git
cd redash

# Install Python dependencies
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate
pip install -r requirements.txt -r requirements_dev.txt

# Install frontend dependencies
npm install

# Build frontend assets
npm run build

# Or start dev server with hot reload
npm run watch
```

## 3. Configuration

Configure Redash via environment variables, either in a `.env` file or exported directly in the shell:

### Core Required Variables

```bash
# Database (PostgreSQL required)
export REDASH_DATABASE_URL="postgresql://user:password@localhost:5432/redash"
# Or use the generic DATABASE_URL
export DATABASE_URL="postgresql://user:password@localhost:5432/redash"

# Redis (used for caching and RQ job queue)
export REDASH_REDIS_URL="redis://localhost:6379/0"
# Separate Redis for RQ workers (optional, defaults to REDIS_URL)
export RQ_REDIS_URL="redis://localhost:6379/0"

# Secrets (generate strong random strings)
export REDASH_COOKIE_SECRET="your-secret-key-here"
export REDASH_SECRET_KEY="your-secret-key-here"
```
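
One way to generate the two secrets above — a sketch; any long, cryptographically random string works:

```shell
# Generate a 64-character hex secret using Python's stdlib
python3 -c "import secrets; print(secrets.token_hex(32))"
```

Run it once per variable so the cookie secret and secret key differ.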

### Performance & Connection Settings

```bash
# SQLAlchemy connection pooling
export SQLALCHEMY_POOL_SIZE="10"
export SQLALCHEMY_MAX_OVERFLOW="20"
export SQLALCHEMY_DISABLE_POOL="false"
export SQLALCHEMY_ENABLE_POOL_PRE_PING="true"

# Proxy configuration (if behind load balancer)
export REDASH_PROXIES_COUNT="1"

# Query results cleanup (removes old cached results every 5 minutes)
export REDASH_QUERY_RESULTS_CLEANUP_ENABLED="true"
export REDASH_QUERY_RESULTS_CLEANUP_COUNT="100"
export REDASH_QUERY_RESULTS_CLEANUP_MAX_AGE="7"  # days
```

### Monitoring & Logging

```bash
# StatsD metrics export
export REDASH_STATSD_HOST="127.0.0.1"
export REDASH_STATSD_PORT="8125"
export REDASH_STATSD_PREFIX="redash"
export REDASH_STATSD_USE_TAGS="false"
```

### Data Source Specific Configuration

Data sources are configured via the UI, but some require environment setup:

- **PostgreSQL with IAM (AWS):** ensure `boto3` is installed in the worker containers for IAM authentication.
- **Elasticsearch:** no additional env vars required; configure the base URL and auth via the UI.
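
Data sources can also be scripted against the REST API instead of the UI. A sketch, assuming a hypothetical host `redash.example.com` and an admin API key in `REDASH_ADMIN_API_KEY` (both placeholders):

```shell
# Hypothetical payload for a PostgreSQL source ("pg" is Redash's type id for PostgreSQL)
payload='{"name": "My Postgres", "type": "pg", "options": {"host": "db.internal", "port": 5432, "dbname": "analytics", "user": "readonly"}}'

# Only send the request if a key is actually configured
if [ -n "${REDASH_ADMIN_API_KEY:-}" ]; then
  curl -X POST "https://redash.example.com/api/data_sources" \
    -H "Authorization: Key ${REDASH_ADMIN_API_KEY}" \
    -H "Content-Type: application/json" \
    -d "$payload"
else
  echo "Set REDASH_ADMIN_API_KEY to send the request"
fi
```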

## 4. Build & Run

### Production (Docker)

```bash
# Start all services
docker-compose up -d

# View logs
docker-compose logs -f

# Scale workers (if needed)
docker-compose up -d --scale worker=4

# Restart after configuration changes
docker-compose restart
```

Services included:

- `server`: Flask web application (port 5000)
- `worker`: RQ background job processors
- `scheduler`: RQ scheduler for periodic queries
- `redis`: cache and job queue
- `postgres`: metadata database
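
Containers can take a while to come up; a small retry helper makes post-start checks less flaky (the `localhost:5000` URL in the usage line is an assumption for a default local binding):

```shell
# Retry a command until it succeeds — handy while containers start up
wait_for() {
  until "$@"; do
    echo "still waiting..." >&2
    sleep 2
  done
}

# Usage sketch: poll the web server's /ping healthcheck endpoint
#   wait_for curl -fsS http://localhost:5000/ping
```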

### Development (Local)

**Terminal 1 (Flask backend):**

```bash
export FLASK_APP=redash
export FLASK_ENV=development
export REDASH_DATABASE_URL="postgresql://localhost/redash"
export REDASH_REDIS_URL="redis://localhost:6379/0"

# Run database migrations
flask db upgrade

# Start dev server
flask run --port 5000
```

**Terminal 2 (Webpack dev server):**

```bash
npm run start
# Serves frontend on port 8080 with hot reload
```

**Terminal 3 (RQ Workers):**

```bash
export REDASH_DATABASE_URL="postgresql://localhost/redash"
export REDASH_REDIS_URL="redis://localhost:6379/0"

# Run worker
python manage.py rq worker
```

## 5. Deployment

### Docker Compose (Recommended for Single Node)

Use the provided `docker-compose.yml` with production overrides:

```yaml
# docker-compose.production.yml
version: '3'
services:
  server:
    image: redash/redash:latest
    command: server
    environment:
      - REDASH_DATABASE_URL=${REDASH_DATABASE_URL}
      - REDASH_REDIS_URL=${REDASH_REDIS_URL}
      - REDASH_COOKIE_SECRET=${REDASH_COOKIE_SECRET}
      - REDASH_SECRET_KEY=${REDASH_SECRET_KEY}
      - PYTHONUNBUFFERED=0
    ports:
      - "5000:5000"
    restart: always

  worker:
    image: redash/redash:latest
    command: worker
    environment:
      - REDASH_DATABASE_URL=${REDASH_DATABASE_URL}
      - REDASH_REDIS_URL=${REDASH_REDIS_URL}
      - PYTHONUNBUFFERED=0
    restart: always

  scheduler:
    image: redash/redash:latest
    command: scheduler
    environment:
      - REDASH_DATABASE_URL=${REDASH_DATABASE_URL}
      - REDASH_REDIS_URL=${REDASH_REDIS_URL}
    restart: always
```

Deploy:

```bash
docker-compose -f docker-compose.production.yml up -d
```
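
The `${...}` substitutions in the compose file expect the variables in the shell or in a `.env` file next to it — a sketch with placeholder values only:

```shell
# .env — placeholder values; replace before deploying
REDASH_DATABASE_URL=postgresql://redash:change-me@postgres:5432/redash
REDASH_REDIS_URL=redis://redis:6379/0
REDASH_COOKIE_SECRET=change-me
REDASH_SECRET_KEY=change-me
```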

### Cloud Images (AWS/GCE)

The project maintains ready-made images for quick cloud deployment:

- **AWS:** search for "Redash" in Community AMIs
- **Google Cloud:** use the published image in Cloud Launcher

Post-installation on cloud images:

```bash
# SSH into instance, then check:
sudo systemctl status redash
sudo docker ps
```

### Kubernetes (Helm)

Community Helm charts are available for Kubernetes deployment:

```bash
helm repo add redash https://getredash.github.io/contrib-helm-chart
helm install my-redash redash/redash \
  --set redash.databaseURL=${REDASH_DATABASE_URL} \
  --set redash.redisURL=${REDASH_REDIS_URL}
```

### Reverse Proxy Setup (Nginx)

```nginx
server {
    listen 80;
    server_name redash.yourdomain.com;

    location / {
        proxy_pass http://localhost:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

**Important:** set `REDASH_PROXIES_COUNT=1` when behind a reverse proxy.

## 6. Troubleshooting

### Database Connection Errors

**Issue:** `sqlalchemy.exc.OperationalError: could not connect to server: Connection refused`

```bash
# Verify PostgreSQL is running and accessible
psql ${REDASH_DATABASE_URL} -c "SELECT 1;"

# Check if database exists
createdb redash  # Create if missing

# Run migrations manually
docker-compose run --rm server flask db upgrade
```

### Redis Connection Issues

**Issue:** `redis.exceptions.ConnectionError: Error 111 connecting to localhost:6379`

```bash
# Verify Redis is reachable at the configured URL
redis-cli -u ${REDASH_REDIS_URL} ping

# For Docker setups, ensure the hostname matches the service name
export REDASH_REDIS_URL="redis://redis:6379/0"
```
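
Malformed URLs are a common cause of this error. A stdlib-only sanity check of the URL's shape that needs no live server (the fallback URL is just a default for illustration):

```shell
# Parse the Redis URL and print its parts; no Redis server needed
python3 - <<'EOF'
import os
from urllib.parse import urlparse

url = os.environ.get("REDASH_REDIS_URL", "redis://localhost:6379/0")
parts = urlparse(url)
assert parts.scheme in ("redis", "rediss"), f"unexpected scheme: {parts.scheme!r}"
assert parts.port is not None, "missing port"
print(f"host={parts.hostname} port={parts.port} db={parts.path.lstrip('/') or '0'}")
EOF
```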

### Query Runner Import Failures

**Issue:** data sources missing, or `ImportError` for specific query runners

```bash
# Install additional dependencies for specific sources
pip install psycopg2-binary pymongo elasticsearch
```

Or, for Docker, extend the image with a custom `Dockerfile`:

```dockerfile
FROM redash/redash:latest
RUN pip install psycopg2-binary boto3
```

### Frontend Asset Issues

**Issue:** static files not loading, or 404 errors

```bash
# Rebuild assets
npm run build

# For dev mode, ensure webpack dev server is running
npm run start

# Check permissions in Docker
docker-compose run --rm server bash -c "ls -la /app/client/dist"
```

### Worker Queue Backlog

**Issue:** queries stuck in the "Waiting" state

```bash
# Check worker status
docker-compose logs worker

# Scale workers
docker-compose up -d --scale worker=4 --no-recreate

# Clear stuck jobs (CAUTION: kills running queries and wipes the Redis DB)
redis-cli -u ${REDASH_REDIS_URL} flushdb
```

### Migration Failures

**Issue:** Alembic upgrade fails or schema conflicts

```bash
# Back up first
pg_dump ${REDASH_DATABASE_URL} > redash_backup.sql

# Force stamp to current (if the schema is already current but Alembic is confused)
flask db stamp head

# Or reset (DESTRUCTIVE - only for new installs)
python manage.py database drop_tables
python manage.py database create_tables
```

### Query Results Cleanup Not Working

**Issue:** disk space filling up with cached results

```bash
# Verify cleanup is enabled
echo $REDASH_QUERY_RESULTS_CLEANUP_ENABLED  # Should be true

# Manual cleanup: connect to PostgreSQL and run
# DELETE FROM query_results WHERE retrieved_at < NOW() - INTERVAL '7 days';
```

### SSL/TLS Issues

**Issue:** `SSL: CERTIFICATE_VERIFY_FAILED` when connecting to data sources

```bash
# For PostgreSQL with SSL, use sslmode in the URL
export REDASH_DATABASE_URL="postgresql://...?sslmode=require"

# For self-signed certs in development
export PYTHONHTTPSVERIFY=0  # Not recommended for production
```