Meetily Deployment and Usage Guide
1. Prerequisites
System Requirements
- Operating System: macOS (Apple Silicon or Intel) or Windows 10/11 (64-bit)
- Memory: Minimum 8GB RAM, 16GB recommended for optimal performance
- Storage: 2GB free space for application and models
- Audio: Working microphone and speakers/headphones
Software Dependencies
- Rust Toolchain: Required for building from source
```bash
# Install Rust via rustup
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
rustup default stable
```
- Node.js 18+: Required for the frontend build
- Ollama (Optional): For local LLM summarization
```bash
# macOS/Linux
curl -fsSL https://ollama.ai/install.sh | sh
```
- Git: For cloning the repository
AI Models (Downloaded Automatically)
Meetily will automatically download required models:
- Parakeet/Whisper models for transcription (4x faster than standard Whisper)
- Speaker diarization models for identifying different speakers
- Models are stored locally at `~/.meetily/models/`
2. Installation
Option A: Pre-built Binaries (Recommended)
Windows:
- Download the latest `x64-setup.exe` from Releases
- Run the installer with administrator privileges
- Follow the installation wizard
macOS:
- Download `meetily_0.2.1_aarch64.dmg` (Apple Silicon) or the Intel version from Releases
- Open the downloaded `.dmg` file
- Drag Meetily to your Applications folder
- Right-click and select "Open" to bypass Gatekeeper restrictions on first run
Option B: Build from Source
```bash
# Clone the repository
git clone https://github.com/Zackriya-Solutions/meeting-minutes.git
cd meeting-minutes

# Install frontend dependencies
npm install

# Build the Tauri application
npm run tauri build

# The built application will be in:
# - macOS: src-tauri/target/release/bundle/dmg/
# - Windows: src-tauri/target/release/bundle/nsis/
```
3. Configuration
Initial Setup
- Launch Meetily after installation
- Complete the onboarding wizard
- Grant microphone and system audio permissions when prompted
Audio Configuration
Meetily uses a sophisticated audio pipeline with:
- Dual audio capture: Microphone + system audio mixing
- Ring buffer synchronization: 50ms windows with 400ms max buffer for stability
- Audio processing: Noise suppression, loudness normalization, high-pass filtering
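The 50 ms window and 400 ms cap described above can be sketched as follows. This is an illustrative model only, not Meetily's actual implementation: the 48 kHz sample rate, the drop-oldest overflow policy, and the simple averaging mix are all assumptions.

```rust
// Illustrative sketch: buffering mic and system audio in fixed 50 ms windows,
// dropping the oldest samples once a 400 ms backlog is exceeded.
use std::collections::VecDeque;

const SAMPLE_RATE: usize = 48_000;
const WINDOW: usize = SAMPLE_RATE * 50 / 1000;      // 50 ms -> 2400 samples
const MAX_BUFFER: usize = SAMPLE_RATE * 400 / 1000; // 400 ms -> 19200 samples

struct SyncBuffer {
    samples: VecDeque<f32>,
}

impl SyncBuffer {
    fn new() -> Self {
        Self { samples: VecDeque::new() }
    }

    /// Push incoming samples, discarding the oldest once the cap is hit.
    fn push(&mut self, chunk: &[f32]) {
        self.samples.extend(chunk.iter().copied());
        while self.samples.len() > MAX_BUFFER {
            self.samples.pop_front();
        }
    }

    /// Pop one 50 ms window if enough samples are buffered.
    fn pop_window(&mut self) -> Option<Vec<f32>> {
        if self.samples.len() >= WINDOW {
            Some(self.samples.drain(..WINDOW).collect())
        } else {
            None
        }
    }
}

/// Average-mix two equal-length windows into one stream.
fn mix(mic: &[f32], system: &[f32]) -> Vec<f32> {
    mic.iter().zip(system).map(|(m, s)| (m + s) * 0.5).collect()
}

fn main() {
    let mut mic = SyncBuffer::new();
    let mut sys = SyncBuffer::new();
    mic.push(&vec![0.2_f32; WINDOW]);
    sys.push(&vec![0.6_f32; WINDOW]);
    if let (Some(m), Some(s)) = (mic.pop_window(), sys.pop_window()) {
        let mixed = mix(&m, &s);
        assert_eq!(mixed.len(), WINDOW);
    }
}
```

Pulling equal-sized windows from both buffers is what keeps the two streams aligned even when their callbacks fire at slightly different rates.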
Configure audio devices in Settings:
- Microphone: Select your preferred input device
- System Audio: Select output device for capturing computer audio
- Volume Levels: Adjust input gain if needed
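The input-gain adjustment above amounts to scaling samples and clamping them back into range. A generic sketch, not Meetily's code:

```rust
// Generic sketch: apply an input gain to f32 PCM samples and clamp to [-1, 1]
// so boosted audio cannot clip past full scale.
fn apply_gain(samples: &[f32], gain: f32) -> Vec<f32> {
    samples.iter().map(|s| (s * gain).clamp(-1.0, 1.0)).collect()
}

fn main() {
    let quiet = [0.1_f32, -0.2, 0.4];
    let boosted = apply_gain(&quiet, 3.0);
    assert!((boosted[0] - 0.3).abs() < 1e-6);
    assert!((boosted[2] - 1.0).abs() < 1e-6); // 1.2 clamped to 1.0
}
```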
AI Provider Configuration
Navigate to Settings → AI Configuration:
Option 1: Local Processing (Recommended)
- Select "Ollama" as provider
- Ensure Ollama is running locally (default: http://localhost:11434)
- Select a model: `llama3.2`, `mistral`, or another supported model
Option 2: Cloud Providers
- OpenAI: Enter your API key and select model (gpt-4, gpt-3.5-turbo)
- Claude: Enter Anthropic API key
- Groq: Enter Groq API key
- OpenRouter: Enter OpenRouter API key
- Custom: Use your own OpenAI-compatible endpoint
Option 3: Hybrid Mode
- Use local Ollama for summaries
- Use cloud provider for fallback or specific tasks
Model Configuration
```json
{
  "provider": "ollama",
  "model": "llama3.2",
  "whisperModel": "parakeet-tiny",
  "apiKey": null,
  "ollamaEndpoint": "http://localhost:11434"
}
```
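In code, a configuration like this might map onto a typed struct with sensible defaults. The field names below mirror the JSON keys, but the struct itself is an assumed shape for illustration, not Meetily's internal type (a real build would likely derive serde's `Deserialize` instead of hand-writing defaults):

```rust
// Illustrative struct mirroring the JSON configuration shown above.
#[derive(Debug, Clone, PartialEq)]
struct AiConfig {
    provider: String,
    model: String,
    whisper_model: String,
    api_key: Option<String>, // None for local processing
    ollama_endpoint: String,
}

impl Default for AiConfig {
    fn default() -> Self {
        Self {
            provider: "ollama".into(),
            model: "llama3.2".into(),
            whisper_model: "parakeet-tiny".into(),
            api_key: None,
            ollama_endpoint: "http://localhost:11434".into(),
        }
    }
}

fn main() {
    let cfg = AiConfig::default();
    assert_eq!(cfg.provider, "ollama");
    assert!(cfg.api_key.is_none());
}
```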
Models are automatically downloaded to:
- macOS: `~/Library/Application Support/meetily/models/`
- Windows: `%APPDATA%\meetily\models\`
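Resolving those per-OS paths in code looks roughly like this; the environment variables (`HOME`, `APPDATA`) are the standard ones, but the function itself is an illustrative sketch rather than Meetily's actual path logic:

```rust
// Illustrative: resolve the per-OS model directory listed above.
use std::path::PathBuf;

fn model_dir() -> Option<PathBuf> {
    if cfg!(target_os = "macos") {
        std::env::var_os("HOME")
            .map(|h| PathBuf::from(h).join("Library/Application Support/meetily/models"))
    } else if cfg!(target_os = "windows") {
        std::env::var_os("APPDATA")
            .map(|a| PathBuf::from(a).join("meetily").join("models"))
    } else {
        None // other platforms are not covered by this guide
    }
}

fn main() {
    if let Some(dir) = model_dir() {
        println!("models at {}", dir.display());
    }
}
```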
4. Build & Run
Development Mode
```bash
# Clone and setup
git clone https://github.com/Zackriya-Solutions/meeting-minutes.git
cd meeting-minutes

# Install dependencies
npm install

# Start development server
npm run tauri dev

# This launches:
# - Frontend: http://localhost:3000
# - Backend API: http://localhost:5167
```
Production Build
```bash
# Build for current platform
npm run tauri build

# Build specific targets
npm run tauri build -- --target universal-apple-darwin  # macOS Universal
npm run tauri build -- --target x86_64-pc-windows-msvc  # Windows 64-bit
```
Audio Pipeline Architecture
The recording system uses:
- Continuous VAD processing: Voice activity detection
- Audio mixing: Combines mic and system audio with synchronized buffers
- Batch processing: Efficient audio metric batching for performance
- Real-time transcription: Parakeet engine with Int8 quantization for speed
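The "continuous VAD" step above makes a per-frame speech/no-speech decision. Real VAD models are far more sophisticated, but the simplest form of the idea is an RMS energy gate, sketched here purely for illustration:

```rust
// Illustrative energy-gate VAD: flag a frame as speech when its RMS energy
// exceeds a threshold. Meetily's actual VAD is not this simple; this only
// sketches the idea of continuous per-frame voice-activity decisions.
fn rms(frame: &[f32]) -> f32 {
    if frame.is_empty() {
        return 0.0;
    }
    (frame.iter().map(|s| s * s).sum::<f32>() / frame.len() as f32).sqrt()
}

fn is_speech(frame: &[f32], threshold: f32) -> bool {
    rms(frame) > threshold
}

fn main() {
    let silence = vec![0.001_f32; 480]; // ~10 ms at 48 kHz
    let speech = vec![0.2_f32; 480];
    assert!(!is_speech(&silence, 0.01));
    assert!(is_speech(&speech, 0.01));
}
```

Frames gated out as silence can be skipped entirely, which is where much of the transcription speedup comes from.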
5. Deployment
Enterprise Self-Hosting
Meetily is designed for enterprise deployment with these considerations:
Docker Deployment (Recommended for servers):
```dockerfile
# Example Dockerfile for server deployment
FROM rust:1.70 AS builder
WORKDIR /app
COPY . .
RUN cargo build --release

FROM debian:bullseye-slim
RUN apt-get update && apt-get install -y \
    libasound2-dev \
    portaudio19-dev \
    && rm -rf /var/lib/apt/lists/*
COPY --from=builder /app/target/release/meetily /usr/local/bin/
CMD ["meetily"]
```
Deployment Platforms:
- On-premise Servers: Deploy on internal infrastructure
- Docker/Kubernetes: Containerized deployment with persistent storage for models
- Virtual Machines: Windows/macOS VMs for team access
- Network-Attached Storage: Store meeting data on NAS for team collaboration
Security Considerations:
- All data processed locally, no cloud transmission
- Models stored on encrypted volumes
- Meeting data encrypted at rest
- GDPR compliant by design
Scaling Considerations
- Single User: Local installation sufficient
- Team (5-50 users): Central server with shared storage
- Enterprise (50+ users): Load-balanced instances with shared database
6. Troubleshooting
Common Issues
1. Audio Capture Problems
Error: "No audio devices found" or "Failed to start recording"
Solutions:
- Grant microphone permissions in system settings
- Restart application after granting permissions
- Check audio device selection in Meetily Settings
- Ensure no other application is exclusively using audio devices
2. Model Download Failures
Error: "Model download failed" or "Corrupted model file"
Solutions:
- Check internet connection
- Clear the model cache: delete `~/.meetily/models/` and restart
- Manual download: models are available from Hugging Face
- Verify disk space (minimum 2GB free)
3. Ollama Connection Issues
Error: "Cannot connect to Ollama" or "Model not found"
Solutions:
- Ensure Ollama is running: `ollama serve`
- Check the Ollama endpoint in settings (default: http://localhost:11434)
- Pull the required model: `ollama pull llama3.2`
- Verify Ollama version compatibility
4. Performance Issues
Problem: "High CPU usage" or "Transcription lag"
Solutions:
- Use Int8 quantized models (default)
- Reduce audio sample rate in settings
- Close other resource-intensive applications
- Ensure sufficient RAM (16GB recommended)
- Use Parakeet-tiny for faster transcription vs. larger models for accuracy
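The Int8 quantization mentioned above trades a little accuracy for speed and memory by storing weights as 8-bit integers plus a scale factor. A toy sketch of symmetric quantization, purely to illustrate the idea (not Meetily's inference code):

```rust
// Toy sketch of symmetric int8 quantization: store f32 values as i8 plus a
// shared scale, roughly as quantized inference engines do.
fn quantize(values: &[f32]) -> (Vec<i8>, f32) {
    let max = values.iter().fold(0.0_f32, |m, v| m.max(v.abs()));
    let scale = if max == 0.0 { 1.0 } else { max / 127.0 };
    let q = values.iter().map(|v| (v / scale).round() as i8).collect();
    (q, scale)
}

fn dequantize(q: &[i8], scale: f32) -> Vec<f32> {
    q.iter().map(|&v| v as f32 * scale).collect()
}

fn main() {
    let weights = [0.5_f32, -1.0, 0.25];
    let (q, scale) = quantize(&weights);
    let restored = dequantize(&q, scale);
    for (a, b) in weights.iter().zip(&restored) {
        assert!((a - b).abs() < 0.01); // small, bounded quantization error
    }
}
```

The small round-trip error is the accuracy cost; the payoff is 4x smaller weights and faster integer arithmetic on most CPUs.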
5. Build/Rust Compilation Errors
Error: "Cargo build failed" or "Missing dependencies"
Solutions:
- Update Rust: `rustup update stable`
- Install system dependencies:
```bash
# macOS
brew install pkg-config

# Ubuntu/Debian
sudo apt-get install build-essential libasound2-dev

# Windows: install Visual Studio Build Tools with C++ support
```
Debug Mode
Enable detailed logging for troubleshooting:
```bash
# Set environment variable
export RUST_LOG=debug

# Or run with debug flag
./meetily --debug

# Log locations:
# macOS: ~/Library/Logs/meetily/
# Windows: %APPDATA%\meetily\logs\
```
Getting Help
- Discord Community: Meetily Discord
- GitHub Issues: Report bugs and feature requests
- Documentation: Check meetily.ai for updates
- Enterprise Support: Contact via website for commercial support
Known Limitations
- Real-time transcription requires modern CPU (Apple Silicon M1+ or Intel i5+)
- Speaker diarization accuracy depends on audio quality
- Local LLM summarization requires substantial RAM (8GB+ for larger models)
- Windows system audio capture requires specific audio drivers