Neural Doodle Deployment and Usage Guide
Prerequisites
- Python 3.x - Required runtime environment
- CUDA-compatible GPU (optional but recommended for faster processing)
- Theano - Deep learning library (automatically loaded based on device)
- Required Python packages: `numpy`, `scipy`, `scikit-image` (the remaining imports - `argparse`, `bz2`, `pickle`, `itertools`, `collections` - ship with the Python standard library)
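Before installing anything, a quick way to see which of these packages are already importable (a small sketch; the `check_deps` helper is not part of neural-doodle, and note that scikit-image imports as `skimage`):

```python
import importlib.util

def check_deps(names=("numpy", "scipy", "skimage", "theano")):
    """Return a dict mapping each package name to whether it can be imported."""
    return {name: importlib.util.find_spec(name) is not None for name in names}

# Report which prerequisites are present in the current environment
print(check_deps())
```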
Installation
1. Clone the repository:

   ```shell
   git clone https://github.com/alexjc/neural-doodle.git
   cd neural-doodle
   ```

2. Install dependencies:

   ```shell
   pip install numpy scipy scikit-image
   ```

3. Set up Theano (if using GPU acceleration):

   - Install Theano:

     ```shell
     pip install theano
     ```

   - Configure `.theanorc` for GPU usage if applicable
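A minimal `.theanorc` for the legacy Theano GPU backend might look like the following; this is an illustrative sketch, and the exact flags depend on your CUDA setup:

```ini
[global]
device = gpu
floatX = float32

[nvcc]
fastmath = True
```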
Configuration
Command Line Arguments
The application uses command-line arguments for configuration. Key options include:
- `--content`: Path to content image (optimization target)
- `--style`: Path to style image (patch extraction)
- `--output`: Output image path (default: `output.png`)
- `--output-size`: Size of output image (e.g., `512x512`)
- `--content-weight`: Weight of content relative to style (default: `10.0`)
- `--style-weight`: Weight of style relative to content (default: `25.0`)
- `--semantic-weight`: Weight of semantics vs. features (default: `10.0`)
- `--phases`: Number of image scales to process (default: `3`)
- `--slices`: Number of patch batches (default: `2`)
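As an illustration of the `--output-size` format, a `WIDTHxHEIGHT` string can be split into integer dimensions like this (a hypothetical `parse_output_size` helper, not necessarily how doodle.py parses it internally):

```python
def parse_output_size(value):
    """Split a 'WIDTHxHEIGHT' string such as '512x512' into an int tuple."""
    width, height = value.split("x")
    return int(width), int(height)

print(parse_output_size("512x512"))  # -> (512, 512)
```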
Semantic Maps
- Semantic maps use the extension specified by `--semantic-ext` (default: `_sem.png`)
- These maps help guide the style transfer process
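Given the default `--semantic-ext` of `_sem.png`, the map filename expected alongside a content image can be derived from its path; this is an illustrative helper (`semantic_map_path` is not part of doodle.py):

```python
import os

def semantic_map_path(image_path, ext="_sem.png"):
    """Return the semantic-map filename that pairs with an input image."""
    base, _ = os.path.splitext(image_path)
    return base + ext

print(semantic_map_path("my_content.png"))  # -> my_content_sem.png
```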
Build & Run
Basic Usage
1. Prepare your images:

   - Content image (what you want to transform)
   - Style image (the artistic style to apply)
   - Optional: Semantic map for the content image

2. Run the neural doodle:

   ```shell
   python doodle.py --content my_content.png --style my_style.png --output result.png
   ```
Advanced Examples
Custom output size:

```shell
python doodle.py --content content.png --style style.png --output output.png --output-size 512x512
```

Adjust style strength:

```shell
python doodle.py --content content.png --style style.png --style-weight 50.0
```

Multi-phase processing:

```shell
python doodle.py --content content.png --style style.png --phases 4
```
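To try several styles against one content image, invocations like the ones above can be generated programmatically. A small sketch; the `build_commands` helper and its `result_<style>.png` naming scheme are assumptions, not part of the project:

```python
import os

def build_commands(content, styles, phases=3):
    """Build one doodle.py command list per style image (run each with subprocess.run)."""
    commands = []
    for style in styles:
        name = os.path.splitext(os.path.basename(style))[0]
        commands.append([
            "python", "doodle.py",
            "--content", content,
            "--style", style,
            "--output", f"result_{name}.png",
            "--phases", str(phases),
        ])
    return commands

for cmd in build_commands("content.png", ["style_a.png", "style_b.png"]):
    print(" ".join(cmd))
```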
Deployment
Local Development
- Run directly from the cloned repository
- Ensure all dependencies are installed in your Python environment
- Use GPU acceleration if available for faster processing
Production Deployment Options
Docker Container:
```dockerfile
FROM python:3.8-slim

WORKDIR /app
COPY . .

# Install the Python dependencies (Theano included, since the network needs it)
RUN pip install numpy scipy scikit-image theano

# Pass the actual arguments at `docker run` time, e.g.:
#   docker build -t neural-doodle .
#   docker run --rm neural-doodle python doodle.py --content <content.png> --style <style.png>
CMD ["python", "doodle.py"]
```
Cloud Platforms:
- Google Colab - Free GPU access, ideal for experimentation
- AWS EC2 - GPU instances for production workloads
- Google Cloud AI Platform - Managed ML infrastructure
- Heroku - For web-based applications (CPU only)
Troubleshooting
Common Issues
1. Memory Errors

   - Reduce `--output-size` to lower resolution
   - Increase `--slices` to process patches in smaller batches
   - Use CPU instead of GPU if memory is limited
2. Slow Processing

   - Ensure CUDA is properly configured if using GPU
   - Reduce `--phases` for faster but lower-quality results
   - Check that Theano is using the GPU:

     ```shell
     THEANO_FLAGS=device=gpu
     ```
3. Missing Dependencies

   ```shell
   pip install --upgrade numpy scipy scikit-image
   ```
4. CUDA Configuration Issues

   - Verify the GPU is detected (`cuda_available` is a module-level flag, not a function):

     ```shell
     THEANO_FLAGS=device=gpu python -c "import theano; print(theano.sandbox.cuda.cuda_available)"
     ```

   - Check the CUDA toolkit installation
   - Ensure correct GPU drivers are installed
5. Output Quality Issues

   - Increase `--phases` for better quality
   - Adjust the `--content-weight` and `--style-weight` ratio
   - Ensure semantic maps are properly aligned with content images
Performance Tips
- Use `--slices 4` for very large images to avoid memory issues
- Start with `--phases 2` for quick previews, then increase to 3-4 for final results
- For web deployment, consider pre-processing images to standard sizes