Docker setup for a powerful and modular diffusion model GUI and backend.
This project sets up a complete AI development environment with NVIDIA CUDA, cuDNN, and various essential AI/ML libraries using Docker. It includes Stable Diffusion models and ControlNet for text-to-image generation and various deep learning models.
This Docker image is based on nvidia/cuda:12.1.1-cudnn8-devel-ubuntu22.04, ensuring GPU acceleration for deep learning workloads. It is configured for ease of use with Pyenv, custom Python versions, and GPU-specific libraries.
- NVIDIA CUDA & cuDNN support for GPU-accelerated AI tasks.
- Pyenv for managing Python versions.
- Integration with ComfyUI, Stable Diffusion, and ControlNet models.
- Custom node installation for advanced workflows and extensions.
- Ready-to-use AI/ML models from Hugging Face, including various checkpoints for text-to-image generation.
Ensure you have the following installed:
- Docker
- NVIDIA GPU with proper drivers.
- NVIDIA Container Toolkit (for GPU acceleration).
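Before building, it is worth confirming that Docker can actually see the GPU. A quick sanity check (assuming the NVIDIA Container Toolkit is installed; the CUDA base image tag below is an example and must be pullable from Docker Hub) is:

```shell
# Run nvidia-smi inside a throwaway CUDA container.
# If the GPU status table prints, the NVIDIA Container Toolkit
# and drivers are wired up correctly for containerized workloads.
docker run --rm --gpus all nvidia/cuda:12.1.1-base-ubuntu22.04 nvidia-smi
```

If this fails with a "could not select device driver" error, revisit the NVIDIA Container Toolkit installation before proceeding.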
Clone the repository and navigate to the project root:
git clone https://github.com/krasamo/comfyui-docker.git
cd comfyui-docker
Build the Docker image:
docker-compose build
Run the Docker container:
docker-compose up
The service will run on port 7860, accessible via http://localhost:7860.
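Once the container is up, you can confirm the UI is reachable from the host (this is just a generic reachability probe, not part of the project itself):

```shell
# Print only the HTTP status code; expect 200 once ComfyUI
# has finished starting up and is serving the web UI.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:7860
```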
You can modify the Docker build to suit your needs by adding models or adjusting the configuration.
To enable persistent data storage, pass the USE_PERSISTENT_DATA build argument when building the image, then start the container:
docker-compose build --build-arg USE_PERSISTENT_DATA=true
docker-compose up -d
This will store outputs and models in /data on the host machine, ensuring that checkpoints and configurations are preserved.
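If you prefer to manage the mount explicitly, a compose override file is one way to map /data to a host directory. The sketch below is illustrative only; the service name and paths are assumptions and should be matched to the project's actual docker-compose.yml:

```yaml
# docker-compose.override.yml -- illustrative sketch, not the project's file
services:
  comfyui:               # assumed service name; use the one in docker-compose.yml
    volumes:
      - ./data:/data     # persist outputs, models, and checkpoints on the host
```

Docker Compose merges override files automatically, so `docker-compose up` will pick this up without further flags.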
This Docker setup automatically downloads several pre-trained models from Hugging Face, including:
- Stable Diffusion XL: High-quality text-to-image models.
- ControlNet: Pretrained models for image control.
- ESRGAN: For super-resolution upscaling.
Refer to the Dockerfile for the exact models included and their paths.
- /models: Contains all downloaded models (checkpoints, VAE, LoRAs, etc.).
- /code: Working directory for the main codebase.
- /data: Optional directory for persistent data, checkpoints, and output.
You can add custom models by modifying the Dockerfile and adding additional wget commands to download models or use external repositories.
For example, add the following to download a custom checkpoint:
RUN wget -c <model-url> -P ./models/checkpoints/
This project is licensed under the MIT License. See the LICENSE file for more details.
- NVIDIA for CUDA and GPU acceleration.
- Hugging Face for pre-trained AI models.
- Stability AI for Stable Diffusion.
Feel free to modify or expand upon this depending on your specific project needs!