# Open WebUI

## Install Open WebUI

1. Download and install Ollama.
2. Run a model in Ollama:

   ```sh
   ollama run llama3.2
   ```

3. Run Open WebUI with Nvidia GPU support:

   ```sh
   docker run -d -p 3000:8080 --gpus all --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:cuda
   ```

4. Navigate to http://localhost:3000/.
5. On first launch, you will need to create an account; the first account created becomes the administrator. (A quick check that both services are reachable is sketched below.)
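The `--add-host=host.docker.internal:host-gateway` flag is what lets the container reach the Ollama server running on the host. If the UI loads but no models are listed, checking both services from the host usually narrows it down; this is a minimal sketch assuming Ollama's default port (11434) and the port mapping used above:

```sh
# Confirm Ollama is serving its API and lists the pulled model
curl http://localhost:11434/api/tags

# Confirm the Open WebUI container is running and responding on the mapped port
docker ps --filter name=open-webui
curl -I http://localhost:3000/
```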

## Upgrade Open WebUI

Stop and remove the existing container, pull the latest image, and start a new container with the same options. Chat history and settings are preserved because they live in the `open-webui` volume, which is not deleted:

```sh
docker container stop open-webui
docker container rm open-webui
docker pull ghcr.io/open-webui/open-webui:cuda

docker run -d -p 3000:8080 --gpus all --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:cuda
```
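As an alternative to the manual steps above, a one-shot Watchtower run can pull the new image and recreate the container automatically; this is a sketch of that approach, not part of the original notes:

```sh
# Run Watchtower once, update only the open-webui container, then exit
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower --run-once open-webui
```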