A demo project for running Ollama locally for RAG purposes. Interaction with the model happens through a REST API (work in progress).
Demo video: llama3.1-local-demo.mp4
Prerequisites:
- Docker installed
- Ollama installed
- Download the model locally
ollama run llama3.1:8b
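Once the model is pulled, the Ollama server exposes a REST API on port 11434 by default. A minimal sketch of a generation request, assuming the default port and the llama3.1:8b model from the step above:

```python
import requests

# Ask the local Ollama server for a completion (default port 11434).
response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.1:8b",
        "prompt": "Summarize retrieval-augmented generation in one sentence.",
        "stream": False,  # return the full answer in a single JSON response
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["response"])
```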
- Start a Redis container
docker run --name redis-container -p 6379:6379 -d redis
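To confirm the container is reachable before starting the service, a quick check with the redis Python client (an assumption here; install it with `pip install redis` if it is not already in requirements.txt):

```python
import redis

# Connect to the Redis container started above (default localhost:6379).
client = redis.Redis(host="localhost", port=6379, decode_responses=True)
print(client.ping())  # True if the container is up
client.set("healthcheck", "ok")
print(client.get("healthcheck"))
```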
- Cd into the llm-service directory
cd llm-service
- Install dependencies
pip install -r requirements.txt
- Run the service
uvicorn llm:app --reload
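With the service running (uvicorn serves on port 8000 by default), it can be exercised over HTTP. The endpoint path and payload below are assumptions for illustration; check the routes defined in llm-service for the actual API:

```python
import requests

# Hypothetical endpoint and payload -- adjust to the routes defined in llm:app.
resp = requests.post(
    "http://localhost:8000/ask",
    json={"question": "What documents mention Redis?"},
    timeout=120,
)
resp.raise_for_status()
print(resp.json())
```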
- Cd into the frontend directory
cd frontend
- Install dependencies
npm install
- Run the server
npm run dev
- Go to http://localhost:3000