# Llama 3.1 Self-Deploy

Run Ollama locally for RAG purposes and interact with the model through a REST API (work in progress).

Demo video: `llama3.1-local-demo.mp4`

## Installation

### Prerequisites

- Docker installed
- Ollama installed

### Download the model locally

    ollama run llama3.1:8b
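
To confirm the model is actually being served, you can query Ollama's built-in REST API directly (it listens on port 11434 by default). This is just a quick sanity check, separate from the FastAPI service below; the prompt text is only an example.

```python
# Sanity check against Ollama's local REST API (default port 11434).
# Requires: pip install requests
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.1:8b",             # the model pulled above
        "prompt": "Say hello in one sentence.",
        "stream": False,                    # return one JSON object instead of a stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```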

### Run Redis

    docker run --name redis-container -p 6379:6379 -d redis
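
The Redis container is presumably used by the LLM service for caching or conversation state (an assumption; check llm-service for its actual role). A quick way to verify the container is reachable:

```python
# Verify the Redis container is reachable on the default port 6379.
# Requires: pip install redis
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)
print(r.ping())                        # True if the container is up
r.set("healthcheck", "ok", ex=60)      # write a short-lived key as a smoke test
print(r.get("healthcheck"))            # -> "ok"
```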

### Run the LLM service

    cd llm-service
    pip install -r requirements.txt
    uvicorn llm:app --reload

Go to http://127.0.0.1:8000/docs to explore the endpoints in FastAPI's interactive Swagger UI.
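
For reference, below is a minimal sketch of what an Ollama-backed FastAPI endpoint can look like. It is not the actual code in `llm.py`; the `/generate` route name and request shape are assumptions made for illustration.

```python
# Minimal sketch of an Ollama-backed FastAPI app. NOT the repo's actual llm.py:
# the /generate route and the request/response shapes are illustrative assumptions.
# Requires: pip install fastapi uvicorn requests
import requests
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

OLLAMA_URL = "http://localhost:11434/api/generate"


class PromptRequest(BaseModel):
    prompt: str
    model: str = "llama3.1:8b"


@app.post("/generate")
def generate(req: PromptRequest) -> dict:
    # Forward the prompt to the local Ollama server and return its completion.
    resp = requests.post(
        OLLAMA_URL,
        json={"model": req.model, "prompt": req.prompt, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    return {"response": resp.json()["response"]}
```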

### Run the UI frontend

1. Change into the frontend directory:

        cd frontend

2. Install dependencies:

        npm install

3. Start the dev server:

        npm run dev

Go to http://localhost:3000
