A simple GenAI app for Docker's Guides docs, based on the GenAI Stack PDF Reader application.
See the Ollama environment variables in the env.demo file and adjust them according to your needs.
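As an illustration, env.demo might contain variables similar to the following. This is a hedged sketch, not the file's exact contents; the variable names and values below are assumptions, so check your copy of env.demo for the real ones:

```env
# Model that ollama-1 should serve (assumed variable name and value)
LLM=llama2
# Model used to embed the PDF chunks (assumed)
EMBEDDING_MODEL=sentence_transformer
# URL where the server container reaches Ollama (assumed host/port)
OLLAMA_BASE_URL=http://ollama-1:11434
# Neo4j connection settings for the vector store (assumed)
NEO4J_URI=neo4j://database-1:7687
NEO4J_USERNAME=neo4j
NEO4J_PASSWORD=password
```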
- The user sends a PDF and prompts to the `server-1` container.
- The server splits, chunks, and embeds the PDF, then sends the embeddings and prompts to the `ollama-1` container.
- `ollama-1` checks for the LLM (Large Language Model) listed in the `.env` file; if it is not available, it requests the model from `ollama-pull-1`.
- `ollama-pull-1` requests the model from the Ollama models library.
- It then pulls (downloads) the model.
- `ollama-pull-1` stores the model in the `ollama-1` container and shuts down.
- `ollama-1` uses the model to perform a vector search in the Neo4j `database-1` container for embedding similarities between the PDF and the prompt(s).
- `database-1` retrieves the relevant embeddings and returns them to `ollama-1`.
- `ollama-1` returns the response embeddings along with the relevant PDF segments.
- `server-1` turns the embeddings back into natural language the user can understand.
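The retrieval step in the flow above is, at its core, an embedding similarity search. The following minimal Python sketch shows that idea with toy bag-of-letters "embeddings" and cosine similarity standing in for the real Ollama embedding model and the Neo4j vector index; every name here is illustrative, not the app's actual code:

```python
import math

def embed(text: str) -> list[float]:
    # Toy stand-in for an embedding model: a 26-dim bag-of-letters vector.
    # The real app would get vectors from Ollama's embedding model instead.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity: dot product of the vectors over their magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def most_similar(prompt: str, chunks: list[str]) -> str:
    # Vector search: embed the prompt and return the chunk whose embedding
    # is closest, as the Neo4j vector index does for the stored PDF chunks.
    pvec = embed(prompt)
    return max(chunks, key=lambda c: cosine(embed(c), pvec))

chunks = [
    "Docker Compose defines multi-container applications.",
    "Neo4j stores the PDF chunk embeddings.",
    "Ollama serves large language models locally.",
]
print(most_similar("Which container stores embeddings?", chunks))
# → Neo4j stores the PDF chunk embeddings.
```

In the actual stack, the embeddings are produced by the model `ollama-1` serves and stored in Neo4j, which performs this nearest-neighbor lookup at scale; the logic of "embed the query, rank stored chunks by similarity" is the same.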