feat: Upgrade to LlamaIndex to 0.10 (#1663)
* Extract optional dependencies

* Separate local mode into llms-llama-cpp and embeddings-huggingface for clarity

* Support Ollama embeddings

* Upgrade to llamaindex 0.10.14. Remove legacy use of ServiceContext in ContextChatEngine

* Fix vector retriever filters
imartinez authored Mar 6, 2024
1 parent 12f3a39 commit 45f0571
Showing 43 changed files with 1,470 additions and 1,392 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/actions/install_dependencies/action.yml
@@ -25,6 +25,6 @@ runs:
python-version: ${{ inputs.python_version }}
cache: "poetry"
- name: Install Dependencies
run: poetry install --with ui --no-root
run: poetry install --extras "ui vector-stores-qdrant" --no-root
shell: bash

2 changes: 1 addition & 1 deletion Dockerfile.external
@@ -14,7 +14,7 @@ FROM base as dependencies
WORKDIR /home/worker/app
COPY pyproject.toml poetry.lock ./

RUN poetry install --with ui
RUN poetry install --extras "ui vector-stores-qdrant"

FROM base as app

3 changes: 1 addition & 2 deletions Dockerfile.local
@@ -24,8 +24,7 @@ FROM base as dependencies
WORKDIR /home/worker/app
COPY pyproject.toml poetry.lock ./

RUN poetry install --with local
RUN poetry install --with ui
RUN poetry install --extras "ui embeddings-huggingface llms-llama-cpp vector-stores-qdrant"

FROM base as app

6 changes: 3 additions & 3 deletions fern/docs.yml
@@ -30,15 +30,15 @@ navigation:
layout:
- section: Welcome
contents:
- page: Welcome
- page: Introduction
path: ./docs/pages/overview/welcome.mdx
- page: Quickstart
path: ./docs/pages/overview/quickstart.mdx
# How to install privateGPT, with FAQ and troubleshooting
- tab: installation
layout:
- section: Getting started
contents:
- page: Main Concepts
path: ./docs/pages/installation/concepts.mdx
- page: Installation
path: ./docs/pages/installation/installation.mdx
# Manual of privateGPT: how to use it and configure it
60 changes: 60 additions & 0 deletions fern/docs/pages/installation/concepts.mdx
@@ -0,0 +1,60 @@
PrivateGPT is a service that wraps a set of AI RAG primitives in a comprehensive set of APIs, providing a private, secure, customizable, and easy-to-use GenAI development framework.

It uses FastAPI and LlamaIndex as its core frameworks. Those can be customized by changing the codebase itself.

It supports a variety of LLM providers, embeddings providers, and vector stores, both local and remote. Those can be easily changed without changing the codebase.

# Supported Setups

## Setup configurations available
You get to decide the setup for these three main components:
- LLM: the large language model provider used for inference. It can be local, remote, or even OpenAI.
- Embeddings: the embeddings provider used to encode the input, the documents and the users' queries. Like the LLM, it can be local, remote, or even OpenAI.
- Vector store: the store used to index and retrieve the documents.

There is an extra component that can be enabled or disabled: the UI. It is a Gradio UI that allows you to interact with the API in a more user-friendly way.

### Setups and Dependencies
Your setup will be the combination of the different options available. You'll find recommended setups in the [installation](/installation) section.
PrivateGPT uses poetry to manage its dependencies. You can install the dependencies for the different setups by running `poetry install --extras "<extra1> <extra2>..."`.
Extras are the different options available for each component. For example, to install the dependencies for a local setup with the UI, Qdrant as the vector database, Ollama as the LLM and HuggingFace as the local embeddings provider, you would run:

`poetry install --extras "ui vector-stores-qdrant llms-ollama embeddings-huggingface"`.

Refer to the [installation](/installation) section for more details.

### Setups and Configuration
PrivateGPT uses yaml to define its configuration in files named `settings-<profile>.yaml`.
Different configuration files can be created in the root directory of the project.
PrivateGPT will load the configuration at startup from the profile specified in the `PGPT_PROFILES` environment variable.
For example, running:
```bash
PGPT_PROFILES=ollama make run
```
will load the configuration from `settings.yaml` and `settings-ollama.yaml`.
- `settings.yaml` is always loaded and contains the default configuration.
- `settings-ollama.yaml` is loaded if the `ollama` profile is specified in the `PGPT_PROFILES` environment variable. It can override configuration from the default `settings.yaml` (see the sketch below).
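For illustration, here is a minimal sketch of what such a profile override can look like. The exact keys are assumptions based on the shipped `settings-ollama.yaml` and may differ in your version:

```yaml
# settings-ollama.yaml - illustrative sketch, keys may differ in your version
llm:
  mode: ollama            # overrides the default llm mode from settings.yaml
embedding:
  mode: ollama
ollama:
  llm_model: mistral                 # model served by the local Ollama instance
  embedding_model: nomic-embed-text  # embeddings model served by Ollama
  api_base: http://localhost:11434   # default Ollama endpoint
```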

## About Fully Local Setups
In order to run PrivateGPT in a fully local setup, you will need to run the LLM, Embeddings and Vector Store locally.
### Vector stores
The vector stores supported (Qdrant, ChromaDB and Postgres) run locally by default.
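As a hedged sketch of how the vector store is selected (the `vectorstore.database` and `qdrant.path` keys shown here are assumptions based on the default `settings.yaml` and may differ in your version):

```yaml
# Illustrative sketch - selects Qdrant as the vector store, persisted on local disk
vectorstore:
  database: qdrant
qdrant:
  path: local_data/private_gpt/qdrant   # assumed on-disk storage location
```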
### Embeddings
For local Embeddings there are two options:
* (Recommended) You can use the 'ollama' option in PrivateGPT, which will connect to your local Ollama instance. Ollama greatly simplifies the installation of local LLMs.
* You can use the 'embeddings-huggingface' option in PrivateGPT, which will use a local HuggingFace embeddings model.

In order for the HuggingFace embeddings (the second option) to work, you need to download the embeddings model to the `models` folder. You can do so by running the `setup` script:
```bash
poetry run python scripts/setup
```

### LLM
For local LLM there are two options:
* (Recommended) You can use the 'ollama' option in PrivateGPT, which will connect to your local Ollama instance. Ollama greatly simplifies the installation of local LLMs.
* You can use the 'llms-llama-cpp' option in PrivateGPT, which will use LlamaCPP. It works great on Mac with Metal most of the time (it leverages the Metal GPU), but it can be tricky on certain Linux and Windows distributions, depending on the GPU. In the installation document you'll find guides and troubleshooting.

In order for the LlamaCPP-powered LLM (the second option) to work, you need to download the LLM model to the `models` folder. You can do so by running the `setup` script:
```bash
poetry run python scripts/setup
```
205 changes: 146 additions & 59 deletions fern/docs/pages/installation/installation.mdx
@@ -1,8 +1,8 @@
## Installation and Settings
It is important that you review the Main Concepts before you start the installation process.

### Base requirements to run PrivateGPT
## Base requirements to run PrivateGPT

* Git clone PrivateGPT repository, and navigate to it:
* Clone PrivateGPT repository, and navigate to it:

```bash
git clone https://github.com/imartinez/privateGPT
@@ -21,93 +21,180 @@ pyenv local 3.11

* Install [Poetry](https://python-poetry.org/docs/#installing-with-the-official-installer) for dependency management:

* Have a valid C++ compiler like gcc. See [Troubleshooting: C++ Compiler](#troubleshooting-c-compiler) for more details.

* Install `make` for scripts:
* Install `make` to be able to run the different scripts:
* macOS (using Homebrew): `brew install make`
* Windows (using Chocolatey): `choco install make`

### Install dependencies
## Install and run your desired setup

PrivateGPT allows you to customize the setup, from fully local to cloud-based, by deciding which modules to use.
Here are the different options available:

Install the dependencies:
- LLM: "llama-cpp", "ollama", "sagemaker", "openai", "openailike"
- Embeddings: "huggingface", "openai", "sagemaker"
- Vector stores: "qdrant", "chroma", "postgres"
- UI: whether or not to enable the UI (Gradio) or just go with the API

In order to only install the required dependencies, PrivateGPT offers different `extras` that can be combined during the installation process:

```bash
poetry install --with ui
poetry install --extras "<extra1> <extra2>..."
```

Verify everything is working by running `make run` (or `poetry run python -m private_gpt`) and navigate to
http://localhost:8001. You should see a [Gradio UI](https://gradio.app/) **configured with a mock LLM** that will
echo back the input. Below we'll see how to configure a real LLM.
Where `<extra>` can be any of the following:

- ui: adds support for UI using Gradio
- llms-ollama: adds support for Ollama LLM, the easiest way to get a local LLM running, requires Ollama running locally
- llms-llama-cpp: adds support for local LLM using LlamaCPP - expect a messy installation process on some platforms
- llms-sagemaker: adds support for Amazon Sagemaker LLM, requires Sagemaker inference endpoints
- llms-openai: adds support for OpenAI LLM, requires OpenAI API key
- llms-openai-like: adds support for 3rd party LLM providers that are compatible with OpenAI's API
- embeddings-ollama: adds support for Ollama Embeddings, requires Ollama running locally
- embeddings-huggingface: adds support for local Embeddings using HuggingFace
- embeddings-sagemaker: adds support for Amazon Sagemaker Embeddings, requires Sagemaker inference endpoints
- embeddings-openai: adds support for OpenAI Embeddings, requires OpenAI API key
- vector-stores-qdrant: adds support for Qdrant vector store
- vector-stores-chroma: adds support for Chroma DB vector store
- vector-stores-postgres: adds support for Postgres vector store

## Recommended Setups

These are just some examples of recommended setups. You can mix and match the different options to fit your needs.
You'll find more information in the Manual section of the documentation.

> **Important for Windows**: In the examples below on how to run PrivateGPT with `make run`, the `PGPT_PROFILES` env var is set inline following Unix command line syntax (works on macOS and Linux).
If you are using Windows, you'll need to set the env var in a different way, for example:

```powershell
# Powershell
$env:PGPT_PROFILES="ollama"
make run
```

or

```cmd
# CMD
set PGPT_PROFILES=ollama
make run
```

### Local, Ollama-powered setup - RECOMMENDED

**The easiest way to run PrivateGPT fully locally** is to depend on Ollama for the LLM. Ollama provides local LLMs and Embeddings that are super easy to install and use, abstracting away the complexity of GPU support. It's the recommended setup for local development.

### Settings
Go to [ollama.ai](https://ollama.ai/) and follow the instructions to install Ollama on your machine.

<Callout intent="info">
The default settings of PrivateGPT should work out-of-the-box for a 100% local setup. **However**, as is, it runs exclusively on your CPU.
Skip this section if you just want to test PrivateGPT locally, and come back later to learn about more configuration options (and have better performances).
</Callout>
After the installation, make sure the Ollama desktop app is closed.

<br />
Install the models to be used. The default `settings-ollama.yaml` is configured to use the `mistral 7b` LLM (~4GB) and `nomic-embed-text` Embeddings (~275MB). Therefore:

### Local LLM requirements
```bash
ollama pull mistral
ollama pull nomic-embed-text
```

Now, start the Ollama service (it will start a local inference server, serving both the LLM and the Embeddings):
```bash
ollama serve
```

Once done, on a different terminal, you can install PrivateGPT with the following command:
```bash
poetry install --extras "ui llms-ollama embeddings-ollama vector-stores-qdrant"
```

Install extra dependencies for local execution:
Once installed, you can run PrivateGPT. Make sure you have a working Ollama running locally before running the following command.

```bash
poetry install --with local
PGPT_PROFILES=ollama make run
```

For PrivateGPT to run fully locally GPU acceleration is required
(CPU execution is possible, but very slow), however,
typical Macbook laptops or window desktops with mid-range GPUs lack VRAM to run
even the smallest LLMs. For that reason
**local execution is only supported for models compatible with [llama.cpp](https://github.com/ggerganov/llama.cpp)**
PrivateGPT will use the already existing `settings-ollama.yaml` settings file, which is already configured to use Ollama LLM and Embeddings, and Qdrant. Review it and adapt it to your needs (different models, different Ollama port, etc.)

These two models are known to work well:
The UI will be available at http://localhost:8001

* https://huggingface.co/TheBloke/Llama-2-7B-chat-GGUF
* https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF (recommended)
### Private, Sagemaker-powered setup

To ease the installation process, use the `setup` script that will download both
the embedding and the LLM model and place them in the correct location (under `models` folder):
If you need more performance, you can run a version of PrivateGPT that relies on powerful AWS Sagemaker machines to serve the LLM and Embeddings.

You need to have access to Sagemaker inference endpoints for the LLM and/or the embeddings, and have AWS credentials properly configured.

Edit the `settings-sagemaker.yaml` file to include the correct Sagemaker endpoints.

Then, install PrivateGPT with the following command:
```bash
poetry run python scripts/setup
poetry install --extras "ui llms-sagemaker embeddings-sagemaker vector-stores-qdrant"
```

If you are ok with CPU execution, you can skip the rest of this section.
Once installed, you can run PrivateGPT. Make sure the Sagemaker endpoints you configured are reachable before running the following command.

As stated before, llama.cpp is required and in
particular [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
is used.
```bash
PGPT_PROFILES=sagemaker make run
```

> It's highly encouraged that you fully read llama-cpp and llama-cpp-python documentation relevant to your platform.
> Running into installation issues is very likely, and you'll need to troubleshoot them yourself.
PrivateGPT will use the already existing `settings-sagemaker.yaml` settings file, which is already configured to use Sagemaker LLM and Embeddings endpoints, and Qdrant.

The UI will be available at http://localhost:8001

### Non-Private, OpenAI-powered test setup

If you want to test PrivateGPT with OpenAI's LLM and Embeddings (taking into account that your data is going to OpenAI!), you can use the following setup.

You need an OpenAI API key to run it.

Edit the `settings-openai.yaml` file to include the correct API key. Never commit it! It's a secret! As an alternative to editing `settings-openai.yaml`, you can just set the env var `OPENAI_API_KEY`.
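For example, a minimal sketch of the environment-variable alternative (assuming a Unix shell; the key value is a placeholder):

```bash
# Export the OpenAI API key for the current shell session only - never commit it
export OPENAI_API_KEY="sk-...your-key..."
```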

Then, install PrivateGPT with the following command:
```bash
poetry install --extras "ui llms-openai embeddings-openai vector-stores-qdrant"
```

Once installed, you can run PrivateGPT.

```bash
PGPT_PROFILES=openai make run
```

PrivateGPT will use the already existing `settings-openai.yaml` settings file, which is already configured to use OpenAI LLM and Embeddings endpoints, and Qdrant.

#### Customizing low level parameters
The UI will be available at http://localhost:8001

Currently, not all the parameters of `llama.cpp` and `llama-cpp-python` are available at PrivateGPT's `settings.yaml` file.
In case you need to customize parameters such as the number of layers loaded into the GPU, you might change
these at the `llm_component.py` file under the `private_gpt/components/llm/llm_component.py`.
### Local, Llama-CPP powered setup

##### Available LLM config options
If you want to run PrivateGPT fully locally without relying on Ollama, you can run the following command:

The `llm` section of the settings allows for the following configurations:
```bash
poetry install --extras "ui llms-llama-cpp embeddings-huggingface vector-stores-qdrant"
```

- `mode`: how to run your llm
- `max_new_tokens`: this lets you configure the number of new tokens the LLM will generate and add to the context window (by default Llama.cpp uses `256`)
In order for local LLM and embeddings to work, you need to download the models to the `models` folder. You can do so by running the `setup` script:
```bash
poetry run python scripts/setup
```

Example:
Once installed, you can run PrivateGPT with the following command:

```yaml
llm:
mode: local
max_new_tokens: 256
```bash
PGPT_PROFILES=local make run
```

If you are getting an out of memory error, you might also try a smaller model or stick to the proposed
recommended models, instead of custom tuning the parameters.
PrivateGPT will load the already existing `settings-local.yaml` file, which is already configured to use LlamaCPP LLM, HuggingFace embeddings and Qdrant.

The UI will be available at http://localhost:8001

#### Llama-CPP support

For PrivateGPT to run fully locally without Ollama, Llama.cpp is required and in
particular [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
is used.

You'll need to have a valid C++ compiler like gcc installed. See [Troubleshooting: C++ Compiler](#troubleshooting-c-compiler) for more details.

> It's highly encouraged that you fully read llama-cpp and llama-cpp-python documentation relevant to your platform.
> Running into installation issues is very likely, and you'll need to troubleshoot them yourself.
#### OSX GPU support
##### Llama-CPP OSX GPU support

You will need to build [llama.cpp](https://github.com/ggerganov/llama.cpp) with metal support.

@@ -127,7 +214,7 @@ More information is available in the documentation of the libraries themselves:
* [llama-cpp-python's documentation](https://llama-cpp-python.readthedocs.io/en/latest/#installation-with-hardware-acceleration)
* [llama.cpp](https://github.com/ggerganov/llama.cpp#build)

#### Windows NVIDIA GPU support
##### Llama-CPP Windows NVIDIA GPU support

Windows GPU support is done through CUDA.
Follow the instructions on the original [llama.cpp](https://github.com/ggerganov/llama.cpp) repo to install the required
@@ -160,7 +247,7 @@ Note that llama.cpp offloads matrix calculations to the GPU but the performance
still hit heavily due to latency between CPU and GPU communication. You might need to tweak
batch sizes and other parameters to get the best performance for your particular system.

#### Linux NVIDIA GPU support and Windows-WSL
##### Llama-CPP Linux NVIDIA GPU support and Windows-WSL

Linux GPU support is done through CUDA.
Follow the instructions on the original [llama.cpp](https://github.com/ggerganov/llama.cpp) repo to install the required
@@ -188,7 +275,7 @@ llama_new_context_with_model: total VRAM used: 4857.93 MB (model: 4095.05 MB, co
AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 0 | VSX = 0 |
```

### Known issues and Troubleshooting
##### Llama-CPP Known issues and Troubleshooting

Execution of LLMs locally still has a lot of sharp edges, especially when running on non-Linux platforms.
You might encounter several issues:
@@ -205,7 +292,7 @@ If, during your installation, something does not go as planned, retry in *verbos

For example, when installing packages with `pip install`, you can add the option `-vvv` to show the details of the installation.

#### Troubleshooting: C++ Compiler
##### Llama-CPP Troubleshooting: C++ Compiler

If you encounter an error while building a wheel during the `pip install` process, you may need to install a C++
compiler on your computer.
@@ -227,7 +314,7 @@ To install a C++ compiler on Windows 10/11, follow these steps:
Store and search for Xcode and install it. **Or** you can install the command line tools by running `xcode-select --install`.
2. If not, you can install clang or gcc with homebrew `brew install gcc`

#### Troubleshooting: Mac Running Intel
##### Llama-CPP Troubleshooting: Mac Running Intel

When running a Mac with Intel hardware (not M1), you may run into _clang: error: the clang compiler does not support '
-march=native'_ during pip install.
(Diffs for the remaining changed files are not shown here.)

1 comment on commit 45f0571

@eshafaq1


My coworker: "`poetry install --with ui,local && ./scripts/setup` worked for me this morning!?"
Me: "hmm, let me double check the docs" https://docs.privategpt.dev/overview/welcome/quickstart
Me, after scratching my head for several minutes: "ahHAA!"

All in good fun :) Any chance the docs can be updated to reflect the changes here?
