This repository contains a collection of cookbooks showing how to build LLM applications, using LlamaCloud to manage your data pipelines and LlamaIndex as the core orchestration framework.
- Follow the instructions in the section below for setting up the Jupyter Environment.
- Go to https://cloud.llamaindex.ai/ and create an account using one of the authentication providers.
- Once logged in, go to the API Key page and create an API key. Copy that generated API key to your clipboard.
- Go back to LlamaCloud. Create a project and initialize a new index by specifying the data source, data sink, embedding, and optionally transformation parameters.
- Open one of the Jupyter notebooks in this repo (e.g. `examples/getting_started.ipynb`) and paste the API key into the first cell block that reads `os.environ["PLATFORM_API_KEY"] = "..."`.
- Copy the `index_name` and `project_name` from the deployed index into the `LlamaCloudIndex` initialization in the notebook.
That should get you started! You should now be able to create an e2e pipeline with a LlamaCloud index as the backend.
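Put together, the first notebook cells look roughly like this. This is a minimal sketch: the `llx-...` key, `"my_index"`, and `"Default"` are placeholder values you replace with the API key and the `index_name`/`project_name` copied from your own deployment.

```python
import os

# Paste the API key generated on the LlamaCloud API Key page.
# "llx-..." is a placeholder, not a real key.
os.environ["PLATFORM_API_KEY"] = "llx-..."

def connect_to_index():
    # Requires the llama-index packages from requirements.txt and a valid
    # API key, so it is defined but not executed here.
    from llama_index.indices.managed.llama_cloud import LlamaCloudIndex

    index = LlamaCloudIndex(
        "my_index",              # index_name copied from your deployed index (placeholder)
        project_name="Default",  # project_name copied from LlamaCloud (placeholder)
    )
    # A query engine over the managed index, ready for RAG queries.
    return index.as_query_engine()
```

With the real values filled in, calling `connect_to_index().query("...")` runs retrieval against the LlamaCloud-managed index.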
Here are some commands for installing the Python dependencies and running Jupyter:
```bash
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
jupyter lab
```
Notebooks are in `examples/`.
Note: if you encounter package issues when running the notebook examples, run `rm -rf .venv` and repeat the setup steps above.