@@ -7,7 +7,7 @@
   "# 🛠️🦙 Build with Llama Stack and Haystack Agent\n",
   "\n",
   "\n",
-  "This notebook demonstrates how to use the `LlamaStackChatGenerator` component with Haystack `Agent` to enable function calling capabilities. We'll create a simple weather tool that the `Agent` can call to provide dynamic, up-to-date information.\n",
+  "This notebook demonstrates how to use the `LlamaStackChatGenerator` component with the Haystack [Agent](https://docs.haystack.deepset.ai/docs/agent) to enable function calling capabilities. We'll create a simple weather tool that the `Agent` can call to provide dynamic, up-to-date information.\n",
   "\n",
   "We start by installing the integration package."
   ]

@@ -45,7 +45,7 @@
   "source": [
   "## Defining a Tool\n",
   "\n",
-  "Tools in Haystack allow models to call functions to get real-time information or perform actions. Let's create a simple weather tool that the model can use to provide weather information.\n"
+  "[Tools](https://docs.haystack.deepset.ai/docs/tool) in Haystack allow models to call functions to get real-time information or perform actions. Let's create a simple weather tool that the model can use to provide weather information.\n"
   ]
   },
   {

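The code cell that follows this markdown is not shown in the diff; as a hypothetical stand-in, the kind of plain function such a weather tool typically wraps might look like this (the cities and conditions below are made up for illustration):

```python
# Hypothetical weather function: returns canned data instead of calling a
# real weather API, which keeps the demo self-contained.
def get_weather(city: str) -> str:
    """Return a short, human-readable weather report for `city`."""
    canned = {
        "tokyo": "18°C and partly cloudy",
        "berlin": "12°C with light rain",
    }
    report = canned.get(city.lower(), "22°C and sunny")
    return f"The weather in {city} is {report}."

print(get_weather("Berlin"))
```

In Haystack, a function like this is then wrapped in a `Tool` (a name, a description, JSON-schema parameters, and the function itself) so the model can decide to call it.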
@@ -86,7 +86,7 @@
   "source": [
   "## Setting Up Agent\n",
   "\n",
-  "Now let's create a `LlamaStackChatGenerator` and pass it to the `Agent`.\n"
+  "Now, let's create a `LlamaStackChatGenerator` and pass it to the `Agent`. The `Agent` component will use the model served by `LlamaStackChatGenerator` to reason and make decisions.\n"
   ]
   },
   {

@@ -120,7 +120,7 @@
   "source": [
   "## Using Tools with the Agent\n",
   "\n",
-  "Now, when we ask questions, the `Agent` will utilize both the provided `tool` and the `LlamaStackChatGenerator` to generate answers. We enable the streaming in Agent, so that you can observe the tool calls and the tool results in real time.\n"
+  "Now, when we ask questions, the `Agent` will use both the provided `tool` and the `LlamaStackChatGenerator` to generate answers. We enable streaming in the `Agent` through `streaming_callback`, so you can observe the tool calls and tool results in real time.\n"
   ]
   },
   {

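The `streaming_callback` mechanism is easy to picture without a running server: the generator invokes a callback once per chunk as tokens arrive. Below is a stdlib-only sketch of that idea; the `Chunk` class is an invented stand-in for Haystack's `StreamingChunk`, and `run_with_streaming` is a hypothetical helper, not an `Agent` API:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Chunk:
    """Invented stand-in for Haystack's StreamingChunk (illustrative only)."""
    content: str

def run_with_streaming(tokens: List[str], callback: Callable[[Chunk], None]) -> str:
    """Feed each generated token to the callback as it 'arrives'."""
    out = []
    for t in tokens:
        callback(Chunk(content=t))  # e.g. print_streaming_chunk in Haystack
        out.append(t)
    return "".join(out)

collected = []
reply = run_with_streaming(
    ["The ", "weather ", "is ", "sunny."],
    lambda c: collected.append(c.content),
)
```

In the notebook, the equivalent call is `agent.run(messages=[ChatMessage.from_user(...)], streaming_callback=print_streaming_chunk)`; depending on your Haystack version, the callback can also be set when the `Agent` is constructed.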
@@ -195,9 +195,9 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-  "If you want to switch your model provider, you can reuse the same `LlamaStackChatGenerator` code with different providers. Simply run the desired inference provider on the Llama Stack Server and update the model name during the initialization of `LlamaStackChatGenerator`.\n",
+  "If you want to switch your model provider, you can reuse the same `LlamaStackChatGenerator` code with different providers. Simply run the desired inference provider on the Llama Stack Server and update the `model` name when initializing `LlamaStackChatGenerator`.\n",
   "\n",
-  "For more details on available inference providers, see (Llama Stack docs)[https://llama-stack.readthedocs.io/en/latest/providers/inference/index.html]." 
+  "For more details on available inference providers, see the [Llama Stack docs](https://llama-stack.readthedocs.io/en/latest/providers/inference/index.html)."
   ]
   }
   ],
