
Commit fc35dd9

Merge pull request #245 from deepset-ai/fix-notebook
Fix llama-stack notebook (continued)
2 parents f974980 + ab9fba8 commit fc35dd9

File tree

1 file changed: +6 -6 lines

notebooks/llama_stack_with_agent.ipynb

Lines changed: 6 additions & 6 deletions
@@ -7,7 +7,7 @@
 "# 🛠️🦙 Build with Llama Stack and Haystack Agent\n",
 "\n",
 "\n",
-"This notebook demonstrates how to use the `LlamaStackChatGenerator` component with Haystack `Agent` to enable function calling capabilities. We'll create a simple weather tool that the `Agent` can call to provide dynamic, up-to-date information.\n",
+"This notebook demonstrates how to use the `LlamaStackChatGenerator` component with Haystack [Agent](https://docs.haystack.deepset.ai/docs/agent) to enable function calling capabilities. We'll create a simple weather tool that the `Agent` can call to provide dynamic, up-to-date information.\n",
 "\n",
 "We start by installing the integration package."
 ]
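The installation step referenced above is a single notebook cell. A minimal sketch, assuming the integration is published on PyPI as `llama-stack-haystack` (the package name is an assumption; check the integration's README if it differs):

```python
# Install the Llama Stack integration for Haystack from inside the notebook.
# "llama-stack-haystack" is the assumed PyPI package name; adjust if needed.
# %pip is IPython magic, so this line runs in a notebook cell, not plain Python.
%pip install llama-stack-haystack
```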
@@ -45,7 +45,7 @@
 "source": [
 "## Defining a Tool\n",
 "\n",
-"Tools in Haystack allow models to call functions to get real-time information or perform actions. Let's create a simple weather tool that the model can use to provide weather information.\n"
+"[Tools](https://docs.haystack.deepset.ai/docs/tool) in Haystack allow models to call functions to get real-time information or perform actions. Let's create a simple weather tool that the model can use to provide weather information.\n"
 ]
 },
 {
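The code cell that follows this markdown isn't shown in the diff. A minimal sketch of what such a weather tool can look like with Haystack's `Tool`, using a hard-coded function instead of a real weather API; the function body and names are illustrative, not the notebook's actual code:

```python
from haystack.tools import Tool

def get_weather(city: str) -> str:
    # Stand-in for a real weather API call.
    return f"The weather in {city} is sunny, 22°C."

weather_tool = Tool(
    name="weather",
    description="Get the current weather for a given city.",
    parameters={
        "type": "object",
        "properties": {"city": {"type": "string", "description": "Name of the city"}},
        "required": ["city"],
    },
    function=get_weather,
)
```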
@@ -86,7 +86,7 @@
 "source": [
 "## Setting Up Agent\n",
 "\n",
-"Now let's create a `LlamaStackChatGenerator` and pass it to the `Agent`.\n"
+"Now, let's create a `LlamaStackChatGenerator` and pass it to the `Agent`. The `Agent` component will use the model served by `LlamaStackChatGenerator` to reason and make decisions.\n"
 ]
 },
 {
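A rough sketch of that setup cell: the generator points at a locally running Llama Stack server and is handed to the `Agent` together with the tool from the sketch above. The integration's import path, the `model` and `api_base_url` arguments, and their values are assumptions/placeholders, not the notebook's exact code:

```python
from haystack.components.agents import Agent
from haystack_integrations.components.generators.llama_stack import LlamaStackChatGenerator

# Placeholder model name and server URL; both depend on how the local
# Llama Stack server was started and which inference provider it runs.
chat_generator = LlamaStackChatGenerator(
    model="llama3.2:3b",
    api_base_url="http://localhost:8321/v1/openai/v1",
)

# weather_tool comes from the tool sketch above.
agent = Agent(chat_generator=chat_generator, tools=[weather_tool])
```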
@@ -120,7 +120,7 @@
120120
"source": [
121121
"## Using Tools with the Agent\n",
122122
"\n",
123-
"Now, when we ask questions, the `Agent` will utilize both the provided `tool` and the `LlamaStackChatGenerator` to generate answers. We enable the streaming in Agent, so that you can observe the tool calls and the tool results in real time.\n"
123+
"Now, when we ask questions, the `Agent` will utilize both the provided `tool` and the `LlamaStackChatGenerator` to generate answers. We enable the streaming in `Agent` through `streaming_callback`, so you can observe the tool calls and results in real time.\n"
124124
]
125125
},
126126
{
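Putting it together, roughly: the `streaming_callback` mentioned above can be Haystack's built-in `print_streaming_chunk`, and the `Agent` is queried with a regular `ChatMessage`. Again a sketch, with an assumed example question and the assumption that the callback is passed at construction time; `chat_generator` and `weather_tool` come from the earlier sketches:

```python
from haystack.dataclasses import ChatMessage
from haystack.components.generators.utils import print_streaming_chunk

# Recreate the Agent with streaming enabled so tool calls and tool results
# are printed as they happen.
agent = Agent(
    chat_generator=chat_generator,
    tools=[weather_tool],
    streaming_callback=print_streaming_chunk,
)

agent.warm_up()  # prepares the chat generator and tools before the first run
result = agent.run(messages=[ChatMessage.from_user("What's the weather like in Berlin?")])
print(result["messages"][-1].text)
```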
@@ -195,9 +195,9 @@
195195
"cell_type": "markdown",
196196
"metadata": {},
197197
"source": [
198-
"If you want to switch your model provider, you can reuse the same `LlamaStackChatGenerator` code with different providers. Simply run the desired inference provider on the Llama Stack Server and update the model name during the initialization of `LlamaStackChatGenerator`.\n",
198+
"If you want to switch your model provider, you can reuse the same `LlamaStackChatGenerator` code with different providers. Simply run the desired inference provider on the Llama Stack Server and update the `model` name during the initialization of `LlamaStackChatGenerator`.\n",
199199
"\n",
200-
"For more details on available inference providers, see (Llama Stack docs)[https://llama-stack.readthedocs.io/en/latest/providers/inference/index.html]."
200+
"For more details on available inference providers, see [Llama Stack docs](https://llama-stack.readthedocs.io/en/latest/providers/inference/index.html)."
201201
]
202202
}
203203
],
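Concretely, switching providers only touches the constructor call. A sketch with placeholder identifiers; the exact model string depends on which inference provider is configured on the Llama Stack server:

```python
# Same client code, different backend: once another inference provider is
# running on the Llama Stack server, only the model identifier changes.
# "your-provider/your-model-id" is a placeholder, not a real model name.
chat_generator = LlamaStackChatGenerator(
    model="your-provider/your-model-id",
    api_base_url="http://localhost:8321/v1/openai/v1",
)
```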
