
renaming todo to exercise
sanjanalreddy committed Nov 27, 2024
1 parent d2d20d0 commit a3a7578
Showing 1 changed file with 5 additions and 5 deletions.
10 changes: 5 additions & 5 deletions notebooks/vertex_genai/labs/gen_ai_evaluation_service.ipynb
@@ -173,7 +173,7 @@
"\n",
"Let's say that we are trying to evaluate how well different prompts work for summarization using Gemini. We'll start by defining a few articles in `context`, and since we are using computation-based metrics for evaluation, we will also need to define the ground-truth summaries in `reference`. `eval_dataset` should be a DataFrame that contains the columns needed for evaluation.\n",
"\n",
- "**TODO** Create a `eval_dataset` DataFrame with the columns `context`, `reference` and `instruction`"
+ "**Exercise** Create an `eval_dataset` DataFrame with the columns `context`, `reference`, and `instruction`"
]
},
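For reference, a minimal sketch of what such a dataset could look like. The article and summary strings below are illustrative placeholders, not data from the lab:

```python
import pandas as pd

# Placeholder articles (context) and ground-truth summaries (reference);
# in the lab these come from the notebook's own examples.
instruction = "Summarize the following article in one sentence."
context = [
    "The James Webb Space Telescope captured new images of distant galaxies.",
    "Researchers published a study on the health benefits of regular exercise.",
]
reference = [
    "Webb captured new images of distant galaxies.",
    "A study highlighted the health benefits of exercise.",
]

# Computation-based metrics compare model output against `reference`,
# so every row carries context, reference, and the shared instruction.
eval_dataset = pd.DataFrame(
    {
        "context": context,
        "reference": reference,
        "instruction": [instruction] * len(context),
    }
)
```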
{
@@ -309,7 +309,7 @@
"\n",
"To start with, we'll be using model inference with prompt templates.\n",
"\n",
- "**TODO** Define `EvalTask` with the parameters dataset, metrics and experiment. See the documentation for [EvalTask](https://cloud.google.com/vertex-ai/generative-ai/docs/reference/python/latest/vertexai.preview.evaluation.EvalTask) "
+ "**Exercise** Define `EvalTask` with the parameters `dataset`, `metrics`, and `experiment`. See the documentation for [EvalTask](https://cloud.google.com/vertex-ai/generative-ai/docs/reference/python/latest/vertexai.preview.evaluation.EvalTask)"
]
},
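One possible shape for the solution, as a sketch. The metric names and experiment label are illustrative choices, not the lab's; the `EvalTask` constructor follows the documentation linked above, and the import is deferred because it needs `google-cloud-aiplatform` plus authenticated GCP credentials:

```python
import pandas as pd

# Illustrative metric selection and experiment name (assumptions).
metrics = ["rouge_l_sum", "bleu"]
experiment = "summarization-prompt-eval"  # hypothetical experiment label

def build_eval_task(dataset: pd.DataFrame, metrics: list, experiment: str):
    # Deferred import: running this requires google-cloud-aiplatform
    # and a configured Vertex AI project.
    from vertexai.preview.evaluation import EvalTask

    return EvalTask(dataset=dataset, metrics=metrics, experiment=experiment)

# In the notebook you would then run, for example:
# eval_task = build_eval_task(eval_dataset, metrics, experiment)
# result = eval_task.evaluate(model=model, prompt_template=prompt_template)
```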
{
@@ -512,7 +512,7 @@
"id": "5cf4de49-8165-45a7-a743-bc3696c9d0d1",
"metadata": {},
"source": [
- "**TODO:** For the above `context` list, generate the summary responses from Gemini using the `generate_content()` function. Refer to the [documentation](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#non-streaming) "
+ "**Exercise:** For the above `context` list, generate the summary responses from Gemini using the `generate_content()` function. Refer to the [documentation](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#non-streaming)"
]
},
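A sketch of one way to approach this. The model name is an assumption, and the `generate_content()` call follows the linked non-streaming inference docs; the import is deferred because it needs `google-cloud-aiplatform` and GCP credentials:

```python
def summarize_contexts(context, instruction, model_name="gemini-1.5-flash"):
    # Deferred import: requires google-cloud-aiplatform and a
    # configured Vertex AI project to actually run.
    from vertexai.generative_models import GenerativeModel

    model = GenerativeModel(model_name)  # model_name is an assumption
    responses = []
    for article in context:
        # Non-streaming generate_content call, as in the linked docs.
        result = model.generate_content(f"{instruction}\n{article}")
        responses.append(result.text)
    return responses

# In the notebook: responses = summarize_contexts(context, instruction)
```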
{
@@ -538,7 +538,7 @@
"id": "3610f476-b48a-487d-9c5f-fb6db55ef1a8",
"metadata": {},
"source": [
- "**TODO**: Define a dataset for evaluation `eval_dataset` as a pandas Dataframe. This eval_dataset only has one column, the responses you generated above"
+ "**Exercise:** Define a dataset for evaluation, `eval_dataset`, as a pandas DataFrame. This `eval_dataset` has only one column: the responses you generated above"
]
},
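For instance, assuming `responses` holds the summaries generated in the previous step (placeholder strings here, and the column name `response` is what this exercise's single-column dataset uses):

```python
import pandas as pd

# Placeholder strings standing in for the Gemini outputs generated above.
responses = [
    "Webb captured new images of distant galaxies.",
    "A study highlighted the health benefits of exercise.",
]

# A single-column DataFrame: one row per generated summary.
eval_dataset = pd.DataFrame({"response": responses})
```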
{
@@ -727,7 +727,7 @@
"source": [
"Once we have defined the dataset and the evaluation criteria, we are ready to kick off the evaluation job.\n",
"\n",
- "**TODO:** Define your `EvalTask` function here. The parameters that are needed for pairwise evaluateion are, dataset, metrics and experiment\n",
+ "**Exercise:** Define your `EvalTask` here. The parameters needed for pairwise evaluation are `dataset`, `metrics`, and `experiment`\n",
"\n",
"Please note: the cell below is going to take 10-15 mins to finish execution."
]
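A sketch under the same assumptions as the earlier `EvalTask` example. The pairwise metric name and experiment label are illustrative guesses, not confirmed from the lab; the constructor takes the same `dataset`/`metrics`/`experiment` parameters, and the import is deferred because it needs GCP credentials:

```python
# Illustrative pairwise metric and experiment name (assumptions).
metrics = ["pairwise_summarization_quality"]
experiment = "pairwise-summarization-eval"  # hypothetical experiment label

def build_pairwise_eval_task(dataset, metrics, experiment):
    # Deferred import: requires google-cloud-aiplatform and a
    # configured Vertex AI project to actually run.
    from vertexai.preview.evaluation import EvalTask

    return EvalTask(dataset=dataset, metrics=metrics, experiment=experiment)

# The evaluate() call on this task is the 10-15 minute cell noted above.
```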
