Update all skill configs to use jinja templates
This also makes the test that runs `Block._validate` on all our shipped
configs more generic, so it can cover every skill and knowledge YAML
file without keeping a separate list of config files to test.

Signed-off-by: Ben Browning <[email protected]>
bbrowning committed Nov 26, 2024
1 parent 80b4bbf commit a217d3e
Showing 12 changed files with 54 additions and 61 deletions.
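
The substantive change in each config below is switching prompt placeholders from single-brace {variable} syntax to Jinja-style {{variable}}. As a rough illustration of why the braces are doubled, here is a minimal standalone sketch, assuming the templates are rendered with jinja2; this is not code from the repository, and the template string and sample are made up:

# Minimal sketch: jinja2 leaves single-brace {question} as literal text and
# only substitutes double-brace {{question}} variables, which is why every
# placeholder in the configs below gains a second pair of braces.
from jinja2 import Environment, StrictUndefined, meta

env = Environment(undefined=StrictUndefined)
template_str = "Here's the question you need to evaluate:\n{{ question }}"

# Discover which variables the template expects before rendering it.
expected_vars = meta.find_undeclared_variables(env.parse(template_str))  # {"question"}

sample = {"question": "What is synthetic data generation?"}
if expected_vars <= sample.keys():
    print(env.from_string(template_str).render(**sample))
else:
    print(f"sample is missing fields: {expected_vars - sample.keys()}")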
4 changes: 2 additions & 2 deletions src/instructlab/sdg/configs/skills/contexts.yaml
@@ -1,6 +1,6 @@
system: You are a very knowledgeable AI Assistant that will faithfully assist the user with their task.

introduction: You are asked to come up with a diverse context for - {task_description}.
introduction: You are asked to come up with a diverse context for - {{task_description}}.
principles: |
Please follow these guiding principles when generating responses:
* Use proper grammar and punctuation.
@@ -11,7 +11,7 @@ principles: |
examples: |
To better assist you with this task, here is an example of a context:
[Start of Context]
{seed_context}
{{seed_context}}
[End of Context]
generation: |
@@ -29,11 +29,11 @@ examples: |
generation: |
Here's the question and the answer you need to evaluate:
[Start of Question]
{question}
{{question}}
[End of Question]
[Start of Answer]
{response}
{{response}}
[End of Answer]
Begin your evaluation by providing a short explanation. Be as objective as possible. After providing your explanation, you must rate the answer on a scale of 1 to 3 as mentioned above.
@@ -9,7 +9,7 @@ principles: |
* The questions should be in English.
* The questions should be 1 to 2 sentences long and should be properly formatted.
* The question should not be offensive, abusive, or harmful. It should be safe and respectful.
* The question should be relevant to the task given - {task_description}.
* The question should be relevant to the task given - {{task_description}}.
If the question meets the above requirements, please rate it 1. If not, please rate it 0.
@@ -32,10 +32,10 @@ examples: |
generation: |
Here's the question you need to evaluate:
Task Description: {task_description}
Task Description: {{task_description}}
[Start of Question]
{question}
{{question}}
[End of Question]
Begin your evaluation by providing a short explanation. Be as objective as possible. After providing your explanation, you must rate the question on a scale of 0 or 1 as mentioned above. Strictly follow the format below:
@@ -35,15 +35,15 @@ generation: |
Here's the context, question and the answer you need to evaluate:
[Start of Context]
{context}
{{context}}
[End of Context]
[Start of Question]
{question}
{{question}}
[End of Question]
[Start of Answer]
{response}
{{response}}
[End of Answer]
* Return the evaluation between [Start of Evaluation] and [End of Evaluation] tags.
@@ -9,7 +9,7 @@ principles: |
* The questions should be in English.
* The questions should be 1 to 2 sentences long and should be properly formatted.
* The question should not be offensive, abusive, or harmful. It should be safe and respectful.
* The question should be relevant to the task given - {task_description}.
* The question should be relevant to the task given - {{task_description}}.
* Most importantly all the questions should be grounded in the context provided and should be answerable solely based on the provided context.
If the question meets the above requirements, please rate it 1. If not, please rate it 0.
@@ -37,10 +37,10 @@ generation: |
Here's the context and question you need to evaluate. Return the evaluation between [Start of Evaluation] and [End of Evaluation] tags.
[Start of Context]
{context}
{{context}}
[End of Context]
[Start of Question]
{question}
{{question}}
[End of Question]
Begin your evaluation by providing a short explanation. Be as objective as possible. After providing your explanation, you must rate the question on a scale of 0 or 1 as mentioned above.
6 changes: 3 additions & 3 deletions src/instructlab/sdg/configs/skills/freeform_questions.yaml
@@ -1,7 +1,7 @@
system: You are a very knowledgeable AI Assistant that will faithfully assist the user with their task.

introduction: |
You are asked to come up with a set of {num_samples} diverse questions - {task_description}.
You are asked to come up with a set of {{num_samples}} diverse questions - {{task_description}}.
principles: |
Please follow these guiding principles when generating responses:
@@ -19,11 +19,11 @@ examples: |
To better assist you with this task, here is an example:
[Start of Question]
{seed_question}
{{seed_question}}
[End of Question]
generation: |
Now generate {num_samples} such questions, remember to follow the principles mentioned above and use the same format as the examples. Remember to use the same style and format as the example above. Return each question between [Start of Question] and [End of Question] tags.
Now generate {{num_samples}} such questions, remember to follow the principles mentioned above and use the same format as the examples. Remember to use the same style and format as the example above. Return each question between [Start of Question] and [End of Question] tags.
start_tags: ["[Start of Question]"]
end_tags: ["[End of Question]"]
6 changes: 3 additions & 3 deletions src/instructlab/sdg/configs/skills/freeform_responses.yaml
@@ -13,18 +13,18 @@ principles: |
examples: |
To better assist you with this task, here is an example:
[Start of Question]
{seed_question}
{{seed_question}}
[End of Question]
[Start of Response]
{seed_response}
{{seed_response}}
[End of Response]
generation: |
Now generate a response to the following prompt. Remember to use the same style and format as the example above.
[Start of Question]
{question}
{{question}}
[End of Question]
Return the response between [Start of Response] and [End of Response] tags.
10 changes: 5 additions & 5 deletions src/instructlab/sdg/configs/skills/grounded_questions.yaml
@@ -1,7 +1,7 @@
system: You are a very knowledgeable AI Assistant that will faithfully assist the user with their task.

introduction: |
You are asked to come up with a set of {num_samples} diverse questions - {task_description}.
You are asked to come up with a set of {{num_samples}} diverse questions - {{task_description}}.
principles: |
Please follow these guiding principles when generating responses:
@@ -21,17 +21,17 @@ examples: |
To better assist you with this task, here is an example:
[Start of Context]
{seed_context}
{{seed_context}}
[End of Context]
[Start of Question]
{seed_question}
{{seed_question}}
[End of Question]
generation: |
Now generate {num_samples} such questions, remember to follow the principles mentioned above and use the same format as the examples. Remember to use the same style and format as the example above. Do not return any contexts or answers, only the questions. Return each question between [Start of Question] and [End of Question] tags.
Now generate {{num_samples}} such questions, remember to follow the principles mentioned above and use the same format as the examples. Remember to use the same style and format as the example above. Do not return any contexts or answers, only the questions. Return each question between [Start of Question] and [End of Question] tags.
[Start of Context]
{context}
{{context}}
[End of Context]
start_tags: ["[Start of Question]"]
10 changes: 5 additions & 5 deletions src/instructlab/sdg/configs/skills/grounded_responses.yaml
@@ -14,26 +14,26 @@ examples: |
To better assist you with this task, here is an example:
[Start of Context]
{seed_context}
{{seed_context}}
[End of Context]
[Start of Question]
{seed_question}
{{seed_question}}
[End of Question]
[Start of Response]
{seed_response}
{{seed_response}}
[End of Response]
generation: |
Now generate a response to the following prompt. Remember to use the same style and format as the example above.
Return the response between [Start of Response] and [End of Response] tags.
[Start of Context]
{context}
{{context}}
[End of Context]
[Start of Question]
{question}
{{question}}
[End of Question]
Return the response between [Start of Response] and [End of Response] tags.
@@ -13,12 +13,12 @@ principles: |
7. The output should be an appropriate response to the input and the instruction. Long outputs are preferable.
examples: |
The task is {task_description}.
The task is {{task_description}}.
Here is an example to help you understand the type of questions that are asked for:
{seed_question}
{seed_response}
{{seed_question}}
{{seed_response}}
generation: |
Provide a single question and answer pair based on the examples.
@@ -13,16 +13,16 @@ principles: |
7. The output should be an appropriate response to the input and the instruction. Long outputs are preferable.
examples: |
The task is {task_description}.
The task is {{task_description}}.
Here is some context for the example question:
{seed_context}
{{seed_context}}
Here is an example to help you understand the type of questions that are asked for:
{seed_question}
{seed_response}
{{seed_question}}
{{seed_response}}
generation: |
Provide a single question and answer pair based on the example.
43 changes: 18 additions & 25 deletions tests/test_llmblock.py
@@ -103,31 +103,24 @@ def setUp(self):
self.mock_ctx.model_id = "test_model"
self.mock_pipe = MagicMock()

def test_knowledge_configs_with_invalid_sample(self):
configs = [
"evaluate_faithfulness.yaml",
"evaluate_question.yaml",
"evaluate_relevancy.yaml",
"generate_questions_responses.yaml",
"mcq_generation.yaml",
"spellcheck.yaml",
"simple_generate_qa.yaml",
]
for config in configs:
config_yaml = os.path.join(
resources.files("instructlab.sdg.configs.knowledge"), config
)
block = LLMBlock(
ctx=self.mock_ctx,
pipe=self.mock_pipe,
block_name=config,
config_path=config_yaml,
output_cols=[],
)
sample = {"foo": "bar"}
assert not block._validate(
block.prompt_template, sample
), f"knowledge config {config} validated even though it was given a sample with none of the expected fields"
def test_configs_with_invalid_sample(self):
for config_type in ["knowledge", "skills"]:
for config_yaml in resources.files(
f"instructlab.sdg.configs.{config_type}"
).iterdir():
if config_yaml.suffix != ".yaml":
continue
block = LLMBlock(
ctx=self.mock_ctx,
pipe=self.mock_pipe,
block_name=config_yaml.stem,
config_path=config_yaml,
output_cols=[],
)
sample = {"foo": "bar"}
assert not block._validate(
block.prompt_template, sample
), f"{config_type} config {config_yaml.name} validated even though it was given a sample with none of the expected fields"

def test_simple_generate_qa_with_valid_sample(self):
config_yaml = os.path.join(
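
As a footnote to the rewritten test above, here is a rough standalone illustration of the property it relies on. This is an assumed sketch, not the real LLMBlock._validate implementation, and the helper name is hypothetical: rendering a strict Jinja template against a sample that lacks the expected fields fails, so a sample like {"foo": "bar"} should not validate against any of the shipped configs.

# Hypothetical helper, assuming validation amounts to a strict Jinja render;
# the name sample_matches_template is illustrative, not from instructlab.sdg.
from jinja2 import Environment, StrictUndefined, UndefinedError

def sample_matches_template(template_str: str, sample: dict) -> bool:
    env = Environment(undefined=StrictUndefined)
    try:
        env.from_string(template_str).render(**sample)
        return True
    except UndefinedError:
        return False

assert not sample_matches_template("{{ question }}", {"foo": "bar"})
assert sample_matches_template("{{ question }}", {"question": "Why double braces?"})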
