# Data Mixing

As one of the last steps in data generation, the SDG library can optionally mix multiple datasets into a single output dataset, in proportions specified by a recipe YAML file. The current implementation is designed around mostly static recipes that are used by default for every `ilab data generate` run. There is not yet an easy way to specify a recipe for an individual generation run, but it is possible to change the default recipe used for skills and/or knowledge data generation.
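
To make the recipe behavior concrete, here is a minimal sketch of the idea behind recipe-driven mixing, assuming the Hugging Face `datasets` and `PyYAML` packages: each dataset listed in the recipe is loaded, sampled according to its `sampling_size`, and concatenated into a single output. This is an illustration of the concept only, not the SDG library's actual implementation, and the file names and the fraction-versus-count handling of `sampling_size` are assumptions for the example.

```python
# Illustrative sketch only -- not the SDG library's real mixing code.
# Assumes `pip install datasets pyyaml` and a recipe file in the current
# directory; "skills.yaml" and "mixed_output.jsonl" are hypothetical names.
import yaml
from datasets import load_dataset, concatenate_datasets

with open("skills.yaml") as f:
    recipe = yaml.safe_load(f)

parts = []
for entry in recipe["datasets"]:
    ds = load_dataset("json", data_files=entry["path"], split="train")
    size = entry.get("sampling_size", 1.0)
    # Assumption: a float is a fraction of the dataset, an int is a sample count.
    n = int(len(ds) * size) if isinstance(size, float) else int(size)
    parts.append(ds.shuffle(seed=42).select(range(min(n, len(ds)))))

mixed = concatenate_datasets(parts)
mixed.to_json("mixed_output.jsonl")
```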

The primary intended use of this is to mix in an optional pregenerated dataset maintained by the InstructLab community, which can improve training results when attempting to teach new skills to a model. This process is still somewhat manual, and the steps are documented below.

## Using the InstructLab Community Pregenerated Dataset

To use the [InstructLab Community pregenerated dataset](https://huggingface.co/datasets/instructlab/InstructLabCommunity) with all skills training, we first need to create a default recipe that tells the mixing step to include this dataset when mixing generated skills data. This recipe is picked up automatically if placed at `default_data_recipes/skills.yaml` under one of several possible locations: `/home/<user>/.local/share/instructlab/sdg`, `/usr/local/share/instructlab/sdg`, or `/usr/share/instructlab/sdg`. The exact list of possible locations is platform-dependent and can be enumerated with a Python command like the one below:

```bash
python3 -c '
import os, platformdirs
print(list(platformdirs.PlatformDirs(
    appname=os.path.join("instructlab", "sdg"), multipath=True
).iter_data_dirs()))'
```

For this example, we'll assume you want to place the default data recipe under the `~/.local/share/instructlab/sdg/` platform directory.

Ensure that directory exists and create the recipe YAML file:

```bash
mkdir -p ~/.local/share/instructlab/sdg/default_data_recipes/
cat <<EOF > ~/.local/share/instructlab/sdg/default_data_recipes/skills.yaml
datasets:
  - path: instructlab_community.jsonl
    sampling_size: 1.0
EOF
```

Next, download the `instructlab_community.jsonl` file from [the InstructLabCommunity dataset repository](https://huggingface.co/datasets/instructlab/InstructLabCommunity/tree/main) and place it in `~/.local/share/instructlab/datasets/`, where the recipe we wrote above will pick it up. If you prefer to keep this pregenerated dataset in a different location, specify the absolute path to that location in your recipe YAML file instead of the relative path shown here.
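
If you would rather script the download than fetch the file from the Hub web page, a small `huggingface_hub` snippet like the one below works; the use of `huggingface_hub` and the target directory shown here are assumptions for this example rather than part of the documented workflow.

```python
# Sketch: download the pregenerated dataset into the default datasets directory.
# Assumes `pip install huggingface_hub`; adjust target_dir if you keep datasets elsewhere.
import os

from huggingface_hub import hf_hub_download

target_dir = os.path.expanduser("~/.local/share/instructlab/datasets")
os.makedirs(target_dir, exist_ok=True)

hf_hub_download(
    repo_id="instructlab/InstructLabCommunity",
    repo_type="dataset",
    filename="instructlab_community.jsonl",
    local_dir=target_dir,
)
```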

Then, during your next `ilab data generate` run, you should see output like this near the end:

```text
INFO 2024-08-06 16:08:42,069 instructlab.sdg.datamixing:123: Loading dataset from /home/user/.local/share/instructlab/datasets/instructlab_community.jsonl ...
Generating train split: 13863 examples [00:00, 185935.73 examples/s]
INFO 2024-08-06 16:08:42,414 instructlab.sdg.datamixing:125: Dataset columns: ['messages', 'metadata', 'id']
INFO 2024-08-06 16:08:42,414 instructlab.sdg.datamixing:126: Dataset loaded with 13863 samples
```

Your resulting `skills_train_*.jsonl` file will now contain the additional 13k+ examples from the pregenerated dataset, which should help ensure that subsequent skills training does not regress on already-learned skills while the model is taught the new skill.
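
As a quick sanity check after generation, you can count the samples in the most recent mixed skills file. The sketch below assumes the default datasets location used elsewhere in this document; the `*` portion of the file name varies per run, so the sketch simply picks the newest match.

```python
# Sketch: count samples in the newest mixed skills output.
# The directory and file name pattern are assumptions based on the defaults above.
import glob
import os

pattern = os.path.expanduser("~/.local/share/instructlab/datasets/skills_train_*.jsonl")
latest = max(glob.glob(pattern), key=os.path.getmtime)  # newest matching file

with open(latest) as f:
    count = sum(1 for _ in f)  # one JSON object per line

print(f"{latest}: {count} samples")
```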
