Commit 18de0c3

Update README.md
1 parent 5c24329 commit 18de0c3

File tree: 1 file changed, README.md (+1, −25 lines)

README.md
@@ -1,11 +1,8 @@
-# Kotoba Recipes
+# MoE Recipes
 
 # Table of Contents
 
 1. [Installation](#installation)
-2. [Instruction Tuning](#instruction-tuning)
-3. [LLM Continual Pre-Training](#llm-continual-pre-training)
-4. [Support Models](#support-models)
 
 ## Installation
 
@@ -31,24 +28,3 @@ To install the FlashAttention, run the following command: (GPU is required)
 pip install ninja packaging wheel
 pip install flash-attn --no-build-isolation
 ```
-
-### ABCI
-
-If you use [ABCI](https://abci.ai/) to run the experiments, install scripts are available in `kotoba-recipes/install.sh`.
-
-## Instruction Tuning
-
-[scripts/abci/instruction](scripts/abci/instruction) contains the scripts to run instruction tunings on ABCI.
-
-If you want to use custom instructions, you need to modify the `src/llama_recipes/datasets/alpaca_dataset.py`.
-
-## LLM Continual Pre-Training
-
-[scripts/abci/](scripts/abci/) contains the scripts to run LLM continual pre-training on ABCI.
-7B, 13B, 70B directories contain the scripts to run the experiments with the corresponding model size (Llama 2).
-
-## Support Models
-
-- [meta Llama 2](https://huggingface.co/meta-llama/Llama-2-7b-hf)
-- [mistral 7b](https://huggingface.co/mistralai/Mistral-7B-v0.1)
-- [swallow](https://huggingface.co/tokyotech-llm/Swallow-70b-hf)

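For context on the sections removed above: the Support Models list points at the Llama 2, Mistral 7B, and Swallow checkpoints on Hugging Face, and the Installation section builds flash-attn. The sketch below is not part of this commit or the repository's scripts; it only illustrates, assuming a recent `transformers` release with FlashAttention-2 support and a CUDA GPU, how one of those listed checkpoints could be loaded after the install steps shown in the diff.

```python
# Hypothetical usage sketch (not from this repository): load one of the
# checkpoints listed under "Support Models", using the flash-attn build
# installed via the commands shown in the Installation section.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Any of the listed repos works here, e.g. "mistralai/Mistral-7B-v0.1"
# or "tokyotech-llm/Swallow-70b-hf" (the 70B model needs far more memory).
model_name = "meta-llama/Llama-2-7b-hf"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,               # half precision to fit a single GPU
    attn_implementation="flash_attention_2",  # requires the flash-attn install above
).to("cuda")

prompt = "MoE Recipes is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```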