
Commit 04c2b55
add LLM-jp-3 172B (#418)
kaisugi authored Dec 28, 2024
1 parent 91d8484 commit 04c2b55
Showing 7 changed files with 46 additions and 1 deletion.
1 change: 1 addition & 0 deletions README.md
@@ -37,6 +37,7 @@
| | Architecture | Max Context Length | Training Data | Developer | License / Terms of Use |
|:---|:---:|:---:|:---:|:---:|:---:|
| [Sarashina2-8x70B](https://www.sbintuitions.co.jp/news/press/20241108_01/) | Mixtral<br>([8x70b (**465b**)](https://huggingface.co/sbintuitions/sarashina2-8x70b)) | 8,192 | Sparse Upcycling on Sarashina2 (70B) | SB Intuitions | Sarashina Model NonCommercial License |
| [LLM-jp-3 172B](https://www.nii.ac.jp/news/release/2024/1224.html) | Llama<br>([**172b**](https://huggingface.co/llm-jp/llm-jp-3-172b), [**172b**-instruct3](https://huggingface.co/llm-jp/llm-jp-3-172b-instruct3)) | 4,096 | Pre-training: [llm-jp-corpus-v3](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)<br>(**2.1T** tokens total)<br>Instruction Tuning: [ichikara-instruction](https://liat-aip.sakura.ne.jp/wp/llm%E3%81%AE%E3%81%9F%E3%82%81%E3%81%AE%E6%97%A5%E6%9C%AC%E8%AA%9E%E3%82%A4%E3%83%B3%E3%82%B9%E3%83%88%E3%83%A9%E3%82%AF%E3%82%B7%E3%83%A7%E3%83%B3%E3%83%87%E3%83%BC%E3%82%BF%E4%BD%9C%E6%88%90/), [answer-carefully](https://liat-aip.sakura.ne.jp/wp/answercarefully-dataset/), [magpie-sft-v1.0](https://huggingface.co/datasets/llm-jp/magpie-sft-v1.0), Daring-Anteater, FLAN, ichikara-instruction-format, AutoMultiTurnByCalm3-22B, ramdom-to-fixed-multiturn-Calm3, wizardlm8x22b-logical-math-coding-sft-ja, wizardlm8x22b-logical-math-coding-sft_additional-ja, Synthetic-JP-EN-Coding-Dataset-567k<br>DPO: synthetic data | Research and Development Center for Large Language Models (LLMC) | Pre-trained model: LLM-jp-3 172B Terms of Use<br>Post-trained model: llm-jp-3-172b-instruct3 Terms of Use |
| [LLM-jp-3 172B beta2](https://llmc.nii.ac.jp/topics/llm-jp-3-172b-beta2/) | Llama<br>([**172b**-beta2](https://huggingface.co/llm-jp/llm-jp-3-172b-beta2), [**172b**-beta2-instruct2](https://huggingface.co/llm-jp/llm-jp-3-172b-beta2-instruct2)) | 4,096 | Pre-training: part of [llm-jp-corpus-v3](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)<br>(**1.4T** tokens total)<br>Instruction Tuning: [ichikara-instruction](https://liat-aip.sakura.ne.jp/wp/llm%E3%81%AE%E3%81%9F%E3%82%81%E3%81%AE%E6%97%A5%E6%9C%AC%E8%AA%9E%E3%82%A4%E3%83%B3%E3%82%B9%E3%83%88%E3%83%A9%E3%82%AF%E3%82%B7%E3%83%A7%E3%83%B3%E3%83%87%E3%83%BC%E3%82%BF%E4%BD%9C%E6%88%90/), [answer-carefully](https://liat-aip.sakura.ne.jp/wp/answercarefully-dataset/), [magpie-sft-v1.0](https://huggingface.co/datasets/llm-jp/magpie-sft-v1.0), Daring-Anteater, FLAN, ichikara-instruction-format, AutoMultiTurnByCalm3-22B, ramdom-to-fixed-multiturn-Calm3, wizardlm8x22b-logical-math-coding-sft-ja, wizardlm8x22b-logical-math-coding-sft_additional-ja, Synthetic-JP-EN-Coding-Dataset-567k | Research and Development Center for Large Language Models (LLMC) | LLM-jp-3 172B beta2 Terms of Use |
| [LLM-jp-3 172B beta1](https://www.nii.ac.jp/news/release/2024/0917.html) | Llama<br>([**172b**-beta1](https://huggingface.co/llm-jp/llm-jp-3-172b-beta1), [**172b**-beta1-instruct](https://huggingface.co/llm-jp/llm-jp-3-172b-beta1-instruct)) | 4,096 | Pre-training: part of [llm-jp-corpus-v3](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)<br>(**0.7T** tokens total)<br>Instruction Tuning: [ichikara-instruction](https://liat-aip.sakura.ne.jp/wp/llm%E3%81%AE%E3%81%9F%E3%82%81%E3%81%AE%E6%97%A5%E6%9C%AC%E8%AA%9E%E3%82%A4%E3%83%B3%E3%82%B9%E3%83%88%E3%83%A9%E3%82%AF%E3%82%B7%E3%83%A7%E3%83%B3%E3%83%87%E3%83%BC%E3%82%BF%E4%BD%9C%E6%88%90/), [answer-carefully](https://liat-aip.sakura.ne.jp/wp/answercarefully-dataset/), Dolly Dataset, OASST1, OASST2, Aya Dataset, ichikara-instruction-format, Daring-Anteater, FLAN | Research and Development Center for Large Language Models (LLMC) | LLM-jp-3 172B beta1 Terms of Use |
| [LLM-jp-3 172B alpha](https://llmc.nii.ac.jp/topics/llm-jp-3-172b-alpha1-alpha2/) | Llama<br>([**172b**-alpha1](https://huggingface.co/llm-jp/llm-jp-3-172b-alpha1), [**172b**-alpha1-instruct](https://huggingface.co/llm-jp/llm-jp-3-172b-alpha1-instruct), [**172b**-alpha2](https://huggingface.co/llm-jp/llm-jp-3-172b-alpha2), [**172b**-alpha2-instruct](https://huggingface.co/llm-jp/llm-jp-3-172b-alpha2-instruct)) | 4,096 | Pre-training: part of [llm-jp-corpus-v3](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)<br>(alpha1: **0.7T** tokens, alpha2: **1.4T** tokens)<br>Instruction Tuning: [ichikara-instruction](https://liat-aip.sakura.ne.jp/wp/llm%E3%81%AE%E3%81%9F%E3%82%81%E3%81%AE%E6%97%A5%E6%9C%AC%E8%AA%9E%E3%82%A4%E3%83%B3%E3%82%B9%E3%83%88%E3%83%A9%E3%82%AF%E3%82%B7%E3%83%A7%E3%83%B3%E3%83%87%E3%83%BC%E3%82%BF%E4%BD%9C%E6%88%90/), [answer-carefully](https://liat-aip.sakura.ne.jp/wp/answercarefully-dataset/), Dolly Dataset, OASST1, OASST2, Aya Dataset, ichikara-instruction-format, Daring-Anteater, FLAN | Research and Development Center for Large Language Models (LLMC) | Apache 2.0 |
1 change: 1 addition & 0 deletions en/README.md
@@ -36,6 +36,7 @@ Please point out any errors on the [issues page](https://github.com/llm-jp/aweso
| | Architecture | Max Context Length | Training Data | Developer | License / Terms of Use |
|:---|:---:|:---:|:---:|:---:|:---:|
| [Sarashina2-8x70B](https://www.sbintuitions.co.jp/news/press/20241108_01/) | Mixtral<br>([8x70b (**465b**)](https://huggingface.co/sbintuitions/sarashina2-8x70b)) | 8,192 | Sparse Upcycling on Sarashina2 (70B) | SB Intuitions | Sarashina Model NonCommercial License |
| [LLM-jp-3 172B](https://www.nii.ac.jp/news/release/2024/1224.html) | Llama<br>([**172b**](https://huggingface.co/llm-jp/llm-jp-3-172b), [**172b**-instruct3](https://huggingface.co/llm-jp/llm-jp-3-172b-instruct3)) | 4,096 | Pre-training: [llm-jp-corpus-v3](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)<br>(**2.1T** tokens)<br>Instruction Tuning: [ichikara-instruction](https://liat-aip.sakura.ne.jp/wp/llm%E3%81%AE%E3%81%9F%E3%82%81%E3%81%AE%E6%97%A5%E6%9C%AC%E8%AA%9E%E3%82%A4%E3%83%B3%E3%82%B9%E3%83%88%E3%83%A9%E3%82%AF%E3%82%B7%E3%83%A7%E3%83%B3%E3%83%87%E3%83%BC%E3%82%BF%E4%BD%9C%E6%88%90/), [answer-carefully](https://liat-aip.sakura.ne.jp/wp/answercarefully-dataset/), [magpie-sft-v1.0](https://huggingface.co/datasets/llm-jp/magpie-sft-v1.0), Daring-Anteater, FLAN, ichikara-instruction-format, AutoMultiTurnByCalm3-22B, ramdom-to-fixed-multiturn-Calm3, wizardlm8x22b-logical-math-coding-sft-ja, wizardlm8x22b-logical-math-coding-sft_additional-ja, Synthetic-JP-EN-Coding-Dataset-567k<br>DPO: synthetic data | Research and Development Center for Large Language Models (LLMC) | Pre-trained model: LLM-jp-3 172B Terms of Use<br>Post-trained model: llm-jp-3-172b-instruct3 Terms of Use |
| [LLM-jp-3 172B beta2](https://llmc.nii.ac.jp/en/topics/llm-jp-3-172b-beta2/) | Llama<br>([**172b**-beta2](https://huggingface.co/llm-jp/llm-jp-3-172b-beta2), [**172b**-beta2-instruct2](https://huggingface.co/llm-jp/llm-jp-3-172b-beta2-instruct2)) | 4,096 | Pre-training: part of [llm-jp-corpus-v3](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)<br>(**1.4T** tokens)<br>Instruction Tuning: [ichikara-instruction](https://liat-aip.sakura.ne.jp/wp/llm%E3%81%AE%E3%81%9F%E3%82%81%E3%81%AE%E6%97%A5%E6%9C%AC%E8%AA%9E%E3%82%A4%E3%83%B3%E3%82%B9%E3%83%88%E3%83%A9%E3%82%AF%E3%82%B7%E3%83%A7%E3%83%B3%E3%83%87%E3%83%BC%E3%82%BF%E4%BD%9C%E6%88%90/), [answer-carefully](https://liat-aip.sakura.ne.jp/wp/answercarefully-dataset/), [magpie-sft-v1.0](https://huggingface.co/datasets/llm-jp/magpie-sft-v1.0), Daring-Anteater, FLAN, ichikara-instruction-format, AutoMultiTurnByCalm3-22B, ramdom-to-fixed-multiturn-Calm3, wizardlm8x22b-logical-math-coding-sft-ja, wizardlm8x22b-logical-math-coding-sft_additional-ja, Synthetic-JP-EN-Coding-Dataset-567k | Research and Development Center for Large Language Models (LLMC) | LLM-jp-3 172B beta2 Terms of Use |
| [LLM-jp-3 172B beta1](https://www.nii.ac.jp/en/news/release/2024/0917.html) | Llama<br>([**172b**-beta1](https://huggingface.co/llm-jp/llm-jp-3-172b-beta1), [**172b**-beta1-instruct](https://huggingface.co/llm-jp/llm-jp-3-172b-beta1-instruct)) | 4,096 | Pre-training: part of [llm-jp-corpus-v3](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)<br>(**0.7T** tokens)<br>Instruction Tuning: [ichikara-instruction](https://liat-aip.sakura.ne.jp/wp/llm%E3%81%AE%E3%81%9F%E3%82%81%E3%81%AE%E6%97%A5%E6%9C%AC%E8%AA%9E%E3%82%A4%E3%83%B3%E3%82%B9%E3%83%88%E3%83%A9%E3%82%AF%E3%82%B7%E3%83%A7%E3%83%B3%E3%83%87%E3%83%BC%E3%82%BF%E4%BD%9C%E6%88%90/), [answer-carefully](https://liat-aip.sakura.ne.jp/wp/answercarefully-dataset/), Dolly Dataset, OASST1, OASST2, Aya Dataset, ichikara-instruction-format, Daring-Anteater, FLAN | Research and Development Center for Large Language Models (LLMC) | LLM-jp-3 172B beta1 Terms of Use |
| [LLM-jp-3 172B alpha](https://llmc.nii.ac.jp/en/topics/llm-jp-3-172b-alpha1-alpha2/) | Llama<br>([**172b**-alpha1](https://huggingface.co/llm-jp/llm-jp-3-172b-alpha1), [**172b**-alpha1-instruct](https://huggingface.co/llm-jp/llm-jp-3-172b-alpha1-instruct), [**172b**-alpha2](https://huggingface.co/llm-jp/llm-jp-3-172b-alpha2), [**172b**-alpha2-instruct](https://huggingface.co/llm-jp/llm-jp-3-172b-alpha2-instruct)) | 4,096 | Pre-training: part of [llm-jp-corpus-v3](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)<br>(alpha1: **0.7T** tokens, alpha2: **1.4T** tokens)<br>Instruction Tuning: [ichikara-instruction](https://liat-aip.sakura.ne.jp/wp/llm%E3%81%AE%E3%81%9F%E3%82%81%E3%81%AE%E6%97%A5%E6%9C%AC%E8%AA%9E%E3%82%A4%E3%83%B3%E3%82%B9%E3%83%88%E3%83%A9%E3%82%AF%E3%82%B7%E3%83%A7%E3%83%B3%E3%83%87%E3%83%BC%E3%82%BF%E4%BD%9C%E6%88%90/), [answer-carefully](https://liat-aip.sakura.ne.jp/wp/answercarefully-dataset/), Dolly Dataset, OASST1, OASST2, Aya Dataset, ichikara-instruction-format, Daring-Anteater, FLAN | Research and Development Center for Large Language Models (LLMC) | Apache 2.0 |
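The new row points at standard Hugging Face checkpoints, so the instruct model should load through the usual `transformers` API. Below is a minimal sketch only, not from this commit: it assumes the checkpoint follows the standard Llama layout, that a chat template ships with the tokenizer, and that `accelerate` is installed for `device_map="auto"`; a 172B model needs on the order of 350 GB of GPU memory even in bfloat16.

```python
# Hedged sketch: loading llm-jp/llm-jp-3-172b-instruct3 via transformers.
# Assumes torch + transformers + accelerate, and enough aggregate GPU memory.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "llm-jp/llm-jp-3-172b-instruct3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # halves memory vs. float32
    device_map="auto",           # shard layers across available GPUs
)

# Assumption: the instruct3 tokenizer ships a chat template.
messages = [{"role": "user", "content": "自然言語処理とは何か"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```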
Binary file modified figures/parameter_size_overview_en.png
Binary file modified figures/parameter_size_overview_ja.png
43 changes: 42 additions & 1 deletion figures/scripts/parameter_size_overview.csv
@@ -1,4 +1,39 @@
Model,Lab,Parameters(B),Announced,Type
DeepSeek-V3,DeepSeek-AI,685.0,2024/12/01,EN-available
ModernBERT,International,0.395,2024/12/01,EN-available
Granite 3.1 8B,IBM,8.0,2024/12/01,EN-available
Bamba-9B,CMU,9.0,2024/12/01,EN-available
o1-2024-12-17,OpenAI,200.0,2024/12/01,EN-available
Falcon 3,TII,10.0,2024/12/01,EN-available
Command R7B,Cohere,7.0,2024/12/01,EN-available
Maya,Cohere,8.0,2024/12/01,EN-available
BLT,Meta AI,8.0,2024/12/01,EN-available
phi-4,Microsoft,14.0,2024/12/01,EN-available
Gemini 2.0 Flash exp,Google DeepMind,30.0,2024/12/01,EN-available
Moxin-7B,International,7.0,2024/12/01,EN-available
InternVL 2.5,Shanghai AI Laboratory/SenseTime,78.0,2024/12/01,EN-available
Llama 3.3,Meta AI,70.0,2024/12/01,EN-available
EXAONE-3.5,LG,32.0,2024/12/01,EN-available
Deepthought-8B,Ruliad,8.0,2024/12/01,EN-available
Sailor2,Sail,20.0,2024/12/01,EN-available
Pleias 1.0,PleIAs,3.0,2024/12/01,EN-available
o1,OpenAI,200.0,2024/12/01,EN-available
Nova Pro,Amazon,90.0,2024/12/01,EN-available
DisTrO 15B,Nous Research,15.0,2024/12/01,EN-available
INTELLECT-1,Prime Intellect,10.0,2024/11/01,EN-available
QwQ-32B,Alibaba,32.0,2024/11/01,EN-available
Teuken-7B,OpenGPT-X,7.0,2024/11/01,EN-available
OLMo 2,Allen AI,13.0,2024/11/01,EN-available
k0-math,Moonshot AI,100.0,2024/11/01,EN-available
Marco-o1,Alibaba,7.0,2024/11/01,EN-available
TÜLU 3,Allen AI,70.0,2024/11/01,EN-available
gpt-4o-2024-11-20,OpenAI,200.0,2024/11/01,EN-available
DeepSeek-R1-Lite,DeepSeek-AI,67.0,2024/11/01,EN-available
Xmodel-LM,XiaoduoAI,1.1,2024/11/01,EN-available
Pixtral Large,Mistral,124.0,2024/11/01,EN-available
f1,Fireworks,,2024/11/01,EN-available
Qwen2.5-Coder,Alibaba,32.5,2024/11/01,EN-available
Fox-1,TensorOpera,1.6,2024/11/01,EN-available
Hunyuan-Large,Tencent,389.0,2024/11/01,EN-available
SEA-LIONv3,AI Singapore,9.24,2024/11/01,EN-available
AMD OLMo,AMD,1.0,2024/11/01,EN-available
@@ -27,7 +62,7 @@ Gemini-1.5-Pro-002 ,Google DeepMind,1500.0,2024/09/01,EN-available
Qwen2.5,Alibaba,72.0,2024/09/01,EN-available
GRIN MoE,Microsoft,60.0,2024/09/01,EN-available
Data-Gemma,Google DeepMind,27.0,2024/09/01,EN-available
o1,OpenAI,200.0,2024/09/01,EN-available
o1-preview,OpenAI,200.0,2024/09/01,EN-available
Reader-LM,Jina AI,1.54,2024/09/01,EN-available
Pixtral-12b-240910,Mistral,12.0,2024/09/01,EN-available
DeepSeek-V2.5,DeepSeek-AI,236.0,2024/09/01,EN-available
@@ -60,6 +95,7 @@ Mathstral,Mistral,7.0,2024/07/01,EN-available
next-gen,DeepL,,2024/07/01,EN-available
SmolLM,Hugging Face,1.7,2024/07/01,EN-available
Mockingbird,Vectara,9.0,2024/07/01,EN-available
Step-2,StepFun,1000.0,2024/07/01,EN-available
H2O-Danube3-4B,H2O.ai,4.0,2024/07/01,EN-available
SenseNova 5.5,SenseTime,600.0,2024/07/01,EN-available
Helium 7B,Kyutai,7.0,2024/07/01,EN-available
@@ -97,6 +133,7 @@ ChuXin,Independent,1.6,2024/05/01,EN-available
RWKV-v6 Finch,RWKV,7.63,2024/05/01,EN-available
Granite Code,IBM,34.0,2024/05/01,EN-available
Qwen-Max,Alibaba,300.0,2024/05/01,EN-available
TinyStories,Microsoft,0.033,2024/04/01,EN-available
Tele-FLM,BAAI,52.0,2024/04/01,EN-available
Qwen-1.5 110B,Alibaba,111.0,2024/04/01,EN-available
Arctic,Snowflake AI Research,480.0,2024/04/01,EN-available
@@ -336,6 +373,9 @@ GPT-J,EleutherAI,6.0,2021/06/01,EN-available
ruGPT-3,Huawei/Sberbank,1.3,2021/02/01,EN-available
Switch,Google,1600.0,2021/01/01,EN-available
GPT-3,OpenAI,175.0,2020/05/01,EN-available
EON-8B,LinkedIn,8.0,2024/12/01,EN-unavailable
o3,OpenAI,5000.0,2024/12/01,EN-unavailable
Bi-Mamba,CMU,2.7,2024/11/01,EN-unavailable
SFR-LLaMA-3.1-70B-Judge,Salesforce,70.0,2024/09/01,EN-unavailable
Unnamed 1T,China Telecom Artificial Intelligence Research Institute,1000.0,2024/09/01,EN-unavailable
LTM-2-mini,Magic,20.0,2024/08/01,EN-unavailable
@@ -346,6 +386,7 @@ CriticGPT,OpenAI,,2024/06/01,EN-unavailable
ESM3,EvolutionaryScale,98.0,2024/06/01,EN-unavailable
PanGu 5.0 Super,Huawei,1000.0,2024/06/01,EN-unavailable
DCLM-Baseline 7B 2.6T,International,7.0,2024/06/01,EN-unavailable
LearnLM,Google DeepMind,1500.0,2024/05/01,EN-unavailable
xLSTM,ELLIS,2.7,2024/05/01,EN-unavailable
Med-Gemini-L 1.0,Google DeepMind,1500.0,2024/05/01,EN-unavailable
HLAT,Amazon,7.0,2024/04/01,EN-unavailable
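The two modified PNGs above (`figures/parameter_size_overview_en.png` and `_ja.png`) are presumably regenerated from this CSV. The repository's actual plotting script is not part of this diff, so the following is only an illustrative pandas/matplotlib sketch built from the visible schema `Model,Lab,Parameters(B),Announced,Type`; the grouping, styling, and output path are assumptions.

```python
# Hedged sketch: redrawing a parameter-size overview from the CSV in this commit.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("figures/scripts/parameter_size_overview.csv")
df["Announced"] = pd.to_datetime(df["Announced"], format="%Y/%m/%d")
df = df.dropna(subset=["Parameters(B)"])  # rows like "f1,Fireworks,," carry no size

fig, ax = plt.subplots(figsize=(12, 6))
for type_name, group in df.groupby("Type"):  # e.g. EN-available, EN-unavailable
    ax.scatter(group["Announced"], group["Parameters(B)"], label=type_name, s=12)
ax.set_yscale("log")  # sizes span 0.033B (TinyStories) to 5000B (the o3 estimate)
ax.set_xlabel("Announcement date")
ax.set_ylabel("Parameters (B)")
ax.legend()
fig.savefig("figures/parameter_size_overview_en.png", dpi=200)
```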
1 change: 1 addition & 0 deletions figures/scripts/parameter_size_overview_ja.csv
@@ -1,4 +1,5 @@
Model,Lab,Parameters(B),Announced,Type,Source(JP)
llm-jp-3-172b-instruct3,LLMC,172,2024/12/24,JP-available-scratch,https://www.nii.ac.jp/news/release/2024/1224.html
Sarashina2-8x70B,SB Intuitions,465,2024/11/08,JP-available-scratch,https://www.sbintuitions.co.jp/news/press/20241108_01/
日本語版 Gemma 2 2B,Google,2,2024/10/03,JP-available,https://developers-jp.googleblog.com/2024/10/gemma-2-for-japan.html
Gemma 2 Baku 2B,rinna,2,2024/10/03,JP-available,https://rinna.co.jp/news/2024/10/20241003.html
1 change: 1 addition & 0 deletions fr/README.md
@@ -36,6 +36,7 @@ Feel free to report errors on the [issues](https://github.com/l
| | Architecture | Max Context Length | Training Data | Developer | License / Terms of Use |
|:---|:---:|:---:|:---:|:---:|:---:|
| [Sarashina2-8x70B](https://www.sbintuitions.co.jp/news/press/20241108_01/) | Mixtral<br>([8x70b (**465b**)](https://huggingface.co/sbintuitions/sarashina2-8x70b)) | 8,192 | Sparse Upcycling on Sarashina2 (70B) | SB Intuitions | Sarashina Model NonCommercial License |
| [LLM-jp-3 172B](https://www.nii.ac.jp/news/release/2024/1224.html) | Llama<br>([**172b**](https://huggingface.co/llm-jp/llm-jp-3-172b), [**172b**-instruct3](https://huggingface.co/llm-jp/llm-jp-3-172b-instruct3)) | 4,096 | Pre-training: [llm-jp-corpus-v3](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)<br>(**2.1T** tokens)<br>Instruction Tuning: [ichikara-instruction](https://liat-aip.sakura.ne.jp/wp/llm%E3%81%AE%E3%81%9F%E3%82%81%E3%81%AE%E6%97%A5%E6%9C%AC%E8%AA%9E%E3%82%A4%E3%83%B3%E3%82%B9%E3%83%88%E3%83%A9%E3%82%AF%E3%82%B7%E3%83%A7%E3%83%B3%E3%83%87%E3%83%BC%E3%82%BF%E4%BD%9C%E6%88%90/), [answer-carefully](https://liat-aip.sakura.ne.jp/wp/answercarefully-dataset/), [magpie-sft-v1.0](https://huggingface.co/datasets/llm-jp/magpie-sft-v1.0), Daring-Anteater, FLAN, ichikara-instruction-format, AutoMultiTurnByCalm3-22B, ramdom-to-fixed-multiturn-Calm3, wizardlm8x22b-logical-math-coding-sft-ja, wizardlm8x22b-logical-math-coding-sft_additional-ja, Synthetic-JP-EN-Coding-Dataset-567k<br>DPO: synthetic data | Research and Development Center for Large Language Models (LLMC) | Pre-trained model: LLM-jp-3 172B Terms of Use<br>Post-trained model: llm-jp-3-172b-instruct3 Terms of Use |
| [LLM-jp-3 172B beta2](https://llmc.nii.ac.jp/en/topics/llm-jp-3-172b-beta2/) | Llama<br>([**172b**-beta2](https://huggingface.co/llm-jp/llm-jp-3-172b-beta2), [**172b**-beta2-instruct2](https://huggingface.co/llm-jp/llm-jp-3-172b-beta2-instruct2)) | 4,096 | Pre-training: part of [llm-jp-corpus-v3](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)<br>(**1.4T** tokens)<br>Instruction Tuning: [ichikara-instruction](https://liat-aip.sakura.ne.jp/wp/llm%E3%81%AE%E3%81%9F%E3%82%81%E3%81%AE%E6%97%A5%E6%9C%AC%E8%AA%9E%E3%82%A4%E3%83%B3%E3%82%B9%E3%83%88%E3%83%A9%E3%82%AF%E3%82%B7%E3%83%A7%E3%83%B3%E3%83%87%E3%83%BC%E3%82%BF%E4%BD%9C%E6%88%90/), [answer-carefully](https://liat-aip.sakura.ne.jp/wp/answercarefully-dataset/), [magpie-sft-v1.0](https://huggingface.co/datasets/llm-jp/magpie-sft-v1.0), Daring-Anteater, FLAN, ichikara-instruction-format, AutoMultiTurnByCalm3-22B, ramdom-to-fixed-multiturn-Calm3, wizardlm8x22b-logical-math-coding-sft-ja, wizardlm8x22b-logical-math-coding-sft_additional-ja, Synthetic-JP-EN-Coding-Dataset-567k | Research and Development Center for Large Language Models (LLMC) | LLM-jp-3 172B beta2 Terms of Use |
| [LLM-jp-3 172B beta1](https://www.nii.ac.jp/en/news/release/2024/0917.html) | Llama<br>([**172b**-beta1](https://huggingface.co/llm-jp/llm-jp-3-172b-beta1), [**172b**-beta1-instruct](https://huggingface.co/llm-jp/llm-jp-3-172b-beta1-instruct)) | 4,096 | Pre-training: part of [llm-jp-corpus-v3](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)<br>(**0.7T** tokens)<br>Instruction Tuning: [ichikara-instruction](https://liat-aip.sakura.ne.jp/wp/llm%E3%81%AE%E3%81%9F%E3%82%81%E3%81%AE%E6%97%A5%E6%9C%AC%E8%AA%9E%E3%82%A4%E3%83%B3%E3%82%B9%E3%83%88%E3%83%A9%E3%82%AF%E3%82%B7%E3%83%A7%E3%83%B3%E3%83%87%E3%83%BC%E3%82%BF%E4%BD%9C%E6%88%90/), [answer-carefully](https://liat-aip.sakura.ne.jp/wp/answercarefully-dataset/), Dolly Dataset, OASST1, OASST2, Aya Dataset, ichikara-instruction-format, Daring-Anteater, FLAN | Research and Development Center for Large Language Models (LLMC) | LLM-jp-3 172B beta1 Terms of Use |
| [LLM-jp-3 172B alpha](https://llmc.nii.ac.jp/en/topics/llm-jp-3-172b-alpha1-alpha2/) | Llama<br>([**172b**-alpha1](https://huggingface.co/llm-jp/llm-jp-3-172b-alpha1), [**172b**-alpha1-instruct](https://huggingface.co/llm-jp/llm-jp-3-172b-alpha1-instruct), [**172b**-alpha2](https://huggingface.co/llm-jp/llm-jp-3-172b-alpha2), [**172b**-alpha2-instruct](https://huggingface.co/llm-jp/llm-jp-3-172b-alpha2-instruct)) | 4,096 | Pre-training: part of [llm-jp-corpus-v3](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)<br>(alpha1: **0.7T** tokens, alpha2: **1.4T** tokens)<br>Instruction Tuning: [ichikara-instruction](https://liat-aip.sakura.ne.jp/wp/llm%E3%81%AE%E3%81%9F%E3%82%81%E3%81%AE%E6%97%A5%E6%9C%AC%E8%AA%9E%E3%82%A4%E3%83%B3%E3%82%B9%E3%83%88%E3%83%A9%E3%82%AF%E3%82%B7%E3%83%A7%E3%83%B3%E3%83%87%E3%83%BC%E3%82%BF%E4%BD%9C%E6%88%90/), [answer-carefully](https://liat-aip.sakura.ne.jp/wp/answercarefully-dataset/), Dolly Dataset, OASST1, OASST2, Aya Dataset, ichikara-instruction-format, Daring-Anteater, FLAN | Research and Development Center for Large Language Models (LLMC) | Apache 2.0 |
