
Commit 978ab37

update docs (#1084)
1 parent 2b36c7d commit 978ab37

21 files changed (+81 −45 lines)

docs/assets/favicon.ico

816 Bytes
Binary file not shown.

docs/en/api/peft/MAIN_CLASSES/PEFT_TYPE.md

-2
This file was deleted.

docs/en/api/peft/MAIN_CLASSES/Tuner.md

-2
This file was deleted.
File renamed without changes.

docs/en/api/peft/index.md

+21
@@ -0,0 +1,21 @@
+# PEFT (Parameter-Efficient Fine-Tuning)
+
+MindNLP's PEFT (Parameter-Efficient Fine-Tuning) module adapts pretrained language models to downstream tasks by training only a small set of additional parameters while the original weights stay frozen. It lets users tailor models to specific tasks or domains at a fraction of the memory and compute cost of full fine-tuning.
+
+## Introduction
+
+PEFT methods freeze the pretrained backbone and inject small trainable components such as low-rank update matrices (LoRA-style methods), learned scaling vectors (IA3), or soft prompts (Prompt Tuning). Only these components are optimized during fine-tuning, which sharply reduces the number of trainable parameters and the size of task-specific checkpoints while retaining performance comparable to full fine-tuning.
+
+## Supported PEFT Algorithms
+
+| Algorithm | Description |
+|------------------|--------------------------------------------------------------|
+| [AdaLoRA](./tuners/adalora.md) | Adaptive Low-Rank Adaptation: LoRA with adaptive rank (budget) allocation across weight matrices |
+| [Adaption_Prompt](./tuners/adaption_prompt.md) | Adaption Prompt: learnable prompts with zero-initialized attention, as in LLaMA-Adapter |
+| [IA3](./tuners/ia3.md) | Infused Adapter by Inhibiting and Amplifying Inner Activations: learned vectors that rescale keys, values and feed-forward activations |
+| [LoKr](./tuners/lokr.md) | Low-Rank adaptation with Kronecker products (LoKr) |
+| [LoRA](./tuners/lora.md) | Low-Rank Adaptation: trainable low-rank update matrices added to frozen weights |
+| [Prompt Tuning](./tuners/prompt_tuning.md) | Learns soft prompt embeddings prepended to the input while the model itself stays frozen |
+
+Each algorithm offers a different trade-off between trainable-parameter count and downstream performance, allowing users to adapt models to diverse tasks and domains efficiently.
+
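The new index page above describes the adapters only at a high level. As a usage illustration, here is a minimal sketch of applying a LoRA adapter with mindnlp.peft. It assumes the module mirrors the Hugging Face PEFT interface (`LoraConfig`, `TaskType`, `get_peft_model`, `print_trainable_parameters`); the checkpoint name and hyperparameters are illustrative and not taken from this commit.

```python
# Hedged sketch: wrap a pretrained backbone with a LoRA adapter.
# Assumption: mindnlp.peft mirrors the Hugging Face PEFT interface
# (LoraConfig, TaskType, get_peft_model); names may differ slightly.
from mindnlp.transformers import AutoModelForSequenceClassification
from mindnlp.peft import LoraConfig, TaskType, get_peft_model

# Frozen backbone; only the injected low-rank matrices will be trained.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-cased", num_labels=2
)

peft_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,  # sequence classification task
    r=8,                         # rank of the low-rank update matrices
    lora_alpha=16,               # scaling factor applied to the update
    lora_dropout=0.1,            # dropout on the LoRA branch
)

peft_model = get_peft_model(model, peft_config)
peft_model.print_trainable_parameters()  # typically well under 1% of all weights
```

Switching algorithms should only require swapping the config class; `AdaLoraConfig` and `AdaptionPromptConfig` appear in the tuner diffs below, and the remaining tuners are assumed to follow the same naming convention.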
File renamed without changes.
File renamed without changes.
@@ -1,2 +1,2 @@
 :::mindnlp.peft.tuners.adalora.config.AdaLoraConfig
-::: mindnlp.peft.tuners.adalora.model.AdaLoraModel
+:::mindnlp.peft.tuners.adalora.model.AdaLoraModel
@@ -1,2 +1,2 @@
 :::mindnlp.peft.tuners.adaption_prompt.config.AdaptionPromptConfig
-::: mindnlp.peft.tuners.adaption_prompt.model.AdaptionPromptModel
+:::mindnlp.peft.tuners.adaption_prompt.model.AdaptionPromptModel
@@ -1,2 +1,2 @@
 :::mindnlp.peft.tuners.ia3.config
-::: mindnlp.peft.tuners.ia3.model
+:::mindnlp.peft.tuners.ia3.model
@@ -1,2 +1,2 @@
 :::mindnlp.peft.tuners.lokr.config
-::: mindnlp.peft.tuners.lokr.model
+:::mindnlp.peft.tuners.lokr.model
@@ -1,2 +1,2 @@
 :::mindnlp.peft.tuners.lora.config
-::: mindnlp.peft.tuners.lora.model
+:::mindnlp.peft.tuners.lora.model
+2
@@ -0,0 +1,2 @@
+:::mindnlp.peft.tuners.prompt_tuning.config
+:::mindnlp.peft.tuners.prompt_tuning.model
File renamed without changes.

docs/en/api/transformers/index.md

Whitespace-only changes.

docs/en/api/transformers/models/index.md

Whitespace-only changes.

docs/en/api/transformers/pipeline/index.md

Whitespace-only changes.
+2
@@ -0,0 +1,2 @@
+gradio
+mdtex2html

llm/inference/chatglm/web_demo.py

+3-4
@@ -2,9 +2,9 @@
 import gradio as gr
 import mdtex2html
 
-model = AutoModelForSeq2SeqLM.from_pretrained("THUDM/chatglm-6b").half()
+model = AutoModelForSeq2SeqLM.from_pretrained("ZhipuAI/ChatGLM-6B", mirror='modelscope').half()
 model.set_train(False)
-tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b")
+tokenizer = AutoTokenizer.from_pretrained("ZhipuAI/ChatGLM-6B", mirror='modelscope')
 
 """Override Chatbot.postprocess"""
 
@@ -81,8 +81,7 @@ def reset_state():
     with gr.Row():
         with gr.Column(scale=4):
            with gr.Column(scale=12):
-                user_input = gr.Textbox(show_label=False, placeholder="Input...", lines=10).style(
-                    container=False)
+                user_input = gr.Textbox(show_label=False, placeholder="Input...", lines=10, container=False)
            with gr.Column(min_width=32, scale=1):
                 submitBtn = gr.Button("Submit", variant="primary")
         with gr.Column(scale=1):
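For readers of the demo change above, a self-contained sketch of the updated loading path follows. The import line and the `chat()` call are assumptions (the diff only shows the `from_pretrained` and `set_train` lines); `chat()` mirrors the upstream ChatGLM-6B helper and may differ in MindNLP's port.

```python
# Hedged sketch of the updated loading path: weights now come from the
# ModelScope mirror ("ZhipuAI/ChatGLM-6B") rather than the THUDM hub repo.
# The import path and the chat() helper are assumptions; only the
# from_pretrained/set_train lines are shown in the diff above.
from mindnlp.transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model = AutoModelForSeq2SeqLM.from_pretrained(
    "ZhipuAI/ChatGLM-6B", mirror='modelscope'
).half()
model.set_train(False)  # inference mode, as in web_demo.py
tokenizer = AutoTokenizer.from_pretrained("ZhipuAI/ChatGLM-6B", mirror='modelscope')

# Single-turn query using ChatGLM's conversational helper (assumed available).
response, history = model.chat(tokenizer, "Hello", history=[])
print(response)
```

The gradio change in the second hunk drops the deprecated `.style(container=False)` call and passes `container=False` directly to `gr.Textbox`, matching the constructor-keyword form required by newer gradio releases.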

mkdocs.yml

+48-32
@@ -10,27 +10,38 @@ nav:
   - Supported Models: supported_models.md
   - How-To Contribute: contribute.md
   - API Reference:
-    - accelerate: api/accelerate.md
-    - data: api/data.md
-    - dataset: api/dataset.md
-    - engine: api/engine.md
-    - modules: api/modules.md
-    - parallel: api/parallel.md
-    - peft:
-        MAIN CLASSES:
-          PEFT model: api/peft/MAIN_CLASSES/peft_model.md
-          PEFT mapping: api/peft/MAIN_CLASSES/mapping.md
-          Configuration: api/peft/MAIN_CLASSES/config.md
-        ADAPTERS:
-          AdaLoRA: api/peft/ADAPTERS/AdaLoRA.md
-          Adaption_Prompt: api/peft/ADAPTERS/Adaption_Prompt.md
-          IA3: api/peft/ADAPTERS/IA3.md
-          LoKr: api/peft/ADAPTERS/LoKr.md
-          LoRA: api/peft/ADAPTERS/LoRA.md
-    - sentence: api/sentence.md
-    - transformers: api/transformers.md
-    - trl: api/trl.md
-    - utils: api/utils.md
+    - Accelerate: api/accelerate.md
+    - Data: api/data.md
+    - Dataset: api/dataset.md
+    - Engine: api/engine.md
+    - Modules: api/modules.md
+    - Parallel: api/parallel.md
+    - PEFT:
+      - api/peft/index.md
+      - tuners:
+          AdaLoRA: api/peft/tuners/adalora.md
+          Adaption_Prompt: api/peft/tuners/adaption_prompt.md
+          IA3: api/peft/tuners/ia3.md
+          LoKr: api/peft/tuners/lokr.md
+          LoRA: api/peft/tuners/lora.md
+          Prompt tuning: api/peft/tuners/prompt_tuning.md
+      - utils:
+        - merge_utils: api/peft/utils/merge_utils.md
+      - config: api/peft/config.md
+      - mapping: api/peft/mapping.md
+      - peft_model: api/peft/peft_model.md
+
+    - Sentence: api/sentence.md
+    - Transformers:
+      - api/transformers/index.md
+      - generation:
+        - api/transformers/generation/index.md
+      - models:
+        - api/transformers/models/index.md
+      - pipeline:
+        - api/transformers/pipeline/index.md
+    - TRL: api/trl.md
+    - Utils: api/utils.md
   - Notes:
     - Change Log: notes/changelog.md
     - Code of Conduct: notes/code_of_conduct.md
@@ -41,24 +52,26 @@ theme:
   palette:
     - media: "(prefers-color-scheme: light)"
       scheme: default
-      primary: black
+      primary: indigo
+      accent: indigo
       toggle:
-        icon: material/weather-sunny
+        icon: material/brightness-7
         name: Switch to dark mode
     - media: "(prefers-color-scheme: dark)"
       scheme: slate
       primary: black
+      accent: indigo
       toggle:
-        icon: material/weather-night
-        name: Switch to light mode
+        icon: material/brightness-4
+        name: Switch to system preference
   features:
     # - navigation.instant # see https://github.com/ultrabug/mkdocs-static-i18n/issues/62
     - navigation.tracking
     - navigation.tabs
-    - navigation.sections
     - navigation.indexes
     - navigation.top
     - navigation.footer
+    - navigation.path
     - toc.follow
     - search.highlight
     - search.share
@@ -69,6 +82,9 @@ theme:
     - content.code.copy
     - content.code.select
     - content.code.annotations
+  favicon: assets/favicon.ico
+  icon:
+    logo: logo
 
 markdown_extensions:
   # Officially Supported Extensions
@@ -146,9 +162,9 @@ plugins:
 extra:
   generator: false
   social:
-    # - icon: fontawesome/solid/paper-plane
-    #   link: mailto:[email protected]
-    # - icon: fontawesome/brands/github
-    #   link: https://github.com/mindspore-lab/mindcv
-    # - icon: fontawesome/brands/zhihu
-    #   link: https://www.zhihu.com/people/mindsporelab
+    - icon: fontawesome/solid/paper-plane
+      link: mailto:[email protected]
+    - icon: fontawesome/brands/github
+      link: https://github.com/mindspore-lab/mindnlp
+    - icon: fontawesome/brands/zhihu
+      link: https://www.zhihu.com/people/lu-yu-feng-46-1
