
Commit

Setup examples folder
EricLBuehler committed Dec 31, 2023
1 parent 7175c0e commit 3d87264
Showing 2 changed files with 7 additions and 3 deletions.
README.md (4 changes: 3 additions & 1 deletion)
@@ -4,10 +4,12 @@ Mixture of LoRA Experts: leverage the power of fine-tuned LoRA experts by employ
MoLE works by learning the alpha scaling values for LoRA adapters, which are frozen. These learned alpha values are used to
gate the LoRA experts in a dense fashion. This method has several advantages:

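As a rough intuition, a dense gate computes a weighted sum over all expert outputs rather than routing to a single expert. Below is a minimal sketch of that idea, not MoLE's actual implementation; the expert count, tensor shapes, and variable names are illustrative assumptions.

```python
import torch

# Minimal illustration of dense gating (not MoLE's actual code): every
# frozen LoRA expert contributes to the output, weighted by a learned alpha.
expert_outputs = [torch.randn(4, 16) for _ in range(2)]  # stand-ins for frozen adapter outputs
alpha_logits = torch.nn.Parameter(torch.zeros(2))        # the only trainable parameters
alphas = torch.softmax(alpha_logits, dim=0)              # dense gate weights, sum to 1

# Weighted sum over all experts: no hard routing, every expert is mixed in.
mixed = sum(a * out for a, out in zip(alphas, expert_outputs))
```

Because the adapters themselves stay frozen, only the gate parameters receive gradients during fine-tuning.
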
See the [examples](examples) folder for some examples of how to get started with MoLE.

## Advantages and features
- Dense gating of experts allows mixing
- Because the MoLE layer is the only trainable layer, fine-tuning has few trainable parameters
- Easy-to-use API: `add_mole_to_model` (see the sketch below)
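
For orientation, here is a sketch of what a call to `add_mole_to_model` can look like, mirroring `examples/basic.ipynb` from this commit. The base model ID and adapter paths are placeholders, and it assumes `model` is a PEFT model with both adapters already loaded.

```python
import mole
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Assumed setup: the base model ID and adapter paths below are placeholders.
base = AutoModelForCausalLM.from_pretrained("gpt2")
model = PeftModel.from_pretrained(base, "path/to/adapter_1", adapter_name="adapter_1")
model.load_adapter("path/to/adapter_2", adapter_name="adapter_2")

# Convert the PEFT model into a MoLE model with a 2-expert classifier,
# mirroring the call in examples/basic.ipynb.
mole.add_mole_to_model(model=model,
                       mole_config=mole.MoLEClassifierConfig(2),
                       adapters=['adapter_1', 'adapter_2'],
                       peft_config=model.peft_config)
```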

## Installation
Pending a pip release, `git clone` this repository and run `pip install -e .`.
train.ipynb → examples/basic.ipynb (6 changes: 4 additions & 2 deletions)
@@ -7,7 +7,6 @@
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"from transformers import AutoModelForCausalLM\n",
"from peft import PeftModel"
]
@@ -100,7 +99,10 @@
"metadata": {},
"outputs": [],
"source": [
"mole.add_mole_to_model(model=model, mole_config=mole.MoLEClassifierConfig(2),adapters=['adapter_1', 'adapter_2'], peft_config=model.peft_config)"
"mole.add_mole_to_model(model = model,\n",
" mole_config=mole.MoLEClassifierConfig(2),\n",
" adapters=['adapter_1', 'adapter_2'],\n",
" peft_config=model.peft_config)"
]
},
{
