Issues: huggingface/peft
#2171  LoRA + DeBERTa: loading model gives erratic, non-deterministic results (opened Oct 22, 2024 by jchook) [label: Invalid]
#2170  check for loftq_config attribute in LoraConfig (opened Oct 22, 2024 by sirluk)
#2161  Prompt Tuning Crash with Llama-3.2 in torch.embedding (opened Oct 18, 2024 by hrsmanian)
#2155  LoraConfig conflict when using layers_to_transform in LlamaModel (opened Oct 17, 2024 by Evan02580)
#2154  Tensor Expansion Size Mismatch During Forward Pass (opened Oct 16, 2024 by VecherVhatuX)
#2141  When I use peft to finetune llama2, the gpu memory keeps growing (opened Oct 10, 2024 by xuanzhangyang)
#2134  PEFT doesn't inject virtual tokens into generate forward pass (opened Oct 6, 2024 by Kami-chanw)
#2132  Key mismatch when trying to load a LORA adapter into an XLORA model (opened Oct 5, 2024 by p4arth)
#2123  PeftModelForCausalLM.generate ignores prompt tuning parameters unless use_cache=False (opened Oct 2, 2024 by mattlgarber)
#2115  Ineffective Fine-Tuning Bug: Using get_peft_model() Before Loading LoRA Produces Outputs Identical to the Base Model (opened Sep 30, 2024 by Hoper-J)
#2111  could not finetune gemma 2 9b with lora and fsdp (opened Sep 29, 2024 by imadoualid)
#2105  merge_and_unload docs do not clarify behaviour for quantized base models (opened Sep 26, 2024 by RonanKMcGovern)
#2100  Questions about original_module and modules_to_save.default (opened Sep 26, 2024 by dengchengxifrank)
#2097  loftq_utils.py depends on huggingface_hub.errors, which doesn't appear in some versions of huggingface_hub (opened Sep 25, 2024 by mashoutsider)
#2055  Loading lora weights for FLUX pipeline is extremely slow (opened Sep 8, 2024 by nachoal)