
RepairLLaMA: Efficient Representations and Fine-Tuned Adapters for Program Repair #118

Ali94k commented Sep 11, 2024

I would like to add the following publication:

RepairLLaMA: Efficient Representations and Fine-Tuned Adapters for Program Repair

Automated Program Repair (APR) has evolved significantly with the advent of Large Language Models (LLMs). Fine-tuning LLMs for program repair is a recent avenue of research with many unexplored dimensions. Existing work mostly fine-tunes LLMs with naive code representations and does not scale to frontier models. To address this problem, we propose RepairLLaMA, a novel program repair approach that 1) identifies optimal code representations for APR with fine-tuned models, and 2) pioneers a state-of-the-art parameter-efficient fine-tuning (PEFT) technique for program repair. This results in RepairLLaMA producing a highly effective "program repair adapter" for fixing bugs with AI. Our experiments demonstrate the validity of both concepts. First, fine-tuning adapters with program-repair-specific code representations enables the model to use meaningful repair signals and produce better patches. Second, parameter-efficient fine-tuning helps training converge and clearly contributes to RepairLLaMA's effectiveness in fixing bugs outside the fine-tuning data distribution. Overall, RepairLLaMA correctly fixes 144 Defects4J v2 bugs and 109 HumanEval-Java bugs, outperforming all baselines.

  • DBLP key: journals/corr/abs-2312-15698
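
For readers unfamiliar with the PEFT idea the abstract refers to, here is a minimal sketch of LoRA adapter fine-tuning using the Hugging Face `peft` library. The base model name and the hyperparameters below are illustrative assumptions, not necessarily the configuration used in the paper:

```python
# Illustrative sketch of parameter-efficient fine-tuning (LoRA) for program repair.
# The base model and hyperparameters are assumptions for illustration only.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "codellama/CodeLlama-7b-hf"  # assumed base model, for illustration
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA injects small low-rank adapter matrices into the attention projections;
# the base model's weights stay frozen, so only the adapter is trained.
lora_config = LoraConfig(
    r=16,                                 # adapter rank (illustrative)
    lora_alpha=32,                        # scaling factor (illustrative)
    target_modules=["q_proj", "v_proj"],  # which projections get adapters
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of parameters train
```

The resulting adapter is a small set of weights that can be saved and shipped independently of the base model, which is what makes a "program repair adapter" practical to distribute.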

mechtaev commented Oct 4, 2024

Thanks! According to our current policy, we only add peer-reviewed papers. We may change this later, but for now I will wait until it appears at a conference or in a journal.
