[Contribution] Add Parameter-Efficient Fine-Tuning Section #2673

@MaFi77

Overview

I would like to contribute a new section on parameter-efficient fine-tuning methods (LoRA and Adapters) to the "Fine-Tuning BERT" chapter.

Motivation

The current chapter (chapter_natural-language-processing-applications/finetuning-bert.md) explains how to fine-tune BERT for various tasks, but doesn't address the practical challenges many learners face:

  • Limited GPU memory for fine-tuning large models
  • Storage costs when serving multiple fine-tuned models
  • Long training times with constrained resources

Modern techniques such as LoRA (Low-Rank Adaptation) and Adapter modules address exactly these constraints and are widely used in practice, most visibly in fine-tunes of Llama and Stable Diffusion. Adding this content would help learners understand contemporary NLP practice.

Proposed Changes

I've prepared a comprehensive section that includes:

  1. LoRA (Low-Rank Adaptation)

    • Mathematical formulation of the low-rank decomposition
    • Concrete example: 98% parameter reduction (12K vs 589K parameters; see the first sketch after this list)
    • Real-world applications and performance benchmarks
  2. Adapter Modules

    • Architecture details (bottleneck design with residual connections; see the second sketch after this list)
    • Parameter efficiency analysis
    • Trade-offs vs LoRA
  3. Comparison Table

    • Full Fine-Tuning vs LoRA vs Adapters
    • Memory, speed, and use-case comparisons
  4. Practical Guidelines

    • When to use each method
    • Resource constraint considerations
  5. Updated Summary & New Exercise

    • Exercise comparing storage/memory requirements
  6. Bibliography References

    • Hu et al. 2022 (LoRA - ICLR)
    • Houlsby et al. 2019 (Adapters - ICML)
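
To make item 1 concrete: for a pretrained weight $W_0 \in \mathbb{R}^{d \times k}$, LoRA freezes $W_0$ and learns an update $\Delta W = BA$ with $B \in \mathbb{R}^{d \times r}$, $A \in \mathbb{R}^{r \times k}$, and $r \ll \min(d, k)$, so only $r(d + k)$ parameters are trained instead of $dk$. With $d = k = 768$ and $r = 8$ that is $8 \times 1536 = 12{,}288$ trainable parameters against $768 \times 768 = 589{,}824$ frozen ones, which is where the 12K-vs-589K figure comes from. Below is a minimal PyTorch sketch of the idea (the class name LoRALinear, the init scheme, and the alpha scaling are illustrative choices of mine, not taken from the patch):

```python
import torch
from torch import nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update: h = W0 x + (alpha/r) B A x."""
    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        # Stand-in for the pretrained weight; frozen during fine-tuning
        self.weight = nn.Parameter(torch.empty(out_features, in_features))
        nn.init.xavier_uniform_(self.weight)
        self.weight.requires_grad = False
        # Low-rank factors: A projects down to rank r, B projects back up.
        # B starts at zero so the layer initially matches the pretrained one.
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        return x @ self.weight.T + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

layer = LoRALinear(768, 768, r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
frozen = sum(p.numel() for p in layer.parameters() if not p.requires_grad)
print(trainable, frozen)  # 12288 trainable vs 589824 frozen, a ~98% reduction
```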

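Similarly for item 2, a sketch of the Houlsby-style bottleneck adapter (the hidden size of 768 and bottleneck width of 64 are assumed values for illustration):

```python
from torch import nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual add."""
    def __init__(self, hidden_size=768, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, hidden_size)

    def forward(self, x):
        # The residual connection keeps the module near the identity,
        # so inserting it barely perturbs the pretrained network at first.
        return x + self.up(self.act(self.down(x)))
```

Per adapter this adds 2 × 768 × 64 weights plus biases, roughly 99K parameters, still small next to a full transformer layer.
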
Statistics

  • Lines added: 109 (93 in chapter + 16 in bibliography)
  • Files modified: 2 (finetuning-bert.md, d2l.bib)

Patch File

I've created a Git patch file that can be applied directly:

  • Patch file: 0001-Add-section-on-parameter-efficient-fine-tuning-to-BE.patch (attached)

You can apply it with:

git am 0001-Add-section-on-parameter-efficient-fine-tuning-to-BE.patch
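
To inspect it before committing, git apply --stat prints a per-file summary and git apply --check verifies that the patch applies cleanly; both are standard Git commands, suggested here only as a convenience.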

[0001-Add_section-on-parameter-efficient-fine-tuning-to-BE.patch](https://github.com/user-attachments/files/23431921/0001-Add_section-on-parameter-efficient-fine-tuning-to-BE.patch)
