# finetuning-large-language-models

Here are 21 public repositories matching this topic...

Unlock the potential of finetuning Large Language Models (LLMs). Learn from industry experts and discover when to apply finetuning, how to prepare data, and how to effectively train and evaluate LLMs.

  • Updated Oct 20, 2023
  • Jupyter Notebook
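A common first step in the data-preparation work this course covers is rendering raw instruction/response pairs into a single training text and holding out an evaluation split. The sketch below is illustrative only: the prompt template, field names, and split logic are assumptions, not taken from the repository itself.

```python
# Minimal sketch of data preparation for instruction fine-tuning.
# The "### Instruction:" template and the dict field names are
# illustrative assumptions, not from any specific course or repo.

def format_example(example: dict) -> str:
    """Render one instruction/response pair into a single training string."""
    return (
        "### Instruction:\n"
        f"{example['instruction']}\n\n"
        "### Response:\n"
        f"{example['response']}"
    )

def prepare_dataset(examples: list[dict], eval_fraction: float = 0.1):
    """Format all examples and hold out a slice for evaluation."""
    formatted = [format_example(e) for e in examples]
    n_eval = max(1, int(len(formatted) * eval_fraction))
    return formatted[:-n_eval], formatted[-n_eval:]

raw = [
    {"instruction": "Define finetuning.",
     "response": "Adapting a pretrained model to a task with further training."},
    {"instruction": "Name one parameter-efficient method.",
     "response": "LoRA."},
]
train, eval_set = prepare_dataset(raw, eval_fraction=0.5)
```

Keeping formatting in one small function makes it easy to swap prompt templates later without touching the tokenization or training code.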

This project enhances the LLaMA-2 model using Quantized Low-Rank Adaptation (QLoRA) and other parameter-efficient fine-tuning techniques to optimize its performance for specific NLP tasks. The improved model is demonstrated through a Streamlit application, showcasing its capabilities in real-time interactive settings.

  • Updated Apr 18, 2024
  • Jupyter Notebook
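The core idea behind the QLoRA-style adaptation described above can be sketched numerically: the pretrained weight W stays frozen while a small low-rank update B @ A is trained. This is a minimal NumPy illustration of the LoRA math only (the quantization half of QLoRA is omitted); the dimensions, rank, and scaling are illustrative assumptions.

```python
# Minimal sketch of low-rank adaptation (LoRA), the idea underlying QLoRA:
# freeze the pretrained weight W and train only the small factors A and B.
# Sizes here are kept small for illustration; real hidden sizes are larger
# (e.g. 4096 for LLaMA-2 7B), and QLoRA additionally quantizes W to 4 bits.
import numpy as np

d, r = 1024, 8          # hidden size and LoRA rank (illustrative)
alpha = 16              # LoRA scaling hyperparameter

rng = np.random.default_rng(0)
W = rng.standard_normal((d, d))          # frozen pretrained weight
A = rng.standard_normal((r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                     # trainable up-projection, zero-init

def adapted_forward(x: np.ndarray) -> np.ndarray:
    """Forward pass with the LoRA update folded in: (W + (alpha/r) B A) x."""
    return W @ x + (alpha / r) * (B @ (A @ x))

# Parameter-efficiency: only A and B are trained.
full_params = W.size
lora_params = A.size + B.size
print(f"trainable fraction: {lora_params / full_params:.4%}")  # → 1.5625%
```

Because B is zero-initialized, the adapted model starts out exactly equal to the frozen base model, so fine-tuning begins from the pretrained behavior rather than a perturbed one.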

This project fine-tunes BERT on the SQuAD dataset to build a question-answering (QA) system, improving the accuracy and efficiency of answer extraction. It addresses challenges in contextual understanding and ambiguity handling to enhance user experience and system performance.

  • Updated Jun 1, 2024
  • Jupyter Notebook
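A SQuAD-style extractive QA head produces per-token start and end logits, and the answer is the span maximizing their combined score. The decoding sketch below shows that step in isolation; the token list and logit values are made up for illustration, since a real fine-tuned BERT model would produce them.

```python
# Minimal sketch of span decoding for BERT-style extractive QA (SQuAD):
# pick the (start, end) token pair with the highest combined logit score.
# The tokens and logits below are fabricated for illustration only.

def best_span(start_logits, end_logits, max_len=30):
    """Return the (start, end) pair maximizing start+end score, start <= end."""
    best = (0, 0)
    best_score = float("-inf")
    for s, s_logit in enumerate(start_logits):
        for e in range(s, min(s + max_len, len(end_logits))):
            score = s_logit + end_logits[e]
            if score > best_score:
                best_score = score
                best = (s, e)
    return best, best_score

tokens = ["The", "Eiffel", "Tower", "is", "in", "Paris"]
start_logits = [0.1, 0.2, 0.1, 0.0, 0.3, 2.5]
end_logits   = [0.0, 0.1, 0.2, 0.1, 0.2, 3.0]
(s, e), _ = best_span(start_logits, end_logits)
print(" ".join(tokens[s:e + 1]))  # → Paris
```

Constraining `start <= end` and capping the span length are what keep the decoder from returning degenerate answers, which is part of the ambiguity handling such systems need.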
