update website #10

Merged 3 commits on Aug 20, 2024
6 changes: 6 additions & 0 deletions .vscode/settings.json
@@ -8,4 +8,10 @@
   "editor.codeActionsOnSave": {
     "source.fixAll.eslint": "explicit"
   },
+  "flake8.interpreter": [
+    "/Users/goldpiggy/anaconda3/envs/uniflow/bin/python"
+  ],
+  "pylint.interpreter": [
+    "/Users/goldpiggy/anaconda3/envs/uniflow/bin/python"
+  ],
 }
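A note for other contributors: the interpreter paths added above are absolute and specific to one developer's machine. A minimal sketch of how to find the matching value for your own checkout, assuming you have the project's `uniflow` conda environment activated (the snippet is illustrative and not part of this PR):

```python
# Run inside the activated project environment (e.g. after `conda activate uniflow`)
# to print the interpreter path to paste into .vscode/settings.json.
import sys

print(sys.executable)  # e.g. /Users/<you>/anaconda3/envs/uniflow/bin/python
```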
11 changes: 6 additions & 5 deletions app/components/Agenda.tsx
@@ -2,12 +2,13 @@ import Section from './Section';

 const agendaItems = [
   'Section 1: Introduction to RAG and LLM Fine-Tuning (20 mins)',
-  'Section 2: Lab1: RAG pipeline (30 mins)',
-  'Section 3: Lab 2: LLM fine-tuning (30 mins)',
+  'Section 2: Lab setup (10 mins)',
+  'Section 3: Lab 1: Advanced Techniques in RAG (40 mins) - Richard Song',
   'Break (10 mins)',
-  'Section 4: The Pros and Cons of RAG and Fine-tuning (30 mins)',
-  'Section 5: Lab3: RAG + Fine-tuning and Benchmarking (45 mins)',
-  'Section 6: Summary and Q&A (15 mins)',
+  'Section 4: Lab 2: LLM Fine-Tuning (40 mins) - Yunfei Bai, Rachel Hu',
+  'Break (10 mins)',
+  'Section 5: Lab 3: RAG and Fine-Tuned Model Benchmarking (30 mins) - José Cassio dos Santos Junior',
+  'Section 6: Conclusion and Q&A (20 mins)',
 ];
 
 const Agenda = () => {
2 changes: 1 addition & 1 deletion app/components/Navbar.tsx
@@ -36,7 +36,7 @@ const Navbar = () => {
         text-neutral-100
       "
     >
-      KDD Workshop 2024
+      14:00 – 17:00, August 25, 2024
     </div>
   </div>
 </div>
36 changes: 22 additions & 14 deletions app/page.tsx
@@ -7,26 +7,34 @@ import TextContainer from './components/TextContainer';
 export default function Home() {
   return (
     <div className="pb-20">
-      <Hero title="The Pros and Cons of RAG and Fine-tuning" subtitle="Workshop time and date - TBD" center />
+      <Hero
+        title="Domain-Driven LLM Development: Insights into RAG and Fine-Tuning Practices"
+        subtitle="Hands On Tutorials at 2024 ACM SIGKDD International Conference on Knowledge
+        Discovery and Data Mining, Barcelona, Spain"
+        center
+      />
       <Section title="Abstract">
         <TextContainer
text="When building Large Language Model (LLM) applications on domain specific data, there are two prominent methods:
Retrieval Augmented Generation (RAG) and LLM Fine-Tuning (FT). RAG improves LLM responses by searching external
knowledge bases outside of its training data sources. RAG extends the capabilities of LLMs to specific domains
or an organization's internal knowledge base, without the need to retrain the model. On the other hand,
Fine-tuning approach updates LLM weights with domain-specific data to improve performance on specific tasks. The
fine-tuned model is particularly effective to learn new knowledge in a specific domain that is not covered by
the LLM pre-training. This tutorial will walk through the RAG and FT techniques, provide the insights of the
advantages and limitations, and share best practices of adopting the right methodology for your use cases. All
the methods will be introduced in a hands-on lab to demonstrate how the RAG and LLM fine tuning works, and how
they perform to handle domain specific LLM tasks. We will use uniflow and pykoi, an open source python library,
to implement the RAG and FT techniques in the tutorial."
text="To improve Large Language Model (LLM) performance on domain specific applications,
ML developers often leverage Retrieval Augmented Generation (RAG) and LLM Fine-Tuning.
RAG extends the capabilities of LLMs to specific domains or an organization's internal
knowledge base, without the need to retrain the model. On the other hand, Fine-Tuning
approach updates LLM weights with domain-specific data to improve performance on specific
tasks. The fine-tuned model is particularly effective to systematically learn new
comprehensive knowledge in a specific domain that is not covered by the LLM pre-training.
This tutorial walks through the RAG and Fine-Tuning techniques, discusses the insights of
their advantages and limitations, and provides best practices of adopting the methodologies
for the LLM tasks anduse cases. The hands-on labs demonstrate the advanced techniques to
optimize the RAG and fine-tuned LLM architecture that handles domain specific LLM tasks.
The labs in the tutorial are designed by using a set of open-source python libraries to
implement the RAG and fine-tuned LLM architecture."
         />
       </Section>
       <Agenda />
       <Speakers />
-      <Section title="Slides">
-        <TextContainer text="Slides will be available here after the workshop." />
+      <Section title="Contents">
+        <TextContainer text="Slides: Coming Soon" />
+        <TextContainer text="Lab Notebooks: Coming Soon" />
       </Section>
     </div>
   );
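As background for the abstract above: the RAG half of the comparison can be summarized in a few lines. Below is a minimal, self-contained Python sketch of the pattern the abstract describes: retrieve the most relevant document at query time and inject it into the prompt, leaving the model weights untouched. The toy knowledge base, overlap-based retriever, and `call_llm` stub are all hypothetical placeholders, not code from the tutorial's labs.

```python
# Toy in-memory knowledge base standing in for an organization's documents.
knowledge_base = [
    "RAG retrieves external documents at query time.",
    "Fine-tuning updates model weights with domain data.",
]

def retrieve(query: str, docs: list[str]) -> str:
    """Pick the document with the largest word overlap with the query (toy retriever)."""
    q = set(query.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call (e.g. a hosted LLM API)."""
    return f"[LLM answer conditioned on]: {prompt}"

query = "How does RAG use external documents?"
context = retrieve(query, knowledge_base)
# The model sees retrieved context in the prompt; no retraining is involved.
print(call_llm(f"Context: {context}\nQuestion: {query}"))
```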