You can customize Foundation Models (FMs) on Amazon Bedrock through fine-tuning. We provide examples showing how to set up the required resources, fine-tune and evaluate the customized model, and clean up the resources after running the examples.
- 00_setup.ipynb - Setup for running the customization notebooks, both for fine-tuning and for continued pre-training with Amazon Bedrock. This notebook creates a set of IAM roles and an S3 bucket that are used by the other notebooks in this module (a rough sketch of these steps appears after this list).
- 02_fine-tune_and_evaluate_llama2_bedrock_summarization.ipynb - An end-to-end workflow for fine-tuning, provisioning, and evaluating Foundation Models (FMs) in Amazon Bedrock. We fine-tune Meta Llama 2 13B, create Provisioned Throughput for the fine-tuned model, test invocation of the provisioned model, and finally evaluate the fine-tuned model with fmeval on summarization accuracy metrics (see the sketches after this list).
- 03_cleanup.ipynb - Cleans up all the resources created in the previous notebooks to avoid unnecessary costs (see the final sketch after this list).
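For orientation, here is a minimal sketch of the kind of setup 00_setup.ipynb performs: creating an S3 bucket for training data and job outputs, and an IAM role that Amazon Bedrock can assume to access that bucket. The bucket and role names below are placeholders, not the exact names the notebook uses.

```python
import json
import boto3

region = "us-east-1"  # assumption: adjust to the region you run the notebooks in
session = boto3.session.Session(region_name=region)
account_id = session.client("sts").get_caller_identity()["Account"]

bucket_name = f"bedrock-customization-{account_id}-{region}"  # placeholder name
role_name = "BedrockCustomizationRole"                        # placeholder name

# S3 bucket that will hold training data and customization job outputs.
s3 = session.client("s3")
if region == "us-east-1":
    s3.create_bucket(Bucket=bucket_name)
else:
    s3.create_bucket(
        Bucket=bucket_name,
        CreateBucketConfiguration={"LocationConstraint": region},
    )

# IAM role that the Bedrock customization job assumes to read/write the bucket.
iam = session.client("iam")
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "bedrock.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}
role = iam.create_role(
    RoleName=role_name,
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
iam.put_role_policy(
    RoleName=role_name,
    PolicyName="bedrock-customization-s3-access",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
            "Resource": [
                f"arn:aws:s3:::{bucket_name}",
                f"arn:aws:s3:::{bucket_name}/*",
            ],
        }],
    }),
)
print("Role ARN:", role["Role"]["Arn"])
```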
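The fine-tuning notebook follows roughly the flow sketched below: submit a model customization job, wait for it to finish, purchase Provisioned Throughput for the resulting custom model, and invoke it through the Bedrock runtime. The base model identifier, hyperparameter values, S3 keys, and resource names are illustrative assumptions; confirm the fine-tunable Llama 2 13B identifier for your region with `list_foundation_models(byCustomizationType="FINE_TUNING")`.

```python
import json
import time
import boto3

bedrock = boto3.client("bedrock")
bedrock_runtime = boto3.client("bedrock-runtime")

# Placeholder inputs -- replace with the outputs of the setup notebook.
role_arn = "arn:aws:iam::123456789012:role/BedrockCustomizationRole"
bucket_name = "bedrock-customization-123456789012-us-east-1"
# Assumption: verify the fine-tunable Llama 2 13B identifier for your region via
# bedrock.list_foundation_models(byCustomizationType="FINE_TUNING").
base_model_id = "meta.llama2-13b-v1"

# 1. Submit the fine-tuning job. The training data is a JSONL file of
#    {"prompt": ..., "completion": ...} records uploaded to S3 beforehand.
job = bedrock.create_model_customization_job(
    jobName="llama2-13b-summarization-ft",
    customModelName="llama2-13b-summarization",
    roleArn=role_arn,
    baseModelIdentifier=base_model_id,
    customizationType="FINE_TUNING",
    hyperParameters={"epochCount": "1", "batchSize": "1", "learningRate": "0.00005"},
    trainingDataConfig={"s3Uri": f"s3://{bucket_name}/train/train.jsonl"},
    outputDataConfig={"s3Uri": f"s3://{bucket_name}/output/"},
)

# 2. Poll until the customization job reaches a terminal state.
while True:
    status = bedrock.get_model_customization_job(jobIdentifier=job["jobArn"])["status"]
    if status in ("Completed", "Failed", "Stopped"):
        break
    time.sleep(60)

# 3. Purchase Provisioned Throughput for the custom model and wait for it to
#    become InService before invoking.
custom_model_arn = bedrock.get_custom_model(
    modelIdentifier="llama2-13b-summarization")["modelArn"]
provisioned = bedrock.create_provisioned_model_throughput(
    provisionedModelName="llama2-13b-summarization-pt",
    modelId=custom_model_arn,
    modelUnits=1,
)
provisioned_model_arn = provisioned["provisionedModelArn"]
while bedrock.get_provisioned_model_throughput(
        provisionedModelId=provisioned_model_arn)["status"] == "Creating":
    time.sleep(60)

# 4. Invoke the provisioned model with the Llama 2 text-completion schema.
response = bedrock_runtime.invoke_model(
    modelId=provisioned_model_arn,
    body=json.dumps({
        "prompt": "Summarize the following text:\n...",
        "max_gen_len": 256,
        "temperature": 0.1,
    }),
)
print(json.loads(response["body"].read())["generation"])
```

Omitting `commitmentDuration` requests no-commitment Provisioned Throughput, which can be deleted as soon as testing is finished; both the customization job and the provisioned capacity incur charges while they exist.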
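For the evaluation step, the notebook uses the fmeval library's summarization accuracy metrics. The sketch below wires a Bedrock model runner to fmeval's SummarizationAccuracy algorithm; the dataset fields, templates, and constructor arguments shown here are assumptions and may differ between fmeval versions and from the notebook's actual configuration.

```python
# Assumed fmeval imports -- check your installed version's documentation.
from fmeval.constants import MIME_TYPE_JSONLINES
from fmeval.data_loaders.data_config import DataConfig
from fmeval.model_runners.bedrock_model_runner import BedrockModelRunner
from fmeval.eval_algorithms.summarization_accuracy import SummarizationAccuracy

# Placeholder ARN from the provisioning step in the previous sketch.
provisioned_model_arn = "arn:aws:bedrock:us-east-1:123456789012:provisioned-model/..."

# Hypothetical JSONL test set with "document" and "summary" fields per record.
data_config = DataConfig(
    dataset_name="summarization_test",
    dataset_uri="test.jsonl",
    dataset_mime_type=MIME_TYPE_JSONLINES,
    model_input_location="document",
    target_output_location="summary",
)

# Model runner that calls the provisioned fine-tuned model using the
# Llama 2 text-completion request/response schema.
model_runner = BedrockModelRunner(
    model_id=provisioned_model_arn,
    content_template='{"prompt": $prompt, "max_gen_len": 256, "temperature": 0}',
    output="generation",
)

# Run the summarization accuracy evaluation over the dataset.
eval_output = SummarizationAccuracy().evaluate(
    model=model_runner,
    dataset_config=data_config,
    prompt_template="Summarize the following text:\n$model_input",
    save=True,
)
print(eval_output)
```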
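Finally, 03_cleanup.ipynb removes everything the earlier notebooks created. A minimal sketch, reusing the placeholder names from the sketches above:

```python
import boto3

bedrock = boto3.client("bedrock")
iam = boto3.client("iam")
s3 = boto3.resource("s3")

# Placeholder identifiers from the earlier sketches.
provisioned_model_arn = "arn:aws:bedrock:us-east-1:123456789012:provisioned-model/..."
custom_model_name = "llama2-13b-summarization"
bucket_name = "bedrock-customization-123456789012-us-east-1"
role_name = "BedrockCustomizationRole"

# Provisioned Throughput accrues charges until it is deleted.
bedrock.delete_provisioned_model_throughput(provisionedModelId=provisioned_model_arn)

# Delete the custom model produced by the fine-tuning job.
bedrock.delete_custom_model(modelIdentifier=custom_model_name)

# Empty and remove the S3 bucket, then the IAM role and its inline policy.
bucket = s3.Bucket(bucket_name)
bucket.objects.all().delete()
bucket.delete()

iam.delete_role_policy(RoleName=role_name, PolicyName="bedrock-customization-s3-access")
iam.delete_role(RoleName=role_name)
```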
We welcome community contributions! Please ensure your sample aligns with AWS best practices, and update the Contents section of this README with a link to your sample and a short description.