An AI chat model that converts natural language to code, i.e., it takes natural language as input and generates the required embedded-platform code as output.
Using Hugging Face Models
This project leverages Hugging Face models for natural language processing tasks. To integrate these models into your code, follow these steps:
Install the required libraries:
pip install accelerate peft transformers
pip install trl
pip install sentencepiece
pip install -U einops
pip install datasets
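To confirm the installation succeeded, a quick sanity check like the one below can be run; it simply imports the main libraries and prints their versions (this snippet is illustrative and not part of the project's scripts):

```python
# Sanity check: import the main libraries and print their versions.
import accelerate
import datasets
import peft
import transformers
import trl

for lib in (transformers, accelerate, peft, trl, datasets):
    print(lib.__name__, lib.__version__)
```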
To interact with the Hugging Face API, you'll need to obtain an API key. Follow these steps:
- Visit the Hugging Face website and sign in or create an account.
- Once logged in, navigate to your account settings or developer settings.
- Generate an API key.
- Copy the generated API key.
To use the Hugging Face API in your project:
- Store the API key in a secure location, such as a configuration file or environment variable.
- Use the Hugging Face CLI to log in and verify your authentication status:
# Run this command in the terminal
huggingface-cli login
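Alternatively, you can authenticate from Python instead of the CLI. The sketch below uses `huggingface_hub.login` and assumes the API key is stored in an environment variable named `HF_TOKEN` (the variable name is an assumption; use whichever name you chose):

```python
# Authenticate with the Hugging Face Hub using a token stored in an
# environment variable (HF_TOKEN is an assumed name, adjust as needed).
import os
from huggingface_hub import login

login(token=os.environ["HF_TOKEN"])
```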
To fine-tune the base model, follow these steps:
- Open the terminal.
- Open the Python script `train.py` and load the base model from Hugging Face (a rough sketch of how these pieces fit together is shown after this list):
# The base model is downloaded from Hugging Face; "meta-llama/Llama-2-7b-hf" is the model's repository ID on the Hugging Face Hub.
model_name = "meta-llama/Llama-2-7b-hf"
Give the path and name under which to save the fine-tuned model:
new_model = "./7b1"
Load the dataset for training:
# The dataset should be in the same directory as the 'train.py' script; otherwise, give the full path.
# The dataset should be in JSONL format and follow the model's data format, which is described in the 'dataformat.py' script.
dataset = load_dataset("json", data_files="final2i.jsonl")
- Run the Python script `train.py`:
python3 train.py
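For orientation, here is a minimal sketch of what the fine-tuning setup inside `train.py` might look like, combining the pieces above with trl's `SFTTrainer` and a PEFT/LoRA config. The hyperparameters, the assumption that each JSONL record has a `text` field, and the exact argument names (which differ between trl versions) are all assumptions, not the project's actual configuration:

```python
# Hedged sketch of a LoRA fine-tuning setup; hyperparameters and argument
# names are assumptions and may need adapting to your trl/peft versions.
from datasets import load_dataset
from peft import LoraConfig
from transformers import TrainingArguments
from trl import SFTTrainer

model_name = "meta-llama/Llama-2-7b-hf"   # base model repository ID
new_model = "./7b1"                       # where the fine-tuned model is saved
dataset = load_dataset("json", data_files="final2i.jsonl", split="train")

# LoRA adapter configuration (illustrative values).
peft_config = LoraConfig(r=64, lora_alpha=16, lora_dropout=0.1, task_type="CAUSAL_LM")

training_args = TrainingArguments(
    output_dir=new_model,
    per_device_train_batch_size=4,
    num_train_epochs=1,
    learning_rate=2e-4,
)

trainer = SFTTrainer(
    model=model_name,        # SFTTrainer accepts a model ID or an already loaded model
    train_dataset=dataset,   # assumes each JSONL record exposes a "text" field
    peft_config=peft_config,
    args=training_args,
)
trainer.train()
trainer.save_model(new_model)
```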
To perform inference with the fine-tuned model, follow these steps:
- Open the terminal.
- Open the Python script `inference_pipeline.py` and load the fine-tuned model and tokenizer (a sketch of a simple inference pipeline is shown after this list):
model_name = "./7b1"
- Run the Python script `inference_pipeline.py`:
python3 inference_pipeline.py
- Enter the prompt and wait for the model to generate output.
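As a point of reference, a minimal sketch of how `inference_pipeline.py` might load the fine-tuned model and generate output with the transformers `pipeline` API is shown below; the generation parameters are illustrative assumptions, not the script's actual settings:

```python
# Hedged sketch of loading the fine-tuned model and generating text;
# generation parameters are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_name = "./7b1"   # path to the fine-tuned model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

generator = pipeline("text-generation", model=model, tokenizer=tokenizer)

prompt = input("Enter the prompt: ")
result = generator(prompt, max_new_tokens=256, do_sample=False)
print(result[0]["generated_text"])
```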
To perform inference with the fine-tuned model through a UI, follow these steps:
- Open a terminal on the DGX server.
- Open the Python script `inference_server.py` and load the fine-tuned model and tokenizer:
model_name = "./7b1"
- Run the Python script `inference_server.py` on the DGX server:
python3 inference_server.py
- Open a terminal on the client side.
- Run the Python script `inference_client.py` on the client side (a sketch of a minimal Streamlit client is shown after this list):
streamlit run --server.fileWatcherType none inference_client.py
- Enter the prompt and wait for the model to generate output from the server.
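For context, a minimal sketch of what a Streamlit client such as `inference_client.py` might look like is shown below. It assumes the server exposes an HTTP endpoint that accepts a JSON prompt and returns the generated text; the URL, port, and field names are hypothetical and must match whatever `inference_server.py` actually implements:

```python
# Hedged sketch of a Streamlit client; the server URL and the JSON field
# names are hypothetical and must match the real inference_server.py API.
import requests
import streamlit as st

SERVER_URL = "http://localhost:8000/generate"  # replace localhost with the DGX server address

st.title("Natural Language to Embedded Code")
prompt = st.text_area("Enter the prompt")

if st.button("Generate") and prompt:
    response = requests.post(SERVER_URL, json={"prompt": prompt}, timeout=300)
    response.raise_for_status()
    st.code(response.json().get("output", ""))
```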