## Clone the repository:

```bash
git clone https://github.com/
```

## Create a virtual environment:

```bash
conda create -n cpullama python=3.8 -y
conda activate cpullama
```

## Install the requirements:

```bash
pip install -r requirements.txt
```

## Run the app:

```bash
python app.py
```

Download the quantized model from the link provided below and keep it in the `model` directory:
## Download the Llama 2 Model:
llama-2-7b-chat.ggmlv3.q4_0.bin
## From the following link:
https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGML/tree/main
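Once the model file is in place, you can sanity-check it with a few lines of Python before running the app. This is only a minimal sketch assuming the `ctransformers` package is available; `app.py` may load the model through a different wrapper, and the prompt here is purely illustrative.

```python
# Minimal CPU-inference smoke test for the downloaded GGML model.
# Assumes llama-2-7b-chat.ggmlv3.q4_0.bin was placed in the model/ directory.
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "model/llama-2-7b-chat.ggmlv3.q4_0.bin",
    model_type="llama",  # tells ctransformers this is a LLaMA-family GGML file
)

# Generate a short completion on the CPU to confirm the model loads correctly.
print(llm("Q: What is a quantized model? A:", max_new_tokens=64, temperature=0.7))
```

If this prints a sensible completion, the model path and quantized file are set up correctly and `python app.py` should be able to use them.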