This repo contains boilerplate code and examples showing how to use AI models provided via Huggingface.
You will need to generate a Huggingface API Token to access the models.
- Log in to your Huggingface account
- Click on your Profile badge (top right) --> Access Tokens
- Create a new token and select the following permissions:
  - Make calls to the serverless Inference API
  - Make calls to Inference Endpoints
- Once you have generated the token, save it in a secure location (see below for how to store it securely with the `keyring` package)
The provided functions will ask for your token the first time you use them by prompting you to run the following function:

```r
keyring::key_set("huggingface_API")
```
- Make sure you have the `keyring` package installed
- If you are having issues with an incorrect token, run the function again to update it