A wrapper around LLMs that biases their behaviour using prompts and contexts in a way that is transparent to end-users.
sh install.sh
- text-davinci-003
- Flan-T5, developed by Google
- ChatGPT and GPT-4, through the paid API
We currently support three types of prompts:
- Manual prompts: these prompts are hard-coded and were the first included in this project.
- Awesome ChatGPT Prompts: the system also transparently supports this large Hugging Face dataset.
- Custom prompts: any user can add custom prompts through a file.
- (In progress) Support for awesome-gpt4 prompts.
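As an illustration of the custom-prompts idea, here is a minimal sketch of what a user-supplied prompts file and its loading could look like. The JSON format, the file name, and the `build_query` helper are all assumptions for this example; the actual file format SmartyGPT expects may differ.

```python
import json
import os
import tempfile

# Hypothetical format: a JSON mapping from a prompt name to its context text.
prompts = {
    "DoctorAdvice": "You are a careful medical assistant. Answer cautiously.",
    "Lawyer": "You are a lawyer. Point out the relevant legal considerations.",
}
path = os.path.join(tempfile.mkdtemp(), "prompts.json")
with open(path, "w") as f:
    json.dump(prompts, f)

# Loading side: read the file and prepend the named context to a question,
# so the end-user never has to type the context themselves.
with open(path) as f:
    loaded = json.load(f)

def build_query(prompt_name: str, question: str) -> str:
    """Combine a stored context with the user's question (illustrative only)."""
    return f"{loaded[prompt_name]}\n\n{question}"

print(build_query("DoctorAdvice", "Can Vitamin D cure COVID-19?"))
```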
Users should create a config.txt file like the following so the library can read the OpenAI API key:
[auth]
api_key = xxxxxxxxxxxxxxxxxx
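This file follows standard INI syntax, so it can be parsed with Python's built-in configparser. The sketch below writes a sample file and reads the key back; SmartyGPT's own loading code may differ, this only shows the file format is valid.

```python
import configparser
import os
import tempfile

# Write a sample config.txt with the structure shown above.
sample = "[auth]\napi_key = xxxxxxxxxxxxxxxxxx\n"
path = os.path.join(tempfile.mkdtemp(), "config.txt")
with open(path, "w") as f:
    f.write(sample)

# Parse it: the key lives under the [auth] section as "api_key".
config = configparser.ConfigParser()
config.read(path)
api_key = config["auth"]["api_key"]
print(api_key)
```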
from smartygpt import SmartyGPT, Models

if __name__ == "__main__":
    s = SmartyGPT(prompt="DoctorAdvice", config_file="/home/user/config.txt")
    result = s.wrapper("Can Vitamin D cure COVID-19?")
    print(result)
Check the Colab notebook or the test folder for more examples and functionality.
The main purpose of this project is to gather, in a single environment, all the resources (models, prompts, APIs, etc.) related to LLMs.
Moreover, we also think from an end-user perspective: it is highly unlikely that a user would include a complex context in a query just to bias a model's response. This library solves that by hiding the implementation details from end-users.
More features and models are on the way! Feel free to open a PR, file an issue, or contact me at [email protected]
The software is provided "as is" and "with all faults" without warranties of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose and non-infringement. No warranty is provided that the software will be free from defects or that operation of the software will be uninterrupted. Your use of the software and any other material or services downloaded or made available to you through the software is at your own discretion and risk, and you are solely responsible for any potential damage resulting from their use.