
Text Gen Plugin Prompt Templates

Prompt Templates are YAML files used to customize the prompt sent to your LLM. A template completely replaces the prompt generated by Auto-GPT, to account for the various ways different LLMs expect their conversations to be formatted.

As of the current release, there is one template, with a second in development.

Monolithic Template

Use this template to modify the single system message sent to your LLM. Unedited, the template closely replicates the single system prompt Auto-GPT sends to GPT-4 or GPT-3.5 Turbo. Exceptionally capable LLMs with more than a 2048-token context window may be able to produce a successful response for Auto-GPT with little or no modification, but most LLMs will likely need significant changes to work correctly.

Staged Template (Coming Soon)

If your LLM has only a 2048-token context window, or is simply unable to produce the response Auto-GPT needs from a single prompt, the Staged Template format will let you create a series of prompts that guide the LLM toward the correct response. This functionality is in active development.

How to Use

  1. Copy the appropriate template to a location on your computer.

  2. Edit the YAML file to adjust any prompt statements sent to the LLM.

  3. Edit your Auto-GPT .env file and add the following line:

    LOCAL_LLM_PROMPT_PROFILE=path/to/yaml/file
    
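As a rough illustration of what the plugin might do with that setting (this is a hypothetical sketch, not the plugin's actual loader; only the `LOCAL_LLM_PROMPT_PROFILE` variable name comes from the step above):

```python
import os
import yaml  # PyYAML


def load_prompt_profile():
    """Load the YAML prompt profile pointed to by LOCAL_LLM_PROMPT_PROFILE.

    Returns None when the variable is unset, mirroring the fallback to the
    generic monolithic prompt. Hypothetical helper for illustration only.
    """
    path = os.getenv("LOCAL_LLM_PROMPT_PROFILE")
    if not path:
        return None
    with open(path, "r", encoding="utf-8") as f:
        return yaml.safe_load(f)
```

If the variable is unset, the plugin falls back to the generic monolithic prompt described in the next section.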

Is it Necessary to Use a Template?

Strictly speaking, it is not necessary. If no template is configured, the generic monolithic prompt and data format are sent to the LLM with the following modifications:

  • Typically, an array of messages is sent to OpenAI's API to accommodate the structure used to tell GPT-4 and GPT-3.5 how to behave and respond. The first entry is the System message; subsequent messages are attributed to "User".
  • When sent to a local LLM, the system message and all other messages are converted to plain text in the same style as a conversation the user might have with the LLM. The agent is referred to by name, and the name "User" is prepended to all other blocks of text.

Example

OpenAI might receive a message roughly similar to this:

[
    {
        "role": "system",
        "content": "This is the generic prompt and background information sent from Auto-GPT to GPT-4 and GPT-3.5"
    },
    {
        "role": "user",
        "content": "This language is used to tell OpenAI that it is time to respond"
    }
]

This message might be translated to something similar to this:

User: This is the generic prompt and background information sent from Auto-GPT to GPT-4 and GPT-3.5. If the monolithic template is used, your own customized message will be sent here.
User: Any subsequent statements in the array are attributed to the user or the agent, as sent from Auto-GPT.
AgentName: 

AgentName is replaced by the name of the agent introduced in the first User message. Ending the prompt this way is what most LLMs need in order to trigger a response given the context.
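The translation described above can be sketched in a few lines of Python. This is a rough illustration of the idea, not the plugin's actual implementation; the function name, the sample messages, and the agent name are all assumptions:

```python
def messages_to_text(messages, agent_name):
    """Flatten an OpenAI-style message array into a plain-text conversation.

    Every message is attributed to "User", and the prompt ends with the
    agent's name to cue the LLM to respond. Illustrative sketch only.
    """
    lines = [f"User: {m['content']}" for m in messages]
    lines.append(f"{agent_name}: ")
    return "\n".join(lines)


prompt = messages_to_text(
    [
        {"role": "system", "content": "You are AgentName, an autonomous agent."},
        {"role": "user", "content": "Determine your next command."},
    ],
    "AgentName",
)
```

The resulting string ends with "AgentName: ", so a completion-style LLM naturally continues the conversation in the agent's voice.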