Could a Large Language Model (LLM) handle the role of Prompt engineer?
PromptGPT is a systematic approach for creating autonomous agents with Large Language Models (LLMs) such as GPT-4, GPT-3.5-turbo, Anthropic's models, or similar LLMs fine-tuned with Reinforcement Learning from Human Feedback (RLHF).
The core principles of PromptGPT:
- Autonomous workflows: Objective-agnostic, self-learning & self-supporting within their tasks.
- Simplicity: Prompt Programs are defined only once.
- User interface agnostic or Embedded experience: Jupyter notebooks, Chatbots & Voice assistants.
- Safety: Settings & Exit criteria.
- The LLM API call is always made through a single function.
- Prompts are versatile: High-level cognitive tasks are pre-programmed, while low-level tasks are automatically assigned roles, tasks, and resources.
- The overall system remains simple even in complex workflows.
- Takes advantage of external resources: API calls, memory, etc.
- Simple, zero configuration required.
- Option to add new Prompt programs or edit existing ones manually.
- Safety & Steerability. For example, an automatic exit is triggered for forbidden tasks/roles or when token limits are exceeded.
- External memory, APIs.
- Self-discover new resources.
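To make the principles above concrete, here is a minimal sketch of what a single LLM call function with built-in exit criteria might look like. All names (`llm_call`, `llm_api`, `FORBIDDEN_ROLES`, `MAX_TOKENS`) are illustrative assumptions, not the project's actual API; the model endpoint is passed in as a generic callable.

```python
# Sketch: one function is the sole call site for the LLM API, and it
# enforces safety/exit criteria before every call.

FORBIDDEN_ROLES = {"malware author"}  # example exit criterion (assumed)
MAX_TOKENS = 4096                     # example token limit (assumed)

def llm_call(role, task, resources, llm_api):
    """Single entry point for every LLM API call.

    Builds the prompt from an automatically assigned role, task, and
    resource list, and triggers an automatic exit when the role is
    forbidden or the token limit is exceeded.
    """
    if role.lower() in FORBIDDEN_ROLES:
        raise SystemExit(f"Forbidden role: {role}")   # automatic exit

    prompt = (
        f"Role: {role}\n"
        f"Task: {task}\n"
        f"Resources: {', '.join(resources)}"
    )
    if len(prompt.split()) > MAX_TOKENS:              # crude token estimate
        raise SystemExit("Token limit exceeded")      # automatic exit

    return llm_api(prompt)                            # the one API call site
```

Because the model endpoint is injected as a callable, the same function works unchanged behind a Jupyter notebook, a chatbot, or a voice assistant, which matches the interface-agnostic principle above.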
PromptGPT is an open source project, and anyone can contribute. How to contribute: fork the current version of the repository, make your changes in the fork, and open a pull request against the main repository.
Released under the MIT license.