
Create a library of system prompts for a variety of tasks #15

Open
Egalitaristen opened this issue Mar 30, 2024 · 4 comments

Comments

@Egalitaristen
Owner

The most suitable job description is generated by an LLM based on the task, and then the matching system prompts are loaded into each agent to best complete the task.

Alternatively, build a system-prompt-generator LLM, but I only have about 70% success with getting good system prompts right now.

A combination might be a good way: check the list first, and if the job description isn't in the list, use the system prompt generator.
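The combined approach above (library lookup first, generator fallback second) can be sketched roughly like this. All names here are hypothetical, and the generator call is a stand-in for the ~70%-reliable LLM step:

```python
# Hypothetical prompt library keyed by normalized job description.
SYSTEM_PROMPT_LIBRARY = {
    "python developer": "You are an expert Python developer. Write clean, tested code.",
    "technical writer": "You are an expert technical writer. Be precise and concise.",
}

def generate_system_prompt(job_description: str) -> str:
    """Stand-in for the generator LLM (the ~70%-success path)."""
    return f"You are an expert {job_description}. Complete the task carefully."

def get_system_prompt(job_description: str) -> str:
    key = job_description.strip().lower()
    if key in SYSTEM_PROMPT_LIBRARY:        # check the list first
        return SYSTEM_PROMPT_LIBRARY[key]
    return generate_system_prompt(key)      # fall back to the generator

print(get_system_prompt("Python Developer"))
```

A real version would likely use fuzzy or embedding-based matching rather than exact keys, since LLM-generated job descriptions rarely match library entries verbatim.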

@Egalitaristen Egalitaristen converted this from a draft issue Mar 30, 2024
@Egalitaristen
Owner Author

ExpertPrompting enhances LLMs' answer quality by embedding them with an expert persona. This approach uses in-context learning to craft expert identities tailored to specific instructions, aiming to elicit higher quality responses from LLMs like GPT-3.5. The method significantly augments the answering quality by providing context aligning with the generated content. Source: https://ar5iv.org/abs/2305.14688

Development of ExpertLLaMA, a chat-based assistant using ExpertPrompting. This new model demonstrates superior performance in evaluations against other models, showcasing the effectiveness of the ExpertPrompting strategy in enhancing the capability of LLMs to produce more authoritative and detailed answers. Source: https://ar5iv.org/abs/2305.14688

Exploration of advanced prompting techniques in Prompt Design and Engineering. This includes using explicit ending instructions and a more forceful tone in prompts to ensure compliance from LLMs, aiming to refine interaction for more accurate and relevant output. Source: https://ar5iv.org/abs/2401.14423

The introduction of ExpertPrompting and ExpertLLaMA. This strategy is highlighted as an automatic, generalized, yet straightforward approach to instructing LLMs to answer like distinguished experts. ExpertPrompting is applied to GPT-3.5 to produce a new set of instruction-following data, which is used to train the new open-source chat assistant ExpertLLaMA, achieving 96% of the original ChatGPT's capability according to GPT-4-based evaluation. Source: https://ar5iv.org/abs/2305.14688

The effectiveness of carefully designed prompts in enhancing LLM output quality. The ExpertPrompting study emphasizes the impact of structuring prompts to imbue LLMs with an expert identity, leading to more detailed and authoritative responses. This approach underscores the significance of prompt design in improving the authenticity and depth of LLM responses for AI-based applications. Source: https://ar5iv.org/abs/2305.14688

How to think step-by-step: A study investigates the neural mechanisms LLMs utilize for chain-of-thought reasoning, exploring the emergence of induction heads that enable pattern copying, crucial for step-by-step reasoning. Source: https://ar5iv.org/abs/2402.18312

Structure Guided Prompt for multi-step reasoning: Introduces a novel three-step prompting framework aimed at guiding LLMs through multi-step reasoning tasks by converting unstructured text into a graph, enhancing the LLM's ability to navigate and reason with structured information. Source: https://ar5iv.org/abs/2402.13415
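The "unstructured text → graph → prompt" idea from the Structure Guided Prompt summary can be illustrated minimally as follows. The extraction step is mocked with hand-written triples here; in the paper's framework the LLM itself performs the extraction:

```python
# Hand-written relation triples standing in for LLM-extracted ones.
triples = [
    ("Alice", "manages", "Bob"),
    ("Bob", "mentors", "Carol"),
]

def graph_as_prompt_context(edges):
    # Render the edge list as explicit structured statements the model
    # can navigate edge by edge during multi-step reasoning.
    lines = [f"{s} --{r}--> {o}" for s, r, o in edges]
    return "Knowledge graph:\n" + "\n".join(lines)

prompt = (
    graph_as_prompt_context(triples)
    + "\n\nQuestion: Who does Alice indirectly influence through Bob?"
    + "\nReason over the graph edges step by step."
)
print(prompt)
```

The point of the intermediate graph is that each reasoning hop corresponds to traversing one explicit edge, which is easier for the model to follow than re-reading free-form text.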

The Impact of Reasoning Step Length on LLMs: Examines how expanding or compressing reasoning steps within the Chain-of-Thought (CoT) prompting influences LLMs' performance, suggesting that the length of reasoning steps plays a crucial role in enhancing logical reasoning capabilities of LLMs. Source: https://ar5iv.org/abs/2401.04925

ExpertPrompting: Instructing LLMs to be Distinguished Experts: Discusses a method that leverages In-Context Learning to create detailed and customized descriptions of expert identity for specific instructions, aiming to elicit higher quality answers by embedding them with an expert persona. Source: https://ar5iv.org/abs/2305.14688
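The ExpertPrompting recipe summarized above (use in-context learning to generate a tailored expert identity, then answer as that expert) can be sketched like this. `call_llm` is a hypothetical placeholder for any chat-completion call, not a real API:

```python
# Few-shot template that asks the model to invent an expert identity
# tailored to a given instruction (the in-context learning step).
IDENTITY_FEW_SHOT = """\
Instruction: Explain how vaccines work.
Expert identity: You are an immunologist with 20 years of experience
communicating research to the public.

Instruction: {instruction}
Expert identity:"""

def call_llm(prompt: str) -> str:
    # Placeholder; swap in a real open-source model call here.
    return "You are a seasoned software architect with deep plugin-design experience."

def expert_prompt(instruction: str) -> str:
    identity = call_llm(IDENTITY_FEW_SHOT.format(instruction=instruction))
    # The generated identity becomes the system prompt for the answer.
    return f"{identity}\n\nNow answer: {instruction}"

print(expert_prompt("Design a plugin system for a text editor."))
```

For the LocalDev use case this two-call pattern could double as the "system prompt generator" fallback discussed earlier in the thread.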

@Egalitaristen
Owner Author

Also check out: https://github.com/stanfordnlp/dspy

@elsatch
Collaborator

elsatch commented Apr 1, 2024

For Claude, check this prompt-library:

https://docs.anthropic.com/claude/prompt-library

@Egalitaristen
Owner Author

For Claude, check this prompt-library:

https://docs.anthropic.com/claude/prompt-library

That might be useful for inspiration, but the whole point of LocalDev is to use an open-source model. If people want a dev agent that works with OAI or Claude, there are already tons of projects. But none of them work well with open-source models because they're all built around the OAI API and its prompting style. This makes the context overflow and leaves the prompts badly adjusted for open-source models, which I think is the main issue that few seem to understand.

I've tried using open-source models with just about every agent out there, and I've read the prompts and watched the logs to see where things break.
