We collect some useful tips for designing prompts, gathered from online notes and the experiences of our authors, and we also indicate the related ingredients and principles (introduced in Section 8.1).
We abbreviate principles as Prin. and list the IDs of the related principles for each prompt. 1⃝: expressing the task goal clearly; 2⃝: decomposing into easy, detailed sub-tasks; 3⃝: providing few-shot demonstrations; 4⃝: utilizing model-friendly format.
We welcome readers to provide us with more relevant tips in the form of GitHub issues. After selection, we will regularly update them on GitHub and indicate their sources.
| Ingredient | Collected Prompts | Prin. |
|---|---|---|
| Task Description | T1. Make your prompt as detailed as possible, e.g., "Summarize the article into a short paragraph within 50 words. The major storyline and conclusion should be included, and the unimportant details can be omitted." | 1⃝ |
| | T2. It is helpful to let the LLM know that it is an expert with a prefix prompt, e.g., "You are a sophisticated expert in the domain of computer science." | 1⃝ |
| | T3. Tell the model more about what it should do, rather than what it should not do. | 1⃝ |
| | T4. To prevent the LLM from generating overly long output, you can simply use the prompt: "Question: Short Answer: ". Besides, you can also use the following suffixes: "in a word or a few words", "in one or two sentences". | 1⃝ |
| Input Data | I1. For questions that require factual knowledge, it is useful to first retrieve relevant documents via a search engine, and then concatenate them into the prompt as references (see the retrieval sketch after this table). | 4⃝ |
| | I2. To highlight some important parts in your prompt, use special marks, e.g., quotation marks ("") and line breaks (\n). You can also combine both of them for emphasis. | 4⃝ |
| Contextual Information | C1. For complex tasks, you can clearly describe the required intermediate steps to accomplish it, e.g., "Please answer the question step by step as: Step 1 - Decompose the question into several sub-questions, …" | 2⃝ |
| | C2. If you want LLMs to provide a score for a text, it is necessary to provide a detailed description of the scoring standard, with examples as references. | 1⃝ |
| | C3. When LLMs generate text according to some context (e.g., making recommendations according to purchase history), instructing them to explain the generated result conditioned on the context helps improve the quality of the generated text. | 2⃝ |
| Demonstration | D1. Well-formatted in-context exemplars are very useful to guide LLMs, especially for producing outputs with complex formats. | 3⃝ |
| | D2. For few-shot chain-of-thought prompting, you can also use the prompt "Let's think step by step", and the few-shot examples should be separated by "\n" instead of full stops. | 1⃝3⃝ |
| | D3. You can also retrieve similar examples in context to supply useful task-specific knowledge for LLMs. To retrieve more relevant examples, it is useful to first obtain an answer to the question, and then concatenate it with the question for retrieval. | 3⃝4⃝ |
| | D4. The diversity of the in-context exemplars within the prompt is also useful. If it is not easy to obtain diverse questions, you can instead try to keep the diversity of the solutions to the questions. | 3⃝ |
| | D5. When using chat-based LLMs, you can decompose in-context exemplars into multi-turn messages, to better match the human-chatbot conversation format (see the sketch after this table). Similarly, you can also decompose the reasoning process of an exemplar into a multi-turn conversation. | 3⃝ |
| | D6. Complex and informative in-context exemplars can help LLMs answer complex questions. | 3⃝ |
| | D7. As a symbol sequence can typically be divided into multiple segments (e.g., i1, i2, i3 → i1, i2 and i2, i3), the preceding ones can be used as in-context exemplars to guide LLMs to predict the subsequent ones, meanwhile providing historical information. | 2⃝3⃝ |
| | D8. Order matters for in-context exemplars and prompt components. For very long input data, the position of the question (first or last) may also affect the performance. | 3⃝ |
| | D9. If you cannot obtain in-context exemplars from existing datasets, an alternative is to use zero-shot exemplars generated by the LLM itself. | 3⃝ |
| Other Designs | O1. Let the LLM check its generated results before drawing the conclusion, e.g., "Check whether the above solution is correct or not." | 2⃝ |
| | O2. If the LLM cannot solve the task well on its own, you can seek help from external tools by prompting the LLM to manipulate them. In this way, the tools should be encapsulated into callable APIs with detailed descriptions of their functions, to better guide the LLM to utilize them. | 4⃝ |
| | O3. The prompt should be self-contained, and it is better not to refer to information in the context with pronouns (e.g., it and they). | 1⃝ |
| | O4. When using LLMs to compare two or more examples, the order affects the performance a lot. | 1⃝ |
| | O5. Before the prompt, assigning a role to the LLM is useful to help it better fulfill the following task instruction, e.g., "I want you to act as a lawyer". | 1⃝ |
| | O6. OpenAI models can perform a task better in English than in other languages. Thus, it is useful to first translate the input into English and then feed it to the LLMs. | 4⃝ |
| | O7. For multiple-choice questions, it is useful to constrain the output space of the LLM. You can use a more detailed explanation or simply impose constraints on the logits (see the sketch after this table). | 1⃝ |
| | O8. For sorting-based tasks (e.g., recommendation), instead of directly outputting the complete text of each item after sorting, one can assign indicators (e.g., ABCD) to the unsorted items and instruct the LLM to directly output the sorted indicators (illustrated right after this table). | 1⃝ |
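As a small illustration of tip I1, the sketch below shows how retrieved documents can be concatenated into the prompt as references. The `search_documents` stub is a hypothetical placeholder for a real search-engine or retriever call, and the question is only an example.

```python
# Sketch of I1: retrieve relevant documents and concatenate them into the prompt as references.
# `search_documents` is a stub standing in for a real search-engine or retriever call.
def search_documents(query: str, top_k: int = 3) -> list[str]:
    # In practice, call a search engine or a dense retriever here.
    return ["<retrieved document 1>", "<retrieved document 2>", "<retrieved document 3>"][:top_k]

def build_retrieval_prompt(question: str, top_k: int = 3) -> str:
    references = "\n\n".join(
        f"Reference {i + 1}: {doc}"
        for i, doc in enumerate(search_documents(question, top_k=top_k))
    )
    return (
        f"{references}\n\n"
        "Answer the question based on the references above.\n"
        f"Question: {question}\nAnswer:"
    )

print(build_retrieval_prompt("Who proposed the Transformer architecture?"))
```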
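Tip D5 can be illustrated with the widely used role/content message format of chat-based LLMs. The sketch below only builds the message list; the exemplar questions and answers are placeholders, and the resulting list can then be passed to any chat-completion API.

```python
# Sketch of D5: decompose few-shot exemplars into multi-turn chat messages
# (role/content format used by most chat-completion APIs). Exemplars are placeholders.
exemplars = [
    ("What is 17 + 25?", "17 + 25 = 42. The answer is 42."),
    ("What is 9 * 8?", "9 * 8 = 72. The answer is 72."),
]

messages = [{"role": "system", "content": "You are a helpful math assistant."}]
for question, answer in exemplars:
    # Each exemplar becomes one user turn followed by one assistant turn,
    # instead of being packed into a single long prompt string.
    messages.append({"role": "user", "content": question})
    messages.append({"role": "assistant", "content": answer})

messages.append({"role": "user", "content": "What is 123 + 456?"})
# `messages` can now be sent to a chat-based LLM API of your choice.
```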
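For tip O7, one simple way to constrain the output space is to compare only the logits of the option letters at the answer position. The sketch below assumes a HuggingFace causal language model; the model name and question are placeholders, and with hosted APIs a similar effect can often be achieved through logit-bias parameters where available.

```python
# Sketch of O7: restrict a multiple-choice answer to the option letters by
# comparing only their next-token logits and ignoring all other tokens.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal LM works similarly
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = (
    "Question: Which planet is known as the Red Planet?\n"
    "A. Venus\nB. Mars\nC. Jupiter\nD. Saturn\n"
    "Answer:"
)
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    next_token_logits = model(**inputs).logits[0, -1]

options = ["A", "B", "C", "D"]
option_ids = [tokenizer.encode(" " + o)[0] for o in options]  # leading space matters for GPT-2-style tokenizers
scores = torch.stack([next_token_logits[i] for i in option_ids])
print(options[int(scores.argmax())])
```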
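Finally, tip O8 can be sketched as follows: unsorted items are mapped to letter indicators, the LLM is asked to output only the sorted indicators, and the letters are mapped back to items. The LLM response below is simulated; in practice it would come from an actual API call.

```python
# Sketch of O8: assign letter indicators to unsorted items, ask the LLM to output
# only the sorted indicators, then map the indicators back to items.
import string

items = ["wireless mouse", "mechanical keyboard", "USB-C hub", "laptop stand"]
indicator_to_item = dict(zip(string.ascii_uppercase, items))

prompt = (
    "Rank the following candidate items for a user who recently bought a laptop,\n"
    "from most to least relevant. Output only the indicators, separated by commas.\n"
    + "\n".join(f"{k}. {v}" for k, v in indicator_to_item.items())
)

# Simulated LLM response; replace with the output of your LLM API call on `prompt`.
llm_response = "C, D, B, A"

ranking = [indicator_to_item[x.strip()] for x in llm_response.split(",")]
print(ranking)
```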