Add prompt-formatter functionality #788
Closed
andreadimaio
started this conversation in Ideas
Replies: 3 comments 1 reply
-
Looking at the API of …
-
This means that if we decide to implement this new feature, it will only be related to the watsonx provider.
-
Thanks for starting this discussion! I am pretty sure that this is something @cescoffier wanted as well, so let's wait until he is back from PTO.
-
Hi, I would like to use this topic to share an idea I have in mind and to understand whether it is something that could be used for one or more of the LLM providers in `quarkus-langchain4j`.

Why introduce the `prompt-formatter`?

I mainly work with the `watsonx` provider, which allows the developer to use different models. Some of these models (especially those ending with `-instruct`) produce better results when the prompt is enriched with special tags, but `watsonx` does not add them automatically.

To better understand what I'm saying, here's an example.
Suppose that I want to use this prompt with three different models: `llama-3-405b-instruct`, `mistral-large` and `granite-13b-instruct-v2`.
To get the best response, the prompt should be written like this:
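(Assuming a simple one-line prompt such as "What is the capital of Italy?". The Llama 3 and Mistral templates below follow those models' documented formats; the Granite one is an assumption modeled on the Granite chat models.)

```
# llama-3-405b-instruct
<|begin_of_text|><|start_header_id|>user<|end_header_id|>

What is the capital of Italy?<|eot_id|><|start_header_id|>assistant<|end_header_id|>

# mistral-large
<s>[INST] What is the capital of Italy? [/INST]

# granite-13b-instruct-v2 (assumed format)
<|user|>
What is the capital of Italy?
<|assistant|>
```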
So, what I normally do to get better results from the LLM is to add the tags I need based on the model, but this becomes a problem if I then want to change the model, because I have to re-edit the whole prompt. The idea is to have a class that does this work automatically.

`PromptFormatter` interface

I already have a draft implementation (the examples above were automatically generated) which consists of a `PromptFormatter` interface and a class for each model, like `LLamaPromptFormatter`, `MistralPromptFormatter` and `GranitePromptFormatter`. The idea is to create the `PromptFormatter` class based on the model name during the build phase, so that this class can be used during the runtime phase to enrich the prompt with the correct tags. A minimal sketch is shown below.
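Something along these lines (just a sketch; the exact names and signatures are still to be defined):

```java
import java.util.Set;

// Sketch of the proposed API: one method to wrap a raw prompt with the
// model-specific tags, plus the set of tags for build-time validation.
public interface PromptFormatter {

    // Wraps the raw prompt with the model-specific special tags.
    String format(String prompt);

    // The special tags this formatter emits; a user prompt that already
    // contains one of them should fail the build when the feature is enabled.
    Set<String> specialTags();
}

// In a separate file, one implementation per model family, e.g. for Llama 3:
class LLamaPromptFormatter implements PromptFormatter {

    @Override
    public String format(String prompt) {
        return "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\n"
                + prompt
                + "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n";
    }

    @Override
    public Set<String> specialTags() {
        return Set.of("<|begin_of_text|>", "<|start_header_id|>",
                "<|end_header_id|>", "<|eot_id|>");
    }
}
```

During the build phase, the extension would map the configured model name to the right implementation (e.g. `llama-3-405b-instruct` -> `LLamaPromptFormatter`) and use it at runtime.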
`chat-model.prompt-formatter` property

The prompt-formatter feature can be enabled or disabled with the `chat-model.prompt-formatter` property. The idea is to have two different values: one that enables the feature and one that disables it.
The enabled value needs checks during the build phase to throw an exception if something is misconfigured: for example, if the value is enabled and a prompt already contains one of the special tags, an exception must be thrown. A possible configuration is sketched below.
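For example, in `application.properties` (the full property key and the boolean values are just a guess at this stage):

```properties
# Hypothetical key: enable the prompt formatter for the watsonx chat model.
quarkus.langchain4j.watsonx.chat-model.prompt-formatter=true
```

With the feature enabled, a prompt that already contains a tag like `[INST]` or `<|start_header_id|>` would fail the build.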
Final consideration
For now, this is just a draft, and I'm sure there's more to add or change to get something that works. But I wanted to share it with you to get your opinion on the idea and to understand whether this feature makes sense only for `watsonx` or whether it can be added to other providers that allow multiple models (like `ollama` or `hugging-face`).