Checked for duplicates
Yes
Alternatives considered
Yes - and alternatives don't suffice
Related problems
There is a noticeable performance gap between local language models served through Ollama (llama3.1:8b, llama3.1:70b) and the larger cloud-based models from OpenAI and Azure.
Describe the feature request
I propose implementing more specific, tailored prompts for each best practice to improve performance. In addition, the reference information fed into the model context should be customized per best practice, so that the model can produce more accurate, context-aware responses. A rough illustration of the idea is sketched below.
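To make the request concrete, here is a minimal sketch of what per-best-practice prompts and per-practice reference context could look like. All names in it (BEST_PRACTICE_PROMPTS, BEST_PRACTICE_CONTEXT_FIELDS, build_prompt, the practice keys) are hypothetical and not part of any existing API in this project; it is only meant to illustrate the shape of the feature, not a proposed implementation.

```python
# Hypothetical sketch only: none of these names exist in the current codebase.
# Each best practice gets its own instruction and its own slice of reference
# information, keeping the context small and focused for local 8B-class models.

BEST_PRACTICE_PROMPTS = {
    "readme": (
        "You are helping populate a README. Using only the repository details "
        "provided in the context, fill in the project name, description, and "
        "quick-start instructions. Do not invent features."
    ),
    "contributing": (
        "You are drafting a CONTRIBUTING guide. Using the context, describe "
        "how to set up the development environment and submit changes."
    ),
}

BEST_PRACTICE_CONTEXT_FIELDS = {
    "readme": ["repo_name", "repo_description", "languages", "entry_points"],
    "contributing": ["repo_name", "build_commands", "test_commands"],
}


def build_prompt(best_practice: str, repo_facts: dict) -> str:
    """Combine a practice-specific instruction with only the reference
    information that practice actually needs."""
    instruction = BEST_PRACTICE_PROMPTS[best_practice]
    fields = BEST_PRACTICE_CONTEXT_FIELDS[best_practice]
    context_lines = [f"{k}: {repo_facts[k]}" for k in fields if k in repo_facts]
    return instruction + "\n\nContext:\n" + "\n".join(context_lines)


if __name__ == "__main__":
    facts = {
        "repo_name": "example-project",
        "repo_description": "A demo repository",
        "languages": "Python",
        "entry_points": "src/main.py",
    }
    print(build_prompt("readme", facts))
```

The intent is that narrowing both the instruction and the injected reference material to a single best practice should close some of the gap between the small local models and the larger cloud models, which cope better with broad, one-size-fits-all prompts.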