Base model #61
Hello,
Thank you for this great model — it is quite capable even in real-world usage scenarios.
However, like every coding model aimed at mainstream languages, this one also has room for improvement when working with more niche languages such as Haskell, OCaml, F#, etc. I have prepared some experimental datasets and want to fine-tune it, both to boost its performance in the languages I use and to evaluate the impact on the model's reasoning after exposure to functional and logic programming. I've been waiting for the base model release since reading #3 — is there still a chance to get it?
Also, if the answer is yes, could you maybe share some details about your training process and stages, and perhaps a few samples of the datasets? That would give people who aren't AI/ML professionals, but want to build a mental model of how different kinds of data at various training stages affect the final results, more concrete examples to learn from.