
Base model #61

Open
GuuD opened this issue Aug 14, 2024 · 0 comments

GuuD commented Aug 14, 2024

Hello,
Thank you for this great model — it is quite capable even in real-world usage scenarios.
However, like every coding model aimed at mainstream languages, this one also has room for improvement when working with more niche languages like Haskell/OCaml/F# etc. I have prepared some experimental datasets and wanted to finetune it, both to boost its performance in the languages I use and to evaluate the impact on the model's reasoning performance after exposure to functional and logic programming. I've been waiting for the release since reading #3, so is there still a chance to get it?

Also, if the answer is yes, could you maybe share some details about your training process and stages, and perhaps some samples of the datasets? That would give people who aren't AI/ML professionals, but who want a mental model of how different kinds of data at various stages affect the final results, more examples to learn from.
