We would like to deploy generative AI in support of musical composition. The basic idea is to use AI to generate possible modifications of a phrase (or phrases) created by the user. In other words, the user would start the composition, and the AI would then present different possibilities for enriching it, to which the user could react and which they could develop further.
Specifically, we would be working toward accomplishing the following:
- Tune an open-source LLM to produce output well suited to beginners and to learning.
- Create an API that translates the LLM's musical data into Music Blocks data, so that the learner has something to work from (a sketch follows this list).
- Create an in-app framework for evaluating the AI output, so that the user is offered several plausible choices and, if they so choose, can grade those choices for use in further improvement.
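As a sketch of what that translation API might look like, the Python fragment below parses a hypothetical text encoding of notes emitted by an LLM and converts it into a simplified block-like structure. The note syntax ("C4:1"), the Note class, and the output schema are all illustrative assumptions, not the actual Music Blocks project format.

```python
from dataclasses import dataclass

# Hypothetical note representation; "C4:1" means pitch C4 held for one
# quarter note. This syntax is an assumption for illustration only.
@dataclass
class Note:
    pitch: str       # e.g. "C4"
    duration: float  # in quarter notes; 0.5 = eighth note

def parse_llm_phrase(text: str) -> list[Note]:
    """Parse a phrase such as "C4:1 E4:0.5 G4:0.5" emitted by the LLM."""
    notes = []
    for token in text.split():
        pitch, dur = token.split(":")
        notes.append(Note(pitch=pitch, duration=float(dur)))
    return notes

def to_music_blocks(notes: list[Note]) -> list[dict]:
    """Convert parsed notes into a simplified block-like structure
    (not the real Music Blocks project schema)."""
    return [{"block": "note", "pitch": n.pitch, "duration": n.duration}
            for n in notes]

print(to_music_blocks(parse_llm_phrase("C4:1 E4:0.5 G4:0.5 C5:2")))
```

A real implementation would map onto Music Blocks' own project format and would need to handle malformed LLM output gracefully.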
Goals & Mid-Point Milestone
Goals
- Model development
- Backend deployment
- UX design
- Frontend implementation

Goals Achieved By Mid-Point Milestone
- Model development
- Backend deployment
Setup/Installation
No response
Expected Outcome
A "button" added to Music Blocks that invokes an LLM to provide:
- phrasing suggestions
- orchestration suggestions
The suggestions would be in the form of written text and/or code blocks.
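To make the interaction concrete, here is one possible request/response shape for that button, sketched in Python. The endpoint URL, field names, and response layout are assumptions for discussion, not a settled API.

```python
import json
import urllib.request

# Illustrative payload: the user-selected phrase(s) plus the kinds of
# suggestions requested. Field names are assumptions.
payload = {
    "phrases": ["C4:1 E4:0.5 G4:0.5"],
    "suggestion_types": ["phrasing", "orchestration"],
}

req = urllib.request.Request(
    "https://music-blocks-ai.example.org/suggest",  # placeholder URL
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    suggestions = json.load(resp)

# Assumed response shape:
# {"suggestions": [{"type": "phrasing",
#                   "text": "Try ending the phrase on the dominant...",
#                   "code": "C4:1 E4:0.5 G4:0.5 G4:2"}]}
```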
Acceptance Criteria
The mechanics of the interaction are the primary requirement. The quality of the suggestions can be improved over time.
Implementation Details
A dockerized server would run on one of the Sugar servers. That server would interface with one or more LLMs.
The messaging between Music Blocks and the server would take the form of code snippets: individual phrases or groups of phrases.
The interface would prompt the user to submit either a single phrase or multiple phrases.
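A minimal sketch of such a server, assuming FastAPI and Pydantic; the route name, the request/response models, and the generate() helper are placeholders rather than a settled design:

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class SuggestionRequest(BaseModel):
    phrases: list[str]                        # one or more phrases as code snippets
    suggestion_types: list[str] = ["phrasing"]

class Suggestion(BaseModel):
    type: str
    text: str                                 # written explanation
    code: str | None = None                   # optional modified phrase

def generate(prompt: str) -> str:
    """Placeholder for the call to whichever LLM backs the server."""
    raise NotImplementedError

@app.post("/suggest")
def suggest(req: SuggestionRequest) -> list[Suggestion]:
    # Build one prompt from the submitted phrase(s) and ask the LLM for
    # a suggestion of each requested type.
    prompt = ("Suggest improvements to these musical phrases:\n"
              + "\n".join(req.phrases))
    return [Suggestion(type=t, text=generate(prompt))
            for t in req.suggestion_types]
```

Inside the Docker container the app would typically be served with uvicorn; the actual LLM call would replace generate().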
Is this an additional feature to an existing project, or a separate project that will be integrated with Music Blocks? Regarding the LLM component, will it be implemented as an API from a model provider such as Hugging Face, or something else?
Ticket Contents
Mockups/Wireframes
No response
Product Name
Music Blocks
Organisation Name
Sugar Labs
Domain
Education
Tech Skills Needed
Docker, JavaScript, Machine Learning, Python
Mentor(s)
@pikurasa @Wat
Category
Backend, Frontend