Matthew Rice, [email protected]
For my final project, I will revisit the music generation assignment. I would like to create a real-time version of this project, in which I play piano or bassline "licks" and a model generates drum tracks to "accompany" them. A second model then generates new licks, and the original model generates new drum tracks for each new riff, creating an AI jam session. This expands on the idea of using a VAE to generate music by letting two such models generate music for each other. I will present this work by bringing my audio equipment to class, starting the "jam session" with a couple of licks, and then letting the networks generate new music.
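The control flow is a simple feedback loop. Here is a rough sketch of it; the function names are hypothetical stand-ins, not the actual interfaces in this repo:

```js
// Sketch of the jam-session loop. recordLick(), generateDrums(),
// generateMelody(), and playBoth() are hypothetical wrappers standing in
// for the real MelodyModel.js / DrumifyModel.js / playback code.
async function jamSession(recordLick, generateDrums, generateMelody, playBoth) {
  // Start from a human-played lick captured over MIDI.
  let lick = await recordLick();
  while (true) {
    // Model 1: drums that accompany the current lick.
    const drums = await generateDrums(lick);
    await playBoth(lick, drums);
    // Model 2: a new lick, which becomes the input
    // for the next round of drum generation.
    lick = await generateMelody(lick);
  }
}
```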
AI Jam 2 Report.pdf
Two models were used, both from Magenta:
Drums
Melody
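As a minimal sketch of how two such Magenta.js models can be loaded and used in the browser: this assumes the groovae_tap2drum_2bar and mel_4bar_small_q2 checkpoints, and the actual checkpoints used in DrumifyModel.js and MelodyModel.js may differ.

```js
// Minimal sketch of loading two Magenta.js models in the browser.
// The checkpoint choices here are assumptions, not necessarily the
// ones this repo uses.
import * as mm from '@magenta/music';

const CKPT = 'https://storage.googleapis.com/magentadata/js/checkpoints/music_vae';

// GrooVAE "tap2drum": turns a tapped/monophonic rhythm into a drum pattern.
const drumify = new mm.MusicVAE(`${CKPT}/groovae_tap2drum_2bar`);
// Small melody VAE: samples new 4-bar licks.
const melody = new mm.MusicVAE(`${CKPT}/mel_4bar_small_q2`);

async function init() {
  await Promise.all([drumify.initialize(), melody.initialize()]);
}

// Drums to accompany a lick: encode the played NoteSequence, then
// decode it through the groove model to get a drum NoteSequence.
async function drumsFor(lickSequence) {
  const z = await drumify.encode([lickSequence]);
  const [drums] = await drumify.decode(z);
  z.dispose(); // free the underlying TensorFlow.js tensor
  return drums;
}

// A brand-new lick, sampled unconditionally at temperature 0.9.
async function newLick() {
  const [lick] = await melody.sample(1, 0.9);
  return lick;
}
```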
All the code for this project is in the src folder of this repo. The main files are: App.js, interface.js, MelodyModel.js, and DrumifyModel.js.
example.wav - Example audio recording of a jam session. The breaks in the audio are where new drums/melodies are being generated. The output website is published at mhrice.github.io/Ai-Jam-2
This is a JavaScript implementation of my idea. To replicate it, you need npm. Clone this repository and run npm install to fetch the dependencies, then npm run start to start the dev server.
However, all of the controls have been mapped specifically to my MIDI controller (Novation Launchkey 49), so modifications may be needed to make this work with other controllers (see the sketch below for inspecting your controller's messages).
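To adapt the mapping, a first step is to inspect what your controller actually sends. Here is a small sketch using the standard Web MIDI API; the note/CC numbers you will see are device-specific, and the project's own mappings would need to be changed accordingly:

```js
// Sketch: log incoming Web MIDI messages to discover which note/CC
// numbers your controller sends, so the mappings (hard-coded for a
// Launchkey 49 in this project) can be adjusted for other hardware.
navigator.requestMIDIAccess().then((access) => {
  for (const input of access.inputs.values()) {
    console.log(`MIDI input: ${input.name}`);
    input.onmidimessage = (msg) => {
      const [status, data1, data2] = msg.data;
      const command = status & 0xf0; // high nibble: message type
      const channel = status & 0x0f; // low nibble: MIDI channel
      if (command === 0x90 && data2 > 0) {
        console.log(`note on  ch${channel} note=${data1} vel=${data2}`);
      } else if (command === 0x80 || (command === 0x90 && data2 === 0)) {
        console.log(`note off ch${channel} note=${data1}`);
      } else if (command === 0xb0) {
        console.log(`cc       ch${channel} cc=${data1} val=${data2}`);
      }
    };
  }
});
```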
Big thanks to the following:
Magenta Studio
Magenta.js guide
MIDI Drum Reference
Web Midi
Tone.js