This is my first experiment with AI music, a piece called Music for Memory. I used Google Magenta's Performance RNN to generate and control a theoretically endless piano piece. Performance RNN is an LSTM-based recurrent neural network designed to model polyphonic music with expressive timing and dynamics. The model is trained on the Yamaha e-Piano Competition dataset, which contains MIDI captures of ~1400 performances by skilled pianists. The music it generates is strikingly expressive when you listen to it.
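For reference, this is roughly how a piece can be generated with Magenta's command-line tool. It is a sketch, not my exact setup: it assumes Magenta is installed (`pip install magenta`) and that the pretrained `performance_with_dynamics` bundle has been downloaded from the Magenta site; the paths and step count are placeholders.

```shell
# Hypothetical invocation of Magenta's Performance RNN generator.
# --bundle_file points to a pretrained checkpoint bundle (path is an assumption);
# --num_steps controls the length of the generated MIDI performance.
performance_rnn_generate \
  --config=performance_with_dynamics \
  --bundle_file=/tmp/performance_with_dynamics.mag \
  --output_dir=/tmp/performance_rnn/generated \
  --num_outputs=1 \
  --num_steps=3000
```

Each run writes MIDI files to the output directory, which can then be played back or post-processed; for an "endless" piece, generation can simply be repeated or chained.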
So, in this project, I want to build a connection between memory and AI music. I have always believed that the fundamentals of human emotion and of algorithms must have something in common. How to make computational art feel warm and close to the nature of humanity has always been my interest.
link: