Deep learning source separation for hip hop and classical music

Source separation papers at the ISMIR conference and the MML workshop...

In October I attended the Music and Machine Learning (MML) Workshop in Barcelona and the ISMIR conference in Suzhou, China, presenting papers on source separation for hip hop and Western classical music. If you are new to source separation and want to understand what I have been doing for the past four years, check out the PhD section on this website.

So what do hip hop and classical music have in common? :) Rather than building a universal model that separates any music piece, we targeted context-specific approaches that address the problems one encounters in particular music genres. For instance, in hip hop the vocal part is not sung, and there is a wide variety of timbres and production styles. In classical music, on the other hand, the complexity arises from the multitude of harmonic instruments of similar timbre (depending on the piece). In this case, there are also opportunities offered by the scores: Western classical music pieces depart from symbolic representations.

How can deep learning or data-driven approaches take advantage of these characteristics? According to Ian Goodfellow, one way to increase generalization is through data augmentation. So, hip hop and classical music source separation can improve through data augmentation or data generation. This is the main idea behind the two papers.
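To make the idea concrete, here is a minimal sketch of one common augmentation strategy for source separation training data: randomly rescaling the gain of each stem before remixing, so the network sees many plausible mixtures of the same recording. The function name and the gain range are my own illustrative choices, not the pipeline used in either paper.

```python
import numpy as np

def augment_stems(stems, rng, gain_range=(0.7, 1.3)):
    """Randomly scale each stem's amplitude and remix.

    stems: dict mapping source name -> 1-D float array (same length).
    Returns (augmented_stems, mixture), where the mixture is the sum
    of the rescaled stems. Other augmentations (pitch shifting, time
    stretching, channel swapping) would be applied in the same spot.
    """
    augmented = {}
    for name, audio in stems.items():
        gain = rng.uniform(*gain_range)  # random amplitude scaling per stem
        augmented[name] = gain * audio
    mixture = np.sum(list(augmented.values()), axis=0)
    return augmented, mixture

# Toy example: two synthetic stems, one second at 8 kHz.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 8000, endpoint=False)
stems = {"vocals": np.sin(2 * np.pi * 220 * t),
         "drums": 0.5 * np.sin(2 * np.pi * 55 * t)}
aug, mix = augment_stems(stems, rng)
```

Each call with a fresh random state yields a new mixture/stem pair, effectively multiplying the size of the training set.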

The MML paper, Data augmentation for deep learning source separation of HipHop songs, is based on work done by Hector Martel during his undergraduate thesis at UPF. He is also a hip hop producer, so he proposed a hip hop dataset. You can check out the demo he did for his thesis presentation; I uploaded the slides to SlideShare. Here's a video of the demo:

The ISMIR paper, Monaural score-informed source separation for classical music using convolutional neural networks, is part of my PhD thesis on orchestral music source separation. It is one of the first papers trying to improve deep learning source separation methods with score information. As with the other deep learning papers, the code is available through the GitHub repository, and the separated tracks and computed evaluation metrics are on Zenodo.
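To give a flavor of what "score-informed" means here, the sketch below builds a binary time-frequency mask from score notes: for each note, the bins around its fundamental and harmonics are marked active while the note sounds. This is a simplified illustration of the general idea, not the paper's actual method (which learns the separation with CNNs); the function name, parameters, and note format are my own assumptions, and a real system would blur the mask to absorb timing and tuning errors.

```python
import numpy as np

def score_informed_mask(n_bins, n_frames, notes, sr=22050, n_fft=1024, hop=512):
    """Build a binary time-frequency mask from score notes.

    notes: list of (onset_s, offset_s, f0_hz) tuples. For each note,
    the STFT bins nearest its harmonics (up to Nyquist) are set to 1
    for the frames during which the note sounds.
    """
    mask = np.zeros((n_bins, n_frames))
    bin_hz = sr / n_fft  # frequency resolution of one STFT bin
    for onset, offset, f0 in notes:
        t0 = int(onset * sr / hop)
        t1 = min(int(offset * sr / hop) + 1, n_frames)
        h = 1
        while h * f0 < sr / 2:  # mark harmonics below Nyquist
            k = int(round(h * f0 / bin_hz))
            if k < n_bins:
                mask[k, t0:t1] = 1.0
            h += 1
    return mask

# One A3 note (220 Hz) lasting one second, over a 100-frame spectrogram.
mask = score_informed_mask(513, 100, [(0.0, 1.0, 220.0)])
```

Multiplying such a mask with the mixture's magnitude spectrogram suppresses energy where the score says the target instrument is silent, which is the kind of prior the paper injects into the learned model.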

Demos:

hiphop, classical music, deep learning, source separation