Deep Convolutional Neural Networks for Musical Source Separation
This repository contains classes for data generation and preprocessing, useful for training neural networks on large datasets that do not fit into memory. It also includes classes to query samples of instrument sounds from the RWC instrument sound dataset. The ‘examples’ folder contains use cases of these classes for music source separation.
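Out-of-memory training of the kind described above is typically done with a generator that streams batches from disk instead of loading the whole dataset at once. The sketch below is a minimal illustration, not the repository's actual API; the `.npz` file layout with `x` and `y` arrays is an assumption made for the example.

```python
import numpy as np

def batch_generator(feature_files, batch_size=32):
    """Yield (input, target) batches by loading one feature file at a time,
    so the full dataset never has to fit into memory.

    Assumes each file is a .npz archive with 'x' (inputs) and 'y' (targets)
    arrays -- a hypothetical layout chosen for this sketch."""
    while True:  # loop forever so the generator can feed multiple epochs
        for path in feature_files:
            data = np.load(path)
            x, y = data["x"], data["y"]
            for start in range(0, len(x), batch_size):
                yield x[start:start + batch_size], y[start:start + batch_size]
```

A generator like this can be passed directly to training loops that accept Python iterables, keeping peak memory bounded by the size of a single feature file.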
an openFrameworks tapping recorder
beatStation was designed as a game-with-a-purpose application in which users compete with each other by tapping along to various songs. It can be used by researchers to annotate audio, to conduct experiments, or as inspiration for future apps.
This is an audio drum transcription algorithm, implemented in Pure Data, Max/MSP, and Max for Live, which can transcribe kick, snare, and hi-hat from live drum performances. The software takes live audio or audio files as input and triggers an event for each detected drum type as output.
This dataset includes audio and annotations useful for tasks such as score-informed source separation, score following, multi-pitch estimation, transcription, and instrument detection, in the context of symphonic music. The dataset was presented and used in the evaluation of:
M. Miron, J. Carabias-Orti, J. J. Bosch, E. Gómez, and J. Janer, “Score-informed source separation for multi-channel orchestral recordings”, Journal of Electrical and Computer Engineering, 2016.