This is a list of projects I have worked on.



HUMAINT is an interdisciplinary project within the JRC’s Centre for Advanced Studies aiming to understand the impact of machine intelligence on human behaviour, with a focus on cognitive and socio-emotional capabilities and decision making.

Research topics: fairness, accountability and transparency, deep learning, human-robot interaction, children’s robotics, algorithm-supported decision making, data-driven policy making, music and creativity.

The project involves a core team of researchers plus a community of experts in cognitive science, machine learning, human-computer interaction and economics.



PHENICX is a collaborative research project, partially funded by the European Commission, in which academic researchers, music institutions, musicians and up-and-coming technology companies join forces. Our mission is to make use of all the richness around classical music: the sound you can hear, the players you can see, the characteristics of a piece and the differences between multiple performances of it, the background stories behind a piece, and the way it is perceived by different types of audiences. Through smart use of technology, we want to use this richness to build a whole new classical concert experience: one that guides you through a performance with information tailored to varying levels of expertise; that lets you get an impression of a piece before a concert, enriches the experience during the concert, and lets you revisit the concert after it was played, discovering new things about it; and that may even initiate a social discussion based on your impressions and those of other attendees. None of this is meant to replace the traditional concert experience, but to offer new, engaging experiences on top of it.



What are the properties of sound signals that induce the experience of groove in listeners? In particular, how do systematic patterns of signal properties (timing, metrical structure, loudness, etc.) relate to the experience of groove at several levels of the metrical structure?

To answer these questions, in ShakeIt we follow an analysis/synthesis approach and explore the complementarities between three lines of work:

  • Automatic analysis of groove features from audio, focusing in particular on learning, from examples, what we call the “groove archetype” of certain music styles.

  • Empirical experiments with human participants to validate or invalidate these features and to investigate whether other features are relevant to the perception of groove.

  • Implementation of software for real-time generation and manipulation of polyphonic rhythmic sequences that convey the groove of a certain style, or for gradually changing the groove feel of a rhythmic sequence at run time.
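To make the first line of work concrete, here is a minimal sketch of one kind of groove feature mentioned above: microtiming, i.e. the signed deviation of played onsets from an even metrical grid. All function names, parameters and values below are illustrative assumptions for this example, not part of ShakeIt itself, and real systems would first extract onset times from audio.

```python
# Hypothetical sketch: measuring microtiming deviations of note onsets
# against an even metrical grid -- one simple timing-based "groove feature".

def metrical_grid(bpm: float, beats: int, subdivisions: int) -> list[float]:
    """Onset times (seconds) of an even grid with `subdivisions` slots per beat."""
    step = 60.0 / bpm / subdivisions
    return [i * step for i in range(beats * subdivisions)]

def microtiming_deviations(onsets: list[float], grid: list[float]) -> list[float]:
    """Signed deviation (seconds) of each played onset from its nearest grid slot.

    Positive values mean the onset is late ("laid back"); negative means early.
    """
    return [onset - min(grid, key=lambda g: abs(g - onset)) for onset in onsets]

# Example: one 4-beat bar at 120 BPM with an eighth-note grid (slots every 0.25 s),
# where beats 2 and 4 are played slightly late.
grid = metrical_grid(bpm=120, beats=4, subdivisions=2)
played = [0.00, 0.52, 1.00, 1.53]
devs = microtiming_deviations(played, grid)
```

Aggregating such deviations per metrical position over many examples of a style is one plausible way to characterise the "groove archetype" that the analysis line of work refers to.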