My presentation of the EEGsynth at IRCAM was put online by their professional audiovisual staff (you can take a look at my slides separately here). I got the sense that people were definitely interested in basic cognitive neuroscience, and that my efforts at explaining EEG recording 101 were therefore appreciated. Although I tried to explore the potential of the EEGsynth with some of our own examples, with such an advanced signal-processing and artistic audience it was also clear that next time I could go into far more technical and scientific detail: how we transform EEG into control signals, and what that tells us both about mental/brain activity and about the potential for control and self-awareness. I am of course excited to go further into these topics next time I have the chance. Not only science and technology, but also our explanations, develop step by step. In fact, many more ideas are germinating just below the surface, waiting to be dug up and exposed.
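To give a flavour of what "transforming EEG into control signals" means in practice: one common approach is to estimate the signal power in a frequency band (say, alpha, 8-12 Hz) over a short window, and map that power onto a normalized control value that can drive a synthesizer parameter. The EEGsynth's actual pipeline is more elaborate; this is just a minimal, self-contained sketch of the idea, with illustrative function names:

```python
import math

def band_power(samples, fs, f_lo, f_hi):
    """Naive DFT-based estimate of signal power between f_lo and f_hi (Hz).
    samples: a window of EEG samples (floats); fs: sampling rate in Hz."""
    n = len(samples)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
            im = sum(-s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
            power += (re * re + im * im) / (n * n)
    return power

def to_control(power, p_min, p_max):
    """Map a band-power estimate onto a 0..1 control value,
    e.g. to be sent on as a MIDI CC or OSC message."""
    if p_max <= p_min:
        return 0.0
    return min(1.0, max(0.0, (power - p_min) / (p_max - p_min)))
```

For example, one second of a 10 Hz sine wave sampled at 250 Hz yields a high alpha-band power and a near-zero beta-band power, so the resulting control value tracks alpha activity.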
Jean-Julien Aucouturier and his CREAM lab (his cream team?) were just starting their collaboration with composer and artist-in-residence Emanuele Palumbo, who for his piece Arthaud Overdrive designed something similar to the EEGsynth, namely an Arduino board sending OSC to a series of Max patches, to control the electronics of the work with data flowing from the musicians' physiology sensors (pulse, respiration, etc.). The end of the piece features an interesting bit of auto-entrainment, in which some musicians follow a metronome (in earphones) that is derived in real time from other musicians' pulse and respiration rates.
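I don't know the internals of Palumbo's patches, but the core of such a physiology-driven metronome can be sketched in a few lines: detect beat timestamps from the sensor stream, average the inter-beat intervals, and convert to a tempo. A toy version, with illustrative names:

```python
def bpm_from_beats(beat_times):
    """Estimate a metronome tempo (BPM) from a list of beat timestamps
    in seconds, e.g. pulse peaks detected from a physiology sensor.
    Returns None until at least two beats have been seen."""
    if len(beat_times) < 2:
        return None
    # Inter-beat intervals between consecutive timestamps.
    intervals = [t2 - t1 for t1, t2 in zip(beat_times, beat_times[1:])]
    mean_ibi = sum(intervals) / len(intervals)
    return 60.0 / mean_ibi
```

In a real-time setting the tempo estimate would typically be smoothed (e.g. over a sliding window of recent intervals) before being sent on, via OSC, to whatever is clicking in the musicians' earphones.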
I also spent quite some time with Pablo Arias, a PhD candidate at CREAM, who explained to me his exciting work on real-time emotional transformation of both speech and facial expressions. This research builds on previous work of the lab, which resulted in DAVID (Da Amazing Voice Inflection Device).
“Like an auto-tune, but for emotions” (Brian Resnick, for Vox.com)
DAVID (Da Amazing Voice Inflection Device) is a free, real-time voice transformation tool able to “colour” any voice with an emotion that wasn’t intended by its speaker. DAVID was designed especially with the affective psychology and neuroscience community in mind, and aims to provide researchers with new ways to produce and control affective stimuli, both for offline listening and for real-time paradigms. For instance, we have used it to create real-time emotional vocal feedback in Aucouturier et al., 2016.
Technically, DAVID is implemented as an open-source patch for the closed-source audio processing platform Max (Cycling ’74), and, like Max, is available for most Windows and MacOS configurations. DAVID is available for free to the Forum community after registration (follow the download link to proceed).
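DAVID's actual processing lives inside Max patches, but purely for intuition, here is a toy version of one of the acoustic cues such tools manipulate: vibrato, implemented as a slowly modulated delay line that makes the perceived pitch oscillate. This is an illustration in Python, not DAVID's algorithm; the parameter names and defaults are my own:

```python
import math

def apply_vibrato(samples, fs, rate_hz=8.5, depth_samples=4.0):
    """Toy vibrato: read the input through a sinusoidally modulated
    fractional delay line, producing a periodic pitch oscillation.
    samples: list of floats; fs: sampling rate in Hz."""
    out = []
    for n in range(len(samples)):
        # Delay oscillates between 0 and depth_samples at rate_hz.
        delay = depth_samples * (1 + math.sin(2 * math.pi * rate_hz * n / fs)) / 2
        pos = n - delay
        if pos < 0:
            out.append(samples[n])  # not enough history yet; pass through
            continue
        # Linear interpolation between the two nearest input samples.
        i = int(pos)
        frac = pos - i
        j = min(i + 1, len(samples) - 1)
        out.append(samples[i] * (1 - frac) + samples[j] * frac)
    return out
```

The real trick in DAVID, as I understand it, is doing this kind of manipulation with low enough latency that speakers hear their own transformed voice as their own.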
Don’t forget: we will be back at IRCAM in November (8-11) to join a Music Hacking symposium/hackathon. More details to follow as soon as I get them.