Here’s how Jason uses Muse to create music with his brain
Imagine attending a musical performance where the artist’s own brainwaves and heart rate shape the music in real time, offering a truly immersive experience. Jason Snell is doing just that.
As a software developer and artist, Jason has spent years developing a unique system that transforms live biometric data into dynamic musical compositions. His work with the Muse headband has led to the creation of bio-responsive performance art, including brainwave techno, ambient soundbaths, and symphonies.
In this interview, Jason shares his project’s evolution, the inspiration behind his work, and how the Muse headband plays a crucial role in the educational and therapeutic applications of his musical neurofeedback system.
Jason's journey with Muse
Can you tell us a bit about your project and how the Muse headband has influenced its development?
Over the last six years, I’ve developed dozens of software iterations that process biometric data from the Muse headset and convert it into MIDI (digital music information). This data is then routed to Ableton or electronic music hardware like drum machines, synthesizers, and effects pedals.
I routinely use this software system for art performances, using my live biometrics to shape the sound and the visuals or lighting. The result is a bio-responsive performance art experience. These performances have taken the form of brainwave techno, ambient, biofeedback soundbaths, and brainwave symphonies.
I’ve gravitated to the Muse because of its price, ease of use, transportability, and range of sensors. Besides the EEG sensors (brainwaves), I also use the PPG (heart data) and accelerometer (movement). Being able to co-process multiple biosensors creates a more dimensional model for the output, resulting in a richer, more immersive sound and lightscape.
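To give a concrete flavor of that pipeline, here is a minimal illustrative sketch (not Jason’s actual software): it listens for Muse alpha-band data over OSC, as streamed by a companion app like Mind Monitor, and converts it into a MIDI control-change message that Ableton or hardware could map to any parameter. The port, value range, and CC number are assumptions.

```python
# Illustrative sketch only -- not Jason's software. Assumes the Muse is
# streamed over OSC (e.g., by the Mind Monitor app) and that a virtual MIDI
# port is routed into Ableton or hardware. Port, range, and CC are guesses.
import mido
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

midi_out = mido.open_output()  # default MIDI output port

def on_alpha(address, *band_power):
    # Mind Monitor sends absolute alpha power per EEG channel; average the
    # channels, then squash an assumed ~-1..1 range into MIDI's 0..127.
    avg = sum(band_power) / len(band_power)
    cc_value = max(0, min(127, int((avg + 1.0) / 2.0 * 127)))
    midi_out.send(mido.Message('control_change', control=1, value=cc_value))

dispatcher = Dispatcher()
dispatcher.map("/muse/elements/alpha_absolute", on_alpha)
BlockingOSCUDPServer(("0.0.0.0", 5000), dispatcher).serve_forever()
```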
What inspired you to start developing a musical neurofeedback system that integrates EEG data?
In 2009, I created a generative music app, which evolved into a generative AI system. It ended up making better music than me (haha) and wrote an album released on an avant-garde record label I had aspired to for years. Feeling a bit beaten by machines, I focused next on dancers and motion sensors, where their real-time movement composes the music. I worked with dancers in Berlin, LA, Iowa, and New York, witnessing their bodies move with empowerment and joy through the biofeedback loop.
In late 2017, I dreamed of making music from DNA sequences, leading to the project name Primary Assembly and turning my attention inward. Shortly afterward, I saw an ad for the Muse headset and realized I could use the code frameworks from my motion sensor project. I shifted my system's inputs from algorithms to movement and then to brainwaves, and the concept of making music with my brain excited me.
I had nearly 30 years of experience in computer programming, electronic music production, and meditation, and these paths finally merged. I ordered the Muse, and 36 hours later, I had a prototype. I put on the Muse, cleared my mind, and a note played on my synthesizer. This was my ‘Eureka!’ moment. I knew if I could produce a single note, I could eventually create a symphony.
Learn how music affects brain function and performance here >>
Can you describe the evolution of your project from the initial iOS prototypes to the current standalone app you're developing?
The first iterations of the brainwave system were like DJing with my brain. I used muscle tension and alpha relaxation to shape the FX and EQ of looping stems on an Elektron Octatrack, then peripheral left or right focus to crossfade mixes. This allowed me to perform hour-long techno sets using only my mind.
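As a rough illustration of that crossfading idea (a toy construction, not Jason’s code), one could compare alpha power on the left-side sensors against the right and turn the asymmetry into a crossfader position. The channel order follows the Muse sensor layout (TP9, AF7, AF8, TP10); the normalization and CC scaling are assumptions.

```python
# Toy illustration of brain-controlled crossfading -- not Jason's actual code.
def crossfade_from_alpha(tp9, af7, af8, tp10):
    left = (tp9 + af7) / 2.0    # left-hemisphere sensors
    right = (af8 + tp10) / 2.0  # right-hemisphere sensors
    asym = (right - left) / (abs(left) + abs(right) + 1e-9)  # -1..1
    return max(0, min(127, int((asym + 1.0) / 2.0 * 127)))   # MIDI 0..127

print(crossfade_from_alpha(0.9, 0.8, 0.3, 0.2))  # left-dominant -> low value
```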
Next, I explored composing with mental state changes by inserting and removing beats from a looping sequencer. I also demonstrated "biomorphic" compositions, which sounded like ambient IDM music, at conferences and university talks.
When the pandemic hit, I spent my extra time at home working with the new Muse 2 and its PPG heart rate sensor. I developed a techno performance where the tempo was controlled by my heart, leading to the smoothest, most natural-feeling BPM changes: when the music was more exciting, the BPM went up; when it was more restful, the BPM slowed down. I also projected my live brainwaves onto my body, exploring which mental states corresponded to different colors in a projection-mapping aesthetic.
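That heart-to-tempo mapping can be sketched generically (the smoothing constant and tempo bounds below are illustrative, not Jason’s values): smooth the incoming heart rate with an exponential moving average so the BPM drifts rather than jumps, then scale it into a musical tempo range.

```python
# Generic sketch of heart-driven tempo, assuming heart rate arrives as BPM
# samples derived from the Muse 2's PPG sensor. All constants are illustrative.
class HeartTempo:
    def __init__(self, smoothing=0.1, lo=100.0, hi=140.0):
        self.smoothing, self.lo, self.hi = smoothing, lo, hi
        self.heart_bpm = None

    def update(self, heart_rate):
        # Exponential moving average keeps tempo changes smooth and natural.
        if self.heart_bpm is None:
            self.heart_bpm = heart_rate
        else:
            self.heart_bpm = (self.smoothing * heart_rate
                              + (1 - self.smoothing) * self.heart_bpm)
        # Double the heart rate into a techno-friendly band, then clamp.
        return min(self.hi, max(self.lo, self.heart_bpm * 2))

tempo = HeartTempo()
for hr in [62, 64, 70, 75]:            # heart speeds up as the set intensifies
    print(round(tempo.update(hr), 1))  # 124.0, 124.4, 126.0, 128.4
```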
In 2022, I joined a one-year intensive graduate program at NYU's Interactive Media Arts Low Res, studying in NYC, Berlin, and Shanghai. This allowed deeper research into neuroscience, psychoacoustics, and brain entrainment, leading to new performances like 'biofeedback soundbaths.' This year, I received grants to develop multi-person EEG systems, resulting in a 'brainwave symphony' I recently presented at the Smithsonian Hirshhorn.
You’ve mentioned transcendent states during performances. Can you elaborate on these moments and their significance to your work?
During my 2019 telekinetic techno tour, I experienced radical shifts in my state of mind, completely sober. Having performed for over two decades, this was a new experience. When my brain composed music and I felt the bass waves in real time, it created a sensation-perception loop called musical neurofeedback.
At my second show, while meditating on stage, the music got deeper, pulling me into my consciousness. I was so engrossed in shaping and perceiving the music that I began to dream while awake. When I came back to my senses, I realized I was performing in front of an audience.
Later on that tour, I noticed a small gap between hearing a Beta note and experiencing the sensation of my brain revving up into Beta. This led me into research on perceptual “binding,” where our brain combines different sensations into a single event.
At another show, I experienced a transcendent state. My brain was hearing what it was making, and my body was feeling the bass waves, pushing me into a gamma state. I had never wept on stage before, but I was overwhelmed with gratitude.
During my NYU grad program, I researched these transcendent states to see if they were repeatable in me and others. I investigated how repetitive beats impact the brain, from techno to ancient Mayan rituals, and developed a psychoacoustic sound design to lead people from an awake state to a relaxed meditation, and down into a hypnotic, waking dream state. I plan to explore the other states more in the future.
Discover what brainwaves like Beta mean here >>
What potential educational applications do you see for your musical neurofeedback system?
I conducted a middle school pilot program in my home state of Iowa where band and orchestra students used Muse headsets and iPads to see and hear their brainwaves. This was part of a larger social-emotional learning program designed to help students become more self-aware of their stress levels and mental processes. Hearing their own brain states gives students information about something that is usually invisible and difficult to feel internally. Over time, they can learn what a meditative alpha state feels like and how to achieve it. Neurofeedback, especially musical neurofeedback (which works even when a person’s eyes are closed and they can’t watch visual feedback), is like turning on the lights in a dark room.
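To show what “hearing your own brain” can mean in its simplest form, here is a generic sonification sketch (an illustration, not the classroom app): a tone whose loudness follows a series of alpha-power readings, so a relaxed, high-alpha state is literally louder. The sample values, the 220 Hz carrier, and the half-second window are all assumptions.

```python
# Generic sonification sketch -- not the classroom app. The loudness of a
# 220 Hz tone tracks normalized alpha power, making a relaxed state audible.
import math, struct, wave

RATE = 44100
alpha_readings = [0.1, 0.3, 0.6, 0.9, 0.7]  # fake normalized alpha power (0..1)

with wave.open("alpha_feedback.wav", "w") as out:
    out.setnchannels(1)
    out.setsampwidth(2)      # 16-bit samples
    out.setframerate(RATE)
    for level in alpha_readings:
        for n in range(RATE // 2):  # half a second of audio per reading
            sample = level * math.sin(2 * math.pi * 220 * n / RATE)
            out.writeframes(struct.pack("<h", int(sample * 32767)))
```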
As students start to understand the shape and size of their state of consciousness, they become better equipped to navigate it. This type of introspection can be valuable in education, meditation, and mental health.
What potential therapeutic applications do you see for your musical neurofeedback system, and how do you envision these impacting people?
I’m working with neuroscientists and neurotherapists to explore how musical neurofeedback could help with various difficulties and improve quality of life. I recently received a Roddenberry Foundation grant for clinical studies on my sound design to compare its effectiveness to music therapy and non-musical neurofeedback.
Preliminary studies with disabled children have shown promising results. One child, initially unable to speak, expressed himself through music from his brainwaves and later showed increased mobility and speech capabilities.
I’m also in discussions with a gaming company about integrating biofeedback and neurofeedback into game engines for veterans with PTSD. Providing tools to understand and regulate their mental and physical states can be life-changing across therapeutic models.
What are the next steps for your project?
I received grant funding to study the sounds and movements used in various cultures to induce meditative or trance states. I’m curious whether there are commonalities, like repetitive beats matching the tempo of the adult heart. This research will culminate in art performances and workshops.
I’m also developing a web-based therapeutic build of my software, with the MVP sound design based on my NYU thesis work. Additionally, I have an upcoming integration with the education department at the Metropolitan Museum of Art in NYC.
You can follow Jason's project by checking out his Instagram or subscribing to his YouTube channel.
We're looking for inspiring stories from Musers like you! 🧠 Feel free to share yours by contacting us here.