Endel is a platform that uses AI and personal data points to generate soundscapes designed to enhance mental states.
Popular use cases include productivity and focus, meditation and relaxation, and sleep. Soundscapes are personalized based on real-time inputs such as location, weather, heart rate, and circadian rhythm.
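Endel hasn't published its algorithm, but as a hypothetical sketch of how this kind of personalization could work, imagine mapping those real-time inputs to a few soundscape parameters (all names and mappings here are my own assumptions, not Endel's):

```python
from dataclasses import dataclass

@dataclass
class Inputs:
    heart_rate: int   # beats per minute
    hour_of_day: int  # 0-23, a crude proxy for circadian phase
    weather: str      # e.g. "rain", "clear"

def soundscape_params(inp: Inputs) -> dict:
    """Map real-time inputs to hypothetical soundscape parameters."""
    # An elevated heart rate biases toward a slower, calmer soundscape
    tempo = max(40, 90 - (inp.heart_rate - 60) // 2)
    # Late-evening hours bias toward a darker, sleep-friendly timbre
    brightness = 0.2 if inp.hour_of_day >= 21 or inp.hour_of_day < 6 else 0.7
    # Rainy weather adds a natural noise layer
    noise_layer = inp.weather == "rain"
    return {"tempo": tempo, "brightness": brightness, "noise_layer": noise_layer}
```

The point of the sketch is simply that a small set of live inputs can steer an otherwise generative process, which is why no two sessions need sound the same.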
I’ve been using it for a few hours a day while I work since January 2020. This post is part review and part exploration of generative music as a technology to augment mental states.
Music for Airports
Several months ago, I found myself looking for new music to accompany me on a work trip to Boston. I ran across a post on Reddit about a Japanese ambient album, Kankyō Ongaku: Japanese Ambient, Environmental & New Age Music 1980-1990.
What caught my eye was that one of the commenters said that the music was made with the belief that music is a physical phenomenon that can interact with the world around you.
I thought that kind of made sense — sound waves bouncing off different objects and interacting with the environment seemed legit.
I loaded up several of the best ambient albums the internet could recommend and went on my trip, and ended up strolling through the airport while listening to Brian Eno’s Ambient 1: Music for Airports.
It was a great experience, like living with a soundtrack. Everything felt more cinematic and elevated.
Over time, I developed some go-to favorites. The problem with familiarity is that I began to focus on the music instead of letting the music enhance the situation.
Generative music seeks to solve this familiarity issue by creating something perpetually new.
Generative Music is an Experience
Proponents believe that generative music experiences can unlock an instance-based listening experience that has the potential to transcend static recordings.
At first glance, it’s a little like being into jam bands. My experience with Grateful Dead fans is that they love to collect recordings — not for their fidelity — but for the iterative jams. These recordings are highly sought after because the band took familiar songs in new directions.
Generative music, especially when it uses existing tracks as seeds, can be thought of as a robot jam band that performs for you on demand.
Side note: many video games have adaptive soundtracks, which serve to create immersive experiences and emotional high points for gamers. That video link is an absolutely fascinating must-watch.
Bronze, Jai Paul, and Generative Experiences
Firmly in the robot jam band category is the collaboration between artist Jai Paul and Bronze.ai, a small team of AI research scientists and musicians that seeks to use AI to “fundamentally extend the capabilities of recorded music.”
The result is a never-ending, unique listening experience that has to be heard to be believed.
You can listen here.
Details on how the Bronze platform works are scarce. The official website only offers these hints:
Bronze is a new technology that allows music creators to utilize AI and machine learning as creative tools for composition and arrangement. Bronze is also an audio file format which will revolutionize music playback, enabling artists to release non-static, generative, and augmented music.
In an email, the team also had this to say regarding the Jai Paul example:
…the creative decisions in arranging this piece were made by humans and the AI models were used simply to arrange, improvise and perform the piece on playback. We made a creative decision that given the nature of the production and the fact people were already familiar with the song, we would build a model unique to the piece that creates an endless arrangement on each listen, improvising around the instrumental form in many different ways and periodically dropping into a more fixed structure and improvising in a much more subtle way around the lead vocal.
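Bronze hasn’t published implementation details, but the description above suggests something like a stochastic arrangement engine: mostly improvising, periodically dropping into a fixed section. Here’s a minimal sketch under my own assumptions (the section names and probabilities are invented for illustration):

```python
import random

def endless_arrangement(seed: int, sections: int = 8) -> list[str]:
    """Generate one pass of a hypothetical endless arrangement:
    improvised sections most of the time, periodically dropping into
    a fixed structure. Each seed yields a different arrangement."""
    rng = random.Random(seed)
    arrangement = []
    since_fixed = 0  # sections elapsed since the last fixed passage
    for _ in range(sections):
        # The longer we've improvised, the likelier a fixed section becomes
        if rng.random() < 0.2 * since_fixed:
            arrangement.append("fixed:lead-vocal")
            since_fixed = 0
        else:
            variation = rng.choice(["sparse", "dense", "melodic"])
            arrangement.append(f"improv:{variation}")
            since_fixed += 1
    return arrangement
```

Run it with a different seed on every “playback” and you get a new arrangement each listen, while the fixed sections keep the song recognizable.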
It’s just so interesting and fun to listen to. I think (and hope) that we see a lot more of these artist-driven generative experiences.
Regarding the promise of generative music, musician, composer, and producer Arca had this to say:
“When you publish an album, that’s the way people will hear it forever more. When you play a song live, it’s unpredictable and ephemeral.”
“There’s something freeing about not having to make every single microdecision, but rather, creating an ecosystem where things tend to happen, but never in the order you were imagining them.”
As for Endel, it isn’t quite music to my ears, and I can’t listen to it as such. It’s more like a static-y, white-noise soundtrack with temporary undulations and sporadic, melodic overtones.
I don’t think we’ll see users sharing recorded Endel sessions. For most, generative experiences are interesting once, and that’s the point. They feel more personal and intimate, and in Endel’s case, the here and the now are the inputs that the AI is riffing on.
Use Cases for Endel and Personalized Soundscapes
Speaking to Time, Endel CEO Oleg Stavitsky said that users have successfully used Endel for ADHD, insomnia, and tinnitus.
As for scientific literature on music and its effects on the human animal, these claims seem well supported.
Here’s a study that suggests that background music can increase worker satisfaction and productivity, particularly music that doesn’t have lyrics.
The abstract for “The future of music in therapy and medicine” states:
The understanding of music’s role and function in therapy and medicine is undergoing a rapid transformation, based on neuroscientific research showing the reciprocal relationship between studying the neurobiological foundations of music in the brain and how musical behavior through learning and experience changes brain and behavior function. Through this research the theory and clinical practice of music therapy is changing more and more from a social science model, based on cultural roles and general well-being concepts, to a neuroscience-guided model based on brain function and music perception. This paradigm shift has the potential to move music therapy from an adjunct modality to a central treatment modality in rehabilitation and therapy.
I spent hours reviewing medical studies on PubMed, and that’s the best overview I can provide.
Endel has the tech and the science to back it up, but how does this team bring the product to market? It’s not exactly an easy sell to the masses.
Notes on Marketing Strategy
I was surprised to learn that Endel isn’t developed by a team of musicians but rather a team of scientists and engineers who leverage the body of scientific literature that pertains to music.
As reported by dozens of top-tier outlets, Endel was signed to a record deal with Warner Music Group. Endel will create 20 albums for the record label.
It’s a nice angle, but I’m not sure how it fits into a go-to-market strategy. The deal is very much about the end product, the “music,” while most of the company’s other messaging is about the platform and the technology.
The company seems to understand this tension as well. From the release:
“Warner approached us and we were hesitant at first because it counters what we’re doing here,” Endel’s co-founder and sound designer Dmitry Evgrafov tells Rolling Stone. “Our whole idea is making soundscapes that are real-time and adaptive. But they were like, ‘Yeah, but can you still make albums?’ So we did it as an experiment. When a label like Warner approaches you, you have to say ‘Why not.'”
There’s a 24/7 live-streaming Twitch channel, and the WMG-licensed records are available on Spotify. It is interesting to note that Endel doesn’t have a first-party presence on major music streaming platforms, which certainly have their audiences. Popular YouTube channels with similar utility and 24/7 streams include ChilledCow, which syndicates music to Spotify and other platforms.
Like the Jai Paul/generative experience mentioned previously, Endel also partnered with musician Grimes to release AI Lullaby, a limited edition sleep soundscape, which you can listen to here.
DTC or OEM
At present, the team seems to be feeling out both distribution methods. Talks of partnerships with automobile makers and other OEM integrations sound promising.
But at the same time, Endel has invested in, and appears to be successful with, App Store distribution, which suggests a direct-to-consumer orientation. I paid for a lifetime membership, but the current price is $49 per year.
It’s unclear to me which avenue is better, but I would say that I’m cautiously optimistic they’ll be able to do both.
Depending on the integration, different data becomes available, and you can imagine distinct product variants. Automobiles could let the platform tune into ambient noise levels, traffic patterns and speed, and respond to conversation (or the lack thereof) within the vehicle.
Still, these are things you could do directly within an app — so who knows.
I love Endel, and I think that if you give it a chance, you’ll like it, too. It’s a promising riff on an existing idea and one that I suspect we’ll see much more of in the coming years.
The macOS app has a few bugs. The most annoying: after opening the app, most users will want to minimize it to the tray and keep listening while doing other things, but the music stops, and you have to reopen it from the menu bar icon to start playback again. My life would be 3% better if they fixed this.