A new release from human-AI band “The elephants and the”. (Yes, our band name is also AI-generated!)
Listen to and share the song on your platform of choice:
All band members are or have been affiliated with the Music, Science and Technology Research Cluster in the Department of Music at the University of York, one of the best places in the world to study music in all its diverse forms.
Band members in pseudo-random order:
- Jemily Rime co-wrote the song using the AI outputs as a starting point, and also recorded the vocal track.
- Alex (Zongyu) Yin developed an audio-conditioned, sequence-to-sequence model that generates lyrics in a specifiable genre.
- Lynette Quek created the imagery (image and moving image) for the song utilising machine learning, generative engines, and Processing code.
- Mark Hanslip created an audio dataset of himself playing tenor saxophone and trained a WaveGAN to generate short audio clips, from which the riffs and solos you hear in the song were assembled.
- Liam Maloney provided production, mixing, and mastering for the finished track.
- Tom Collins' MAIA Markov algorithm provided melodies, chords, bass lines, and drum beats. Tom has a new open access book coming out called “Coding music and audio for the web: Empowerment through programming”. Register your interest here!
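For the curious, the core idea behind Markov-based melody generation can be sketched in a few lines. This is a minimal, illustrative first-order example only; MAIA Markov itself is considerably more sophisticated, and the transition table below is invented for demonstration, not taken from the actual model.

```python
import random

# Illustrative first-order transition table: each pitch (MIDI note
# number, here drawn from C major) maps to candidate next pitches.
# These values are made up for the sketch.
transitions = {
    60: [62, 64, 67],
    62: [60, 64, 65],
    64: [62, 65, 67],
    65: [64, 67],
    67: [64, 65, 72],
    72: [67],
}

def generate_melody(start=60, length=8, seed=None):
    """Walk the transition table to produce a pitch sequence."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        melody.append(rng.choice(transitions[melody[-1]]))
    return melody

print(generate_melody(seed=1))
```

Each note is chosen based only on the note before it, so short runs sound locally coherent; richer models condition on longer contexts and on harmony as well as melody.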
Details of how we wrote the song with AI can be found here.
“Circus” contains the world's first tonal AI riff and solo generated in the waveform domain. Use the mini interface below to remix the sax material in and around the chorus of “Circus”.
We've noticed, but haven't yet fixed, a bug on tablets that causes the interface not to appear. Here's an old-fashioned, immutable audio clip in case the mini interface didn't show up for you: