Press play to start a generative composition based on the track “2/1” from Brian Eno’s record Ambient 1: Music for Airports. You can experiment by adjusting the duration of the loops.


Brian Eno’s original track was built from seven tape loops playing simultaneously. Each tape played back a single sung note at a different pitch. Since each loop had a different duration, notes would come together, overlap to form chords, then drift apart in unexpected combinations.

He explained, in a 1996 interview:

“One of the notes repeats every 23 1/2 seconds. It is in fact a long loop running around a series of tubular aluminum chairs in Conny Plank’s studio. The next lowest loop repeats every 25 7/8 seconds or something like that. The third one every 29 15/16 seconds or something. What I mean is they all repeat in cycles that are called incommensurable – they are not likely to come back into sync again.”
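Eno only gives the durations approximately (“or something like that”), but if we take the three quoted values at face value we can compute how long it takes for those loops to realign. The following sketch uses Python’s exact fractions; treating the quoted figures as exact is an assumption:

```python
from fractions import Fraction
from math import lcm, gcd

# The three durations Eno quotes (assumed exact for this calculation):
durations = [Fraction(47, 2),    # 23 1/2 s
             Fraction(207, 8),   # 25 7/8 s
             Fraction(479, 16)]  # 29 15/16 s

# The loops all realign after the least common multiple of their
# durations. For fractions: lcm of numerators over gcd of denominators.
num = lcm(*(d.numerator for d in durations))
den = gcd(*(d.denominator for d in durations))
full_cycle = Fraction(num, den)

print(float(full_cycle) / 86400)  # days until all three loops realign
```

With these values, the three loops only come back into sync after roughly 27 days of continuous playback, which is why the repetition is effectively inaudible.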

This piece is an example of both ambient music and generative art. These are useful starting points for thinking about structure in music and the challenges and opportunities for AI assisting in composition.

Structureless ambient music

Structure is mentioned many times in other chapters. Structure enables rhythm, patterns, melodies, repetition, anticipation, and other elements that hook people’s attention and engage them emotionally, intellectually, or physically. But art is a framework for exploration, and sure enough, over the years many artists have pursued “structureless” music (or at least music whose structure is not evident).

Ambient is a musical genre that emphasizes mood and atmosphere over structure. As early as 1917, composer Erik Satie proposed the idea of “furniture music”: short, repeatable, minimalistic compositions meant to stay in the background; music that can become part of the environment “without imposing itself.” During the 1950s, artists working with taped sounds (a practice called musique concrète) often created “soundscapes” that departed from the traditional idea of a song. The ambient genre took shape during the 1960s and 70s through minimalism and experimental music based mostly on synthesizers. Brian Eno gave the genre its name, completing its definition.

Compositions like the ones produced by the interactive above seem to have no structure. The effect is similar to staring at moving clouds: for a moment, the brain recognizes a shape or pattern, which then melts away or gives way to something new and unexpected. This allows for a calm and meditative engagement.

… Then again, we used a completely predictable series of loops. Their structure is easy to see visually, but acoustically a whole cycle takes so long that it is impossible to hear the repetition, or even to recognize it. If we arbitrarily select a few minutes of sound and call it a “song,” we detach it even further from its structure.

In this case, our music is created by a program. You can read its code (we provide the link below) and see exactly how it works. You could also say that this program is the structure underlying the music; seeing it only requires a different standpoint. The use of autonomous systems to create art is called generative art. The system can be software, but also mechanical, biological, chemical, mathematical, and so on.
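To make “the program is the structure” concrete, here is a minimal sketch of such a generative system in Python. Only the three loop durations quoted by Eno come from the source; the remaining periods and all note names are invented for illustration:

```python
import heapq

# Three durations come from Eno's 1996 description; the other four
# periods and all note names are hypothetical, chosen for illustration.
loops = [(23.5, "high Ab"), (25.875, "C"), (29.9375, "Db"),
         (19.7, "F"), (20.1, "Eb"), (27.6, "low Ab"), (21.3, "high F")]

def events(loops, until):
    """Yield (time, note) onsets for all loops in chronological order."""
    heap = [(0.0, period, note) for period, note in loops]
    heapq.heapify(heap)
    while heap:
        t, period, note = heapq.heappop(heap)
        if t > until:
            break
        yield t, note
        heapq.heappush(heap, (t + period, period, note))

# The "score" of the piece is just this event stream.
timeline = list(events(loops, until=120))
for t, note in timeline[:8]:
    print(f"{t:7.2f}s  {note}")
```

The entire composition is determined by a handful of numbers and a scheduling rule; everything the listener hears follows from that structure.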

Generative art matches well with the idea of music that melds with the environment and doesn’t call attention to itself. Our brains are particularly good at detecting human activity, and a human’s expressive gestures rarely pass unnoticed or appear random. In generative art, the artist works one level above, as the designer of a system that produces the artwork.

Composing with Transformers

Some musical structures relate to culture and genre: scales, chords, and rhythmic grooves, for example. But songs are also self-referential: they have recurring motifs, phrases, and sections (like verse and chorus). There is a narrative aspect to songs as well: they have a beginning, middle, and end, expressed through devices like repetition, contrast, and variation. So part of a song’s structure relates to things outside it (its genre, for example), and part relates to itself (its verse-chorus-bridge arrangement, for example).

In simpler neural networks, each note the AI generates is related to, at most, the few notes that came right before it. Such a network has only a “short-term memory”: it might remember the last few bars, but not the whole song. Songs created this way can have sections that sound realistic, but as a whole they have no beginning or end; if you listen for a long time, they just keep changing without getting anywhere. In human-made songs, notes can relate to notes that appeared at any earlier point, no matter how distant. Capturing that requires an AI with “long-term memory.”
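The “short-term memory” limitation can be illustrated with an even simpler generative model than a neural network: a Markov chain whose next note depends only on the two notes before it. The melody and the memory length here are arbitrary choices for the sketch:

```python
import random
from collections import defaultdict

random.seed(1)

# A hypothetical melody, written as a sequence of note names.
melody = list("CDECDEGFEDCDEC")

# "Short-term memory" of length 2: the next note depends only on the
# previous two notes (a 2nd-order Markov chain).
model = defaultdict(list)
for a, b, c in zip(melody, melody[1:], melody[2:]):
    model[(a, b)].append(c)

# Generation sounds locally plausible but has no long-range plan:
# no motif development, no sense of an ending.
notes = melody[:2]
for _ in range(20):
    notes.append(random.choice(model[tuple(notes[-2:])]))
print("".join(notes))
```

Every two-note transition in the output is one that occurred in the source melody, so short stretches sound right; but nothing ties the end of the output back to its beginning.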

Scientists are experimenting with a type of neural network called the Transformer to solve this problem. Transformers use a mechanism called self-attention, which lets them learn the relationship between an element (e.g., a note) and all the other elements (e.g., the other notes) in a sequence (e.g., a song). It’s called “attention” because it focuses on the relationships that matter most, just as the human brain uses attention to concentrate on what is most important at any given time. This lets Transformers use their “memory” efficiently, much as humans do.
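As a rough illustration of the idea (not a real Transformer, which would also use learned projection matrices and multiple attention heads), here is self-attention reduced to its core in plain Python: every note scores its relationship to every other note, the scores become weights via a softmax, and each note’s new representation is a weighted mix of the whole sequence:

```python
import math
import random

random.seed(0)

# Toy "song": 6 notes, each embedded as a 4-dimensional vector.
seq_len, d = 6, 4
x = [[random.gauss(0, 1) for _ in range(d)] for _ in range(seq_len)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def softmax(row):
    m = max(row)
    e = [math.exp(s - m) for s in row]
    total = sum(e)
    return [v / total for v in e]

# Each note scores its relationship to every note in the sequence,
# including very distant ones (the 6x6 "attention" matrix)...
scores = [[dot(q, k) / math.sqrt(d) for k in x] for q in x]
weights = [softmax(row) for row in scores]

# ...and its new representation is a weighted mix of all notes.
out = [[sum(w * v[j] for w, v in zip(row, x)) for j in range(d)]
       for row in weights]
```

The key point is that the attention matrix connects every position to every other position at once, so a note at the end of the sequence can directly “look back” at the beginning.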

Transformers are not specific to music. They were originally created for language-related tasks but have shown great versatility across all types of problems, and they are considered promising as general-purpose models. You might have heard of the “Generative Pre-trained Transformer 3,” or GPT-3, a very popular AI currently used for all kinds of text-related tasks, from translation to writing software. At this time (2023), two extremely popular AI applications build on GPT-3 and its successors: OpenAI’s ChatGPT and the DALL·E image generator.

Here are two recent examples of neural networks that use Transformers to create full songs. Follow the links to hear examples of the songs they can create:

Food for thought

In generative art, the role of the artist is to design, build, and configure autonomous systems that produce artwork. Ideally, the system contributes qualities that the artist cannot provide directly: endless variation, true randomness, structures beyond our capacity to detect, etc.

What opportunities does AI bring to generative art? In what ways can artists apply their artistic expression and point of view in the design of AI-based systems? How will AI technology and tools evolve to accommodate artistic work better?


The interactive version of Ambient 1: Music for Airports 2/1 was developed by Tero Parviainen, with adaptations by IMAGINARY.

Full credits and license information can be found in our GitHub repository.


Text is available under the Creative Commons Attribution License. Copyright © 2022 IMAGINARY gGmbH.