Cover: AI-generated robots dancing in box-jellyfish-like movements (Photo: courtesy of Derek Lee / Halo Studio Ltd)

The world’s first symbiotic AI and human orchestra can “think” and “play” like people. But its creators have bigger plans for neuroscience beyond dazzling concert hall audiences with the impossible

Opera legend Maria Callas, who died in 1977, will never record a new aria. Or will she?

This past summer, computer science professor Guo Yike and Johnny Poon, head of the department of music, used AI technology and computer programming to create a recording of Mendelssohn’s On Wings of Song in the voice of the late New York-born soprano. It was so realistic that those invited to listen were fooled. “They said to me, ‘Oh, I didn’t realise Callas recorded this song,’” Poon says with a laugh.

Callas’s “recording” was one of the first test products from the university’s art tech team in the lead-up to the August launch of the Turing AI Orchestra, the first human-AI ensemble in the world. Named after British mathematician Alan Turing, who is often credited with laying the foundations of modern computing, the orchestra combines human musicians and AI systems in performance. It debuted in a concert at Hong Kong City Hall, where a human orchestra, conducted by Poon, performed the 1986 Mandarin song Pearl of the Orient by Taiwanese singer-songwriter Lo Ta-yu alongside AI-generated choral singing. Lifelike human voices rang in harmony with the orchestral music, but the choir was nowhere to be seen: the “voices” were all synthetically created.


The Turing AI Orchestra works like this: Guo has written an algorithm that sets the parameters for a computer program to generate art, be that sounds, images or videos of dance movements. Guo and Poon then feed it large amounts of raw data from the internet: ten hours of recordings of Callas’s performances, for example; footage of real dancers’ or animals’ movements; or, in the case of the City Hall concert, hours of recordings of opera singers and choirs performing Pearl of the Orient.

At the concert, Poon wore a motion capture suit that recorded his body movements as he conducted the orchestra. These movements were encoded as inputs to the AI program, which converted them into lifelike singing at different volumes and tempos. Based on Poon’s movements and the song’s lyrics, the same AI system also generated videos of two dancing robots, which were projected onto the backdrop during the performance.
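The article does not describe how gestures become sound, but the idea of turning motion-capture readings into performance parameters can be sketched in a few lines. This is a toy illustration, not the Turing AI Orchestra’s actual code; the feature names (hand height, beat interval) are assumptions for the sake of the example.

```python
# Toy sketch (not the actual Turing AI Orchestra system): map
# hypothetical motion-capture features from a conductor's suit to
# synthesis parameters such as volume and tempo.

def gesture_to_parameters(hand_height, beat_interval_s):
    """Convert raw gesture features to performance parameters.

    hand_height: normalised 0.0 (arm lowest) to 1.0 (arm highest)
    beat_interval_s: seconds between successive downbeats
    """
    volume = 0.2 + 0.8 * hand_height      # higher arm -> louder singing
    tempo_bpm = 60.0 / beat_interval_s    # faster beats -> faster tempo
    return {"volume": round(volume, 2), "tempo_bpm": round(tempo_bpm, 1)}

# A raised arm and quick beats yield a loud, fast passage
print(gesture_to_parameters(hand_height=0.9, beat_interval_s=0.5))
```

In a real system the mapping would be learned from data rather than hand-written, which is exactly the point the researchers make below.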

This collaboration between AI and humans marks another milestone in the development of Hong Kong’s art tech. For decades, multimedia artists have pushed the creative boundaries of art experiences through virtual reality, extended reality, augmented reality and real-time animation. Poon thinks that these projects so far have been about maximising the use of existing technologies and off-the-shelf products to come up with new art experiences; his program is different. “Here, we are talking about how we’re going to create next generation technology for the future,” he says.

Guo and Poon’s goal is that, fed with enough raw data, the AI program will learn to categorise patterns of human emotion and behaviour. For instance, choirs usually crescendo when the conductor raises their arm, and sad songs in minor keys usually have lyrics that convey unhappiness. Poon and Guo hope that the program will automatically generate a crescendo when Poon, wearing the motion capture suit, raises his arm, without anyone sending direct instructions to the AI generator. In other words, they want the AI to learn from examples and then behave like humans.
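The “learn from examples” idea can be illustrated with a toy supervised model: fit a mapping from arm height to choir loudness using example pairs, then predict the dynamics for a new gesture without any hand-written rule. This is a minimal sketch with invented numbers, not the team’s actual model.

```python
# Toy example of learning a gesture-to-dynamics mapping from data,
# rather than hard-coding "raised arm => crescendo". Ordinary
# least-squares fit of loudness (dB) against arm height; all data
# values are invented for illustration.

def fit_line(xs, ys):
    """Least-squares fit y = a*x + b; returns (a, b)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Invented training pairs: (arm height 0-1, choir loudness in dB)
heights = [0.1, 0.3, 0.5, 0.7, 0.9]
loudness = [52.0, 58.0, 64.0, 70.0, 76.0]

a, b = fit_line(heights, loudness)

def predict(height):
    return a * height + b

# The model now produces louder dynamics for a raised arm, having
# inferred the rule purely from the example pairs.
print(round(predict(0.8), 1))
```

The real system would learn from far richer data (full-body motion, audio, lyrics) with a far more complex model, but the principle is the same: the rule emerges from patterns in the examples, not from explicit instructions.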


To test whether the AI program was “learning”, Poon invited members of the Hong Kong Dance Company to be filmed performing a wide variety of movements that contrasted with the mood of the music. “We purposely told the dancers that we did not want them to dance to the music—otherwise, there would be no way to tell whether the machine was learning,” he explains. The dance recordings and music were then uploaded to the machine. The result, displayed during the debut concert, was startling: without direct instructions from Poon, the AI robot dancers automatically reached upwards and opened up their bodies every time the music crescendoed, as if they “understood” the intensifying volume of the song and performed in accordance with it. In other words, they seemed able to behave like emotional beings.

Poon suggests that human learning and behaviours may be more similar to machine learning than we think. “Why do I hear a piece of music and want to cry when in fact the music itself isn’t emotional? All great art seems to share a certain pattern, which I think is the secret to human artistry.” He believes that our emotional response to the music is our brain reacting to certain patterns and structures, so his AI program’s learning through pattern identification isn’t too far from how we distinguish good art from bad. “It’s an attempt to crack the human code to uncover features or patterns that are not immediately apparent to us,” he adds.


Since the AI orchestra’s debut in summer, the program has been applied to a range of local arts experiences: visitors to the Hong Kong Museum of Art over the past few months were able to listen to a celestial performance by real human and AI voices in front of Italian Renaissance painter Marcello Venusti’s The Last Judgement, and Poon’s upcoming Christmas concert is expected to feature pop stars performing without being physically present.

Poon believes art tech will help drive neuroscientific discoveries, which in turn will benefit the science, education and even business sectors. To work towards that, over the next five years, Hong Kong Baptist University (HKBU) is setting up Hong Kong’s first MRI (magnetic resonance imaging) lab outside a medical school or hospital to collect and analyse brain data for research related to creativity.

“The world has become transdisciplinary,” he says. “Art tech is probably the most efficient way for people to work together to solve problems.”


Tatler Asia
© 2022 Tatler Asia Limited. All rights reserved.