Luna Park

Musical Theater

  • Title: Luna Park
  • Composer: Georges Aperghis
  • Duration: 1h
  • Collaboration: Computer Music Design, Sound Design, Research
  • For: 2 voices, 2 flutes/voices and electronics
  • World Premiere: Festival Agora, IRCAM, EsPro, Paris, June 8-10, 2011
  • Performance: Strasbourg, MUSICA, October 7, 2011
  • Performance: Hamburg, Klangwerktage, December 1, 2011
  • Performance: Warsaw, September 27, 2012
  • Performance: Athens, February 21, 2014
  • Performance: Köln, May 5, 2014
  • Performance: Paris, Manifeste 2014, June 15, 2014

Listen to a concert excerpt:

“Luna Park” is a musical theater piece of about one hour, composed by Georges Aperghis, with set design by Daniel Levy and computer music design by Grégory Beller. The piece engages technology at several levels, using concatenative synthesis paradigms both in its formal design and in the processes deployed on stage. Live computer processes apply concatenative synthesis and prosodic transformations to the voices, controlled by gesture data streaming from accelerometer sensors built specifically for the piece. The world premiere of “Luna Park” took place at IRCAM’s Espace de Projection in Paris on June 10, 2011, during the 2011 edition of IRCAM’s Agora festival. The four performers of this premiere were Eva Furrer (flute, Octabase, and voice), Johanne Saunier (voice and dance), Mike Schmidt (bass flute and voice), and Richard Dubelsky (air percussion and voice). In “Luna Park”, the percussionist Richard Dubelsky literally speaks with his hand gestures, performing with Spokhands, a new system developed for the piece.
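
As a rough illustration of this kind of gesture-to-synthesis mapping (a minimal sketch only: the piece's actual system is not reproduced here, and every name, unit, and mapping choice below is hypothetical), a 3-axis accelerometer frame can be reduced to a smoothed gesture-energy value that drives synthesis controls:

```python
# Hypothetical gesture-to-synthesis mapping, not the actual Luna Park system:
# a 3-axis accelerometer frame is reduced to a smoothed "energy" value
# that drives two synthesis controls.
import math
from dataclasses import dataclass

@dataclass
class AccelFrame:
    x: float  # acceleration on each axis, in g
    y: float
    z: float

class GestureMapper:
    def __init__(self, smoothing: float = 0.9):
        self.smoothing = smoothing  # one-pole low-pass coefficient
        self.energy = 0.0

    def update(self, frame: AccelFrame) -> dict:
        # Deviation of the acceleration magnitude from 1 g (rest) = gesture energy.
        mag = math.sqrt(frame.x ** 2 + frame.y ** 2 + frame.z ** 2)
        raw = abs(mag - 1.0)
        self.energy = self.smoothing * self.energy + (1.0 - self.smoothing) * raw
        level = min(self.energy, 1.0)
        # Map the smoothed energy to hypothetical synthesis controls.
        return {
            "segment_rate": 1.0 + 4.0 * level,  # segments triggered per second
            "transposition": 12.0 * level,      # semitones
        }

mapper = GestureMapper()
print(mapper.update(AccelFrame(0.1, 0.9, 0.5)))
```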

For Luna Park, I developed a speech synthesizer whose musicality can be specified. In addition to the input text, the synthesizer is given a score defined at the syllable level, which makes it possible to literally draw the desired prosody. In the following example, the text was generated by Markov chains applied to a sentence at various unit levels: letters, phonemes, syllables, words, and prosodic groups. The original sentence is heard at the end.
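
As a toy illustration of this unit-level Markov generation (not the generator actually used for the piece), the sketch below builds an order-1 Markov chain over a sentence and resamples it at the letter and word levels:

```python
# Toy Markov-chain resynthesis of a sentence at different unit levels
# (letters and words shown here; phonemes, syllables, and prosodic
# groups would use the same mechanism with different segmentations).
import random
from collections import defaultdict

def markov_generate(units, length, order=1, seed=0):
    """Build an order-n Markov model over `units` and sample a new sequence."""
    rng = random.Random(seed)
    table = defaultdict(list)
    for i in range(len(units) - order):
        key = tuple(units[i:i + order])
        table[key].append(units[i + order])
    state = tuple(units[:order])
    out = list(state)
    for _ in range(length - order):
        nxt = rng.choice(table.get(state, units))  # fall back to any unit
        out.append(nxt)
        state = tuple(out[-order:])
    return out

sentence = "the moon over luna park turns slowly over the park"
print("".join(markov_generate(list(sentence), 40)))     # letter level
print(" ".join(markov_generate(sentence.split(), 10)))  # word level
```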

Beyond the succession of consecutive sounds that constitutes the new sentence, the synthesizer also returns the metadata associated with each segment. This information can be used for context-dependent transformation, e.g. to control additional processing (transposition, time-stretching…) that strengthens the desired musical aspect. This process was used to synthesize the last seven sentences of the concert excerpt above.
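
A minimal sketch of such context-dependent control, assuming hypothetical segment fields and thresholds (the synthesizer's actual metadata format is not documented here):

```python
# Context-dependent transformation driven by per-segment metadata.
# The Segment fields and the mapping rules are hypothetical.
from dataclasses import dataclass

@dataclass
class Segment:
    phonemes: str
    duration_ms: float
    pitch_hz: float
    stressed: bool

def plan_transformations(segments):
    """Return (transposition in semitones, stretch factor) per segment."""
    plans = []
    for seg in segments:
        transpo = 7.0 if seg.stressed else 0.0           # lift stressed syllables
        stretch = 1.5 if seg.duration_ms < 120 else 1.0  # lengthen short ones
        plans.append((seg, transpo, stretch))
    return plans

segments = [
    Segment("lu", 100, 220, True),
    Segment("na", 150, 200, False),
    Segment("park", 240, 180, True),
]
for seg, transpo, stretch in plan_transformations(segments):
    print(f"{seg.phonemes}: +{transpo} st, x{stretch}")
```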

More information:

  • [Beller11b] Beller, G., Aperghis, G., « Gestural Control of Real-Time Concatenative Synthesis in Luna Park », P3S, International Workshop on Performative Speech and Singing Synthesis, Vancouver, 2011, pp. 23-28
  • [Beller11c] Beller, G., « Gestural Control Of Real Time Concatenative Synthesis », ICPhS, Hong Kong, 2011
  • [Beller11d] Beller, G., « Gestural Control of Real-Time Speech Synthesis in Luna Park », SMC, Padova, 2011
  • [Beller11a] Beller, G., Aperghis, G., « Contrôle gestuel de la synthèse concaténative en temps réel dans Luna Park : rapport compositeur en recherche 2010 » [Gestural control of real-time concatenative synthesis in Luna Park: 2010 composer-in-research report, in French], 2011