A Piece Of Work

Contemporary theater

  • Title: A Piece Of Work
  • Director: Annie Dorsen
  • Collaboration: Computer Music Design, Sound Design, Network Design, Sound Engineering
  • Duration: 1h20
  • For: one solo actor and electronics (alternately Scott Shepherd or Joan MacIntosh)
  • World Premiere: On The Boards, Seattle, USA, 21-23 February 2013
  • Performance: Black Box Teater, Oslo, Norway, 14 and 15 March 2013
  • Performance: BIT Teatergarasjen, Bergen, Norway, 20 and 21 March 2013
  • Performance: BRUT, Vienna, Austria, 5 and 6 April 2013
  • Performance: Rotterdamse Schouwburg, Rotterdam, Netherlands, 27 and 28 September 2013
  • Performance: Théâtre de la Villette, Paris, France, 22 and 23 November 2013
  • Performance: BAM, NYC, USA, 18-21 December 2013

A machine-made Hamlet by Annie Dorsen

To sleep. To dream. In Shakespeare’s Hamlet, we confront the most fundamental questions of our humanity. But what exactly do we confront when the play is shuffled, reordered, and rewritten by a computer? In this daring marriage of live acting and artificial intelligence, Obie Award-winning director Annie Dorsen delivers a provocative parsing of Shakespeare’s work. Based on a sophisticated algorithm that generates a new version of the play nightly—words, visuals, lighting, music, and all—A Piece of Work features one actor, alternating between Obie winner Scott Shepherd (Elevator Repair Service’s Gatz, The Wooster Group’s Hamlet) and theater legend Joan MacIntosh, a looming computer screen, and a chorus of synthesized voices channeling this uncanny text, refashioned in the automated image of our digital times.

Among the many developments experimented with for this show, I created a piece of software that handles the many data streams coming from the lights, the sound, the scenography/video, and a text generator. Since these traditionally separate theater operations are networked together, the data going back and forth can be rendered through different perceptual channels: for instance, the words output by the generator can automatically trigger sounds, or the actor's voice intensity can modulate the lights, and so on.
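
To illustrate the routing idea only (not the actual show software), here is a minimal Python sketch built on the python-osc library: one hub receives events from the text generator and the microphone and fans them out to sound and light systems. All OSC addresses, ports, and mappings are hypothetical.

```python
# Minimal sketch of the cross-modal routing idea using python-osc.
# Addresses, ports, and mappings are hypothetical illustrations,
# not those of the actual show software.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer
from pythonosc.udp_client import SimpleUDPClient

sound_engine = SimpleUDPClient("127.0.0.1", 9001)   # e.g. a sampler or synth
light_desk   = SimpleUDPClient("127.0.0.1", 9002)   # e.g. a lighting-console bridge

def on_generator_word(address, word):
    """A word emitted by the text generator triggers a sound cue."""
    sound_engine.send_message("/cue/word", word)

def on_voice_intensity(address, rms):
    """The actor's voice level (0.0-1.0) modulates a light intensity."""
    light_desk.send_message("/light/front/intensity", float(rms))

dispatcher = Dispatcher()
dispatcher.map("/text/word", on_generator_word)
dispatcher.map("/mic/rms", on_voice_intensity)

# The hub listens on one port and fans incoming events out to the other systems.
server = BlockingOSCUDPServer(("0.0.0.0", 9000), dispatcher)
server.serve_forever()
```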

Another innovation is a new music generator called EBMS (Emotional-Based Music Synthesizer). It was designed by reverse engineering: I drew on research papers identifying the musical features relevant to analyzing emotion in music. For instance, the literature notes that sad music often exhibits slow tempi, narrow melodic ranges, and low intensities. I compiled this information and built a generative algorithm that takes only an emotion tag as input and produces highly archetypal "emotional" music. Since the emotion tags are supplied by the text generator, the resulting music is always different from one show to the next.
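
As an illustration of the mapping principle in the spirit of EBMS (not the generator itself), here is a hypothetical Python sketch: an emotion tag selects archetypal values for tempo, pitch range, dynamics, and scale, from which a short phrase is generated. The tags and parameter values are invented for the example.

```python
import random

# Archetypal parameter sets keyed by emotion tag. The tags, tempi, ranges,
# and scales are invented for this example, echoing findings such as
# "sad music: slow tempo, narrow melody, low intensity".
EMOTION_PROFILES = {
    "sad":     {"tempo": 52,  "pitch_range": (57, 64), "velocity": 40,
                "scale": [0, 2, 3, 5, 7, 8, 10]},   # natural minor
    "happy":   {"tempo": 132, "pitch_range": (60, 84), "velocity": 95,
                "scale": [0, 2, 4, 5, 7, 9, 11]},   # major
    "fearful": {"tempo": 150, "pitch_range": (48, 90), "velocity": 70,
                "scale": [0, 1, 4, 6, 7, 8, 11]},
}

def generate_phrase(emotion, n_notes=16):
    """Return a list of (midi_pitch, duration_in_seconds, velocity) tuples."""
    p = EMOTION_PROFILES[emotion]
    beat = 60.0 / p["tempo"]
    low, high = p["pitch_range"]
    phrase = []
    for _ in range(n_notes):
        degree = random.choice(p["scale"])
        octave = random.randrange(low // 12, high // 12 + 1)
        pitch = min(max(octave * 12 + degree, low), high)
        phrase.append((pitch, beat, p["velocity"] + random.randint(-5, 5)))
    return phrase

# The emotion tag would come from the text generator, so each run differs.
print(generate_phrase("sad")[:4])
```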

We also use a musicalization of the voice in a scene where the actor plays piano with his voice, sonifying its prosody. Finally, a large part of the work was dedicated to controlling Apple's speech synthesizer, which I embedded in MaxMSP. Since two-thirds of the piece is spoken by the machine, I needed a very responsive polyphonic text-to-speech synthesizer that allows for many different voices (the original Hamlet has at least 20), keeps the text intelligible, and can occasionally sing.
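
As a rough, hypothetical approximation of that polyphonic behavior (the actual system embeds Apple's synthesizer directly in MaxMSP for responsive control), here is a Python sketch that overlaps several macOS voices by spawning concurrent `say` processes. The voice names and lines are only examples.

```python
# Rough approximation of polyphonic, multi-voice TTS using the macOS `say`
# command-line tool. The real system drives Apple's synthesizer from inside
# MaxMSP; the voices and lines below are illustrative.
import subprocess

LINES = [
    ("Alex",     180, "To be, or not to be, that is the question."),
    ("Samantha", 150, "Though this be madness, yet there is method in it."),
    ("Fred",     120, "The rest is silence."),
]

procs = []
for voice, rate, text in LINES:
    # -v selects the system voice, -r sets the speaking rate in words per minute.
    procs.append(subprocess.Popen(["say", "-v", voice, "-r", str(rate), text]))

# Launching without waiting makes the voices overlap (polyphony);
# waiting afterwards keeps the script alive until all of them finish.
for p in procs:
    p.wait()
```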
