
Augmented Body and Virtual Body

 

Suguru Goto

 


"Augmented Body and Virtual Body" was performed in, "Utopiales," a festival in
Nantes, France in November 9, 2005.
Suguru Goto: Concept and Composition
Yann Bertrand: 3D Image
Ippei Hosaka: BodySuit Performance
François Leonarte: Stage Direction
Michèle Trotta: Coordinator
François Leonarte and Antonin Artaud: Voice
Texts: "A Thousand Plateaus" by Gilles Deleuze & Felix Guattari, and "To Have
Done With The Judgment Of God" by Antonin Artaud

Abstract

This work presents a system that combines "BodySuit" and "RoboticMusic," together with its possibilities and its uses in an artistic application. "BodySuit" is a gesture controller of the Data Suit type. "RoboticMusic" is a set of percussion robots built with humanoid-robot technology.

1. Introduction

The system used in this work contains both a gesture controller and automated mechanical instruments. In it, the Data Suit "BodySuit" controls the percussion robots of "RoboticMusic" in real time. "BodySuit" has no hand-held controller: a performer, for example a dancer, wears a suit, and sensors transform his gestures into electronic signals. "RoboticMusic" consists of 5 robots that play different sorts of percussion instruments, and the movement of the robots is modeled on the gestures of a percussionist. The idea behind combining "BodySuit" and "RoboticMusic" is that a human body, augmented by electronic signals, becomes able to perform musical instruments interactively. The system was originally conceived for a project to realize a performance / musical theater composition.

2. General Description

This system was originally intended for a project entitled "Artificial Body and Real Body." The theme is to explore this dualism and the relationship between the artificiality and the reality of the human body in the context of musical theater. Artificiality and reality sometimes seem to conflict with each other, but they can also work together, and their meaning can be transformed for an audience depending on the context; the context invites the audience to play with the ideas of reality and artificiality. A performance involving "RoboticMusic" and "BodySuit" challenges the audience by blurring the line between the virtual and the real. As a composer, I intend this composition to emphasize the performance aspects of the system.

The project started in 2002. The system was experimented with intensively and shown on several occasions during 2005; the most recent performance took place at Utopiales, a festival in Nantes, France, in November 2005 (Fig.1).

Fig.1: A performance with "BodySuit" and "RoboticMusic". A photo from the rehearsal.
© Utopiales


"BodySuit" was first created by an electronic engineer Patrice Pierrot, in 1997. Although it was originally conceived to work with "RoboticMusic," it had to wait many years until "RoboticMusic" was ready. Meanwhile, many possibilities of "BodySuit" were explored, for instance, it was experimented with to control computer generated sounds and video images (Fig. 2).

Fig.2: BodySuit can also control sounds and video images in real time.


"RoboticMusic" was created in 2003. The original concept and the design were done by me, and the robots were realized by a humanoid robot specialist, Fuminori Yamazaki, of the iXs Research Corporation in Japan. The project is still a work in progress; the goal is to eventually form a robot orchestra.

A gesture of a performer wearing "BodySuit" is translated into gestures of "RoboticMusic." Instead of producing a computer-generated sound, one can interactively produce an acoustic percussion sound. One of the important elements is the relationship and the communication method explored within this system. One may consider "BodySuit" and "RoboticMusic" as a conductor and an orchestra, where dance-like gestures trigger the instruments. In other words, this is an instrument that relies on physical gestures.

Another point is the method of translation used by the computer. Signals from "BodySuit" are transformed by a mapping interface and an algorithm in a computer, and then sent to "RoboticMusic." One gesture may trigger one attack on one instrument, but it is equally possible to trigger all 5 instruments at the same time. Alternatively, complex musical data, automatically generated by the computer and reproduced by "RoboticMusic," can be altered by "BodySuit" gestures that modify the parameters of the algorithm in real time.
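To illustrate, a minimal sketch of such a mapping layer is given below. The sensor numbering, the threshold value, and the shape of the trigger messages are my own illustrative assumptions; the actual mapping is implemented in Max.

```python
# A minimal sketch of a gesture-to-robot mapping layer.
# All names, thresholds, and message shapes here are hypothetical;
# the actual mapping is implemented in Max (Cycling '74).

def map_gesture(sensor_id: int, value: float, fan_out: bool = False):
    """Translate one BodySuit sensor reading into robot trigger messages.

    `value` is assumed to be normalized to 0.0-1.0 by the A/D stage.
    Returns a list of (robot_id, intensity) pairs.
    """
    THRESHOLD = 0.7  # hypothetical onset threshold for a "hit" gesture
    if value < THRESHOLD:
        return []  # gesture too small: no attack
    intensity = (value - THRESHOLD) / (1.0 - THRESHOLD)
    if fan_out:
        # one gesture triggers all 5 robots at the same time
        return [(robot, intensity) for robot in range(5)]
    # one gesture triggers one attack on one instrument
    return [(sensor_id % 5, intensity)]
```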

3. Detailed Description of Percussion Robots – "RoboticMusic"

RoboticMusic contains 5 robots (Fig.3), which play percussion instruments such as a gong, a bass drum, a snare drum, tom-toms, or a cymbal. These instruments can be replaced, as long as the replacement instruments can be played with mallets.

Fig.3: RoboticMusic. From left to right: Gong, Bass Drum, Tom-toms, Snare, Pipes.
© Raphaël Chipault


One of the robots plays numerous pipes, spinning them rapidly to create flute-like sounds, which are generated as air passes through them. The pipes have different lengths according to the desired pitches. As a pipe spins faster, its pitch rises, following the overtone series (Fig.4).

Fig.4: The Pipe Robot appears behind the Cymbal Robot in this photo. The Pipe Robot changes pitch according to the speed of its spin.
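The text above states only that the pitch climbs the overtone series as the spin accelerates; a rough model of this behavior, with an assumed fundamental frequency and an assumed speed-to-harmonic scaling, might look as follows.

```python
# Rough model of the spinning-pipe robot's pitch behavior.
# The fundamental frequency and the speed-to-harmonic scaling are
# assumptions for illustration; only the overtone-series behavior
# itself is described in the text.

def pipe_pitch(fundamental_hz: float, spin_rate: float) -> float:
    """Return the sounding frequency of a pipe spun at `spin_rate`.

    `spin_rate` is in arbitrary units where 1.0 just reaches the
    2nd harmonic; faster spins select higher members of the
    overtone series n * fundamental (n = 2, 3, 4, ...).
    """
    n = max(2, int(spin_rate) + 1)  # harmonic number rises with speed
    return n * fundamental_hz

# A pipe with a 110 Hz fundamental sounding its 2nd..5th harmonics:
for rate in (1.0, 2.0, 3.0, 4.0):
    print(rate, pipe_pitch(110.0, rate))  # 220, 330, 440, 550 Hz
```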


The latest humanoid-robot technology is applied here, but "RoboticMusic" does not walk on two feet, nor does it have eyes, a mouth, etc. What it does have are arms. The gesture of a human percussionist is modeled in order to obtain musical sound and expression; yet a robot can perform without any rest, and more precisely and faster than a human being.

Max (Cycling '74) is used as an interface and to generate musical data. With it, one can also send basic parameters to the robots, such as the position of a robot's arm, an offset position, intensity (how hard it hits), and so on. Max sends these signals via UDP to another computer running Linux. The Linux software, developed by iXs Research Corporation, has an important role, since it controls the movement of the robots. The Linux computer is connected to the robots via USB, and each robot has its own interface, which is connected to an actuator and a sensor. The robots have a special sort of spring that imitates a human muscle, and each holds a mallet at the end of its arm (Fig.5).

Fig.5: There is a special sort of spring in the arm of the robot. At its end, the robot holds a mallet.
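This control chain can be sketched as follows. The UDP port, the host address, and the packet layout are placeholders of mine, since the actual protocol of the iXs software is not documented here.

```python
# Sketch of the Max -> UDP -> Linux control link described above.
# The port number, host address, and packet layout are hypothetical;
# the real protocol belongs to the iXs Research Corporation software.
import socket
import struct

LINUX_HOST = "192.168.0.10"  # assumed address of the Linux robot computer
PORT = 9000                  # assumed UDP port

def send_arm_command(robot_id: int, position: float,
                     offset: float, intensity: float) -> None:
    """Send one robot-arm command: target position, offset position,
    and hit intensity (how hard the mallet strikes)."""
    packet = struct.pack("!Ifff", robot_id, position, offset, intensity)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(packet, (LINUX_HOST, PORT))

# e.g. robot 2: raise the arm and strike at moderate intensity
# send_arm_command(2, position=0.8, offset=0.1, intensity=0.5)
```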


The major advantage of "RoboticMusic" is that it interactively plays an acoustic instrument with the aid of a computer. It has no problem playing complex rhythms, and it easily goes beyond the limits of human performance capabilities. It therefore offers new potentialities in composition for acoustic instruments.

Another point is the acoustic sound itself. While computer-generated sound has many capabilities, an acoustic instrument has a rich sonority and enormous possibilities of expression, especially from the point of view of a composer. When it is played on a stage, the vast possibilities of the acoustic sound are obvious compared with sound coming from speakers. Another benefit is that the audience may observe both the sound and the gesture that produces it.

To master one instrument is a huge task for a musician, and to play together with others in an ensemble is another difficulty. With 5 robots, one may extend the possibilities of the ensemble. For example, "RoboticMusic" allows 5 different tempos at the same time, or intricate accelerandos and ritardandos, while these remain exactly synchronized in the music.
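In scheduling terms, five synchronized tempos simply means deriving every onset time from one shared clock. A small sketch, with tempo values chosen arbitrarily:

```python
# Sketch of five robots playing at independent tempos that stay
# synchronized because all beat times come from one shared clock.
# The tempo values are arbitrary examples, not from the piece.

def beat_times(bpm: float, n_beats: int) -> list:
    """Absolute onset times (seconds) of n_beats at a constant tempo."""
    return [beat * 60.0 / bpm for beat in range(n_beats)]

tempos = [60, 72, 90, 108, 120]  # one tempo per robot
schedule = {robot: beat_times(bpm, 8) for robot, bpm in enumerate(tempos)}

# All five lists share time 0.0 and the same time base, so however
# intricate the tempo relations, the ensemble stays exactly together.
for robot, times in schedule.items():
    print(robot, [round(t, 2) for t in times])
```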

There is not only an artistic advantage to "RoboticMusic" but also a research aspect. The more one works with a robot modeled on the gestures of a musician, the more one discovers how much complex movement a human gesture contains, even when it looks fairly simple, for instance the gesture of striking a percussion instrument. A musician knows how to play an instrument, but he may find it difficult to explain exactly how he controls each of his muscles and bones, and how he instinctively increases and reduces speed and intensity within a very short instant.

When one hears the word "robot," one perhaps thinks of an industrial robot, or sometimes of a robot in a science-fiction movie. That is not the case here. These robots apply recent developments in artificial intelligence to hardware: the humanoid type of robot contains sensors and advanced programming that allow it to control itself automatically, and it can perform instruments with human-like gestures. It differs from the slave-type robot of a factory, and at last we can profit from it in the field of music. One may consider these robots as collaborators with humans.

4. Detailed Description of Gesture Controller – "BodySuit"

"BodySuit" has 12 sensors, which are placed on each joint of the body, such as a wrist, an elbow, a shoulder on the left and right arm an ankle, a knee, and the beginning of the left leg and right leg. The bending sensors are placed on the outer sides of the arms and on the front sides of the legs and fixed on a suit. Each sensor is connected with a cable to a box, and then it is connected with A/D interface. A performer wears this suit, but doesn’t hold a controller or any instruments in his hands (Fig.6).

Fig.6: Upper half of the body with BodySuit. The 12 bending sensors are placed on the joints of the body.

Therefore, his gestures do not have to be based on playing an instrument; they can be liberated into larger gestures, like those of a mime. This allows for collaboration with people from different fields, for instance a dancer or an actor.

The audience easily observes these larger movements, so the suit is well adapted to performance and musical theater situations. One may consider it a body instrument. It works efficiently with percussionist-like gestures, which makes it one of the best controllers to use in conjunction with RoboticMusic.

Since it is not a physical controller or instrument held in the hands, it lends itself to the ideas of the "Augmented Body" and the "Extended Body" in this work. The performer's body is amplified by electric signals so that it can control something remotely, and an abstract gesture is extended into a meaningful one.
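A sketch of the acquisition side might look as follows. The device path, byte format, and calibration range are assumptions of mine, since the paper states only that the sensors reach the computer through an A/D interface.

```python
# Sketch of reading and normalizing BodySuit's 12 bending sensors.
# The device path, byte format, and calibration limits are assumptions;
# the text says only that the sensors feed an A/D interface.
import serial  # pyserial, assuming a serial A/D box

N_SENSORS = 12
RAW_MIN, RAW_MAX = 30, 225  # hypothetical per-sensor calibration limits

def read_frame(port: serial.Serial) -> list:
    """Read one frame of 12 raw bytes and normalize each to 0.0-1.0."""
    raw = port.read(N_SENSORS)
    return [min(1.0, max(0.0, (b - RAW_MIN) / (RAW_MAX - RAW_MIN)))
            for b in raw]

# with serial.Serial("/dev/ttyUSB0", 115200) as port:  # assumed device
#     bend = read_frame(port)  # e.g. bend[0] = left wrist (assumed order)
```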

5. The system in "Augmented Body and Virtual Body"

The system of BodySuit and RoboticMusic was explored extensively in the most recent work, "Augmented Body and Virtual Body" (Fig.7).

Fig.7: The performance of "Augmented Body and Virtual Body" in Nantes, France, in November 2005.
© Utopiales

In this work, the gestures made with "BodySuit" are translated or altered by an algorithm in a computer and then sent to "RoboticMusic." The interesting point is the idea of programming in order to alter one gesture into another state. For example, a single movement of the left elbow appears to strike a percussion instrument in the air; it triggers "RoboticMusic" to play the 5 percussion instruments from left to right in space, with a slight gradual delay on each. Raising the right shoulder, however, changes the amount of delay so that the instruments play in an alternating left-to-right order. The other case is one gesture translated into one gesture: an arm gesture in "BodySuit" is imitated exactly by the arm of a robot, like someone copying another person. The method of communication is of great importance here. In this sense, "BodySuit" and "RoboticMusic," together with the computer, should perhaps be regarded as one system.
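The left-to-right delay mapping just described could be sketched as follows; the delay time and the exact alternation rule are illustrative guesses at the behavior, not the piece's actual values.

```python
# Sketch of the gesture mapping described above: a left-elbow "hit"
# triggers the five robots left to right with a gradual delay, and
# raising the right shoulder switches to an alternating order.
# The delay value and the alternation pattern are illustrative guesses.

def trigger_schedule(shoulder_raised: bool, base_delay: float = 0.12):
    """Return (robot_id, onset_delay_in_seconds) pairs for one elbow hit."""
    if not shoulder_raised:
        order = [0, 1, 2, 3, 4]  # straight left-to-right sweep in space
    else:
        order = [0, 4, 1, 3, 2]  # alternating spatial order (assumed)
    return [(robot, i * base_delay) for i, robot in enumerate(order)]

print(trigger_schedule(False))  # [(0, 0.0), (1, 0.12), (2, 0.24), ...]
print(trigger_schedule(True))   # alternating order, same delay spacing
```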

Since this is stage work, the relation of gesture to gesture deserves careful consideration. The gesture with "BodySuit," which emits no sound by itself, is related to the gesture of "RoboticMusic," which is intended to create sound. In terms of interaction, the visual link between one gesture and the other provides clearer feedback and brings a different and interesting dimension. Incidentally, this communication is not that between a first person and another; rather, it is the relationship as observed objectively from the point of view of a third person.

6. Application of the Work

This system was applied to a performance / musical theater work in a composition (Fig.8).

Fig.8: This system is applied to a music theater work. There is a 3D image (by Yann Bertrand) behind RoboticMusic.
© Utopiales


In the end, this provided far more new possibilities than I originally expected. In observing the communication between the gestures of "BodySuit" and the gestures of "RoboticMusic," one notices different phases: interaction and its perception, and interaction and one's consciousness of it. These two poles are the important keys in this field. Through the articulated visual and aural experience of this work, one may come to know different experiences that constantly involve expecting, understanding, noticing, and perceiving.

Furthermore, the relationship between gesture and sound can also be regarded from a very different point of view with this system. In other words, the idea of "music to see, visibility to hear" brings a different context to a theatrical performance.

While the concept of the "Extended Body" was conceived to be realized with this system, the theme "Augmented Body and Virtual Body" is meant to question what a human body is and what one's own identity is. Man and machine seem dualistic, and one may think that they conflict with each other. Here they coexist within one system; in fact, it is more correct to say that they are regarded as one "Extended Body." Therefore, our identity is not merely within our own body: it may communicate with the outside and may be extended.

7. Conclusion

Historically, art has always been related to the society in which it exists, and it has always profited from contemporary culture. As mentioned earlier, the robot here represents the application of the latest developments in artificial intelligence, and robotized instruments may open many possibilities. In this respect, the "Extended Body" is a reflection of society, especially of the society where I grew up, in Japan. It is not an abstract image, but something practically realized, technically and aesthetically.

These technical possibilities and aesthetic points have created further new potentialities with this system; however, that is not my only goal. In interacting with this system, creating a new language and a new perception of it is the most important goal in terms of composition.

Furthermore, one may consider again that one's identity is not merely limited to one's own body. In working with this "Extended Body," one may ask whether such an identity really exists at all, since there is hardly a limit or borderline to its extension. Music has to progress anyway, or it goes further by itself, whether we want it to or not. Likewise, the definition of music is limitless. Perhaps coexistence with this "Extended Body" may help to develop new possibilities in the composition of music.

8. Acknowledgments

I would like to thank Patrick Gyger (Utopiales), Fuminori Yamazaki (iXs Research Corporation), Patrice Pierrot, Alain Terrier (IRCAM), and IRCAM for their assistance in the realization of this project.

 


This paper was previously published in the Proceedings of the International Conference on New Interfaces for Musical Expression (NIME 06), IRCAM-Centre Pompidou, Paris, France, June 4-8, 2006, under the title "The Case Study of An Application of The System, 'BodySuit' and 'RoboticMusic' - Its Introduction and Aesthetics."

 

Suguru Goto is a composer and multimedia artist, considered one of the new generation of Japanese composers. He has received numerous prizes and fellowships, including a Boston Symphony Orchestra Fellowship, the Koussevitzky Prize of the Tanglewood Music Center, first prize at the Marzena International Composition Competition in Seattle, U.S.A., the "Berliner Kompositionsaufträge 1993" awarded by the senate administration for cultural affairs, and a prize from the IMC International Rostrum of Composers at UNESCO, Paris. His compositions have been performed at major festivals, such as Résonances/IRCAM, Sonar, CICV-Les Nuits Savoureuses, ICC, Electrofolie, International Theater Festival Berezillia, Les Rencontres Internationales Paris-Berlin, Haus der Kulturen der Welt - Heimat Kunst, ISEA2002, NIME 2004, Olhares-Outono, Ressonancias, and Audiovisionen. His recent works involve new technologies in experimental performing art.

 

 

 
