GoingPublik: Mobile Multimedia as Mixed Reality

 

Art Clay

 

 

INTRODUCTION


This paper aims to convey, on a small scale, the experience of a mutual collaboration between science and art and, on a larger scale, the fruit of that collaboration: a mobile score synthesis system driven by global information input, considered both technically and artistically. It discusses technologies in human interface design that reconsider and extend the desktop metaphor as a means of computer interaction in light of recent progress in hardware and software. These include subsets stemming from the implementation of a new general-purpose graphical user interface and multimedia framework, zoomable and textual user interfaces, and translucent free-form windows.

Artistic issues rooted in many of the artist's works are briefly touched upon in order to show where the artist's interests lie and the directions being pursued in general. These include the use of transparency for modularity in design and of optical phenomena to bring about kinetic relationships between the elements of that modularity, both aspects having been implemented separately in acoustic and electronic works preceding GoingPublik. The paper concludes by showing how these diverse interests in the arts and sciences were brought together coherently into a score synthesis tool for a sonic art work whose central element revolves around the possibilities of a mobile multimedia system.

 

HISTORIC MODELS


Historical examples of works of art showing an interest in using transparency as a kinetic element visually, or as a basis for modularity in the creation of artworks, can be found in the output of László Moholy-Nagy, Marcel Duchamp and John Cage. A few brief descriptions will suffice:

 
Figure 1. One of Moholy-Nagy's Light Modulator studies from 1925. The final realization of the Light-Space Modulator, a Gesamtkunstwerk composed of color, light and movement, came in 1930 after many years of preparation and with the help of an engineer and a technician.  

• While looking at Duchamp's Large Glass at the Philadelphia Museum of Art, which is positioned in front of a window so that one can look through the glass and out at a fountain in the courtyard, the viewer comes to realize that the real world and the art world mix here symbolically in pleasant agreement.

• When using the transparencies needed to realize Variations I-III, performers come to realize that Cage has not provided them with a finished score, but with a creative tool for the construction of one and that there will never be a single definitive version of the work.

• Any artist today working with modern tools will see that Moholy-Nagy's Light Modulator, which is constructed out of plexiglas and which employs kinetic movement to interact optically with a light source, serves as an early metaphor for digital art, which basically consists of illuminated color spaces on a monitor.

 

CREATING SUGGESTION: THE RSS TOOL


A closer examination of the works mentioned above and of the points already presented reveals that it is quite easy to imagine why a sound artist would be interested in using ideas of kinetic art and light to create a digital tool with which sound art could be conveyed and realized in real time by a performer. Equally clear is how a group of computer scientists might use their software and hardware abilities to assist the artist in the invention of such a tool. This tool, the Realtime Scoring System (RSS), was developed at the Swiss Federal Institute of Technology Zurich (ETHZ) and used for the first time in a performance of GoingPublik at the 2004 Swiss Composers Alliance Festival held in Monthey, Switzerland.

In the hands of the performer, the RSS became a creative tool with which the contents of the score, and therefore the sonic outcome, could be controlled to a large extent directly by the performer's interaction with it. Using the tool also guaranteed that each performance of the work would differ from every other, always providing a fresh and unique experience for the performers themselves. Moreover, by introducing a performer interested in developing such skills to a 'score system' that is not determinate but instead embraces methods of suggestion for creating music, it was found that, in a pedagogical sense, the RSS could also be used as a potent tool for training the cognitive processes needed for improvising music.

 

THE DISTRIBUTIVE ENSEMBLE


GoingPublik is a sound art project for a distributive ensemble of trombones. Sound art, as opposed to music, does not place emphasis on the psychological relationships between sounds but on their independence from one another. As the American composer John Cage often put it, sound art is an art form in which sounds are allowed to come into being for themselves, so that they can be appreciated for their own qualities, whether pure harmonic sounds or dissonant noisy ones. Compositional quantities and qualities are then based on functions of time rather than of harmony, and interest is held by timbre and rhythmic contrasts rather than by harmonic growth.

 

 

Figure 2. A moment from the 'inside' performance of GoingPublik in Monthey, Switzerland, which used a soft GPS simulator as a substitute for live satellite signals. Performers from left to right are Thierry Madiot (F), Günter Heinz (D) and Roland Dahinden (CH). The trombones were dismantled and spread across the performance space to emphasize the effect of distribution. The 3D compass sensors responded to all the movements used to reassemble the instruments.

 

The core idea of the project is a strategy of mobility, accomplished by employing as its central element a wearable computer system running the software-based electronic scoring system. The program itself basically allows for what might be termed 'composed improvisation', which permits improvisational elements within a compositional structure. This is accomplished by electronically monitoring the performer's physical behaviour during performance. The program then responds by making suggestions to, and even demands on, the performer to various degrees and at various times.

Since each of the performers is equipped with the same electronic scoring system, and because the system revolves around universally shared inputs such as geographical positions obtained via satellites and sensors using the earth's magnetic field, all have a common denominator and are thereby virtually linked. Despite the physical distribution of the performers in space, it is possible to have a commonly shared compositional palette and, at moments of close proximity between performers, to obtain instantaneously synchronized sonic elements, both of which are needed for creating sonic structure within the work.

 

SCORE SYNTHESIS & MIXED REALITY


Our initial approach to score synthesis was to take images from the immediate environment with a live camera and to superimpose these live images onto a 2D matrix constructed out of vector-based graphics. Performers would then be able to interpret irrational, or non-musical, elements such as images of natural and man-made objects like trees or street signs. This was accomplished by using the rationality of the matrix vectors, which place the live images coming in through the camera into definite sonic domains of frequency and time.

 

 
Figure 3. Albrecht Dürer's illustration of an artist using a grid system to obtain more natural geometrical perspective.
The system consists of a viewing device, a matrix and the object to be rendered. Notice while looking at the illustration that Dürer created it with one point perspective, placing the viewer in a position similar to the artist's who is sketching at the grid.
 

Reading through the matrix of vector lines is guided by a 'Conduction-Line', which scrolls through the matrix from left to right, spends an equal amount of time between any two lines and changes tempo in relation to the walking speed of the performer. One is reminded here of the grid system of the German Renaissance painter Albrecht Dürer, who used a framed wire grid to achieve a proper projection of what the eye was looking at from a fixed viewing point, and of the actual process of 'scanning' an image that an artist would undertake when using Dürer's grid system to make a drawing with a more correct perspective.

 

In the final form of the RSS used in GoingPublik, a live image feed from a camera was no longer used but was substituted by a small library of images of the surroundings stored in memory. The idea of a concise library of images proved easier to handle, cutting down computing time and allowing more control over which images could be used. Although limited in choice, the library could be relied upon to yield a rich and complex collage-like score. At the same time it became possible to match the images between all of the performers on a common contextual basis (similar images for west, for example). So, although the mixed reality concept no longer takes place on the visual level between live image and vector graphics, it does take place between the image library, the vector graphics and the live movement of the performer within the performance space. Again mixed reality: a realtime element is superimposed on and interacts with a stored element.

 

MACHINE STRATEGIES FOR COMPOSITION


All of the performers in GoingPublik are equipped with a belt-integrated wearable computer, a Global Positioning System (GPS) receiver connected to the serial port and a three-dimensional digital compass communicating with the computer via wireless Bluetooth technology. Either a SmartPhone or an airborne mouse is used as an input device. A high-resolution head-mounted display with a small screen area of 640 x 480 pixels connects the performer's eye to the activities of the software running on the computer. The GoingPublik software runs on the open-source 'Bluebottle' system developed at the Institute for Computer Systems at the ETH Zurich.

The wearable computer is the central point of the setup. Its function is to analyse the input from the sensors and to calculate and render the realtime score presented in the display for interpretation by the performer. The input device is used to make the initial settings of the program and to change the software during the performance if needed. The sensors, the Q-bic and its software might be understood as a whole, in that they act synergetically to provide strategies for so-called 'instant composing': strategies designed as appropriate to a machine yet executable by a human, without imposing hierarchies on the needed sense of free choice.
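
To make the relationship between sensors, computer and display more concrete, the following sketch outlines the polling loop just described. It is written in Python for readability only, since the actual RSS runs on Bluebottle in Active Oberon, and the names read_gps, read_compass and render_score are placeholders rather than the real API.

```python
# Illustrative only: a hypothetical sensor-to-score loop, not the Bluebottle code.
import time

def run_rss(read_gps, read_compass, render_score, frame_seconds=0.1):
    """Poll the sensors and redraw the realtime score for the head-mounted display."""
    while True:
        position = read_gps()             # serial GPS receiver: latitude/longitude fix
        attitude = read_compass()         # Bluetooth 3D compass: heading, pitch and roll
        render_score(position, attitude)  # recompute matrix lines, images and icons
        time.sleep(frame_seconds)         # assumed update rate; the real value is not given
```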

 

STRATEGIES: LINES & IMAGES


The position of the performer is determined by referencing the Global Positioning System. The GPS information is then passed on as simple Cartesian coordinates, that is, a pair of xy values. These values reference a position within a predefined area of the performance space. The position of the performer within the area influences the position of the vectors of the matrix, thereby continuously adjusting the 'resolution' of the matrix used to indicate the sonic domains of frequency and time.
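
As an illustration of this step, the sketch below reduces a GPS fix to a pair of xy values within a predefined rectangular area. The equirectangular approximation and the helper names are assumptions made for the example; the article does not specify how the RSS performs this conversion.

```python
import math

def gps_to_xy(lat, lon, origin_lat, origin_lon):
    """Convert a GPS fix to metres east (x) and north (y) of the area's origin corner."""
    earth_radius = 6371000.0  # metres; a flat-earth approximation is adequate over a small area
    x = math.radians(lon - origin_lon) * earth_radius * math.cos(math.radians(origin_lat))
    y = math.radians(lat - origin_lat) * earth_radius
    return x, y

def normalised_position(x, y, area_width, area_height):
    """Clamp and scale the position to the unit square of the performance area."""
    nx = min(max(x / area_width, 0.0), 1.0)
    ny = min(max(y / area_height, 0.0), 1.0)
    return nx, ny
```

The normalised pair (nx, ny) is what the line algorithms sketched below are assumed to consume.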

 

The vector system of the matrix consists of 'Range-Lines' moving on the horizontal plane in relation to the performer's coordinates on the North-South axis and 'Time-Lines' moving on the vertical plane in relation to the performer's coordinates on the West-East axis. Each of the vector systems moves independently of the other, and each is based on a separate computer algorithm. The algorithms are designed to generate and manipulate the lines in realtime in a predefined way as the performers change their positions within the performance space.
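
The original algorithms are not published, but a plausible reduction of the idea looks like this: the Range-Lines stay equidistant and shift as a block with the North-South coordinate, while the Time-Lines drift apart or together with the West-East coordinate. All constants in the sketch are invented for illustration.

```python
def range_lines(ny, count=6):
    """Horizontal Range-Lines: equidistant, shifted as a block by the North-South position."""
    offset = (ny - 0.5) * 0.2                  # assumed maximum shift of the whole grid
    return [(i + 1) / (count + 1) + offset for i in range(count)]

def time_lines(nx, count=9):
    """Vertical Time-Lines: spacing modulates between equidistant and non-equidistant."""
    lines = []
    for i in range(count):
        base = (i + 1) / (count + 1)           # equidistant starting position
        spread = (base - 0.5) * nx * 0.3       # contrary motion away from the centre
        lines.append(base + spread)
    return lines
```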

 

 

Figure 4. The MatrixWindow as depicted in the viewing glasses. The red lines are the matrix vectors seen here superimposed over the North and East images. The vertical lines, which move to the left and right irregularly, represent the time domain and the horizontal ones, which move upward and downward regularly, the pitch domain. The blue conduction-line is seen to the left as it moves through the matrix. The Pie-Menu depicted is for setting a parameter in the GPS simulator.

 

There is a maximum of nine Time-Lines which in turn designate seven rhythmic spaces. The contrary movement of the Time-Lines away from one another modulates these spaces between equidistant and non-equidistant states. The time taken by the Conduction-Line to scan through the space between two Time-Lines is always a constant value in milliseconds. The value of the constant, however, depends on one of four tempo settings (Rest, Relax, Work & Hurry), which are all based on the walking speed of the performer measured in meters per minute. The graphic material contained in the score image will then be 'studied' by the performer to different degrees due to differences in space between any two Time-Lines.

The speed of the Conduction-Line as it travels through the rhythmic space therefore makes a quantitative difference in the amount of time the performers may 'stay' on an area of the image. The whole system of lines and images then works in conjunction as a variable space-time notation concept for determining rhythm.
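
A compact sketch of this timing rule is given below. The four category names come from the article; the speed thresholds and millisecond constants are placeholders, since the values actually used in the RSS are not stated.

```python
TEMPO_MS = {"Rest": 4000, "Relax": 3000, "Work": 2000, "Hurry": 1000}  # assumed constants

def tempo_category(speed_m_per_min):
    """Map walking speed (meters per minute) to one of the four tempo settings."""
    if speed_m_per_min < 10:
        return "Rest"
    if speed_m_per_min < 40:
        return "Relax"
    if speed_m_per_min < 80:
        return "Work"
    return "Hurry"

def ms_per_rhythmic_space(speed_m_per_min):
    """Constant time (ms) the Conduction-Line spends between any two Time-Lines."""
    return TEMPO_MS[tempo_category(speed_m_per_min)]
```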


 

Figure 5. A diagram illustrating the hardware setup used. The Q-bic is depicted at the centre and consists of a main board (X-Scale 400 MHz Processor, 256 MB SDRAM, 32 MB Flash Memory) and an extension board (Bluetooth Sender and Receiver, 2 USB Host & Client Ports, GPIO Pins and External Flash Slot). From the top left and clockwise are the display glasses, the 3d Compass Sensor, the GPS device, a screenshot of the main window and the SmartPhone used for interfacing.

 
In contrast to the movement of the Time-Lines, the movement of the Range-Lines brings about equidistant spaces which designate range. There is a maximum of six Range-Lines which in turn designate seven range spaces. These spaces indicate the ranges available to the performer, limiting and expanding the tonal range as the performer changes position within the performance space. For example, when all of the Range-Lines are present in the matrix window, seven range spaces are indicated. The spaces would then designate seven ranges ordered from bottom to top as follows: Outside, Very Low, Low, Middle, High, Very High and Outside. These ranges are to be subjectively defined and may be interpreted freely. The performer decides what 'outside' of the instrument might mean.

 

As mentioned, the image over which the vectors form a matrix is drawn from a library of only four images. Each image has been assigned to a direction and appears in the score whenever the performer is facing that direction. The compass measures this with a resolution of eight possible directions, or 'headings'. This information decides which one of the four images will be used for interpreting the score and whether images will be superimposed on one another. Single images are rendered at the poles of the compass, and the images overlap with one another outside of these positions.
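
The selection rule can be pictured as follows; the file names are placeholders and the blending itself is omitted, but the mapping of eight headings to single or superimposed images follows the description above.

```python
IMAGES = {"N": "north.png", "E": "east.png", "S": "south.png", "W": "west.png"}  # placeholders

def images_for_heading(degrees):
    """Return the image(s) to render for a heading rounded to one of eight directions."""
    sectors = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]
    heading = sectors[int((degrees % 360) / 45.0 + 0.5) % 8]
    if heading in IMAGES:                            # at the four poles: a single image
        return [IMAGES[heading]]
    return [IMAGES[heading[0]], IMAGES[heading[1]]]  # in between: two neighbours superimposed
```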

Besides measuring heading, the compass also measures 'pitch' (forward and backward tilt) and 'roll' (side-to-side tilt). The degree and direction of the distortion of the image are directly dependent on these compass variables: the greater the intensity of pitch and roll, the greater the distortion of the image. The size of the displayed image depends on the quality of the performer's walking activity, measured by calculating an average speed from the GPS information over a given period of time. If the performer is 'standing' more than 'walking', the image enlarges up to 200% of its original size; if the performer is 'walking' more than 'standing', the image shrinks back down to its original size.
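
One way to express these two mappings is sketched below. The shear and scaling formulas, the 90-degree normalisation and the walking threshold are illustrative assumptions; only the qualitative behaviour (more tilt, more distortion; standing enlarges up to 200%) is given in the text.

```python
def distortion(pitch_deg, roll_deg, max_shear=0.5):
    """Shear factors that grow with the intensity of forward/back and side-to-side tilt."""
    shear_x = max(-max_shear, min(max_shear, roll_deg / 90.0))
    shear_y = max(-max_shear, min(max_shear, pitch_deg / 90.0))
    return shear_x, shear_y

def image_scale(avg_speed_m_per_min, walking_threshold=20.0):
    """Scale between 100% (walking) and 200% (standing), using an assumed speed threshold."""
    standing = 1.0 - min(avg_speed_m_per_min / walking_threshold, 1.0)
    return 1.0 + standing  # 1.0 while walking, approaching 2.0 while standing still
```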

 

STRATEGIES: ICONS & MORE


Apart from the parameterization carried out by the domain vectors of the matrix system, the RSS provides the performer with a second set of compositional methods by suggesting, and sometimes even demanding, certain actions. These 'hints' take the form of three groups of icons located above the score area at the top of the screen. Depending on the performer's walking speed, the time spent at that speed and a random component, two of the icon groups, the 'Go-Icons' and 'Stop-Icons', suggest, and if ignored demand, speed-ups or slow-downs. Related actions are associated with each of the icons in order to artistically integrate the changes in walking activity, to regulate the tempo between all performers in general and to integrate the performer's environment sonically into the work.
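
The decision sketched in Figure 7 can be approximated as follows. The inputs named in the article are the speed level within a 30-second sliding window, the time that level has been held and a random component; the actual probabilities are not published, so the weighting below is purely illustrative.

```python
import random

def suggest_icon(speed_level, seconds_held, rng=random):
    """Possibly return a 'Go' or 'Stop' hint when the Conduction-Line starts a new page."""
    pressure = min(seconds_held / 30.0, 1.0)  # the longer one level is held, the likelier a hint
    if rng.random() > pressure:
        return None                           # no hint on this page
    if speed_level in ("Rest", "Relax"):
        return "Go"                           # suggest (and, if ignored, demand) a speed-up
    return "Stop"                             # suggest (and, if ignored, demand) a slow-down
```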
 
 

Figure 6. A schematic showing the relationships between the sensor systems and the component packages of the GP software. Software components are depicted within the rectangle; the sensor systems are depicted to the left; elements drawn into screen areas of the display glasses are depicted to the right.

 

 

 
 

Figure 7. A flow chart showing the selection process for modification icons that takes place when the conduction line switches to a 'new page'. The 'stop' and 'go' icons are modified following more stringent rules involving the speed level within a 30-second sliding window, the time the speed level was held and a random component. By following the diagram from top to bottom, one can easily come to understand how a decision is determined using simple probabilities.

In addition to the icon groups mentioned above, there is a third group of icons, the Mod-Icons. Based on the rate of heading change, walking speed and a random component, the Mod-Icons suggest or demand, by their appearance, how the score is to be read through by the performer. An algorithm was used to relate the walking activity of the performer to the interpretation of the electronic score. This relationship was determined by borrowing aesthetic concepts from stone paths in Japanese gardens. By drawing a parallel between reading through a score and walking through a garden, it was possible to generate and control parameters of 'style'.

These parameters are PHRASE (the division of the matrix material into units), PATH (the form of the curve used to read through the matrix) and PLAY (the degree of density in playing while reading through the matrix). Here the movement of the eye over the image from left to right and through the matrix system is confined by the above series of phrasing rules. In this way, contrapuntal differences between the performers are brought about, so that 'sonic windowing' is created through which unoccupied audio space and variation in voice density are guaranteed at all times.
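
As a closing illustration, the sketch below derives one value for each of the three style parameters from the inputs named above (rate of heading change, walking speed and a random component). The specific thresholds and category labels are assumptions made for the example, not the rules used in GoingPublik.

```python
import random

def mod_icon(heading_change_deg_per_s, speed_m_per_min, rng=random):
    """Pick an illustrative value for each of PHRASE, PATH and PLAY."""
    phrase = "short units" if heading_change_deg_per_s > 30 else "long units"
    path = "winding" if rng.random() < min(heading_change_deg_per_s / 90.0, 1.0) else "straight"
    play = "dense" if speed_m_per_min > 60 else "sparse"
    return {"PHRASE": phrase, "PATH": path, "PLAY": play}
```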

 

 

CONCLUSION: CREATIVITY & CROSSOVER


The focus of the work between the artists and the scientists involved was placed, more or less, on the process of creating a tool that was not only to be used in a new sonic art work, but also as an aid in developing the cognitive processes needed for improvisation in music.

The situation and the processes involved in creating the tool already served as a model for the transfer of knowledge and skill between the participants in the project because, at its base, digital art is no more and no less than the creative application of computers, and at the heart of working with computers is teamwork. Critical points in the creative process occurred whenever any one of us, artist or scientist, had to 'cross over' into a domain not their own and was thus forced to share and to acquire new knowledge. The project's realization therefore stands as a record of a collaborative partnership between practitioners from different backgrounds, and the transfer processes involved became one of the major cognitive issues to be confronted when using the tool.

Specific to the use of the RSS as a pedagogical tool for improvisation, it was believed that the experience provided by the tool was easily absorbed into the collective knowledge of the performer using it. As a consequence, it was shown that any experience gained by using such a tool to develop creativity in improvisation by fostering cognitive processes can hardly be eradicated later. What was learned would manifest itself as part of the conceptual repertoire of the performer, and this absorbed experience would be communicated further, with or without the 'tool', to others through the creative act of working or playing together.

The effectiveness of the tool itself was believed to stand in direct relationship to the results obtained by using it: if new cognitive processes brought about changes in perception, then these changes could be measured by the degree to which the artists had discarded, at least in part, their general tendencies in style, content or method. The final conclusion of the artist was that not only were the performers coaxed by the tool to stray off their own 'beaten paths' and bring about something new, but the composer too, by attempting to bridge the gap between 'composition' and 'improvisation' through the use of mobile, realtime score synthesis, was brought into new, unexplored territories of art and science.


 

ACKNOWLEDGMENTS
Thanks to Prof. Dr. Jürg Gutknecht, ETHZ (Direction), Thomas Frey, ETHZ (Software), Prof. Dr. Paul Lukowicz (Q-bic), Stijn Ossevort (Belt Design), Dr. Tom Stricker (System Consulting), Dr. Emil Zeller (ARM Aos Porting), Mazda Mortasawi (.NET & Bluetooth Programming), Miro Tafra (Consulting), Sven Stauber (USB) and Dr. Dennis Majoe of MASC, London (Sensor Systems).

 


Sound artist Art Clay (born in New York, lives in Basel, Switzerland) has worked in music, video & performance. He specializes in the performance of self-created works using intermedia. Appearances at international festivals and on radio and television in Europe, the USA and Japan. Extensive compositions for acoustic and electronic media in many genres, including dance, performance and theater. He has written works for newly invented instruments of his own design and for traditional acoustic and electro-acoustic instruments. Instrument designs include: "Air Bow", a virtual string instrument; "Mirrorum", an optical sound generator; "Spaceball", an intermedia controller for both sound and image after the synergetic principles of Buckminster Fuller. Compositions for Imke David, Fritz Hauser, Günter Heinz, Roland Dahinden and Malcolm Goldstein, amongst others. Radio play and theater productions with the Swiss writer Urs Jaeggi (selection): "Cantate and Aria for a Glass" (1993); "Broken Words" (National PEN Meeting, Heidelberg 1996); "Spinoza ist", new media theater (State Theater Freiburg, Germany, 2000). Music theater with texts by Kurt Schwitters/Gertrude Stein and Andrej Gromyko: "Gespannte Gefaehrtin", Erratum Ensemble (Swiss tour 2002). "Art Clay, Musik fuer Dinge", cultural feature on Swiss Television ("10 vor 10" programme, 2000). Together with Jürg Gutknecht, he directs the 'Digital Art Weeks' program held at the ETH in Zurich.

 

 

 


 
