18.

Digital Literary Studies: Performance and Interaction

David Z. Saltz

Computers and the performing arts make strange bedfellows. Theater, dance, and performance art persist as relics of liveness in a media-saturated world. As such, they stand in defiant opposition to the computer's rapacious tendency to translate everything into disembodied digital data. Nonetheless, a number of theorists have posited an inherent kinship between computer technology and the performing arts (Laurel 1991; Saltz 1997). While "old" media such as print, film, and television traffic in immaterial representations that can be reproduced endlessly for any number of viewers, the interactivity of "new" media draws them closer to live performance. Philip Auslander has argued persuasively that digital agents such as internet "chatterbots" have the same claim to "liveness" as flesh-and-blood performers (Auslander 2002). Indeed, one might argue, following Brenda Laurel, that every user's interaction with a computer is a unique "performance," and moreover it is one that, like theater, typically involves an element of make-believe. When I throw a computer file in the "trash" or "recycling bin," I behave much like an actor, performing real actions within an imaginary framework. I recognize that the "trash bin" on my screen is no more real than a cardboard dagger used in a play; both are bits of virtual reality (Saltz 2006: 213). Indeed, theater theorist Antonin Artaud coined the term "virtual reality" to describe the illusory nature of characters and objects in the theater over fifty years before Jaron Lanier first used that term in its computer-related sense (Artaud 1958: 49).

It is no wonder, then, that performance scholars and practitioners have looked to digital technology to solve age-old problems in scholarship, pedagogy, and creative practice. This chapter will begin with a review of significant pedagogical and scholarly applications of computers to performance, and then turn to artistic applications.

Hypermedia and Performance Pedagogy

A work of dance, theater, or performance art is a visual, auditory, and, most of all, corporeal event. Only in the 1980s, when low-cost personal computers acquired the ability to store and manipulate images, sounds, and finally video, did computers begin to offer an effective way to represent the phenomenon of performance. Larry Friedlander's Shakespeare Project anticipated many subsequent applications of digital technology to performance pedagogy. Friedlander began to develop the Shakespeare Project in 1984 using an IBM InfoWindow system. He adopted HyperCard in 1987 while the software was still in development at Apple. Because personal computers then had very crude graphics capabilities and no video, Friedlander adopted a two-screen solution, with the computer providing random access to media stored on a laserdisc. The laserdisc contained hundreds of still images and, more important, six video segments, including two contrasting filmed versions of one scene each from Hamlet, Macbeth, and King Lear. The Shakespeare Project used this video material in three ways. In a Performance area, students could read the Shakespearean text alongside the video, switch between film versions at any time, jump to any point in the text, and alternate between a film's original audio track and a recording of Friedlander's interpretation of the actors' "subtext." In a Study area, students participated in interactive tutorials covering aspects of Shakespearean performance such as characterization and verse. Finally, in a Notebook area, students could extract digital video excerpts to incorporate into their own essays. In each case, the computer made it possible for students to read a performance almost as closely and flexibly as they could a printed text.

CD-ROM editions of plays released in the 1990s — most notably the 1994 Voyager edition of Macbeth and the 1997 Annenberg/CPB edition of Ibsen's A Doll House — incorporate core elements of Friedlander's design, keying the play's script to multiple filmed versions of select scenes. In addition, these CD-ROMs provide a rich assortment of critical resources and still images. The Voyager Macbeth also includes an audio recording of the entire play by the Royal Shakespeare Company and a karaoke feature that allows the user to perform a role opposite the audio. These CD-ROMs take advantage of the ability acquired by personal computers in the 1990s to display video directly, obviating the need for a laserdisc player and second monitor. This approach is far more elegant, compact, and cost-efficient than using laserdiscs, but the video in these early CD-ROM titles is much smaller and lower in quality than that of a laserdisc. By 2000, faster computer processors and video cards, along with more efficient video compression schemes and widespread DVD technology, had finally closed the gap between personal computers and laserdisc players.

Modeling Performance Spaces

The projects considered so far rely on the multimedia capabilities of computers, that is, a computer's ability to store and retrieve text, images, and audio. Other projects have exploited the power of computers to generate complex simulations of 3D reality. Performance scholars began to explore the use of 3D modeling in the mid-1980s to visualize hypotheses about historical theater buildings and staging practices. In 1984, Robert Golder constructed a 3D computer model of the 1644 Theatre du Marais, and Robert Sarlós used computer models to visualize staging strategies for a real-world re-creation of the medieval Passion Play of Lucerne. This technique became more common in the 1990s when high-end CAD (computer-aided design) software became available for personal computers. Theater historians used 3D modeling software to reconstruct structures such as the fifth century bce Theater of Dionysos (Didaskalia) and Richelieu's Palais Cardinal Theater (Williford 2000). One of the most ambitious of these projects is an international effort, led by Richard Beacham and James Packer, to reconstruct the 55 bce Roman Theater of Pompey. The computer model is painstakingly detailed, with every contour of every column and frieze being modeled in three dimensions. As a result, even using state-of-the-art graphics workstations, a single frame takes approximately one hour to render at screen resolution (Denard 2002: 36).

None of the 3D modeling projects described above allows a user to navigate the virtual spaces in real time; the models are experienced only as a series of still images or pre-rendered animations. These projects are geared toward research, with the goal of generating new historical knowledge and testing hypotheses. Consequently, the quality of the data is more important than the experience of the user. When the emphasis is on teaching rather than research, however, the tendency is to make the opposite trade-off, favoring interactivity over detail and precision. The most significant effort along those lines is the THEATRON project, also under the direction of Richard Beacham, with funding from the European Commission. THEATRON uses Virtual Reality Modeling Language (VRML) to allow people to explore models of historically significant theater structures over the web. The first set of walkthroughs, including such structures as the Ancient Greek Theater of Epidauros, Shakespeare's Globe, and the Bayreuth Festspielhaus, became commercially available in 2002.

The THEATRON walkthroughs provide an experience of immersion, conveying a clear sense of the scale and configuration of the theater spaces. These spaces, however, are empty and static, devoid of any sign of performance. Frank Mohler, a theater historian and designer, has adopted an approach that focuses not on the architecture per se, but on technologies used for changing scenery. Mohler has made effective use of simple animations to simulate the appearance and functioning of Renaissance and Baroque stage machinery.

Digital Simulations of Live Performance

Models of theaters and scenery, no matter how detailed, immersive, or interactive, simulate only the environment within which performances take place. There have also been attempts to use computer animation techniques to simulate the phenomenon of performance itself, for both pedagogical and scholarly purposes. Again, Larry Friedlander produced one of the earliest examples, a program called TheatreGame created in conjunction with the Shakespeare Project. This software was innovative for its time and attracted a good deal of press attention. TheatreGame allowed students to experiment with staging techniques by selecting crude two-dimensional human figures, clothing them from a limited palette of costumes, positioning set pieces on a virtual stage, and finally moving the virtual actors around the stage and positioning their limbs to form simple gestures. The goal was to allow students with no theater experience or access to real actors to investigate the effects of basic staging choices.

At the same time Friedlander was developing TheatreGame, Tom Calvert began to develop a similar, but vastly more sophisticated, application geared toward choreographers. The project started in the 1970s as a dance notation system called Compose that ran on a mainframe computer and output its data to a line printer. In the 1980s, Calvert replaced abstract symbols describing motions with 3D human animations and dubbed the new program LifeForms. The human models in LifeForms are featureless wireframes, but the movements are precise, flexible, and anatomically correct. LifeForms was designed as a kind of word processor for dance students and practicing choreographers, a tool for composing dances. In 1990, the renowned choreographer Merce Cunningham adopted the software, bringing it to international attention. In the early 1990s, LifeForms became a commercial product.

Motion capture technology offers a very different approach to performance simulation. Motion capture uses sensors to track a performer's movements in space and then maps those movements onto a computer-generated model. While applications such as LifeForms are authoring tools for virtual performances, motion capture provides a tool for archiving and analyzing real performances. The Advanced Computer Center for Art and Design (ACCAD) at Ohio State University maintains a high-end optical motion capture system dedicated to research in the performing arts. In 2001, ACCAD began to build an archive of dance and theater motion data by capturing two performances by the legendary mime artist Marcel Marceau. This data, which includes subtle details such as the performer's breathing, can be transferred onto any 3D model and analyzed in depth. The Virtual Vaudeville Project, a digital re-creation of a performance in a late-nineteenth-century American vaudeville theater, combines many elements of the projects discussed above: 3D modeling of theater architecture, animated scenery, performance simulation using motion capture, along with simulations of audience response and hypermedia content. The goal is to develop reusable strategies for using digital technology to reconstruct and archive historical performance events. Viewers enter the virtual theater and watch animated performances from a variety of positions in the audience and on stage. Professional actors re-create the stage performances, and these performances are transferred to 3D models of the nineteenth-century performers using motion and facial capture technology. A prototype of the project has been developed for a high-performance game engine of the sort usually used to create commercial 3D action games, and a more widely accessible version of the project, created with QuickTime and Shockwave, is available on the web.

Computer simulations of performance spaces and performers are powerful research and teaching tools, but carry inherent dangers. Performance reconstructions can encourage a positivist conception of history (Denard 2002: 34). A compelling computer simulation conceals the hypothetical and provisional nature of historical interpretation; vividly simulated theaters and performances produce the sensation that the viewer has been transported back in time and is experiencing the performance event "as it really was." But even if all of the physical details of the simulation are accurate, a present-day viewer's experience will be radically different from that of the original audience because the cultural context of reception has changed radically. Some projects, such as the Pompey Project and Virtual Vaudeville, are making a concerted effort to counteract these positivistic tendencies, primarily by providing hypermedia notes that supply contextual information, providing the historical evidence upon which the reconstructions are based, and offering alternatives to the interpretations of and extrapolations from the historical data used in the simulation. Whether such strategies will prove sufficient remains to be seen.

Computers in Performance

My focus so far has been on applications of computers to teaching and research in the performing arts. Digital technology is also beginning to have a significant impact on the way those art forms are being practiced. For example, the computer has become a routine part of the design process for many set and lighting designers. Throughout the 1990s, a growing number of designers adopted CAD software to draft blueprints and light plots and, more recently, employed 3D modeling software (sometimes integrated into the CAD software) to produce photo-realistic visualizations of set and lighting designs.

Computers are also being incorporated into the performances themselves. The earliest and most fully assimilated example is computer-controlled stage lighting. Computerized light boards can store hundreds of light cues for a single performance, automatically adjusting the intensity, and in some cases the color and direction, of hundreds of lighting instruments for each cue. This technology was introduced in the late 1970s, and by the 1990s had become commonplace even in school and community theaters. Similarly, set designers have used computerized motion control systems to change scenery on stage — though this practice is still rare and sometimes has disastrous results. For example, the initial pre-Broadway run of Disney's stage musical Aida featured a six-ton robotic pyramid that changed shape under computer control to accommodate different scenes. The pyramid broke down on opening night and repeatedly thereafter (Eliott 1998). Disney jettisoned the high-tech set, along with the production's director and designer, before moving the show to Broadway.

Computer-controlled lighting and scenery changes are simply automated forms of pre-computer stage technologies. A growing number of dance and theater artists have incorporated interactive digital media into live performance events. Such performances can have a profound impact on the way the art forms are conceived, collapsing the neat ontological divide that once separated (or seemed to separate) the live performing arts from reproductive media such as film and video. During the period from the 1980s to the middle of the 1990s, a number of artists developed innovative approaches to incorporating interactive digital technologies into live performance. Much of this work has been documented in the Digital Performance Archive (DPA), a web-based database created by Nottingham Trent University and the University of Salford encompassing hundreds of dance and theater performances produced in the 1990s. After the dot-com bubble burst in 2001, some of the intense, utopian energy that drove experiments with performance and digital technology dissipated, and most work created in the early 2000s draws on aesthetic and technological innovations developed in the late 1990s, taking advantage of technology that has become more mainstream and less expensive.

George Coates Performance Works in San Francisco was one of the first and most prominent theater companies to combine digital media with live performance to create stunning, poetic visual spectacles. In 1989, George Coates founded SMARTS (Science Meets the Arts), a consortium including companies such as Silicon Graphics, Sun Microsystems, and Apple Computer, to acquire the high-end technology required for his productions. In a series of productions starting with Invisible Site: A Virtual Sho in 1991, Coates perfected a technique for producing the vivid illusion of live performers fully integrated into a rapidly-moving 3D virtual environment. The spectators wear polarized glasses to view huge, high-intensity stereographic projections of digital animations. The projections that surround the revolving stage cover not only the back wall but the stage floor and transparent black scrims in front of the performers. The digital images are manipulated interactively during the performances to maintain tight synchronization between the live performers and the media.

Another pioneer in the use of virtual scenery is Mark Reaney, founder of the Institute for the Exploration of Virtual Realities (i.e. VR), at the University of Kansas (Reaney 1996; Gharavi 1999). In place of physical scenery, Reaney creates navigable 3D computer models that he projects on screens behind the performers. The perspective on Reaney's virtual sets changes in relation to performers' movements, and a computer operator can instantly transform the digital scenery in any way Reaney desires. In 1995, i.e. VR presented its first production, Elmer Rice's expressionist drama The Adding Machine. For this production, Reaney simply rear-projected the virtual scenery. For Wings in 1996, Reaney had the spectators wear low-cost head-mounted displays that allowed them to see stereoscopic virtual scenery and the live actors simultaneously. Starting with Tesla Electric in 1998, Reaney adopted an approach much like Coates's, projecting stereoscopic images for the audience to view through polarized glasses. Nonetheless, Reaney's approach differs from Coates's in a number of important ways. While Coates authors his own highly associative works, Reaney usually selects pre-existing plays with linear narratives. Reaney's designs, while containing stylized elements, are far more literal than Coates's; and the technology he employs, while more advanced than what is available to most university theaters, is far more affordable than the state-of-the-art technology at Coates's disposal.

Scenic and video designer William Dudley has brought similarly complex virtual scenery to the commercial stage. He first used projected video in 2002 for the National Theatre (London) production of Tom Stoppard's nine-hour epic The Coast of Utopia, directed by Trevor Nunn. Dudley surrounded the performers on three sides with a curved projection surface filled with moving 3D imagery of the play's interior and exterior settings; to preserve the illusion of immersion in the projected images, Dudley created entrances and exits in the screens for the actors. Dudley has incorporated similar technology for successful West End productions of Terry Johnson's Hitchcock Blonde (2003) and Andrew Lloyd Webber's The Woman in White (2004, followed by a less successful Broadway run in 2005).

One of the most sophisticated applications of virtual scenery to date was a production of André Werner's opera The Jew of Malta commissioned by the Munich Biennale in 2002. The actors' changing positions and gestures were tracked precisely with infrared cameras. Not only did the virtual scenery respond to the actors' movements — for example, moving forward and backward and rotating in tandem with the performer — but virtual costumes were projected onto the performers' silhouettes (Media Lab Madrid 2004).

A number of dance performances have experimented with similar interactive 3D technologies. One of the earliest and most influential examples is Dancing with the Virtual Dervish/Virtual Bodies, a collaboration between dancer and choreographer Yacov Sharir, visual artist Diane Gromala, and architect Marcos Novak first presented at the Banff Centre for the Arts in 1994. For this piece, Sharir dons a head-mounted display and enters a VR simulation of the interior of a human body, constructed from MRI images of Gromala's own body. The images that Sharir sees in the display are projected on a large screen behind him as he dances.

The examples of digitally enhanced performance considered above are radically different in their aesthetics and artistic goals, but all establish the same basic relationship between the media and live performers: in each case, the media functions as virtual scenery, in other words, as an environment within which a live performance occurs. There are, however, many other roles that media can assume in a performance.1 For example, the media can play a dramatic role, creating virtual characters who interact with the live performers. A number of choreographers, including prominent figures such as Merce Cunningham and Bill T. Jones, have enlisted motion capture technology to lend subtle and expressive movements to virtual dance partners (Dils 2002). Often, as in the case of both Cunningham's and Jones's work, the computer models themselves are highly abstract, focusing the spectators' attention on the motion itself. In a 2000 production of The Tempest at the University of Georgia, the spirit Ariel was a 3D computer animation controlled in real time by a live performer using motion capture technology (Saltz 2001b). In 2006, Cindy Jeffers and Meredith Finkelstein, founders of an art and robotics collective called BotMatrix, created and remotely controlled five robotic actors for Heddatron, a play produced by the Les Freres Corbusier theater company in New York (Soloski 2006). Claudio Pinhanez has applied artificial intelligence and computer-vision techniques to create fully autonomous computer characters. His two-character play It/I, presented at MIT in 1997, pitted a live actor against a digital character (Pinhanez and Bobick 2002).

A key goal of Pinhanez's work is to produce an unmediated interaction between the live performer and digital media. While this goal is unusual in theater, it is becoming increasingly common in dance, where there is less pressure to maintain a coherent narrative, and so creating an effective interaction between the performer and media does not require sophisticated artificial intelligence techniques. In the 1980s, while exploring the use of sensors in new musical instrument interfaces, electronic musicians created a set of technologies that proved useful for interactive dance. The Studio for Electro-Instrumental Music (STEIM) in the Netherlands was an early center for this research, and continues to facilitate collaborations between dancers and electronic musicians. One of the most widely used tools for creating interactive performances using sensor inputs and MIDI synthesizers, samplers, and lighting devices is the software application Max. Miller Puckette initially developed Max in 1986, and David Zicarelli subsequently adapted it for the Macintosh and later Windows, adding the ability to perform complex real-time manipulations of audio and video through a set of extensions called, respectively, MSP (introduced in 1997) and Jitter (introduced in 2003). A number of dancers have also created interactive performances using the Very Nervous System (VNS), a system first developed by the Canadian media artist David Rokeby in 1986. The VNS, which integrates with Max, uses video cameras to detect very subtle motions that can trigger sounds or video.
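The sensor-to-MIDI pipeline such tools implement can be suggested in a short sketch. The following Python fragment is offered purely as an illustration of the underlying idea, not a description of Rokeby's or Puckette's actual code: it compares successive video frames and fires a MIDI note whenever the amount of change crosses a threshold. The frame format, note number, and velocity scaling are all assumptions made for the sketch.

```python
# Illustrative sketch (in Python rather than Max's visual patching language)
# of a camera-based trigger in the spirit of the VNS: difference successive
# frames and emit a MIDI Note On when the motion crosses a threshold.

def motion_amount(prev_frame, curr_frame):
    """Sum of absolute pixel differences between two grayscale frames."""
    return sum(abs(a - b) for a, b in zip(prev_frame, curr_frame))

def note_on(note, velocity, channel=0):
    """Build a raw three-byte MIDI Note On message (status 0x90 | channel)."""
    return bytes([0x90 | channel, note, velocity])

def process_frames(frames, threshold):
    """Emit one Note On for each frame pair whose motion exceeds threshold."""
    events = []
    for prev, curr in zip(frames, frames[1:]):
        m = motion_amount(prev, curr)
        if m > threshold:
            # Scale larger movements to louder notes, capped at MIDI's 127.
            events.append(note_on(60, min(127, m // 4)))
    return events

# Three tiny four-pixel "frames": two still frames, then a sudden movement.
frames = [[10, 10, 10, 10], [10, 10, 10, 10], [10, 200, 200, 10]]
events = process_frames(frames, threshold=50)
```

In an actual performance system the note messages would be routed to a synthesizer, sampler, or lighting controller; here they are simply collected as raw bytes.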

One of the first dance companies formed with the explicit goal of combining dance with interactive technology was Troika Ranch, founded in 1993 by Mark Coniglio and Dawn Stoppiello. Troika Ranch has developed its own wireless system, MidiDancer, which converts a dancer's movements into MIDI data, which can be used to trigger sounds, video sequences, or lighting.

A particularly successful application of this kind of sensor technology was the 2001 production L'Universe (pronounced "loony verse") created by the Flying Karamazov Brothers, a troupe of comedic jugglers, in conjunction with Neil Gershenfeld of the Physics and Media Group at MIT. Gershenfeld created special juggling clubs with programmable displays, and used sonar, long-range RF (radio frequency) links and computer vision to track the positions and movements of the four performers. This technology was used to create a complex interplay between the performers and media, with the jugglers' actions automatically triggering sounds and altering the color of the clubs.

Brenda Laurel and Rachel Strickland's interactive drama Placeholder is one of the best-known attempts to create a performance in which the spectators interact directly with the technology. As two spectators move through a ten-foot diameter circle wearing head-mounted displays, they interact with a series of digital characters, with a character called the Goddess controlled by a live performer, and, to a limited extent, with each other (Ryan 1997: 695–6). The challenge of creating a rich and compelling narrative within this kind of interactive, open-ended structure is immense. While a number of writers have tackled this challenge from a theoretical perspective (see, for example, Ryan 1997 and Murray 1997), the promise of this new dramatic medium remains largely unfulfilled.

Telematic Performance

In 1932, in a remarkable anticipation of internet culture, theater theorist and playwright Bertolt Brecht imagined a future in which radio would cease to be merely a one-way "apparatus for distribution" and become "the finest possible communication apparatus in public life, a vast network of pipes" (Brecht 1964: 52). By the early 1990s, it had become possible to stream video images over the internet at very low cost, and performance groups were quick to exploit video streaming technologies to create live multi-site performance events. In a 1994 production of Nowhere Band, George Coates used free CU-SeeMe video-conferencing software to allow three band members at various locations in the Bay Area to perform live with a Bulgarian bagpipe player in Australia for an audience in San Francisco (Illingworth 1995). In 1995, Cathy Weiss used the same software to create an improvised dance performance at The Kitchen in New York with the real-time participation of a video artist in Prague and a DJ in Santa Monica (Saltz 2001a).

In 1999 the Australian Company in Space created a live duet between a dancer in Arizona and her partner in Australia (Birringer 1999: 368–9). In 2001, the opera The Technophobe and the Madman took advantage of the new high-speed Internet2 network to create a multi-site piece of theater. Half of the performers performed at Rensselaer Polytechnic Institute, while the other half performed 160 miles away at New York University. Two separate audiences, one at each location, watched the performance simultaneously, each seeing half the performers live and the other half projected on a large projection screen (see Mirapaul 2001). The Gertrude Stein Repertory Theater (GSRT) is a company dedicated to developing new technologies for creating theater. In their production of The Making of America (2003), adapted from Gertrude Stein's novel, performers in remote locations work together in real time to create live performances in both locations simultaneously, with the faces and bodies of actors in one location being projected via video-conferencing on masks and costumes worn by actors in the second location. The GSRT draws a parallel between this process, which they call "Distance Puppetry," and Japanese performance traditions such as bunraku and ningyo buri that also employ multiple performers to portray individual characters.

Telematic performance acquires its greatest impact when spectators interact directly with people at the remote site and experience the uncanny collapse of space first-hand. In the 1990s, new media artist Paul Sermon created a series of interactive art installations joining physically separated viewers. For example, in Telematic Dreaming a viewer lies down on one side of a bed and on the other side sees a real-time video projection of a participant lying down on a second, identical bed in a remote location. Other installations place the remote participants on a couch, around a dining room table, and at a séance table (Birringer 1999: 374). A more provocative example of a performance event that joins a live performer to the internet is Stelarc's 1996 Ping Body: an Internet Actuated and Uploaded Performance, in which a muscle stimulator sent electric charges of 0–60 volts into Stelarc to trigger involuntary movements in his arms and legs proportionate to the ebb and flow of internet activity.
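The mapping that drives Ping Body reduces to a simple scaling function from measured network latency into the stimulator's voltage range. A minimal sketch follows, with the latency ceiling and the linear mapping as assumptions for illustration rather than details of Stelarc's actual apparatus:

```python
def latency_to_voltage(rtt_ms, max_rtt=1000.0, max_volts=60.0):
    """Linearly map a ping round-trip time (in milliseconds) into the
    0-60 volt stimulation range, clamping readings outside [0, max_rtt]."""
    clamped = max(0.0, min(rtt_ms, max_rtt))
    return max_volts * clamped / max_rtt

# Slower pings (heavier internet "activity" in the piece's terms) yield
# proportionally stronger stimulation, capped at 60 volts.
readings = [120.0, 500.0, 2000.0]  # hypothetical round-trip times in ms
voltages = [latency_to_voltage(r) for r in readings]
```

The clamp matters in practice: an unbounded mapping would let a single network timeout drive the stimulus to an unsafe level.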

Net Performance

In 1979, Roy Trubshaw invented a multiple-user text-based environment for interactive adventure games, which he called a Multi-User Dungeon, or MUD. In 1989, Jim Aspnes modified MUD software to emphasize social interaction rather than combat, and in 1990 Steven White and Pavel Curtis created an object-oriented MUD, called a MOO, that gave users tools to create their own virtual objects and spaces. Theater artists soon began to experiment with MOOs as online, collaborative environments to stage real-time performances. In 1994, Antoinette LaFarge founded the Plaintext Players to produce "directed textual improvisations that merge writing, speaking, and role-play" (Plaintext Players). Performers log into a multi-user environment from any location around the world and perform together simply by typing dialogue and descriptions of actions, gestures, and expressions. Some audiences experience these performances online, while others watch them projected or listen to them via text-to-speech synthesis in gallery settings. Other early examples of MOO theater include Rick Sacks' MetaMOOphosis and Steve Schrum's NetSeduction, both of which took place in 1996 in ATHEMOO, an environment sponsored by the Association for Theatre in Higher Education (Stevenson 1999: 140).

Between 1997 and 2002, Adriene Jenik and Lisa Brenneis created a series of "Desktop Theater" performances using the Palace visual chat software, a free application that allows users to navigate freely through a labyrinth of virtual rooms hosted on distributed servers. Unlike purely textual MOO environments, the Palace features a crude graphic interface with static pictorial backgrounds and low-resolution two-dimensional avatars. In performances such as the 1997 waitingforgodot.com, performers cut and pasted their dialogue into speech bubbles attached to the avatars while the online audience, consisting mostly of unwitting Palace "chatters," commented on the performance, joined in, or left in confusion (Jenik 2001: 99). Faster internet connectivity and more powerful processors are giving rise to much more sophisticated 3D multi-user environments such as Second Life that offer rich new opportunities for real-time online performances.

Conclusions and Queries

The use of computers in the performing arts does not merely add a new tool to an old discipline. It challenges some of our most basic assumptions about performance. First, it blurs the boundaries between performance disciplines. For example, instead of producing traditional musical scores, many digital composers have created interactive computer algorithms that generate sequences of music and video in response to performers' improvised gestures. Should we regard such performers as instrumentalists, as dancers, or as composers in their own right, who directly produce the music and imagery the audience experiences? Second, digital technology blurs the boundaries between scholarship and creative practice. Is someone who extrapolates a complete set design, script, and performance from shreds of historical evidence to create a virtual performance simulation an artist or a scholar? When someone develops new artificial intelligence algorithms in order to create a dramatic interaction between a digital character and a live actor, is that person functioning as a computer scientist or an artist whose medium just happens to be computers? Finally, digital technology is challenging the very distinction between "liveness" and media. When a live performer interacts with a computer-generated animation, is the animation "live"? Does the answer depend on whether the animation was rendered in advance or is being controlled in real time via motion capture or with artificial intelligence software? Or do we now live in a world, as performance theorist Philip Auslander suggests, where the very concept of liveness is losing its meaning?

Note

1  Elsewhere I have distinguished between twelve types of relationships a production can define between digital media and live performers (Saltz 2001b).

References and Further Reading

Artaud, Antonin (1958). The Theater and its Double (Mary Caroline Richards, Trans.). New York: Grove Weidenfeld (Original work published 1938).

Auslander, Philip (1999). Liveness: Performance in a Mediatized Culture. London: Routledge.

Auslander, Philip (2002). "Live from Cyberspace: or, I was sitting at my computer this guy appeared he thought I was a bot." Performing Arts Journal 24.1: 16–21.

Berghaus, Günter (2005). Avant-Garde Performance: Live Events and Electronic Technologies. Basingstoke, UK: Palgrave Macmillan.

Birringer, Johannes (1998). Media and Performance: Along the Border. Baltimore: Johns Hopkins University Press.

Birringer, Johannes (1999). "Contemporary Performance/Technology." Theatre Journal 51: 361–81.

Brecht, Bertolt (1964). "The Radio as an Apparatus of Communication." In John Willett (Ed. and Trans.). Brecht on Theatre. New York: Hill and Wang, pp. 51–3.

Carver, Gavin, and Colin Beardon (Eds.) (2004). New Visions in Performance: The Impact of Digital Technologies. Lisse: Swets and Zeitlinger.

Denard, Hugh (2002). "Virtuality and Performativity: Recreating Rome's Theatre of Pompey." Performing Arts Journal 70: 25–43.

Didaskalia (2004). "Recreating the Theatre of Dionysos in Athens." <http://www.didaskalia.net/StudyArea/recreatingdionysus.html>.

Digital Performance Archive. <http://dpa.ntu.ac.uk/dpa_site/>.

Dils, Ann (2002). "The Ghost in the Machine: Merce Cunningham and Bill T. Jones." Performing Arts Journal 24.1: 94–104.

Dixon, Steve (1999). "Digits, Discourse and Documentation: Performance Research and Hypermedia." TDR: The Drama Review 43.1: 152–75.

Dixon, Steve (2007). Digital Performance: A History of New Media in Theater, Dance, Performance Art, and Installation. Cambridge, MA: The MIT Press.

Elliott, Susan (1998). "Disney Offers an 'Aida' with Morphing Pyramid." New York Times October 9: E3.

Friedlander, Larry (1991). "The Shakespeare Project: Experiments in Multimedia." In George Landow and Paul Delany (Eds.). Hypermedia and Literary Studies. Cambridge: MIT Press, pp. 257–71.

Gharavi, Lance (1999). "i.e. VR: Experiments in New Media and Performance." In Stephen A. Schrum (Ed.). Theatre in Cyberspace. New York: Peter Lang Publishing Inc., pp. 249–72.

Giannachi, Gabriella (2004). Virtual Theatres: An Introduction. London: Routledge.

Golder, John (1984). "The Théâtre du Marais in 1644: A New Look at the Old Evidence Concerning France's Second Public Theatre." Theatre Survey 25: 146.

Gromala, Diane J., and Yacov Sharir (1996). "Dancing with the Virtual Dervish: Virtual Bodies." In M. A. Moser and D. MacLeod (Eds.). Immersed in Technology: Art and Virtual Environments. Cambridge, MA: The MIT Press, pp. 281–6.

Illingworth, Monteith M. (1995). "George Coates: Toast of the Coast." Cyberstage. <http://www.cyberstage.org/archive/cstage12/coats12.htm>.

Jenik, Adriene (2001). "Desktop Theater: Keyboard Catharsis and the Masking of Roundheads." TDR: The Drama Review 45.3: 95–112.

Laurel, Brenda (1991). Computers as Theatre. Reading, MA: Addison-Wesley.

Laurel, Brenda, Rachel Strickland, and Rob Tow (1994). "Placeholder: Landscape and Narrative in Virtual Environments." ACM Computer Graphics Quarterly 28.2: 118–26.

MediaLab Madrid (2004). "The Jew of Malta: Interactive Generative Stage and Dynamic Costume, 2004." <http://www.medialabmadrid.org/medialab/medialab.php?l=0&a=a&i=347>.

Meisner, Sanford (1987). On Acting. New York: Vintage Books.

Menicacci, Armando, and Emanuele Quinz (2001). La scena digitale: nuovi media per la danza [The Digital Scene: New Media in Dance]. Venezia: Marsilio.

Mirapaul, Matthew (2001). "How Two Sites Plus Two Casts Equals One Musical." New York Times February 19: E2.

Mohler, Frank (1999). "Computer Modeling as a Tool for the Reconstruction of Historic Theatrical Production Techniques." Theatre Journal 51.4: 417–31.

Murray, Janet H. (1997). Hamlet on the Holodeck: The Future of Narrative in Cyberspace. New York: Free Press.

Pinhanez, Claudio S., and Aaron F. Bobick (2002). "'It/I': A Theater Play Featuring an Autonomous Computer Character." Presence: Teleoperators and Virtual Environments 11.5: 536–48.

Plaintext Players. <http://yin.arts.uci.edu/~players/>.

Reaney, Mark (1996). "Virtual Scenography: The Actor, Audience, Computer Interface." Theatre Design and Technology 32: 36–43.

Ryan, Marie-Laure (1997). "Interactive Drama: Narrativity in a Highly Interactive Environment." Modern Fiction Studies 42.2: 677–707.

Saltz, David Z. (1997). "The Art of Interaction: Interactivity, Performativity and Computers." Journal of Aesthetics and Art Criticism 55.2: 117–27.

Saltz, David Z. (2001a). "The Collaborative Subject: Telerobotic Performance and Identity." Performance Research 6.4: 70–83.

Saltz, David Z. (2001b). "Live Media: Interactive Technology and Theatre." Theatre Topics 11.2: 107–30.

Saltz, David Z. (2005). "Virtual Vaudeville." Vectors 1.1 <http://www.vectorsjournal.org>.

Saltz, David Z. (2006). "Infiction and Outfiction: the Role of Fiction in Theatrical Performance." In David Krasner and David Z. Saltz (Eds.). Staging Philosophy: Intersections between Theatre, Performance and Philosophy. Ann Arbor: University of Michigan Press, pp. 203–20.

Sarlós, Robert K. (1989). "Performance Reconstruction: the Vital Link between Past and Future." In Bruce A. McConachie and Thomas Postlewait (Eds.). Interpreting the Theatrical Past. Iowa City: University of Iowa Press, pp. 198–229.

Schrum, Stephen A. (Ed.) (1999). Theatre in Cyberspace: Issues of Teaching, Acting, and Directing. New York: Peter Lang.

Soloski, Alexis (2006). "Do Robots Dream of Electronic Lovborgs?" New York Times February 5: 2:8.

Stevenson, Jake A. (1999). "MOO Theatre: More than just Words?" In Stephen A. Schrum (Ed.) Theatre in Cyberspace. New York: Peter Lang Publishing Inc., pp. 135–46.

Theatron. <http://www.theatron.org>.

Virtual Vaudeville. <http://www.virtualvaudeville.com>.

Watts, Allan (1997). "Design, Computers, and Teaching." Canadian Theatre Review 91: 18–21.

Williford, Christa (2000). "A Computer Reconstruction of Richelieu's Palais Cardinal Theatre, 1641." Theatre Research International 25.3: 233–47.