Enheduanna – A Manifesto of Falling: first demonstration of a live brain-computer cinema performance with multi-brain BCI interaction for one performer and two audience members

Polina Zioga a,b, Paul Chapman b, Minhua Ma a and Frank Pollick c

a School of Art, Design and Architecture, University of Huddersfield, Huddersfield, UK; b Digital Design Studio, Glasgow School of Art, Glasgow, UK; c School of Psychology, University of Glasgow, Glasgow, UK

DIGITAL CREATIVITY, 2017, VOL. 28, NO. 2, 103–122. https://doi.org/10.1080/14626268.2016.1260593

CONTACT Polina Zioga [email protected] School of Art, Design and Architecture, University of Huddersfield, Huddersfield, UK

© 2016 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group. This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives License (http://creativecommons.org/Licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is properly cited, and is not altered, transformed, or built upon in any way.

ABSTRACT
The new commercial-grade Electroencephalography (EEG)-based Brain-Computer Interfaces (BCIs) have led to a phenomenal development of applications across health, entertainment and the arts, while an increasing interest in multi-brain interaction has emerged. In the arts, there are already a number of works that involve the interaction of more than one participant with the use of EEG-based BCIs. However, the field of live brain-computer cinema and mixed-media performances is rather new, compared to installations and music performances that involve multi-brain BCIs. In this context, we present the particular challenges involved. We discuss Enheduanna – A Manifesto of Falling, the first demonstration of a live brain-computer cinema performance that enables the real-time brain-activity interaction of one performer and two audience members; and we take a cognitive perspective on the implementation of a new passive multi-brain EEG-based BCI system to realise our creative concept. This article also presents the preliminary results and future work.

KEYWORDS
Brain-Computer Interface; multi-brain interaction; mixed-media; live brain-computer cinema performance; audience participation

1 Introduction

Since 2007, the introduction of new commercial-grade Electroencephalography (EEG)-based Brain-Computer Interfaces (BCIs) and wireless devices has led to a phenomenal development of applications across health, entertainment and the arts. At the same time, an increasing interest in the brain activity and interaction of multiple participants, referred to as multi-brain interaction (Hasson et al. 2012, 114), has emerged. Artists, musicians and performers have been amongst the pioneers in the design of BCI applications (Nijholt 2015, 316) and, while the vast majority of their works use the brain-activity of a single participant, a survey by Nijholt (2015) presents earlier examples that involve multi-brain BCI interaction in installations, computer games and music performances, such as: the Brainwave Drawings (1972) for two participants by Nina Sobell; the music performance Portable Gold and Philosophers' Stones (Music From the Brains In Fours) (1972) by David Rosenboom, where the brain-activity of four performers was used as creative input; the Alpha Garden (1973) installation and the Brainwave Etch-a-
Sketch (1974) drawing game by Jacqueline Humbert, both for two participants.

Nowadays, there is a new, increasing number of works that involve the simultaneous interaction of more than one participant or performer with the use of EEG-based BCIs. The emergence of these applications is not coincidental. On the one hand, amongst artists and performers the notion of communicating and establishing a feeling of being connected with each other and the audience is part of their anecdotal experience. On the other hand, in recent years, with the advancement of the neurosciences and new EEG technology, they have been enabled to realise works and projects as a manifestation of their intra- and inter-subjective experiences (Zioga et al. 2015, 105). In a parallel course, in the fields of neuroscience and experimental psychology, a new and increasing interest has emerged in studying the mechanisms, dynamics and processes of the interaction and synchronisation between multiple subjects and their brain activity, such as brain-to-brain coupling (Zioga et al. 2015, 104).

Recent applications include the computer games Brainball (Hjelm and Browall 2000), BrainPong (Krepki et al. 2007), Mind the Sheep! (Gürkök et al. 2013) and BrainArena (Bonnet, Lotte, and Lecuyer 2013). Amongst the relevant installations are: Mariko Mori's Wave UFO (2003), an immersive video installation (Mori, Bregenz, and Schneider 2003); the MoodMixer (2011–2014) by Grace Leslie and Tim Mullen, an installation with visual display and 'generative music composition' of two participants' brain-activity (Mullen et al. 2015, 217); and a series of projects, like Measuring the Magic of Mutual Gaze (2011), The Compatibility Racer (2012) and The Mutual Wave Machine (2013), by the Marina Abramovic Institute Science Chamber and the neuroscientist Dr Suzanne Dikker (Dikker 2011). In the field of music performances, examples include: the DECONcert (Mann, Fung, and Garten 2008), where forty-eight members of the audience were adjusting the live music; the Multimodal Brain Orchestra (Le Groux, Manzolli, and Verschure 2010), which involves the real-time BCI interaction of four performers; Ringing Minds (2014) by Rosenboom, Mullen and Khalil, which involved the real-time brain-activity of four members of the audience combined into a 'multi-person hyper-brain' (Mullen et al. 2015, 222); and The Space Between Us (2014) by Eaton, Jin, and Miranda. In the latter, the brainwaves of a singer and a member of the audience are measured and processed in real-time, separately or jointly, as an attempt at bringing the 'moods of the audience and the performer closer together' (Eaton 2015), with the use of a system that 'attempts to measure the affective interactions of the users' (Eaton, Williams, and Miranda 2015, 103).

However, to our knowledge, the use of multi-brain BCIs in the field of live brain-computer mixed-media and cinema performances¹ for both performers and members of the audience is rather new and distinct from applications like those discussed above. Live cinema (Willis 2009) and mixed-media performances (Auslander 1999, 36) are historically established categories in the broader field of the performing arts and bear distinct characteristics that essentially differentiate them from music performances with the addition of 'dynamical graphical representations' (Mullen et al. 2015, 212) or VJing practices. In this context, we present in Section 2 the neuroscientific, computational, creative, performative and experimental challenges of the design and implementation of multi-brain BCIs in mixed-media performances. Accordingly, we discuss in Section 3 Enheduanna – A Manifesto of Falling, the first demonstration of a live brain-computer cinema performance, as a complete combination of creative and research solutions, which is based on our previous work (Zioga et al. 2014, 2015) with the use of BCIs in performances that involve audience participation and interaction with a performer (Nijholt and Poel 2016, 81). This new work enables for the first time, to our present knowledge, the simultaneous real-time interaction with the use of EEG of more than two participants, including
both a performer as well as members of the audience in the context of a mixed-media performance. In Section 3.1 we discuss the cognitive approach we followed, while in Section 3.2 we present the new passive multi-brain EEG-based BCI system that was implemented for the first time. In Section 3.3 we discuss the creative concept, methods and processes. Finally, Section 4 includes the preliminary results, a discussion of the difficulties we encountered and future work.

2 The challenges of the design and implementation of multi-brain BCIs in mixed-media performances

Today, the new wireless EEG-based devices provide the performers with greater kinetic and expressive freedom, especially when compared to previous wired systems and electrodes used by artists and pioneers like Alvin Lucier (Music For Solo Performer 1965), Claudia Robles Angel (INsideOUT 2009) and others. At the same time, they also offer a variety of connectivity solutions. They enable the computational and creative processing of a wide range of brain- and cognitive-states, according to the tasks executed, in consistency with the dramaturgical conditions and the creative concept of the performance (Zioga et al. 2014, 223). However, the design and implementation of multi-brain BCIs in the context of mixed-media performances is linked to a series of neuroscientific, computational, creative, performative and experimental challenges.

2.1 Neuroscientific

Although EEG is a very effective technique for measuring changes in the brain-activity with an accuracy of milliseconds, one of its technical limitations is its low spatial resolution, as compared to other brain imaging techniques like fMRI (functional Magnetic Resonance Imaging), meaning that it has low accuracy in identifying the precise region of the brain being activated.

Additionally, the design and implementation of the EEG-based BCIs presents particular difficulties and is dependent on many factors and unknown parameters, such as the unique brain anatomy of the different participants wearing the devices during each performance, or the type of sensors used. Other unknown parameters include the location of the sensors, which might differ even slightly during each performance, and the ratio of noise and non-brain artifacts to the actual brain signal being recorded. The artifacts can be either 'internally generated' or 'physiological'—electromyographic (EMG) from the neck and face muscles, electrooculographic (EOG) from the eye movements and electrocardiographic (ECG) from the heart activity; or 'externally generated' or 'non-physiological'—spikes from equipment, cable sway and thermal noise (Nicolas-Alonso and Gomez-Gil 2012, 1238; Swartz Center of Computational Neuroscience, University of California San Diego 2014).

2.2 Computational

As Heitlinger and Bryan-Kinns (2013, 111) point out, Human–Computer Interaction (HCI) research has mainly focused on 'users' abilities to complete tasks at desk-bound computers'. This is still evident also in the field of BCIs and the application development across games, interactive and performance art. Often this is necessary, such as in cases where Steady State Visual Evoked Potential (SSVEP) paradigms are used, for which users need to focus their attention on visual stimuli on a screen, for periods of several seconds repeated multiple times. Similar conditions are also encountered in live computer music performances, where the performers are limited by the music tasks they need to accomplish. However, for the brain–computer interaction of performers engaging with more intense body movement and making more active use of the performance space, like actors/actresses and dancers, the design of the BCI application needs to be liberated from 'desk-bound' constraints.

At the same time, the use of the performance space itself is also limited due to the available transmission protocols, such as Bluetooth, which is very common amongst the wireless BCI devices (Lee et al. 2013, 221) and typically has a physical range of 10 m.

Additionally, the new low-cost headsets that are used by an increasing number of artists creating interactive works have proven to be reliable for real-time raw EEG data extraction. At the same time, they also include ready-made algorithmic interpretations, filters and 'detection suites' which indicate the user's affective states, such as 'meditation' or 'relaxation', 'engagement' or 'concentration', and which vary amongst the different devices and manufacturers. However, the algorithms and methodology upon which the interpretation and feature extraction of the brain's activity is based are not published by the manufacturers (Zioga et al. 2014, 225) and therefore their reliability is not equally verified. In this direction, new research is trying to understand the correlates of individual functions, such as attention regulation, and compare them to the published literature (van der Wal and Irrmischer 2015, 192).

Moreover, multi-brain applications designed for the simultaneous real-time interaction of both performers and members of the audience in a staged environment, like in mixed-media and live cinema performances, are rather new and have so far involved up to two interacting brains. What kind of methods need to be developed, and what tools need to be used, in order to visualise, both independently as well as jointly, the real-time brain-activity of multiple participants under these conditions? (Zioga et al. 2015, 111).

2.3 Creative and performative

The use of interactive technology in staged works presents major creative and performative challenges, especially when audience members become participants and co-creators. This occurs because the aim is to achieve 'a comprehensive dramaturgy' with 'a high level of narrative, aesthetic, emotional and intellectual quality'. At the same time, a great emphasis is placed 'on the temporal parameter' of the interaction, which needs to be highly coordinated compared to interactive installations that in most cases can be activated whenever the visitor wants (Lindinger et al. 2013, 121).

In the case of BCIs, the different systems are categorised as 'active', 're-active' and 'passive', according to their interaction design and the tasks involved. The passive BCIs derive their outputs from 'arbitrary brain activity without the purpose of voluntary control', in contrast to 'active' BCIs, where the outputs are derived from brain activity 'consciously controlled by the user', while the 'reactive' BCIs derive their outputs from 'brain activity arising in reaction to external stimulation' (Zander et al. 2008). At the same time, one of the first creative challenges that the performer/s might face during a mixed-media performance that involves different tasks is the cognitive load. For example, if the performer/s need to dance, act and/or sing, it is highly difficult to execute mental tasks at the same time, such as in the case of active or re-active BCIs.

Additionally, if we would like to involve the real-time brain-interaction of both the performer/s and the participating members of the audience, then we would need to approach the BCI design in a way that addresses the dramaturgical, narrative and participatory level, in order to induce 'feelings of immersion, engagement and enjoyment' (Lindinger et al. 2013, 122).

Moreover, in real-time audio-visual and mixed-media performances, from experimental underground acts to multi-million-dollar music concerts touring around the world in big arenas, liveness is a key element and challenge. In the case of performers using laptops and operating software, the demonstration of liveness to the audience can be approached in various ways (Zioga et al. 2014, 226). Last but not least, in the history of the arts, when a new medium is introduced, the first works are commonly oriented around the capabilities and the qualities of the medium
itself. The use of BCIs in interactive art has not been an exception. However, a negative manifestation of this tendency is technoformalism, the fetishising of the technology (Heitlinger and Bryan-Kinns 2013, 112), when the artists' focus on the medium comes at the expense of the artistic concept and the underpinning ideas of the creative process. In the case of the use of BCIs, how could this be avoided, and how can the specific technologies serve the purpose of the creative concept?

2.4 Experimental

When EEG studies are conducted in a lab environment there is greater flexibility and freedom compared to studies attempted in a public space, and moreover under the tight conditions of a mixed-media performance. In a lab experiment, the allocation of a time that is suitable for the research purposes and convenient for the participants is easier and can extend over a longer period, while in a performance setting the study needs to be organised according to the event and venue logistics. Additionally, a lab environment is a more informal and private space compared to a public performance venue, where, apart from the researcher/s and the participants, other spectators are also present, a fact that considerably increases the psychological pressure for a successful process. Moreover, conducting an EEG study within the frame of a public mixed-media performance involves many additional elements that need to be coordinated and precisely synchronised, like for example live video projections with live electronics and the real-time acquisition and processing of the participants' data.

3 Enheduanna – A Manifesto of Falling: a live brain-computer cinema performance as a combination of creative and research solutions

Enheduanna – A Manifesto of Falling (2015) is a new interactive performative work, an event that combines live, mediatised representations, more specifically live cinema, and the use of BCIs (Zioga et al. 2015, 107). The project involves a multidisciplinary team. The key collaborators of the project include the first author, as the director, designer of the Brain-Computer Interface system, live visuals and BCI performer; the actress and writer of the inter-text Anastasia Katsinavaki; the composer and live electronics performer Minas Borboudakis; the co-authors as the BCI research supervisory team; the director of photography Eleni Onasoglou; the MAX MSP Jitter programmer Alexander Horowitz; Ines Bento Coelho as the choreographer; the costumes designer Ioli Michalopoulou; the software engineer Bruce Robertson and Hanan Makki as the graphic designer.

The performance is an artistic research project, which aims to investigate in practice the challenges of the design and implementation of multi-brain BCIs in mixed-media performances, as mentioned in Section 2, and to develop accordingly a combination of creative and research solutions. It involves a large production team of professionals, performers and public audiences in a theatre space. The concept of the performance is based on the following elements: the aesthetic, visual and dramaturgical vision of the director; the thematic idea and the inter-text written by the actress; as well as the design and implementation of the passive multi-brain EEG-based BCI system by the authors. More specifically, the work, with an approximate duration of 50 minutes, involves the live act of three performers, a live visuals (live video projections) and BCI performer, a live electronics and music performer, an actress, and the participation of two members of the audience, with the use of a passive multi-brain EEG-based BCI system (Zioga et al. 2015, 108). The real-time brain-activity of the actress and the audience members controls the live video projections and the atmosphere of the theatrical stage, which functions as an allegory of the social stage. The premiere took place at the Theatre space of CCA: Centre
for Contemporary Arts, Glasgow, UK, from 29 to 31 July 2015 (CCA 2015), involving a different audience and participants each day (see http://www.polina-zioga.com/performances/2015-enheduanna---a-manifesto-of-falling for a short video trailer of the performance). The technical specifications of the space included a stage with an approximate height of 60 cm; a video projection screen with approximate dimensions of 370 cm × 400 cm; an HD video projector; and a 4.1 sound system with two active loudspeakers located at the back of the stage, two more at the far end of the space and one active sub-bass unit underneath the middle front of the stage.

The thematic idea of the performance explores the life and work of Enheduanna (ca. 2285–2250 B.C.E.), an Akkadian princess, the first documented High Priestess of the deity of the Moon, Nanna, in the city of Ur (present-day Iraq), who is regarded as possibly the first known author and poet in the history of human civilisation, regardless of gender (Zioga et al. 2015, 107). In her best-known work, 'The Exaltation of Inanna' (Hadji 1988), Enheduanna describes the political conditions under which she was removed from high office and sent into exile. She speaks about the 'city', power, crisis, falling and the need for rehabilitation. Her poetry is used as a starting point for a conversation with the work of contemporary writers, Adorno et al. (1950), Angelou (1995), Laplanche and Pontalis (1988), Pampoudis (1989), Woolf (2002) and Yourcenar (1974), who investigate the notions of citizenry, personal and social illness, within the present-day international, social and political context of democracy. The premiere was performed in English, Greek and French with English supertitles (CCA 2015).

This new work enables for the first time the simultaneous real-time interaction with the use of EEG of more than two participants, including both a performer as well as members of the audience in the context of a mixed-media performance.

3.1 The cognitive approach

As we mentioned in Section 2.3, if we would like to involve the performer/s' as well as the audience participants' real-time brain-activity, we need to approach the BCI design in a way that addresses the dramaturgical, narrative and participatory level. At the same time, the actress' performance in Enheduanna – A Manifesto of Falling involves speaking, singing, dancing and intense body movement, sometimes even simultaneously, as is common in similar staged works. This results in a very important challenge: the cognitive load she is facing, because of which it is not feasible for her to execute at the same time mental tasks such as those used for the control of 're-active' and 'active' BCIs, i.e. focusing her attention on visual stimuli on a screen for periods of several seconds repeated multiple times, or trying to imagine different movements (motor imagery). For this reason, the cognitive approach in the design of the performance was focused on the 'arbitrary brain activity without the purpose of voluntary control' (Zander et al. 2008), and the BCI system developed is passive for both the performer and the audience participants. This is a feasible solution for the actress' cognitive load, but it also presents two opportunities. It allows us to directly compare the brain-activity of the performer and the audience. It also enables us to study and compare the experience and engagement of the audience in a real-life context, which is multi-dimensional and bears analogies to the free viewing of films, extensively studied in the interdisciplinary field of neurocinematics (Hasson et al. 2008, 1). This way we pursue a double aim: the development of a multi-brain EEG-based BCI system, which will enable the use of the brain activity of a performer and members of the audience as a creative tool, but also as a tool for investigating the passive multi-brain interaction between them (Zioga et al. 2015, 108).

3.2 The passive multi-brain EEG-based BCI system

The passive multi-brain EEG-based BCI system consists of the following parts (Figure 1): the performer and the audience participants; the data acquisition; the real-time EEG data processing and feature extraction; and the MAX MSP Jitter programming.

3.2.1 Participants

The participants of the study included the actress and six members of the audience, two for each performance: the dress rehearsal, the premiere and the second public performance. The audience participants were recruited directly by the first author in close geographical proximity. The inclusion criteria were general adult population, aged 18–65 years old, both female and male. The exclusion criteria were not to suffer from a neurological deficit and not to be receiving psychiatric or neurological medication. The participants were asked to avoid the consumption of coffee, tea, high-caffeine drinks, cigarettes and alcohol, as well as the recreational use of drugs, for at least twelve hours prior to the study. They were asked to arrive at the space one hour before the performance, when the venue was closed to the public. This gave sufficient quiet time to discuss any additional questions, answer a preliminary brief questionnaire and start the preparation (Figure 2). Following the opening of the space to the public, they were asked to watch the performance like any other spectator and in the end to complete a final brief questionnaire. The recruitment and the preparation of the actress followed a similar process, with the same inclusion and exclusion criteria, a similar informed consent and questionnaires.

As we mentioned in Section 2.4, when EEG studies are conducted in a lab environment there is greater flexibility and freedom compared to studies attempted in a public space, and moreover under the tight conditions of a mixed-media performance. In the case of Enheduanna – A Manifesto of Falling, the process we describe here, which includes informing the participants and answering their questions a few days prior to the study and also allowing for an hour without the presence of other spectators, has been adequate and has helped to minimise the psychological pressure for both the participants and the researcher.

Figure 1. The passive multi-brain EEG-based BCI system. Vectors of human profiles designed by Freepik. ©2015 The authors.

The audience participants were seated side by side together with a technical assistant. They were positioned facing the stage, at the left end of the third row, near the director and the MAX MSP Jitter programmer, but also within the standard 10 m range of the Bluetooth connection with the main computer, where the EEG data were collected and processed. At the same time, they had an excellent view of the entire performance and the video projections, while both the audience members as well as the actress were free from 'desk-bound' constraints.

3.2.2 EEG data acquisition

The second part of the system involves the use of commercial-grade EEG-based wireless devices. More specifically, the participants are wearing the MyndPlay BrainBandXL EEG Headsets, which were also used during the design phase of the system, in order to ensure that important parameters remain the same.

The headset has two dry sensors, with one active sensor located over the prefrontal lobe (Fp1) (MyndPlay 2015). The choice of the specific device was based on the following criteria:

(1) Low cost—feasible for works that involve multiple participants.
(2) Easy-to-wear design—crucial for the time constraints of a public event.
(3) Lightweight—convenient for use over prolonged periods, such as during a performance.
(4) Aesthetically neutral—easier integration with the scenography and other elements such as the costumes.
(5) Dry sensors—same as [1].
(6) Position of the sensors on the prefrontal lobe—broadly associated with cognitive control.

The three devices (the actress' and the two audience participants') are always switched on and connected to the main computer one after another and in the same order, to ensure that they are always assigned the same COM ports. The raw EEG data are acquired at a sampling frequency of 512 Hz, with a sample count of 32 per sent block, and are transmitted wirelessly to the main computer via Bluetooth.
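To make the connection and timing constraints concrete, the following minimal Python sketch (not the authors' code) opens the three headsets in a fixed order, so that each session maps them to the same COM ports, and shows the block timing implied by 512 Hz sampling with 32 samples per block. The port names, baud rate and the framing of parse_block are assumptions for illustration; the real devices use the manufacturer's own Bluetooth serial protocol.

```python
# Minimal sketch (assumptions noted above), using pyserial to open the three
# Bluetooth serial ports in a fixed order so each session sees the same COM ports.
import serial  # pyserial

PORTS = ["COM3", "COM4", "COM5"]     # hypothetical: actress, audience member 1, audience member 2
FS = 512                             # sampling frequency per device, in Hz
BLOCK = 32                           # samples per transmitted block
BLOCK_PERIOD_MS = 1000 * BLOCK / FS  # 62.5 ms between blocks

def parse_block(payload: bytes) -> list:
    """Placeholder: decoding depends on the headset's own framing protocol."""
    raise NotImplementedError

# Switching the devices on and connecting them in the same order keeps the port mapping stable.
connections = [serial.Serial(name, baudrate=57600, timeout=1) for name in PORTS]
```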

Figure 2. The live visuals and BCI performer (first author) preparing two audience participants prior to the performance at CCA: Centre for Contemporary Arts Glasgow, 30–31 July 2015. ©2015 The authors and Catherine M. Weir.

3.2.3 Real-time EEG data processing

In the third part of the system, the real-time digital processing of the raw EEG data from each participant and device is performed with the OpenViBE software (Renard et al. 2010), which enabled us to efficiently solve the design of a simultaneous real-time multi-brain interaction of more than two participants. This is achieved by configuring multiple 'Acquisition Servers' (OpenViBE 2011a) sending their data to corresponding 'Acquisition Clients' (Renard 2015) within the same Designer Scenario (OpenViBE 2011b). This also automatically enables the simultaneous receiving, processing and recording of the data and the synchronisation with the live video projections, as we will demonstrate further along.

The processing continues by selecting the appropriate EEG channel and applying a custom-based feature extraction that we designed following the frequency analysis method. Taking into consideration the challenges presented in Section 2, such as the unique brain anatomy of the different participants wearing the device and the location of the sensors, which might differ even slightly during each performance, but also the EEG's low spatial resolution, our methodology focuses on the oscillatory processes of the brain activity. With the use of band-pass filters, the 4–40 Hz frequencies that are meaningful in the conditions of the performance are selected. More specifically, we process the theta (4–8 Hz) frequency band, associated with deep relaxation but also emotional stress; the alpha (8–13 Hz), associated with a relaxed but awake state; and the beta (13–25 Hz) and the lower gamma (25–40 Hz), which occur during intense mental activity and tension (Thakor and Sherman 2013, 261). The frequencies below 4 Hz, which correspond to the delta band and are associated with deep sleep, are rejected, in order to suppress low-pass noise and EOG and ECG artifacts. The frequencies of 40 Hz and above are also rejected, in order to suppress EMG artifacts from the body muscle movements, high-pass noise and line noise from electrical devices. This way we improve the ratio of actual brain signal to the noise and non-brain artifacts being recorded. The processing continues by applying time-based epoching, squaring, averaging and then computing the log(1 + x) of each selected frequency. At the same time, both the raw as well as the processed EEG data of each participant are recorded as CSV files in order to be analysed offline after the performance (Zioga et al. 2015, 110). The final generated values are sent to the MAX MSP Jitter software using OSC (Open Sound Control) controllers (Caglayan 2015).
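As an illustration of this chain of operations, the sketch below reproduces the same steps in Python with SciPy, outside OpenViBE, on a window of raw samples from one participant: band-pass filtering into the selected bands, a time-based epoch, squaring, averaging, log(1 + x), and sending the result as OSC messages. The filter order, epoch length, OSC port and addresses are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch of the per-participant feature extraction described above,
# implemented with SciPy instead of OpenViBE's graphical boxes (assumed parameters).
import numpy as np
from scipy.signal import butter, lfilter
from pythonosc.udp_client import SimpleUDPClient  # python-osc

FS = 512                                                             # headset sampling frequency, Hz
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta_gamma": (13, 40)}  # Hz, as mapped to RGB later

def band_feature(raw, low, high, fs=FS, epoch_s=1.0):
    """Band-pass filter one channel, take the latest epoch, square, average, log(1 + x)."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = lfilter(b, a, raw)
    epoch = filtered[-int(epoch_s * fs):]        # time-based epoching
    return float(np.log1p(np.mean(epoch ** 2)))  # squaring, averaging, log(1 + x)

client = SimpleUDPClient("127.0.0.1", 9001)      # hypothetical OSC destination (MAX MSP Jitter)

def send_features(raw_window, participant="/actress"):
    for name, (low, high) in BANDS.items():
        client.send_message(f"{participant}/{name}", band_feature(raw_window, low, high))
```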

3.2.4 MAX MSP Jitter programming

The fourth part of the system involves the MAX MSP Jitter programming. The processed values of each participant, imported as OSC messages with the use of separate 'User Datagram Protocol (UDP) receivers' (Cycling 2016), are scaled to RGB colour values from 0 to 255. More specifically, the processed data from the 13–40 Hz frequencies (beta and lower gamma) are mapped to the red value, the data from the 8–13 Hz (alpha) band are mapped to the green value and the data from the 4–8 Hz (theta) band to the blue value. Then, these RGB values of each participant are imported, either separately or jointly, depending on the stage of the performance (see Section 3.3), into a 'swatch' (max objects database 2015).

The 'swatch' provides 2-dimensional RGB colour selection and display and combines the values in real-time, creating a constantly changing single colour. The higher the incoming OSC message value of any given processed frequency, the higher the respective RGB value becomes and the more the final colour shifts towards that shade. The generated colour is then applied as a filter, with a manually controlled opacity of maximum 50%, to pre-rendered black and white video files reproduced in real-time. The resulting video stream is projected on a screen. Its chromatic variations not only correspond to a unique real-time combination
of the three selected brain-activity frequencies of multiple participants, but also serve as a visualisation of their predominant cognitive states, both independently as well as jointly.
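To make this mapping stage concrete, the short sketch below stands in for the UDP receivers and the 'swatch' in the MAX MSP Jitter patch: it listens for the processed band values of one participant over OSC, scales them to the 0–255 range and keeps a constantly updated RGB colour. The OSC addresses, port and scaling bounds are assumptions matching the earlier sketch, not the actual patch.

```python
# Minimal sketch (assumed addresses, port and value range) of scaling one
# participant's processed band values to RGB, standing in for the UDP receivers
# and 'swatch' combination in the MAX MSP Jitter patch.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

colour = {"r": 0, "g": 0, "b": 0}   # beta/lower gamma -> red, alpha -> green, theta -> blue

def scale(value, lo=0.0, hi=10.0):
    """Scale a processed band value into 0-255 (the bounds are illustrative)."""
    return int(max(0, min(255, 255 * (value - lo) / (hi - lo))))

def on_band(address, value):
    band = address.rsplit("/", 1)[-1]                       # e.g. '/actress/alpha' -> 'alpha'
    channel = {"beta_gamma": "r", "alpha": "g", "theta": "b"}[band]
    colour[channel] = scale(value)                          # the single colour shifts as the EEG changes

dispatcher = Dispatcher()
for band in ("theta", "alpha", "beta_gamma"):
    dispatcher.map(f"/actress/{band}", on_band)             # must match the addresses sent by the processing stage

server = BlockingOSCUDPServer(("127.0.0.1", 9001), dispatcher)
# server.serve_forever()                                    # uncomment to start listening
```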

3.3 The creative concept, methods and processes

We started by presenting the cognitive approach and the passive multi-brain EEG-based BCI system, leaving the creative concept, methods and processes for the current section. However, these three directions have influenced each other and were co-designed during the entire preparation phase of the project. Also, the use of the BCI system has influenced almost every aspect of the creative production, such as the choreography, the scenography and even the lighting and the costume design. Nevertheless, for the purposes of this article we will focus on the directing and live cinema, the interactive storytelling and narrative structure, and the live visuals.

3.3.1 Directing and live cinema

As we explained at the beginning of Section 3, a live brain-computer cinema performance is an event that combines live, mediatised representations, more specifically live cinema, and the use of BCIs (Zioga et al. 2015, 107). More specifically, live cinema is defined as '[…] real-time mixing of images and sound for an audience, where […] the artist's role becomes performative and the audience's role becomes participatory' (Willis 2009, 11). In the case of Enheduanna – A Manifesto of Falling, there are three performers: the live visuals performer, the actress and the live electronics performer. The actress' activity and the participatory role of the audience members are enhanced and characterised by the use of their real-time brain-activity as a physical expansion of the creative process, as an act of co-creating and co-authoring, and as an embodied form of improvisation, which is mapped in real-time to the live visuals (Zioga et al. 2015, 108). At the same time, the live visuals performer has a significantly different role than those usually encountered in a theatre play. She is the director, video artist and BCI performer on stage, a multi-orchestrator facilitating and mediating the interaction of the actress and the audience participants. Additional elements borrowed from the practices of live cinema include the use of a non-linear narration and storytelling approach through the fragmentation of the image, the frame and the text (Zioga et al. 2015, 108).

Moreover, one of the preliminary directing preoccupations has been to avoid technoformalism and create a work not just orientated towards an entertaining, 'pleasurable, playful or skilful' result and interaction, but aiming at a 'meaning-making' of historical and socio-political themes (Heitlinger and Bryan-Kinns 2013, 113). The goal is to bring together the thematic idea of the life of Enheduanna with the passive brain-interaction of the actress and the audience as two complementary elements. This has been achieved at the conceptual as well as the aesthetic level.

A conceptual and dramaturgical non-linear dialogue is created between: the poetry of Enheduanna (Hadji 1988), who speaks about power, crisis, falling and citizenry; the work of the contemporary writers that investigate socio-political themes; and the passive brain-interaction of the participants as an allegory of the passive citizenry and its role in the present-day context of democracy.

At the aesthetic level, the basic directing strategy is to create multiple levels of storytelling and interaction. The texts in the three languages of the original literature references, Greek, English and French, are either performed live by the actress or her pre-recorded voice is reproduced together with the live electronics. This symbolically creates the effect of three personalities: the live narrator and two other female commentators, perceived either as external or as two other sides of her consciousness. The basic directing strategy also involves associating the different aspects of the real-time brain-activity
of the participants to the colour of the live visuals. However, the visualisation is not uniform throughout the performance. It presents variations that follow the interactive storytelling pattern and the narrative structure presented in the following section.

3.3.2 Interactive storytelling and narrative structure

As Rancière (2007, 279) argues:

Spectatorship is not the passivity that has to be turned into activity. It is our normal situation. We learn and teach, we act and know as spectators who link what they see with what they have seen and told, done and dreamt […] We have to acknowledge that any spectator already is an actor of his own story and that the actor also is the spectator of the same kind of story.

The interactive storytelling and the narrative structure in Enheduanna – A Manifesto of Falling consist of two parts, introduced to the audience with vignettes projected during the live visuals (Figures 3 and 4): in part 1, titled 'Me', the interaction is based solely on the actress' brain-activity (Figure 5), while in part 2, titled 'You/We', it is based first on either of the two audience members' brain-activity (Figures 6 and 7) and then on the combination of the actress and one of the audience members (Figure 8).

The two parts of the performance are further divided into five scenes. The first three correspond to part 1 and the last two to part 2: scene 1 'Me Transmitting Signals'; scene 2 'Me Rising'; scene 3 'Me Falling'; scene 4 'You Measuring the F-Scale' and scene 5 'We'.

While in part 1 we are introduced and immersed in the story of Enheduanna in a more traditional theatrical manner, with the actress being perceived by the audience as a third person (she), in part 2 we witness her transformation to becoming a second person (you), when the audience members are addressed (Dixon 2007, 561).

This transformation/transition is promoted with the use of different theatrical elements.

Figure 3. 'Enheduanna – A Manifesto of Falling' Live Brain-Computer Cinema Performance, vignette introducing part 1 'Me'. ©2015 The first author.

The actress is present performing throughout part 1, but at the end of scene 3 she leaves the stage. The lights fade out and two soft spots above the two audience participants are turned on throughout scene 4 and the first half of scene 5. This way the performance becomes a cinematic experience, while the use of light underlines the control of the live visuals by the audience's brain activity (Figure 6).

Figure 4. 'Enheduanna – A Manifesto of Falling' Live Brain-Computer Cinema Performance, vignette introducing part 2 'You/We'. ©2015 The first author.

Figure 5. 'Enheduanna – A Manifesto of Falling' Live Brain-Computer Cinema Performance, scene 3 'Me Falling', at CCA: Centre for Contemporary Arts Glasgow, 30–31 July 2015. ©2015 The authors and Catherine M. Weir.

The actress reappears on stage at the beginning of scene 5, directly addressing the audience (Figure 7), while their real-time brain-interaction gradually merges and their averaged values control the colour filter applied to the live video stream (Figure 8).

Figure 6. 'Enheduanna – A Manifesto of Falling' Live Brain-Computer Cinema Performance, scene 4 'You Measuring the F-Scale', at CCA: Centre for Contemporary Arts Glasgow, 30–31 July 2015. ©2015 The authors and Catherine M. Weir.

Figure 7. 'Enheduanna – A Manifesto of Falling' Live Brain-Computer Cinema Performance, first half of scene 5 'We', at CCA: Centre for Contemporary Arts Glasgow, 30–31 July 2015. ©2015 The authors and Catherine M. Weir.

The narrative and dramaturgical structure we described fulfils the directing vision and the aim of bringing together the thematic idea of the performance with the use of the interaction technology in a coherent and comprehensive manner. It associates the real-time brain-activity of the participants to the colour of the live visuals, which we will discuss further in the following section, within a consistent storytelling process, and by this it also serves as evidence of liveness.

3.3.3 Live visuals

As we mentioned previously, the live visuals are performed with the MAX MSP Jitter software and consist of two main components: the pre-rendered black and white .wmv video files and the RGB colour filter generated by the processing and mapping of the participants' EEG data. The video shootings took place on location in Athens, Greece, while the editing and post-production was made with the Adobe After Effects software (Adobe 2016).

Regarding the generation of the colour filter, the choice of mapping the selected EEG frequency bands to the specific RGB colours, as described in Section 3.2.4, was based on the historically established cultural associations, in the western world, of specific colours with certain emotions. As we mentioned previously, the blue colour is controlled by the theta frequency band, which is associated with deep relaxation; the green by the alpha band, associated with a relaxed but awake state; and the red colour is controlled by the beta and lower gamma frequency bands that occur during intense mental activity and tension. This way the transition of the participants from relaxed to more alert cognitive states is visualised in the colour scale of the live visuals as a shift from colder to warmer tints. Taking into account that all the other production elements are in black, white and grey shades, including the costumes and the lighting design, the generated RGB colour filter not only creates the atmosphere of the visuals, but also of the theatrical stage. It becomes a real-time feedback of the cognitive state of the participants and sets the emotional direction for the overall performance.

Figure 8. 'Enheduanna – A Manifesto of Falling' Live Brain-Computer Cinema Performance, second half of scene 5 'We', at CCA: Centre for Contemporary Arts Glasgow, 30–31 July 2015. ©2015 The authors and Catherine M. Weir.

Furthermore, the configuration of the live visuals with the MAX MSP Jitter software is not an entirely automated process. Consistent with the interactive storytelling and narrative structure, certain features are subject to manual control:

(1) Selection of the processed brain-activity for the generation of the RGB colour filter—the actress' in scenes 1, 2 and 3; one of the audience participants' in scene 4; the actress' and one of the audience participants' in scene 5, separately and then averaged.
(2) Selection and triggering of the video files and corresponding scenes—performed in coordination with the live electronics performer, who functions as a conductor, leading the initiation of the different scenes and the synchronisation between the audio and the visuals.
(3) The RGB colour filter saturation level—i.e., decreasing it in cases where the level is high for a prolonged period of time, thus becoming unpleasant for the audience, or increasing it when the level is low, therefore creating a non-visible result.
(4) The RGB colour filter opacity level—0% during the video vignettes and increased up to 50% during the main scenes. This allows the video vignettes to function as aesthetically neutral intervals, orienting the audience in regard to the narrative structure and the brain-activity interaction.
(5) The RGB colour filter split in two equal parts—at the beginning of scene 5, the filter on the right half of the screen maps the actress' brain-activity (marked as 'Me') and on the left maps one of the audience members' activity (marked as 'You') (Figure 7). Towards the middle of the scene, the two parts merge and the filter is averaged (marked as 'We') (Figure 8). This way not only their respective cognitive states are visualised, but also a real-time comparison between them, enriching the interactive storytelling and reinforcing the audience's perception of liveness (see the sketch after this list).
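The sketch below, a NumPy stand-in for the MAX MSP Jitter patch with hypothetical function names, illustrates items (3)–(5): applying the generated colour to a greyscale frame with an opacity capped at 50%, and, for scene 5, splitting the filter into a 'You' (left) and 'Me' (right) half before merging the two colours into an averaged 'We' filter.

```python
# Minimal sketch (not the actual patch) of the manually controlled colour filter:
# opacity capped at 50%, and the scene-5 split into 'You' (left) / 'Me' (right)
# halves that later merge into an averaged 'We' colour.
import numpy as np

def tint(gray_frame, rgb, opacity=0.5):
    """Blend a (H, W) greyscale frame with one RGB colour at the given opacity."""
    opacity = min(opacity, 0.5)                                  # manual cap described in item (4)
    base = np.repeat(gray_frame[..., None], 3, axis=2).astype(np.float32)
    colour = np.array(rgb, dtype=np.float32)                     # e.g. the colour from the EEG mapping
    return ((1 - opacity) * base + opacity * colour).astype(np.uint8)

def scene5_frame(gray_frame, rgb_me, rgb_you, merged=False):
    """Split-screen filter for scene 5: 'You' on the left, 'Me' on the right, or averaged 'We'."""
    if merged:
        rgb_we = [(m + y) / 2 for m, y in zip(rgb_me, rgb_you)]  # averaged 'We' filter
        return tint(gray_frame, rgb_we)
    _, w = gray_frame.shape
    left = tint(gray_frame[:, : w // 2], rgb_you)
    right = tint(gray_frame[:, w // 2 :], rgb_me)
    return np.concatenate([left, right], axis=1)
```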

4 Discussion and future work

In the present article, we described the challenges of the design and implementation of multi-brain BCIs in mixed-media and live cinema performances. We presented the 'Enheduanna – A Manifesto of Falling' Live Brain-Computer Cinema Performance as a complete combination of creative and research solutions, which are summarised in Table 1. These well-documented solutions can function as general guidelines; however, we expect that other artists might also investigate individualised approaches, customised and consistent with the specific context of their performances.

The first demonstrations of the performance were very well received by the audiences and provided us with valuable and most interesting feedback from the participants. For example, the actress reported that when she was facing the screen she could observe and identify specific colour changes corresponding to specific phases of her acting. One of the audience participants reported that, although he could not understand exactly the brain-activity interaction, he 'felt somehow connected' and that in specific parts of the performance the colours expressed his feelings. The collected data from the pre- and post-performance questionnaires aimed more specifically to reveal: whether the participants were able to identify when and how their brain-activity was controlling the live video projections; and what were the most special elements of the performance according to them. As we mentioned in Section 3.2, we also collected the participants' raw EEG data, the quantitative and statistical analysis of which most interestingly revealed a correlation with their answers to the questionnaires. For example, the audience participants highlighted part 2 'You/We' as a special aspect of the
performance, during which their cognitive and emotional engagement was also clearly increased, and in a similar manner. We will be able to report more detailed results in the future.

The first demonstrations of the performance also gave us the opportunity to observe in real-life conditions challenges that we had not predicted before and, at the same time, to consider the direction of our future work.

At the computational level, one of the first issues was that the BCI devices would connect successfully via Bluetooth to the computer only if they were assigned the same COM ports in each session. In order to ensure this, we had to switch them on and connect them one after another and in the same order. This was especially crucial, since the current version of the OpenViBE driver searches for devices assigned to COM ports 1–16, so a headset with a greater value is not recognised.

Table 1. 'Enheduanna – A Manifesto of Falling' Live Brain-Computer Cinema Performance: the solutions to the challenges of the design and implementation of multi-brain BCIs in mixed-media performances. ©2015 The authors.

Neuroscientific
- Type of sensors: use of identical headsets and sensors from the design phase to the implementation (3.2.2).
- Unique brain anatomy of different participants wearing the devices; location of the sensors during each performance; EEG low spatial resolution: digital processing focused on the oscillatory processes (3.2.3).
- Ratio of noise and non-brain artifacts to the actual brain signal: use of a band-pass filter combination for rejecting artifacts, low- and high-pass noise (3.2.3).

Computational
- Application design for a non-desk-bound computer user; limited Bluetooth physical range: arrangement of the space with the BCI performer positioned between the audience participants and the stage (3.2.1).
- Raw EEG data versus 'detection suites': use of processing software for custom-based feature extraction (3.2.3).
- Independent and joint real-time multi-brain interaction and visualisation for more than two participants: configuration of multiple acquisition servers and clients (3.2.3); mapping of EEG data to RGB colour values (3.2.4), separately, jointly split and averaged (3.3.3).

Creative and Performative
- Performer/s' cognitive load: focus on passive 'arbitrary brain activity without the purpose of voluntary control' (3.2.3).
- Meaningful BCI system design for performer/s and audience alike: design of the BCI system for direct comparison of the participants' brain-activity and offline comparison to free viewing of films (3.1).
- Liveness: mapping of the participants' brain-activity to the narrative and dramaturgical structure (3.3.2); application of the RGB colour filter separately, jointly split (actress versus audience participants) and averaged (3.3.3).
- Technoformalism: conceptual and aesthetic combination of creative direction and interaction technology (3.3.1).

Experimental
- Recruitment of participants; coordination of the study during a public event: recruitment introductions prior to the study; preparation without the presence of the public (3.2.1); processing software for simultaneous receiving, processing and recording of the EEG data and synchronisation with the live mediatised material (3.2.3).

Another computational issue we experienced was the noticeable disconnections of the actress' headset. By trying different devices, we could verify that this was not occurring due to hardware malfunctioning. Also, since the actress was always within a distance of 10 m, we assume that it is not a problem related to the physical range of the Bluetooth either. Nevertheless, this requires further investigation, including experimenting with different communication protocols.

At the experimental level, one of the difficulties that was not predicted is for the actress not to consume any high-caffeine drink for at least 12 hours before each performance, and in this case for three consecutive days. As she explained, for performers who, like her, consume even moderate amounts of caffeine, that is, one to two cups of coffee per day, it is common practice to drink a cup one hour before the performance in order to feel fresh and energised. In the case of Enheduanna – A Manifesto of Falling, in order to help the actress perform at a high level and at the same time implement the experimental conditions, we put in place a programme of rehearsals that started one month earlier, during which she gradually reduced her caffeine consumption. However, we understand that this might not be feasible in all cases and therefore needs reconsideration.

Apart from the unpredicted difficulties we encountered, we can already envision the future potential of our study. On the one hand, the combination of solutions we presented can be adjusted and applied to different contexts, like other formats of interactive works of art and games, but also neurofeedback applications for multiple participants. On the other hand, the passive multi-brain EEG-based BCI system described in this article has a lot of potential for further development. Its architecture can allow different algorithmic processing and interpretation of the EEG data, and it can also incorporate virtual and mixed-reality sensor data. Additionally, although at this stage we focused on passive brain-interaction for the reasons described in detail in the previous sections, we acknowledge that in the future hybrid BCIs that combine different paradigms might enrich the performative experience.

Our future work involves as main objectives: stabilising the system and increasing the number of participants in real-life conditions; developing and evaluating more performances; reporting the detailed results of the behavioural and EEG data; and investigating possible evidence that might or might not support the hypothesis of brain-to-brain coupling between performer/s and audience participants (Zioga et al. 2015, 104).

Note

1. A live brain-computer mixed-media performance is defined as a real-time audio-visual and mixed-media performance with use of BCIs (Zioga et al. 2014, 221).

Acknowledgements

This research was made possible with the Global Excellence Initiative Fund PhD Studentship awarded by the Glasgow School of Art. ‘Enheduanna – A Manifesto of Falling’ Live Brain-Computer Cinema Performance has been developed in the Digital Design Studio (DDS) of the Glasgow School of Art (GSA), in collaboration with the PACo Lab at the School of Psychology of the University of Glasgow and the School of Art, Design and Architecture of the University of Huddersfield. The project has been awarded the NEON Organization 2014–2015 Grant for Performance Production, is supported by MyndPlay, and the premiere presentations were made possible also with the support of CCA: Centre for Contemporary Arts Glasgow and the GSA Students’ Association. We would like to heartily thank all the collaborators mentioned in Section 3 and also Alison Ballantine, Brian McGeough, Jessica Evelyn Argo, Jack McCombe, Alexandra Gabrielle, Catherine M. Weir, Shannon Bolen and Erifyli Chatzimanolaki. We are indebted to Dr Francis McKee, Director of CCA, and to the members of the audience who volunteered to participate in the study. We would also like to thank the authors, the publishing houses and estates for their permission to use the literature extracts; all the CCA staff members; and Emeritus Prof Basil N. Ziogas for his continued support.

Disclosure statement

No potential conflict of interest was reported by the authors.

Funding

This work was supported by the NEON Organization under the 2014–2015 Grant for Performance Production; and by the Glasgow School of Art under the Global Excellence Initiative Fund PhD Studentship.

Notes on contributors

Polina Zioga is an award-winning multimedia visual artist, lecturer in the School of Art, Design and Architecture of the University of Huddersfield, and affiliate member of national and international organisations for the visual and new media arts. Her interdisciplinary background in Visual Arts (Athens School of Fine Arts) and Health Sciences (National and Kapodistrian University of Athens) has influenced her practice at the intersection of art and science and her doctoral research (Digital Design Studio, Glasgow School of Art) in the field of brain–computer interfaces, awarded with the Global Excellence Initiative Fund PhD Studentship. Her work has been presented internationally since 2004, in solo and group art exhibitions, art fairs, museums, centres for the new media and digital culture, video-art and film festivals and international peer-reviewed conferences (www.polina-zioga.com).

Dr Paul Chapman is Acting Director of the Digital Design Studio (DDS) of the Glasgow School of Art, where he has worked since 2009. The DDS is a postgraduate research and commercial centre based in the Digital Media Quarter in Glasgow, housing state-of-the-art virtual reality, graphics and sound laboratories. Paul holds BSc, MSc and PhD degrees in Computer Science; he is a Chartered Engineer, Chartered IT Professional and a Fellow of the British Computer Society. Paul is a Director of Cryptic and an inaugural member of the Royal Society of Edinburgh’s Young Academy of Scotland, which was established in 2011.

Professor Minhua Ma is a Professor of Digital Media & Games and Associate Dean International in the School of Art, Design and Architecture at the University of Huddersfield. Professor Ma is a world-leading academic developing the emerging field of serious games. She has published widely in the fields of serious games for education, medicine and healthcare, and Virtual and Augmented Reality, in over one hundred peer-reviewed publications, including six books with Springer. She received grants from RCUK, the EU, the NHS, NESTA, the UK government, charities and a variety of other sources for her research on serious games for stroke rehabilitation, cystic fibrosis and autism, and medical visualisation. Professor Ma is the Editor-in-Chief responsible for the Serious Games section of the Elsevier journal Entertainment Computing.

Professor Frank Pollick is interested in the perception of human movement and the cognitive and neural processes that underlie our abilities to understand the actions of others. In particular, current research emphasises brain imaging and how individual differences involving autism, depression and skill expertise are expressed in the brain circuits for action understanding. Research applications include computer animation and the production of humanoid robot motions. Professor Pollick obtained BS degrees in physics and biology from MIT in 1982, an MSc in Biomedical Engineering from Case Western Reserve University in 1984 and a PhD in Cognitive Sciences from The University of California, Irvine in 1991. Following this he was an invited researcher at the ATR Human Information Processing Research Labs in Kyoto, Japan from 1991 to 1997.

ORCID

Polina Zioga http://orcid.org/0000-0003-1317-2074
Paul Chapman http://orcid.org/0000-0002-6390-5558
Minhua Ma http://orcid.org/0000-0001-7451-546X
Frank Pollick http://orcid.org/0000-0002-7212-4622

References

Adobe. 2016. “After Effects.” Accessed January 6 2016. http://www.adobe.com/uk/products/aftereffects.html.

Adorno, T. W., E. Frenkel-Brunswik, D. Levinson, and N. Sanford. 1950. The Authoritarian Personality. New York: Harper & Row.


Angelou, M. 1995. A Brave and Startling Truth. New York: Random House.

Auslander, P. 1999. Liveness: Performance in Mediatized Culture. New York: Routledge.

Bonnet, L., F. Lotte, and A. Lecuyer. 2013. “Two Brains, One Game: Design and Evaluation of a Multi-user BCI Video Game Based on Motor Imagery.” IEEE Transactions on Computational Intelligence and AI in Games, IEEE Computational Intelligence Society, 5 (2): 185–198.

Caglayan, O. 2015. “OSC Controller.” Accessed January 1 2016. http://openvibe.inria.fr/documentation/unstable/Doc_BoxAlgorithm_OSCController.html.

CCA. 2015. “Polina Zioga & GSA DDS: ‘Enheduanna – A Manifesto of Falling’ Live Brain-Computer Cinema Performance.” Accessed January 6 2016. http://cca-glasgow.com/programme/555c6ab685085d7e38000026.

Cycling. 2016. “udpreceive.” Accessed January 1 2016. https://docs.cycling74.com/max5/refpages/max-ref/udpreceive.html.

Dikker, S. 2011. “Measuring the Magic of Mutual Gaze.” YouTube video, 03:24. Posted by ‘Suzanne Dikker’ (2011). Accessed January 17 2016. https://www.youtube.com/watch?v=Ut9oPo8sLJw.

Dixon, S. 2007. Digital Performance: A History of New Media in Theater, Dance, Performance Art, and Installation. Cambridge, MA: MIT Press.

Eaton, J. 2015. “The Space Between Us.” Vimeo video, 06:35. Posted by ‘joel eaton’ (2015). Accessed January 6 2016. https://vimeo.com/116013316.

Eaton, J., D. Williams, and E. Miranda. 2015. “The Space Between Us: Evaluating a Multi-User Affective Brain-Computer Music Interface.” Brain-Computer Interfaces 2 (2–3): 103–116. doi:10.1080/2326263X.2015.1101922.

Gürkök, H., A. Nijholt, M. Poel, and M. Obbink. 2013. “Evaluating A Multi-Player Brain-Computer Interface Game: Challenge Versus Co-experience.” Entertainment Computing 4 (3): 195–203.

Hadji, T. 1988. Enheduanna: He Epoche, he zoe, kai to Ergo tes. Athens: Odysseas. (in Greek).

Hasson, U., A. A. Ghazanfar, B. Galantucci, S. Garrod, and C. Keysers. 2012. “Brain-to-brain Coupling: A Mechanism for Creating and Sharing a Social World.” Trends in Cognitive Sciences 16 (2): 114–121. doi:10.1016/j.tics.2011.12.007.

Hasson, U., O. Landesman, B. Knappmeyer, I. Vallines, N. Rubin, and D. Heeger. 2008. “Neurocinematics: The Neuroscience of Film.” Projections 2 (1): 1–26.

Heitlinger, S., and N. Bryan-Kinns. 2013. “Understanding Performative Behaviour Within Content-Rich Digital Live Art.” Digital Creativity 24 (2): 111–118. Accessed October 10 2015. doi:10.1080/14626268.2013.808962.

Hjelm, S. I., and C. Browall. 2000. “Brainball: Using Brain Activity for Cool Competition.” Proceedings of the First Nordic Conference on Computer-Human Interaction (NordiCHI 2000), Stockholm, Sweden.

Krepki, R., B. Blankertz, G. Curio, and K.-R. Müller. 2007. “The Berlin Brain-Computer Interface (BBCI) – Towards a New Communication Channel for Online Control in Gaming Applications.” Multimedia Tools and Applications 33 (1): 73–90. doi:10.1007/s11042-006-0094-3.

Laplanche, J., and J.-B. Pontalis. 1988. The Language of Psychoanalysis. London: Karnac and the Institute of Psycho-Analysis.

Le Groux, S., J. Manzolli, and P. F. M. J. Verschure. 2010. “Disembodied and Collaborative Musical Interaction in the Multimodal Brain Orchestra.” Proceedings of the 2010 Conference on New Interfaces for Musical Expression (NIME 2010), Sydney, Australia, 309–314.

Lee, S., Y. Shin, S. Woo, K. Kim, and H.-N. Lee. 2013. “Review of Wireless Brain-Computer Interface Systems.” In Brain-Computer Interface Systems – Recent Progress and Future Prospects, edited by Reza Fazel-Rezai. InTech. doi:10.5772/56436.

Lindinger, C., M. Mara, K. Obermaier, R. Aigner, R. Haring, and V. Pauser. 2013. “The (St)Age of Participation: Audience Involvement in Interactive Performances.” Digital Creativity 24 (2): 119–129. doi:10.1080/14626268.2013.808966.

Mann, S., J. Fung, and A. Garten. 2008. “DECONcert: Making Waves with Water, EEG, and Music.” In Computer Music Modeling and Retrieval. Sense of Sounds, edited by R. Kronland-Martinet, S. Ystad, and K. Jensen, 487–505. Berlin: Springer.

max objects database. 2015. “swatch.” Accessed January 1 2016. http://www.maxobjects.com/?v=objects&id_objet=715&requested=swatch&operateur=AND&id_plateforme=0&id_format=0.

Mori, M., K. Bregenz, and E. Schneider. 2003. Mariko Mori: Wave UFO. Köln: Verlag der Buchhandlung Walther König.

Mullen, T., A. Khalil, T. Ward, J. Iversen, G. Leslie, R. Warp, M. Whitman, et al. 2015. “MindMusic: Playful and Social Installations at the Interface Between Music and the Brain.” In More Playful Interfaces, 197–229. Singapore: Springer. doi:10.1007/978-981-287-546-4_9.

MyndPlay. 2015. “BrainBandXL & MyndPlay Pro Bundle.” Accessed January 6 2016. http://myndplay.com/products.php?prod=9.

Nicolas-Alonso, L. F., and J. Gomez-Gil. 2012. “Brain Computer Interfaces, a Review.” Sensors 12 (2): 1211–1279. doi:10.3390/s120201211.

Nijholt, A. 2015. “Competing and Collaborating Brains: Multi-Brain Computer Interfacing.” In Brain-Computer Interfaces: Current Trends and Applications, edited by A. E. Hassanien and A. T. Azar, 313–335. Intelligent Systems Reference Library 74. Switzerland: Springer International Publishing. doi:10.1007/978-3-319-10978-7_12.

Nijholt, A., and M. Poel. 2016. “Multi-Brain BCI: Characteristics and Social Interactions.” In Foundations of Augmented Cognition: Neuroergonomics and Operational Neuroscience, edited by Dylan D. Schmorrow and Cali M. Fidopiastis, 79–90. Lecture Notes in Computer Science 9743. Springer International Publishing. http://link.springer.com/chapter/10.1007%2F978-3-319-39955-3_8.

OpenViBE. 2011a. “Acquisition Server.” Posted by‘lbonnet’. Accessed January 1 2016. http://openvibe.inria.fr/acquisition-server.

OpenViBE. 2011b. “Designer Overview.” Posted by‘lbonnet’. Accessed January 1 2016. http://openvibe.inria.fr/designer.

Pampoudis, P. 1989. O Enikos. Athens: Kedros. (in Greek).

Ranciere, J. 2007. “The Emancipated Spectator.” Artforum 45: 270–281.

Renard, Y. 2015. “Acquisition client.” AccessedJanuary 1 2016. http://openvibe.inria.fr/documentation/unstable/Doc_BoxAlgorithm_AcquisitionClient.html.

Renard, Y., F. Lotte, G. Gibert, M. Congedo, E. Maby, V. Delannoy, O. Bertrand, and A. Lécuyer. 2010. “OpenViBE: An Open-Source Software Platform to Design, Test and Use Brain-Computer Interfaces in Real and Virtual Environments.” Presence: Teleoperators and Virtual Environments 19: 35–53.

Swartz Center for Computational Neuroscience, University of California San Diego. 2014. “Introduction to Modern Brain-Computer Interface Design Wiki.” Last modified June 10 2014. Accessed January 6 2016. http://sccn.ucsd.edu/wiki/Introduction_To_Modern_Brain-Computer_Interface_Design.

Thakor, N. V., and D. L. Sherman. 2013. “EEG Signal Processing: Theory and Applications.” In Neural Engineering, edited by Bin He, 259–304. New York: Springer Science+Business Media.

van der Wal, C. N., and M. Irrmischer. 2015. “Myndplay: Measuring Attention Regulation with Single Dry Electrode Brain Computer Interface.” In BIH 2015, edited by Y. Guo, 192–201. LNAI 9250. Accessed January 8 2016. doi:10.1007/978-3-319-23344-4_19.

Willis, H. 2009. “Real Time Live: Cinema as Performance.” Afterimage 37 (1): 11–15.

Woolf, V. 2002. On Being Ill. Ashfield: Paris Press.

Yourcenar, M. 1974. Feux. Paris: Editions Gallimard. (in French).

Zander, T. O., C. Kothe, S. Welke, and M. Roetting. 2008. “Enhancing Human-Machine Systems with Secondary Input from Passive Brain-Computer Interfaces.” In Proceedings of the 4th International BCI Workshop & Training Course, Graz, Austria. Graz: Graz University of Technology Publishing House.

Zioga, P., P. Chapman, M. Ma, and F. Pollick. 2014. “A Wireless Future: Performance Art, Interaction and the Brain-Computer Interfaces.” In Proceedings of ICLI 2014 – INTER-FACE: International Conference on Live Interfaces, edited by A. Sa, M. Carvalhais, and A. McLean, 220–230. Lisbon: Porto University, CECL & CESEM (NOVA University), MITPL (University of Sussex).

Zioga, P., P. Chapman, M. Ma, and F. Pollick. 2015. “A Hypothesis of Brain-to-Brain Coupling in Interactive New Media Art and Games Using Brain-Computer Interfaces.” In Serious Games: First Joint International Conference, JCSG 2015, Huddersfield, UK, June 3–4, 2015, Proceedings, edited by S. Göbel, M. Ma, J. Baalsrud Hauge, M. F. Oliveira, J. Wiemeyer, and V. Wendel, 103–113. Springer-Verlag Berlin Heidelberg. doi:10.1007/978-3-319-19126-3_9.
