Electroacoustics Readings

Jul 06, 2018
  • 8/17/2019 Electroacoustics Readings

    1/170

Selected Readings in Electroacoustics

    2006 - 2007

EAMT 203 / 204, EAMT 205

Professor Michael Pinsonneault, Professor Eldad Tsabary

Professor Kathy Kennedy, Professor Christian Calon

© 1984 – 2005. Prepared by Kevin Austin, August 2005. Contributors include: Kevin Austin, Mark Corwin, Laurie Radford …

     Join the mail list. Send the message: subscribe eamt, to: [email protected]

Royalties from this copyright document will be used to develop resources for the electroacoustics area of the Department of Music.

    NAME: ______________________________________

    email: ________________________________________


TABLE OF CONTENTS

READINGS ---- 2
ELECTROACOUSTICS — AN INTRODUCTION ---- 2
    History ---- 2
    General Overview ---- 2
    Artistic Practice ---- 3
    Acousmatic ---- 4
    Post Partum: But is it Music? ---- 4
READING — I ---- 6
AN INFORMAL INTRODUCTION TO LANGUAGE, THE VOICE, AND THEIR SOUNDS ---- 6
    Linguistic Organization ---- 6
    Vocabulary, Syntax and Cases, Semantic (elements, order, meaning) ---- 6
    Stress ---- 8
    Code ---- 8
    Sound As Symbol: Letters and Spelling ---- 9
    IPA: The International Phonetic Alphabet ---- 9
    Voice as Sound ---- 9
    Segmentation of Text and Speech ---- 10
    A Quick Phonetic Reference Guide ---- 12
    Place of Articulation ---- 13
    Alphabets and Pictograms ---- 14
READING — II ---- 16
DESCRIBING SOUND(S) — I ---- 16
    Function and Context ---- 16
    Mass Structures and the Cocktail Party Effect ---- 16
    Segregation and Streaming & ASA ---- 17
    ASA — A Brief Introduction ---- 18
    Psychoacoustics ---- 18
    Spectromorphology ---- 19
READING — IIA ---- 22
DESCRIBING SOUND(S) II — OPPOSITIONS ---- 22
READING — III ---- 25
SIGNAL PATHS & TRANSDUCERS – LOUDSPEAKERS & MICROPHONES ---- 25
    Signal Paths & Controls ---- 25
    Transducers – Sound to Electricity to Sound ---- 27
    Microphones ---- 27
    Loudspeakers ---- 28
    Headphones ---- 29
    Because of speaker coloration, why not mix sounds with headphones? ---- 30
    Feedback ---- 30
READING — IV ---- 32
JUNGIAN MODELS FOR COMPOSITIONAL TYPES ---- 32
READING — V ---- 34
PARAMETERS OF SOUND — I — PERCEPTUAL ---- 34
    Duration/Time ---- 34
    Dynamics/Amplitude ---- 34
    Spectrum (timbre) ---- 34
    Envelope shape ---- 35
    Morphological Classification ---- 35
    Psychological Implications/Effects ---- 35
READING — VI ---- 37
PARAMETERS OF SOUND — II — PHYSICAL & THE HARMONIC SERIES ---- 37
    Sound, Frequency and Amplitude ---- 37
    Some more characteristics ---- 37
    Sound Waves, Their ‘Shape’ and Partials (‘Harmonics’) ---- 38
    The Harmonic Series / La série harmonique ---- 39
    Intervals ---- 40
    Amplitude and Frequency ---- 41
    Pitched Instruments, Unpitched Instruments and the Voice ---- 42
    Instrumental Families ---- 42
    Electronic sources ---- 45
    The Frequency Ranges of Instruments ---- 46
READING — VII ---- 48
RESONANCE, VOWEL FORMANTS AND FREQUENCIES, TEMPERAMENT ---- 48
    Resonance ---- 48
    The Mouth, Vowels and Formant Frequencies ---- 48
    Schematic View of the Voice ---- 49
    Diagrammatic representation of the vowel /i/ ---- 50
    Frequencies of Notes in Equal Temperament ---- 51
CHART 1 ---- 52
INTERVALS & INTONATION — SELECTED INTERVALS FROM EQUAL TEMPERAMENT, THE HARMONIC SERIES, AND THE CIRCLE OF FIFTHS ---- 52
CHART 2 ---- 53
FORMANT FREQUENCIES OF SPOKEN & SUNG VOWELS BY MEN, WOMEN AND CHILDREN ---- 53
READING — VIII ---- 56
ANALOG AND DIGITAL — SOUNDS AND SIGNALS ---- 56
    Analog / Digital ---- 56
    Sampling Rate Conversion ---- 60
READING — IX ---- 63
THE EAR AND SOUND PRESSURE LEVELS (SPLs) ---- 63
    The Ear ---- 63
    Hearing and Thresholds ---- 64
    Hearing Loss ---- 64
    Typical Sound Pressure Levels (SPLs) ---- 65
READING — X ---- 66
PSYCHOACOUSTICS, LOUDNESS AND LOUD SOUNDS ---- 66
    Psychoacoustics ---- 66
    Frequency and ‘Pitch’ ---- 66
    Loudness and Intensity ---- 66
    Loudness Curves ---- 67
    Frequency Response of Human Hearing and Hearing Loss ---- 68
    Causes ---- 68
    Cautions, Adaptation and Coping ---- 69
    Hearing Protection ---- 69
    Tinnitus ---- 70
READING — XI ---- 71
SPATIAL ACTUALIZATION ---- 71
    General Considerations ---- 71
    Speaker to fader ---- 72
    Specific Aspects of Speaker Placement ---- 73
    Calon – Minuit (timeline) ---- 74
    Calon – Minuit (Projection Score) ---- 79
READING — XII ---- 87
REFLECTION AND REVERBERATION ---- 87
    Velocity, Wavelength and Frequency ---- 87
    Propagation ---- 87
    Absorption ---- 88
    Reflection ---- 88
    Reverberation Within a Room ---- 89
    Reverberation Time and Reflection Density ---- 91
    Free Field – Reverberant Field ---- 91
    Flutter Echo and Room Resonances ---- 92
    Electronic reverberation ---- 93
    Total Absorption: Anechoic Chambers and Out-of-Doors ---- 93
READING — XIII ---- 94
SOUND, VIBRATION, SPECTRUM AND MODELS FOR SPECTRAL DEVELOPMENT ---- 94
    Waves and oscilloscopes; Vibration and variation ---- 94
    Introduction ---- 94
    Instrumental ---- 94
    Voice ---- 95
    Environmental ---- 95
    Intervallic Distances ---- 96
READING — XIV ---- 97
COMPOSITIONAL STRATEGIES ---- 97
    Structural and Gestural Types ---- 97
    TERMS ---- 97
    CATEGORIZATION ---- 104
INTRODUCTION TO MODULAR ANALOG SYNTHESIS ---- 106
    A Guide ---- 106
    PROCESSORS ---- 107
    SOURCES ---- 107
    CONTROLS ---- 107
    LOGIC / TIMING ---- 108
    VOLTAGE CONTROLLED FILTER (VCF) (Multimode Filter) ---- 109
    VOLTAGE CONTROLLED PHASE / FLANGE ---- 110
    VOLTAGE CONTROLLED AMPLIFIER (VCA) ---- 111
    RING MODULATOR (BALANCED MODULATOR) & PRE-AMPLIFIER ---- 112
    ADSR (Envelope Generator) ---- 113
    TRIGGERS and GATES ---- 116
    VOLTAGE CONTROLLED OSCILLATOR (VCO) ---- 117
    Basic Waveshapes and Spectrums (from oscillators) ---- 118
    The Generic Voltage Controlled Oscillator (VCO) ---- 119
    SAMPLE & HOLD; CLOCK (VCLFO); NOISE GENERATOR; RANDOM VOLTAGE ---- 120
    Sample/Hold ---- 122
    Track & Hold ---- 122
ARTICLE A ---- 126
PARAMETRIC CONTROLS ---- 126
ARTICLE B ---- 127
CONCRETE TRANSFORMATIONS ---- 127
ARTICLE C ---- 127
FAMILIES OF SOUNDS AND FAMILY RELATIONS ---- 128
ARTICLE D ---- 129
GENERALIZED SONIC TRANSFORMATIONAL PROCESSES ---- 129
    Spectrum ---- 129
    Time ---- 129
    Amplitude ---- 129
    The compressor-limiter / expander ---- 129
ARTICLE E ---- 130
ON AMPLITUDE ---- 130
    Graphic Representation of Wave ---- 130
    Envelope Follower ---- 131
    Processing of Envelopes ---- 132
    Gating ---- 134
A SOMEWHAT INCOMPLETE, SELECTIVE HISTORICAL TIMELINE OF SOUND TECHNOLOGY ---- 135
    Music Technologies Before 1948 ---- 135
    Musique concrète; Elektronische Musik; Tape Music ---- 136
    Electronic Music 1948 – 1970 ---- 136
    Synthesizers ---- 136
    Computers ---- 136
    Live Electronics ---- 136
    Timeline ---- 137
INDEX (INCOMPLETE) ---- 160

    http://cec.concordia.ca/

    http://www.sonus.ca/index.html

    http://www.ircam.fr/?L=1

    http://www.sonicartsnetwork.org/about.htm

    http://www.ears.dmu.ac.uk/

    … and follow links from each …

    Google new and unusual terms.


    READINGS

    ELECTROACOUSTICS — AN INTRODUCTION

Overview

This collection of readings provides a short introduction to parts of the discipline of electroacoustics (ea). Assembled from shorter individual readings written between 1984 and the present, it contains some repetitions and some contradictions. Electroacoustics: sound that comes from loudspeakers. http://www.ears.dmu.ac.uk/

    HISTORY

The term electroacoustics comes from electrical engineering, where it refers to the study of devices which convert electrical energy to acoustic energy, or vice versa – mostly loudspeakers and microphones.

The term has been adopted by the ‘sonic arts’ community from time to time and was partly synonymous with ‘electronic music’, ‘musique concrète’, tape music …, and has been spelled “Electro Acoustic”, “electro-acoustic”, “Electro-acoustic”, and (as now widely adopted) “electroacoustic”. There are also ongoing discussions as to whether there are differences (and what they might be) between ‘electroacoustics’, ‘electroacoustic music’, ‘electro-acoustic music’ … (see below).

    GENERAL OVERVIEW

A discipline as broad as electroacoustic studies is bound to encompass many (and a growing number of) cognate disciplines. With sound, electricity and people at its core, it touches upon:

engineering: practices of acoustics and electrical engineering, including hardware design and manufacturing

computer sciences: hardware and software, conception and design

medical studies: regarding physical / physiological aspects of hearing; also applied in other areas such as ultrasound tests

psychology (and psychoacoustics): regarding human perception and the interpretation of these perceptions

linguistics: spoken and written language in technical and theoretical applications

history, analysis and aesthetics: notably the history of technology, and more recently aspects of gender issues; also models and tools for understanding (the nature of) the field, and how thought and art are reflected and evaluated

artificial intelligence: for perception, creation and analysis

communications studies and journalism, radio …: often mostly text, but often free sound that sets the context and describes the environment

sound design: the general area of design and control of ‘all’ aspects of the sound in a production, largely applied to film and gaming

video, internet, film, performance art, installations, theater, television, animation: for storage, manipulation and presentation of the multi-media sound aspects of work where visual, textual, dramatic, narrative or interactive elements are considered primary

gaming: the new film medium, where sound supports environment and action through a combination of effects and music

music and recording: including the combining of live performers with pre-recorded material, live processing, concert presentation and the recording studio

sonic arts: the uses of electronic technologies in the creative, artistic discipline of electroacoustics

computer music: computers applied to the creation and/or analysis of music, including new methods of composition and sound generation

electroacoustic studies: the broader discipline which integrates aspects of all of these into a framework for creation, practice and study.

People working in these specific areas require some degree of competence in several other areas: the acoustician needs to know about psychoacoustics and perception; the recording engineer needs to be conversant with acoustics, engineering and music; sound designers require sensitivity to the dramatic and the narrative.

The history of the artistic / creative discipline of ea (see the last section) dates from the end of the nineteenth century, with various (uncoordinated) activities through the first half of the twentieth century. The major change / breakthrough occurred almost simultaneously in a number of countries – France, Germany, the USA, England (and, less well known, Canada and Japan). In the space of a few years, from the late 1940s to the mid-1950s, the field grew from an ‘experimental’ practice into an art, a generalizable practice, and a study for media and communications. Paris in the late 40s saw the building of the first studios devoted to the artistic practice of ‘electronic sound art’, and soon had both the first public concerts and radio broadcasts.

    ARTISTIC PRACTICE

The breadth of artistic practice of ‘sound employing electricity’ is extremely wide, ranging from basic live recording with no editing (sonic documentation) to the creation or manipulation of digital information that later becomes sound. An electric-guitar player using processing employs many of the same pieces of equipment and software that the studio composer uses.

Along the artistic continuum from the folk singer with a microphone to the on-line digital hyper-sound convoluter, there are many types and styles of sonic interest. Whether musical pitch and meter (regular rhythmic structures with beats, notes and chords) play a central role (cf MIDI), whether the purely ‘sonic’ is central, whether text (sung or spoken) is critical, or whether the acoustic environment (and its social implications) is important – including soundscaping and historical ‘sound documentation’ – the discipline of electroacoustic studies embraces them all.


    ACOUSMATIC

One small sub-set of the entire ‘sonic arts’ practice focuses on a rather specific application of electroacoustic technologies to sound – acousmatics. While the practice cannot be precisely defined, it has at its center working in a ‘studio’ environment and presenting in a ‘concert’ situation, the materials frequently having originated from recording with a microphone. The manner of presentation will not include live performers or real-time processing, and will employ a sound projection system, most often with a minimum of 12 loudspeakers, without visual accompaniment.

At the level of the aesthetic, the origins of the sounds are expected to be ‘hidden’ from the hearer, so that the sound is heard ‘purely’ as sound and not as representative of a known object. But there’s more to this discussion … for later!

    POST PARTUM: BUT IS IT MUSIC?

Video had its roots in film – which had roots in theater. Some forms of film almost look like recorded theater, but there are aspects of film which are not part of theater, for example the close-up. Certain cultural / sociological aspects of video separate it from film, notably the reduced resources of production (a video camera and one person) and the methods of distribution, which now include web-streamed video.

What debts does ea have to music – ie, the western music tradition, classical and popular? There are many possible approaches to the question: Do they have the same function? Do they employ similar perceptual procedures? Are there aspects of thinking about ea which are foreign to thinking about music(s)? Can the practitioner of one (easily) move into the practice of the other?

    QUESTIONS

1. Can you ‘hear’ a sound in your head? Is ‘listening to a sound in your head’ an electroacoustic activity?

2. In the (incomplete) organigrams on the next page, determine those areas which are most important for a researcher who wants to prepare a radio documentary on the history and impact of technology in sound.

3. Compare the impact of technology on artistic evolution with the impact of artistic evolution on technology.

4. In listening to many types of music (non-western and western) it is possible to listen to various ‘parts’ of the music: beat, melody, harmony, text, phrase structure. Do you do this ‘all at once’? Do you hear layers, or mass structures?


ARTISTIC PRACTICE

[Organigram, flattened in transcription. Branches: with text (radiophonic, concrete poetry), studio (fixed medium: synthesized, computer-based, concrete, mixed), live (live with processing, live electronics), popular musics; concerts and dissemination.]

SCIENCE AND RESEARCH

[Organigram, flattened in transcription. Branches: applied and theoretical; hardware and software; medical (audiology), psychoacoustics, linguistics, artificial intelligence, analysis, historical / documentary, sociological / cultural.]

APPLICATIONS

[Organigram, flattened in transcription. Branches: recording arts, with text, popular musics; acoustics, hardware, software; dissemination: direct, digital media (CDs, video), film, theater, games, WWW; pop industry, communication studies, journalism.]

Source: after Mark Corwin (2001)


    READING — I

    AN INFORMAL INTRODUCTION TO LANGUAGE, THE VOICE, AND THEIRSOUNDS

Overview

This reading examines aspects of vocal language with the objective of providing a basic understanding of its many levels of organization. From the larger-scale elements of vocabulary, syntax and semantics to the most basic sound components (vowels and consonants), a framework and terminology are developed that will be applicable to electroacoustic composition, analysis and synthesis. Following is an introduction to the International Phonetic Alphabet (IPA).

    LINGUISTIC ORGANIZATION

There are many ways of approaching an analysis of sound and electroacoustics. It is possible, and sometimes even desirable, to use the human voice, the original instrument, as a model for this study. This is especially true in electroacoustics.

Spoken language, and the various ways of looking at it, is a good starting point for examining the nature and structure of any of the arts. Here we’ll find a model with which to explore and develop many of the concepts and structures that will be useful in this course, and in many other areas.

This examination of spoken or written language will start from the word, and examine larger- (macro-) and smaller- (micro-) structural aspects of it. While it will be a little simplistic — language and language structures are definitely open to other interpretations and models — for now we’ll start with this tri-partite model.

    VOCABULARY, SYNTAX AND CASES, SEMANTIC (ELEMENTS, ORDER,MEANING)

Vocabulary:

Words form the basic vocabulary level of verbal language. They can stand alone: hat, cassette, black, dream, we (etc), in much the same way that individual sounds can be heard – but they cannot be reduced without ceasing to function. Name is a word; wor is not (according to the Word 2000 spell checker) – it is a phoneme. In music, one may speak of notes, and in the visual arts, of basic line types and shapes.

    Syntax and Cases:

Syntax relates to the correct or acceptable order or sequence of vocabulary elements. There are languages, Latin, Polish and Russian among them, where the exact order of the words in a sentence is not too critical to the expression of the meaning, as the word changes its form as it changes case (grammatical function).

An example of case is the possessive, where, in English, an ’s is usually added (eg dog’s). When we see the word dog’s, we know that the next word points to an object that belongs to the dog (eg the dog’s tail). In French, the word ‘de’ in certain places denotes the same relationship (eg le chien de Deschênes).


English, however, lacks a strong case structure, and the order of words is often critical to the meaning: The man fell on the sidewalk has a different meaning from The sidewalk fell on the man. ‘Man’ is the subject in the first form, but the object in the second. In languages with a strong case structure, the word ‘man’ would take a different form in each sentence.

Let us imagine a language with the following three words: MAN = hom; FALL ON = tombe; SIDEWALK = planch. If a word is the subject it takes an A as an ending; as an object, it takes an I. Therefore, HOMA tombe PLANCHI means man falls on sidewalk; HOMI tombe PLANCHA means sidewalk falls on man.
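The invented case-marked language above can be sketched in code: because the -A and -I endings mark subject and object, a tiny parser can recover the meaning regardless of word order. (The stems and endings come from the text; the parse() helper and its output format are made up for this sketch.)

```python
# Toy parser for the reading's invented language:
# subject ends in -A, object ends in -I, so word order is free.
STEMS = {"hom": "man", "planch": "sidewalk"}
VERBS = {"tombe": "falls on"}

def parse(sentence):
    """Recover subject, verb and object from case endings, ignoring word order."""
    subject = verb = obj = None
    for word in sentence.split():
        if word.lower() in VERBS:
            verb = VERBS[word.lower()]
        elif word[-1].upper() == "A":        # -A marks the subject
            subject = STEMS[word[:-1].lower()]
        elif word[-1].upper() == "I":        # -I marks the object
            obj = STEMS[word[:-1].lower()]
    return f"{subject} {verb} {obj}"

# Word order does not change the meaning; only the endings matter:
print(parse("HOMA tombe PLANCHI"))   # man falls on sidewalk
print(parse("PLANCHI tombe HOMA"))   # man falls on sidewalk
print(parse("HOMI tombe PLANCHA"))   # sidewalk falls on man
```

Note how an English-style parser (position decides the role) would give the first two sentences different meanings, while this case-based one does not.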

(Or, closer to home, the old headline Man Bites Dog.) Note how, in sound, the same phonemes in a different sequence carry a different meaning. [The IPA transcriptions of the two sequences are not reproduced here.]

This has led to a sense of an innately or structurally correct sequence for words. There are often preferred (normal or correct?) ways in which words follow one another. In traditional western music, and extending through the popular music and early jazz idioms, there are also norms or rules for the correct sequencing of chords. (II usually goes to V to I – if a particular ‘meaning‘ is to be understood.)

The following sentence (sequence of words) is considered possible (correct) in English: The man with the big black hat saw us as if in a dream. If the words were presented in a different order, an English composition teacher might consider the sequence wrong: The black big man with the dream saw as in a hat if us.

A psychoanalyst or creative writing teacher may see in the new sentence profound significance or banal meandering. It could be said that the syntax of the second version is not right.

    What would be the result of having words (vocabulary elements) appearing in any order?

    any appearing be elements having in of order result the vocabulary What words would ()?

Semantic – the ‘meaning‘
While the vocabulary elements have remained constant, their order is not considered ‘acceptable’ or as ‘having meaning‘. Somewhere in ‘breaking the rules’, art and poetry are sometimes found.

Language forms (and meanings) are not cast in concrete, and it could be understood that languages (verbal, musical or gestural) may exist as processes, where the act of creation is the ‘meaning‘. In the study of spoken and written language, this area comes under psycholinguistics.

There is some ambiguity present in many linguistic forms; for example, ‘He fell on the rocks.’ can have at least three meanings: two physical and one metaphorical. Sometimes this ambiguity is a source of interest, and sometimes a source of confusion.

The evaluation of the sentence somehow relates to its having ‘meaning‘, or as the linguists would have it, an acceptable semantic. (The understanding of a language – its semantic – relates to one’s experience, for if you have never seen the words hour, heure, Stunde, ora, timme, sho or godzina, they would likely have little meaning to you – and even less if you have had no contact with the concept of dividing the day into twenty-four of them!)


This text has meaning.
/ð/ /ɪ/ /s/  /t/ /ɛ/ /k/ /s/ /t/  /h/ /æ/ /z/  /m/ /iː/ /n/ /ɪ/ /ŋ/

has text meaning this
/h/ /æ/ /z/  /t/ /ɛ/ /k/ /s/ /t/  /m/ /iː/ /n/ /ɪ/ /ŋ/  /ð/ /ɪ/ /s/

The same phonemes in arbitrary orders:
/ɪ/ /s/ /ɪ/ /ŋ/ /æ/ /k/ /z/ /ð/ /ɛ/ /ŋ/
/s/ /t/ /h/ /iː/
/ð/ /s/ /t/ /z/ /m/ /k/ /t/ /k/ /t/ /z/ /m/
/ð/ /ɪ/ /z/ /ŋ/ /k/ /æ/ /ɪ/

    STRESS

In speech, many verbal characteristics affect understanding, including the rate of delivery (paced, deliberate, nervous) and the stress on different words:

I’M not going to do that.
I’m NOT going to do that.
I’m not GOING to do that.
I’m not going to DO that.
I’m not going to do THAT.

It is possible for the words of a sentence to have one meaning, while the delivery (intonation / stress) conveys another, or even the opposite, meaning. (“Why don’t you come over some time?” – said sarcastically!)

English is a language that stresses syllables by using both time (length) and amplitude (loudness). French, and a number of other european languages (eg German), create stress patterns mostly by the length of syllables. This helps to explain how ‘accent‘ works in a language, for a native english speaker will frequently place accents in french words where none belong, and may often stress the wrong syllable. (In french, the final syllable is typically lengthened – in english, the first syllable is stressed. Compare the english and ‘french’ pronunciations of: english / anglais, Paris / Paris, music / musique.)

Up to this point, the examination of language (and sound) has been macro-structural: the smallest unit examined has been the ‘word‘. Below, we examine the more fundamental, micro-structural elements, which, taken alone, do not carry specific meaning (they have no semantic dimension).

    CODE

The semantics of a phrase may also contain ‘code’ which can only be understood by those who have been initiated into its meaning. The surface features may be obscure (“grok”), oblique (“the big cheese”), or opposite (“smart alec”). Political correctness is a way of having newspeak where the real meaning is obscured with a euphemism. In academia, the phrase “problems with time management” implies something else. Humor is frequently based on such double meanings (semantic dualities).


    SOUND AS SYMBOL: LETTERS AND SPELLING

In English there are 26 letters; in French, at least 32 (including à, ç, é, è, ê, etc); and Polish also has 32. When linguists have attempted to create symbols for sounds in languages that have no written form, among them those of North American first nations peoples (indian and inuit), they have attempted to avoid some of the ‘sonic‘ problems one finds in traditionally written languages (like English) by giving one sound, one symbol.

English is a particularly good – or bad – example of how not to write a language! Note:
Yesterday I read it in the red book. Do you read?
The photo by that photographer is not photographic.

Or even (how to pronounce ough (ouch!)): The bough bowed, while the doughty man who thought, coughed roughly through the dough, and threw it. Tough!
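The “ough” tongue-twister above can be tabulated as a minimal sketch of one-spelling-many-sounds. The IPA values are standard dictionary pronunciations added here for illustration; they are not given in the reading:

```python
# One spelling, many sounds: dictionary pronunciations of "-ough".
OUGH = {
    "bough":   "aʊ",   # rhymes with "now"
    "dough":   "oʊ",   # rhymes with "so"
    "thought": "ɔː",   # as in "law"
    "cough":   "ɒf",   # vowel plus /f/
    "rough":   "ʌf",   # a different vowel plus /f/
    "through": "uː",   # as in "true"
}

# Six words, six different sound values for the same four letters:
distinct = set(OUGH.values())
print(len(OUGH), "spellings ->", len(distinct), "different sounds")
```

A phonetic alphabet such as the IPA dissolves exactly this many-to-one problem: each of these six values gets its own unambiguous symbol.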

Some written languages are almost entirely phonetic, with one symbol having only one sound, Russian and Polish being among them. Numerous alphabets (collections of symbols) have been invented to give one sound one symbol. The one dealt with here is the International Phonetic Alphabet (IPA). Another is appended below for your interest.

    IPA: THE INTERNATIONAL PHONETIC ALPHABET

The International Phonetic Alphabet (IPA) is (ideally) a set of symbols for the representation of every verbal sound. It contains more than 120 symbols, of which 40–50 are often considered adequate to describe ‘standard English’. A few more, and slightly different ones, are required for ‘standard French’. These symbols represent the basic phonemes, or sound elements, used by these languages. The IPA symbols appear between slashes for the sake of clarity.

Certain phonemes change their sound depending upon where the speaker is from. These differences are referred to as accent or dialect. (coffee – kawfee, kwa-fee)

The ‘real-world’ application of the IPA is more complex than it appears on paper, as dialect and accent will shift the ‘value’ of a vowel, diphthong (two vowels together; * see variables below in the list) or triphthong (eg skewer when pronounced without the /w/), change the position of the stress, and in some cases add or remove entire syllables.

There are also a number of words which have the same pronunciation (homonyms) in one dialect, while being pronounced differently in another. An example is how some English people pronounce the following words in the same weigh: sure, Shaw, shore.

    VOICE AS SOUND

The human voice is the most complex and universal of natural sound sources. Physiologically, there are three major parts used in the production of voice sounds:

    • the lungs (which provide the energy),

    • the vocal cords / vocal folds (which vibrate and produce the basic sound), and

• the mouth, which creates the changes in the sound that we recognize as speech.

There are two basic types of sounds, consonants and vowels, and a continuum of categories around them: consonants (voiced and unvoiced), semi-consonants or semi-vowels, vowels (oral and nasal), and diphthongs (etc).


Voiced sounds are characterized by the vibration of the vocal cords; unvoiced sounds have no vibration of the vocal cords, but are basically forms of spectrally modified, filtered (wind) noise. Since the vocal cords are involved, voiced sounds – vowels – may be sung, but unvoiced sounds, not having a vibrating source, cannot be sung. Unvoiced consonants and whispering are unvoiced sounds.

Many voiced sounds are able to be sustained, and changed as they occur. Say the word music very slowly, taking about ten seconds on the vowel “u”. (It starts with a long “e” quality that becomes an “oo” quality. There is a ‘formant glissando‘ between the two parts of this diphthong.)

Some unvoiced sounds can be sustained, “sh”, but others are transient, /p/. Try to sustain the ‘sound’ /p/. While it is possible to sustain the “hhhhh” quality, the ‘identity‘ of the /p/ is in the way in which it starts and stops – its envelope.

While working at this micro-structural level, the semantic dimension of the text is (frequently) lost. As you continue to work at this level of the ‘voice as sound’, try to carry this form of ‘abstracted hearing‘ into listening to regular speech. Ask someone who speaks a language that you don’t understand to speak to you, even just to tell you about the weather. Listen with ‘abstracted hearing’.

    SEGMENTATION OF TEXT AND SPEECH

On paper much of this seems to make sense, and it is a quite useable model for handling words and text. As you (will) have discovered, once again, the ‘real world’ is more complex. The mind takes what is a continuous stream of sound and ‘segments‘ it, breaking it down into component parts and then creating sense out of them. (See also Auditory Scene Analysis [ASA], following.)

Say the phrase: “It was nighttime on the river.” With your ‘mental razorblade’, remove the /t/ from “It”. In many instances this will not be possible, since in regular speech the /t/ was not pronounced; rather, a glottal stop (back of the throat) was used to connect the /ɪ/ of “It” to the /w/ of “was”. Find the two /t/s of nighttime. Remove the silences between on and the, and the and river.

Segmentation is also made difficult by elision (leaving out parts of words), contractions (combining words which may be separate), interjections (the addition of sounds which are not part of the thought), repetition (repeated phrases, words or fragments), incompletion (starting words without finishing them), punctuation (missing or excessive), etc:

    Er … an’ I went down, ‘ts’easy y’knowhen the time’s righ, ‘an-er, yer … yer see, it’s afferI’done it … hrrnk … don’it, sorry, bad froat, that’I’decited thatI’d tied i‘too tigh … er, ferwell, ya’know, comfort-like

This ‘simple‘ task for the human ear/brain posed large problems in the area of automated (computer-based) speech recognition, and this, combined with the problems of dialect, was instrumental in delaying the implementation of speech recognition by telephone companies for several decades.

The first attempts at automated voice recognition tried to set up sonic dictionaries and use pattern matching to retrieve the word. A problem is that a word doesn’t have (only) one sound. It has sets of characteristic elements (some of which may be missing – see above), and the machine has to match the string of elements to the sonic pattern in memory.

Compare the words pit, bit, kit, cat, pat, pot, (pit). They all have the characteristic consonant – vowel – consonant pattern (cvc): four start with labial stops /p/ /b/, two with velar stops; the vowel moves from front to back, and they all terminate with /t/ – an alveolar stop, which in much speech is substituted with a glottal stop /ʔ/ (made at the back of the throat).
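The dictionary-plus-pattern-matching approach described above can be sketched over phoneme strings rather than audio. A tolerant (rather than exact) comparison is needed because, as noted, elements may be missing or substituted (eg the final /t/ realized as a glottal stop). All names here (LEXICON, recognize) and the ASCII stand-ins for IPA symbols are invented for the sketch:

```python
# Sonic dictionary: phoneme patterns -> words.  "ae" stands in for /æ/.
LEXICON = {
    ("p", "I", "t"):  "pit",
    ("b", "I", "t"):  "bit",
    ("k", "I", "t"):  "kit",
    ("k", "ae", "t"): "cat",
    ("p", "ae", "t"): "pat",
    ("p", "o", "t"):  "pot",
}

def recognize(phonemes, max_mismatch=1):
    """Return the lexicon word whose phoneme pattern best matches,
    tolerating up to max_mismatch differing elements."""
    best, best_score = None, max_mismatch + 1
    for pattern, word in LEXICON.items():
        if len(pattern) != len(phonemes):
            continue
        score = sum(a != b for a, b in zip(pattern, phonemes))
        if score < best_score:
            best, best_score = word, score
    return best

print(recognize(("p", "I", "t")))   # exact match: pit
print(recognize(("p", "I", "?")))   # final stop replaced: still pit
```

Even this toy version shows the difficulty: with one element missing, pit is still the nearest pattern, but bit and kit are only one further substitution away.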

Segmentation is also a ‘musical‘ problem, for while (on the surface, at the level of notation) notes appear to be quite distinct, when viewed as continuous sound (eg in a sound editing program), unless there are clear ‘stress markers’ – such as very strong beats or (regular) attack transients – it can be quite difficult to find the points of articulation. This also doesn’t account for players not playing (quite) together, or for recorded reverberation, where parts of one sound are carried over into subsequent sounds. ‘Musical segmentation’ is also about forming logical groups at the level of phrasing and grouping, not just where notes start and stop. This too is an ongoing AI problem.
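A naive illustration of why finding points of articulation is hard: split a ‘signal‘ wherever its short-term energy falls below a threshold. This works for cleanly separated events, but fails exactly where the text says segmentation fails (overlapping players, reverberation filling the gaps). All names and values are illustrative:

```python
def segment(samples, frame=4, threshold=0.1):
    """Return (start, end) index pairs of regions whose mean absolute
    amplitude, measured frame by frame, exceeds the threshold."""
    events, start = [], None
    for i in range(0, len(samples), frame):
        energy = sum(abs(s) for s in samples[i:i + frame]) / frame
        if energy >= threshold and start is None:
            start = i                      # an event begins
        elif energy < threshold and start is not None:
            events.append((start, i))      # an event ends at the silence
            start = None
    if start is not None:
        events.append((start, len(samples)))
    return events

# Two clearly separated "notes" with silence between them:
signal = [0.9] * 8 + [0.0] * 8 + [0.8] * 8
print(segment(signal))   # two events found
```

Replace the silent middle section with low-level reverberant ‘tail‘ (say 0.2 instead of 0.0) and the two events merge into one – the glottal-stop and reverberation problems above in miniature.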

International Phonetic Alphabet:
http://www2.arts.gla.ac.uk/IPA/ipachart.html

    http://en.wikipedia.org/wiki/International_Phonetic_Alphabet

    http://www.antimoon.com/how/pronunc-soundsipa.htm

French:
http://french.about.com/library/pronunciation/bl-ipa-french.htm

Font download:
http://scripts.sil.org/cms/scripts/page.php?site_id=nrsi&id=encore-ipa


    A QUICK PHONETIC REFERENCE GUIDE

Vowels, Nasals and Diphthongs

IPA     sounds like    english    french
/iː/    long e         feel
/i/                               ami, fit
/ɪ/     short i        big
/ɛ/     short e        get        bête
/æ/     short a        fat
/a/                               papa
/ʌ/     short u        sun
/ɑ/     ah (are)                  lâche
/ɔ/     short o        box        mort
/o/     hard o                    faux
/ʊ/     short oo       good
/u/     long oo        rule       fou
/ə/     short ‘er’     fern       de
/e/                               les
/œ/                               neuf
/y/                               mur, fut
/ø/                               peu

Nasal vowels
/ɑ̃/    banc
/œ̃/    un
/ɔ̃/    bon
/ɛ̃/    bain, vin

Diphthongs
/aɪ/    high *
/ɪə/    near *
/eɪ/    way
/aʊ/    now
/ɛə/    air
/oʊ/    so
/ɔɪ/    boy *
/ʊə/    moor
/iːu/   you
(fr)    feuille

(*) dialect forms

Consonants (unvoiced / voiced)

/p/ /b/    paper, ball
/w/        wet
/ʍ/        where
/m/        man
/f/ /v/    fat, veal
/θ/ /ð/    thing, this
/s/ /z/    seal, zeal
/ʃ/ /ʒ/    ship, vision
/tʃ/ /dʒ/  chew, jump
/t/ /d/    to, do
/n/        none
/k/ /g/    car, game
/l/        like
/r/        rest
/j/        you
/ŋ/        sing
/h/ /ʁ/    home, (fr) rade
/ç/        huge

Three pronunciations of electroacoustics:

/iː/ /l/ /ɛ/ /k/ /t/ /r/ /oʊ/ /ɑ/ /k/ /u/ /s/ /t/ /ɪ/ /k/ /s/
/ɛ/ /l/ /ɛ/ /k/ /t/ /r/ /oʊ/ /ɑ/ /k/ /u/ /s/ /t/ /ɪ/ /k/ /s/
/ʌ/ /l/ /ɛ/ /k/ /t/ /r/ /ʌ/ /k/ /iːu/ /s/ /t/ /ɪ/ /k/ /s/


    PLACE OF ARTICULATION

Another way of categorizing sounds is by their manner of production – the shape of the mouth and lips, and the position of the tongue and teeth.

    Some of the major places of articulation:

     

1. Bilabial
2. Labiodental
3. Dental
4. Alveolar
5. Palatal
6. Uvular
7. Pharyngeal
8. Glottal

    http://www.chass.utoronto.ca/~danhall/phonetics/sammy.html (with graphics!)

CONSONANTS (unvoiced / voiced)

1. Labial: stops (plosives) /p/ /b/; semi-vowels /ʍ/ /w/; nasal /m/
2. Labiodental: fricatives /f/ /v/
3. Dental: fricatives /θ/ /ð/; stop-fricatives /tθ/ /dð/
4. Alveolar: fricatives /s/ /z/; stops /t/ /d/; nasal /n/
5. Palatal / Lateral: glides (liquids) /ʎ/ /l/
6. Palatal / Velar: fricatives /ʃ/ /ʒ/ /r/ /x/; stop-fricatives /tʃ/ /dʒ/; stops /k/ /g/; semi-vowels /ç/ /j/; nasal /ŋ/
7. Glottal: fricatives /h/ /ʁ/

VOWELS
Vowels may be described approximately as front, central or back, with varying degrees of openness.

          very open   quite open   medium    quite closed   almost closed
Front     /æ/         /ɛ/          /e/       /ɪ/            /iː/
Central   /ɑ/         /ə/          /ʉ/
Back      /a/         /ʌ/          /ɔ/ /o/   /ʊ/            /u/ /ø/

    http://www2.unil.ch/ling/english/phonetique/api1-eng.html


    ALPHABETS AND PICTOGRAMS

There have been many systems invented for representing ideas and sounds as symbols. In verbal language, alphabets for sounds (and pictograms for ideas / objects) have evolved in most modern cultures. Some of these appear to have common roots, and to our eyes, many seem quite unintelligible. The Russian Cyrillic alphabet (below) is used in much of Eastern Europe (Russia etc). Note some of its similarity to Greek.

[The table of the Cyrillic alphabet, keyed to English sound equivalents (bat, vat, god, yet, vision, zoo … yard), is not reproduced in this transcription.]

The alphabet below was developed by early North American native peoples’ scholars for the transcription of plains indian languages. Point out some of the weaknesses of the approach (eg sounds which are not present in english!).

[The syllabary table, keyed to English example words (at, fast, king, thirst, ate, all, bow … zebra), is not reproduced in this transcription.]

The word music in RUSSIAN, THAI (?) and HINDI. [Scripts not reproduced in this transcription.]


    QUESTIONS

1. From this brief view it is seen that vocal sounds can be static or changing, transient or sustained. Give a short list of natural or mechanical sounds which fall into each (or more) of these categories.

                  Static   Changing   Transient   Sustained
computer fan        X                               X
drops of water               X          X
clock ticking

2. Would it be possible to group sounds into ‘families’ based upon this proposed categorization? What would be the advantages? What would be the disadvantages?

car, truck, bus, train, plane
wind, ocean, distant traffic

3. If a person is presented with a sequence of sounds that they have never heard before, is it possible for them to determine / discover the meaning? What would this tell you about the nature of vocabulary? syntax? semantic?

4. Given only the sounds of an event, how easy / difficult is it to describe the event? Why? What is the role of a distinctive sound signature? Name some.

5. What would a syntax of electroacoustics look like? How would (have) the rules be(en) developed?

6. Is there such a thing as a ‘generalized semantic‘ (ie universal meaning) for ea? How is the semantic of electroacoustics determined?

7. If electroacoustics is considered to be a ‘language‘, would it need to have some / all of the elements of vocabulary, syntax, and semantic?

8. Could there be dialects of electroacoustics that have their own vocabulary, syntax, and semantic? Find examples.

    9. Are there vocabulary elements in electroacoustics? How would they be identified?

10. Words – vocabulary elements of verbal language – are the smallest meaningful unit. Is there such a ‘limit’ to sound? What is the smallest meaningful unit of sound?


    READING — II

    DESCRIBING SOUND(S) — I

OverView
This reading starts an on-going examination of the methods of describing sound(s) with words. The approach is partly psychological and introduces the model that joins the psychological, the perceptual and the scientific in the study known as Auditory Scene Analysis (ASA). Other models are briefly introduced to begin to develop a framework and terminology applicable to electroacoustic analysis and composition.

    FUNCTION AND CONTEXT

There is no single, simple, widely accepted method for describing sound(s) in detail, although many people have worked on this problem, and there are numerous research projects currently underway in this area.

Traditionally, sound has been broken down into two basic categories:

    Noise Useful sounds (not-noise)

This is a useful psychological opposition categorization, as it helps determine one’s relationship to the sound; however, it does little (of necessity) to describe the sound.

At 3:00 in the morning, an ambulance siren rushing past my sleeping bedroom is noise; at 3:05 in the morning, an ambulance siren stopping next to my unconscious body is ‘not-noise‘.

With these descriptors, sound is described by function and context – while saying little about the ‘physical’ aspects of the sound, although one might have a mental image of ‘ambulance siren’.

The ‘Noise / Not-noise‘ categorization relies upon certain physiological functions of the human ear and mind – along with a number of semantic ones, eg: if a tree falls in a forest and no one hears it, does it make a sound? This is about the definition of ‘sound’ as a psychological or a physical attribute. [If sound is vibration of air within certain limits, then the answer is likely yes. If sound is the perception of these vibrations, then the answer is more likely no.]

    MASS STRUCTURES AND THE COCKTAIL PARTY EFFECT

And sound itself is problematic, being both singular and collective. A bell is a singularity (of sorts); the ocean is a collective: a single bell sound can be described approximately by specific physical and acoustic properties, while an ocean needs to be described by the multiple (stochastic) processes going on at the same time – so-called ‘mass structure‘.

The bell can be heard as being metallic and having a particular (sense of) pitch and tone color, or it may just be thought of as being ‘high’ or ‘low’ (in pitch) and sounding ‘bell-like’ in tone color. The single stroke evokes the bell quality (identity). And some bell-like sounds are not based on bells at all, but on the function of the bell, an example being call-tones on phones, which are said to ‘ring’, even though real bells disappeared from phones in the 1980s.


A single wave of an ocean can be considered more ambiguous. Striking the bell results in the ‘same’ sound, but a single wave may not be so easily identified as a wave on water – was it a passing car? or wind in the trees? And no two waves are identical: similar, but not quite the same. The wave is identifiable as part of a collective sound; a single breaking wave is more difficult to contextualize.

The bell has a rather clear shape (energy profile) – attack (klang) > decay – while an ocean wave is the result of the action of many smaller parts forming a larger mass structure. A small wave of, say, 15 meters in width is the action of millions of individual actions brought together at one moment when the wave breaks, which itself is not a single action. Having broken, the wave (energy) does not stop but melds into the other parts of the dying wave.

There are a number of parallels here to the sound of a piano, which has bell-like characteristics, and the mass-structure characteristics of a wave. Microphones placed over different parts of a piano will produce different qualities of sound, but at some critical distance, all of these individual qualities will have joined to become “the sound of the piano”.

An individual speaking will be heard as ‘speech’. To describe the sound of a crowd (or mob) is different. There are many individual sound sources and they merge into a ‘mass structure‘ (composite event); however, through the psychological attribute of ‘selective hearing‘ – known both as the ‘cocktail party effect‘, where one is able to listen to a specific train of speech even with very high background noise levels, and also as the ‘deaf teenager effect’, where the adolescent is unable to hear the parent, but is able to listen to a CD, watch tv and talk on the phone at the same time (selective psychological filtering) – individual ‘channels / streams‘ of sound can be perceived.

Sound complexes (multiple source / additive, op cit ‘deaf teenager effect’) exist on a continuum from multiple discrete sources (sometimes also discreet), for example a string quartet, to multiple indistinguishable sources (eg an ‘amusement center’ / video-pinball arcade). With the quartet (or even an octet) it is possible for a trained listener to hear (and follow) up to (about) 8 independent parts (lines), whereas the video-pinball machines, while each may be different, meld into a mass structure very quickly.

SEGREGATION AND STREAMING & ASA
The example with the string quartet is a matter of segregation (being able to separate the four individual lines), and then ‘streaming‘ them – so as to be able to follow each one independently. And this is possible even if the string quartet is a recording played through one loudspeaker. This segregation and streaming of musical instruments / musical lines is a feature of ‘ear-training’ in music classes. Outside of this ‘language specific’ (western european concert music) situation, however, segregation of sound streams is strongly dependent upon being able to hear with two ears. Aspects of this are dealt with in later Readings.

Another (difficult) example of segregation and streaming is applause. If there are four people clapping their hands, is it possible to hear four separate sources, and to follow each of them? How about with 8 people? 16? 32? At some point (dependent upon many variables, including the speed of the claps), the ability to segregate and stream yields to ‘mass structure‘ listening (modes).

A heavy-metal band is somewhere near the middle of this continuum, often being a mass-structure wall-of-sound. The european orchestra occupies a wide part of the continuum, relying heavily on ‘language specific’ indicators, sometimes being heard as multiple solo lines, other times as (multiple) mass structures, and a number of points in between.


    ASA — A BRIEF INTRODUCTION

    What are some of the psychoacoustic processes required to hear?

The field of Auditory Scene Analysis (ASA) (http://www.psych.mcgill.ca/labs/auditory/introASA.html) asks: given a continuous flow of acoustical energy to the ear – the wind is blowing through trees, cars pass, children play and scream, three people are having a lively discussion, church bells are ringing and someone is talking to you – how do two simple ears (and a brain) sort it out and keep all of the elements separated?

Previously, segmentation was introduced (with the International Phonetic Alphabet), and now, with segregation and streaming, three of the four main elements of ASA have been introduced. The last is integration, often almost the opposite of segregation. In listening to a low note played on a piano, most listeners hear ‘a note’ (an integrated quality). After the note has been repeated many times (10 to 100), listeners have been known to experience the sound ‘separating’ (segregating) into some of its constituent frequency components. When heard as ‘a sound’, the stimulus was perceived as an integrated whole; subsequently the listeners’ perceptual systems segregated components.

This process happens at higher levels of perception as well. Consider an alarm clock going off. While we ‘know’ that the hammer is repeatedly hitting a bell, it is heard as a mass-structure sound. The sound of the ocean presents similar ‘moments of hearing’, where the sound is heard as a mass sound, and/or as its elements.

The four elements of ASA are:

segmentation: determining the boundaries of how a continuous stream of sound is divided into units
segregation: hearing the singer and the guitar as two different sounds even though heard together
streaming: listening to the melody of the singer and the chords of the guitar as being two different lines
integration: hearing the chord played on the guitar as a chord and not as three separate notes

(As will become clear, these are attributes of the perceptual system, not of the sounds themselves.)

    PSYCHOACOUSTICS

Psychoacoustics (see other sheets) describes certain ‘individual response’ aspects of sounds.

Individuals are asked to evaluate certain things, and their responses are brought together to provide a ‘psychometric‘ response. Psychometric responses attempt to be context independent, although in ‘reality’ this is very difficult to achieve.

If it is 12 degrees and 25 people are asked if it is warm, their responses will depend upon such contextual matters as: Is it July 1st at 1:30 pm, or February 1st at 7:30 am? Is it indoors in July, or indoors in February!


A continuum from pure tone to (white) noise:

pure tone, whistle  >  bell, piano note  >  metallic complex, ‘lotsa’ notes’, rush / rumble  >  complex clang, even more notes, jumbled noise  >  (white) noise

The ‘surface features‘ of a spectrum are often described in the psychological / psychoacoustic domain as: smooth, liquid, hollow, buzzy, granular, highly textured, uneven, edgy, coarse, fine, pitted, knobby, fuzzy, silken, transparent, translucent, metallic … (check any good thesaurus for more terms borrowed from the visual domain).

It is frequently useful to break the texture into component parts (segregation), to represent the ‘channelization‘ (streaming), or ‘perceived layers‘, eg:

As they sat in the living room, through the slightly ajar window, sadly, the neighbor’s children’s sharp scream-laughs are underpinned by an oboe playing a liquid melody over the sound of a door bell, while church bells behind complement the distant roar of the ocean, like the ever/never dying sleeping breath of the once and forever dead.

There are many possible levels and types of analysis applicable here, requiring a model such as ‘auditory scene analysis / acoustic flow analysis’, which would be most useful for film and video soundtrack producers; but it is also possible to consider ‘what’ is heard by each of the people sitting in the room: the four-year-old who wants to be outside playing, the wife awaiting news of her husband missing at sea, the Catholic father having heard the bells marking the moments of life every day for the past 65 years.

It is also to be noted that there are many individual envelopes in this multi-layered scene: from the continuous nature of the ocean to the punctuations of the children’s sharp scream-laughs.

This particular description is simultaneously (and variably), for each listener:

programmatic [referring to some aspect of narrative or story],
emotional [appealing very directly to the listener, producing involuntary and unmediated responses], and
associative [that reminds me of / about].


    Questions:

1. Is there a ‘common way’ to hear sounds? Do people hear sounds in the same way? Do they interpret them in the same way? Which sounds are understood the same way by most people? Give examples of sounds which are understood in different ways by different groups of people.

2. Sounds can appear as being in the foreground or the background. When you are downtown walking along the street, which sounds do you put into which category? Which (types of) sounds will move from one category to the other? What will cause this shift? Is it voluntary?

3. In creating an ea piece, how is it possible to focus listeners’ attention on specific aspects of the sounds you are presenting? How could you create a piece in which no two people would really hear ‘the same’ things?

4. Could you create a work in which the same listener would not hear the ‘same’ things twice? How?

    5. What is the role of focus, attention and ‘directing of attention’ in listening?

    6. In what ways does hearing differ from listening?

    7. What is the maximum number of sounds you can hear at the same time? What affects this limit?


    READING — IIA

    DESCRIBING SOUND(S) II — OPPOSITIONS

    OverView
    This reading continues the examination of the methods of describing sound(s) with words. The approach is drawn from the (dualistic) model of oppositions, from the basic is / is not division, through the addition of modifying (or clarifying) parameters, towards a model of description along a continuum. This proposed framework and terminology are applicable to electroacoustic analysis and composition.

    A sometimes useful way of approaching the description of sounds is borrowed from linguistics (lexical semantics): the use of oppositions for characterizing and delimiting (setting parameters) of a term or object.

    In language one could start to characterize the word father as:

    male            not female
    having a child

    In this case, a single rather simplistic definition has been produced. Greater extension and clarity could be produced by adding refinements:

    responsible adult
    legal guardian
    loving
    etc.

    Similarly, some sounds (or families of sounds) can be given sets of parameters that draw them together, or separate them.

    noise              not noise
    loud               not loud
    static (still)     dynamic (changing)
    simple             complex
    single event       recurring event
    simple spectrum    complex spectrum
    high frequency     low frequency
    vocal              not vocal
    sung               spoken
    pitched            un-pitched
    voiced             un-voiced
    calm               agitated
    seductive          repulsive
    red                green
    synthesized        concrete
    straight           processed
    woodwind           brass
    singular           mass (group or collective)
    same               different
    etc.

    Such lists can be created by choosing terms and seeking their (logical) opposite, or by asking questions that can be answered "yes" or "no". This method is sometimes used as an example of 'Aristotelian logic'.
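Such yes/no oppositions lend themselves to a small computational sketch: treat each sound as a set of answers and compare sounds by the fraction of shared answers. The feature names and the two example sounds below are hypothetical, chosen only to mirror the list above:

```python
# Describe sounds as yes/no answers to opposition questions,
# then measure similarity as the fraction of shared answers.
FEATURES = ["noisy", "loud", "static", "pitched", "vocal", "processed"]

def similarity(a, b):
    """Fraction of opposition questions answered the same way."""
    return sum(a[f] == b[f] for f in FEATURES) / len(FEATURES)

bell   = {"noisy": False, "loud": True, "static": False,
          "pitched": True, "vocal": False, "processed": False}
scream = {"noisy": True, "loud": True, "static": False,
          "pitched": False, "vocal": True, "processed": False}

print(similarity(bell, scream))  # 3 of 6 answers agree -> 0.5
```

A hierarchy (a 'tree') can be grown the same way, by splitting a collection of sounds on one question at a time.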


    The oppositions could describe 'physical' properties, psychological states, or models of production or transformation. Frequently, a good place to start is with a large list of the parameters of the oppositions, some of which will form 'trees' or hierarchies …

    Processed –> filtered (a special case of spectral change)
                 reverbed (a special case of repetition / delay)
                 slowed down (a special case of speed change)
                 re-enveloped (a type of amplitude modulation)

    It may happen from time to time that through this process you will find sounds which are closely related, creating a family (or network) of sounds. Surface features may hide underlying commonalities; for example, spoken voices played at extremely high speed may sound like swarming insects, while slowed down 3 octaves they may sound like hungry trolls.

    It will be up to the individual as to whether grouping sounds whose surface characteristics are so different is a worthwhile categorization. Members of a family may be in opposition to each other, while sharing a common heritage at some point.

    As seen in the first list, the "oppositions" can easily include psychological parameters, 'auditory scene' characteristics, or simply acoustical ones. Sometimes families of sounds are represented as being along a continuum, or several continuums.

    Many pieces can be understood in terms of this model. Sometimes the oppositions are very wide:

    crashing/chaotic        peaceful and slowly evolving

    or very narrow:

    upward female sung glissando        downward female sung glissando

    so that what in one context may be an opposition, in another context may be a criterion for unity.

    In many cases, the oppositions chosen represent points on a continuum rather than a 'simple' opposition. In one genre of ea/cm composition loosely called 'exploration of the object', the objective is to create families of sounds closely (and not so closely) related to each other through various sonic transformation processes. Frequently, verbal language is too coarse to be able to clearly articulate the differences between: a big bell, a bigger bell, a larger bell, an even larger bell and a humungous bell, but providing end points from the largest bells in the world (in Moscow) to the minute Tinkerbell (in a child's mind). Such parametric continuums can contribute to the expression of a profile / identity / classification of a sound.
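One way to make such a continuum concrete is to give it a numeric parameter. The sketch below interpolates a bell's fundamental in log-frequency between two endpoints; the endpoint frequencies are purely illustrative, not measurements of any real bell:

```python
import math

# Position t in [0, 1] along a 'bell size' continuum:
# t = 0 is the tiniest bell, t = 1 the most enormous.
TINY_BELL_HZ = 4000.0   # hypothetical 'Tinkerbell' endpoint
HUGE_BELL_HZ = 30.0     # hypothetical 'largest bell' endpoint

def bell_fundamental(t):
    """Interpolate in log-frequency, which matches pitch perception."""
    lo, hi = math.log(HUGE_BELL_HZ), math.log(TINY_BELL_HZ)
    return math.exp(hi + t * (lo - hi))

print(round(bell_fundamental(0.0)))  # 4000
print(round(bell_fundamental(1.0)))  # 30
```

Where words run out ("an even larger bell…"), a parameter value such as t = 0.73 does not.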

    In some circumstances it is useful to define / articulate the 'negative space' – the way a stone sculptor removes the unwanted pieces of rock. In ea, an example includes acousmatic, which has (a) the non-centrality of pitch, (b) the non-centrality of metric rhythm, and (c) does not have live performers or real-time processing.


    QUESTIONS

    1. Can you create a series of questions that when answered yes or no will show the similarities / differences between some short ea pieces?

    2. Is this method of 'Aristotelian logic' applicable to human perception and interpretation? Give (counter-) examples.

    3. In the table below, where possible provide a 'similar' term, and an 'opposite' word applicable to sound. In some cases there may be many; in some cases you may decide that there are none, or they are ambiguous.

    Similar Term Opposition

    natural

    high

    regular

    thin

    noisy

    melodic

    calm

    weird

     jittery

    sad

    dry

    voiced

    incomprehensible

    gesture


    READING — III

    SIGNAL PATHS & TRANSDUCERS – LOUDSPEAKERS & MICROPHONES

    OverView
    This two-part reading starts an examination of the signal path and some of its components. It briefly looks at a number of transformations that a 'sound' (signal) may pass through on its way from being an idea, back into being an idea. A brief examination of microphones and loudspeakers covers the two main types of transducers in the studio.

    SIGNAL PATHS & CONTROLS

    The objective is for the sound to originate from a source and arrive at a receiver. A simplified view of this is:

    Idea —> Receiver

    But life isn’t so simple. Another simple view is:

    Idea (source) –> processor –> receiver


    The processor (a black box in this case) has an input and an output. It does something to the input signal and the output is used. The beauty of the 'black box' is that it functions without the user having to know why or how it does what it does. This particular model, however, has no controls.

    Another feature of this model is that the signal changed its form, and was converted from one medium to another (transduced), and then converted back.

    The improved model does this:

    Now there are two processors, and one of them has two controls.

    This basic model can be extended to describe a signal path, where an originating signal (a source) is converted into various forms of energy (transduced), processed by any number of devices, and is received.
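The black-box model can be sketched in code: each processor exposes an input, an output, and controls, and boxes chain into a signal path. A minimal Python sketch (the gain control is the simplest possible example, not a claim about any particular device):

```python
# A 'black box' processor: signal in, signal out, behavior set by
# user-facing controls -- here a single gain control.
class Processor:
    def __init__(self, gain=1.0):
        self.gain = gain  # the control on the front panel

    def process(self, samples):
        return [s * self.gain for s in samples]

# Chain two processors, as in the two-processor model above.
quiet = Processor(gain=0.5)
boost = Processor(gain=4.0)
out = boost.process(quiet.process([0.1, -0.2, 0.3]))
print(out)  # each sample scaled by a net gain of 0.5 * 4 = 2
```

The point of the abstraction is that `process` can hide a filter, a reverb, or anything else without changing how the boxes are wired together.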


    neurological activity (electro-chemical) –> muscular activity (electro-chemical) –> vibration (mechanical energy) –> vibration in air (acoustic energy) –> signal (electrical energy), with a transduction at each step.

    There are four kinds of energy used up to the point where the idea has become an electrical signal.

    electrical signal –> storage / signal processing –> transduction –> sound (acoustic energy)

    The electrical signal is processed by any number of devices, and then is converted back into acoustic energy.

    sound (acoustic energy) –> outer ear –> middle ear –> inner ear –> neurological transmission –> hearing and cognition (acoustic -> mechanical -> electro-chemical)

    Inside the head, the acoustic energy becomes electro-chemical again.


    TRANSDUCERS – SOUND TO ELECTRICITY TO SOUND

    In the studio, we deal with three basic types of information in the signal chain: sound, analog (electricity) and digital information. Transducers or converters, depending upon the particular instruments or equipment involved, transform the signal from one form to another.

    Transducer                     Input                           Output
    Microphone                     Mechanical movement (sound)     Electricity (analog signal)
    Loudspeaker                    Electricity (analog signal)     Mechanical movement (sound)
    Analog to Digital Converter    Analog (electrical) signals     Digital (electrical) signals
    Digital to Analog Converter    Digital (electrical) signals    Analog (electrical) signals

    A microphone converts sound (mechanical vibration) into electricity. A loudspeaker converts electricity into mechanical vibration (sound). There are various devices that convert information into and out of digital form: the analog to digital converter (ADC), and the digital to analog converter (DAC). (ADC and DAC are covered in another READING under sampling rates.)
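A toy model of the ADC / DAC round trip can make the conversion concrete: sample a signal at a fixed rate, quantize each sample to a small number of levels, then scale the integer codes back to analog-style values. The rate and bit depth below are arbitrary illustrations (real converters and their trade-offs are covered in the sampling-rate READING):

```python
import math

SAMPLE_RATE = 8000  # samples per second (illustrative)
BITS = 4            # 16 quantization levels (illustrative)

def adc(signal_fn, n_samples):
    """Sample and quantize; returns integer codes in [-8, 7]."""
    levels = 2 ** (BITS - 1)
    codes = []
    for n in range(n_samples):
        x = signal_fn(n / SAMPLE_RATE)                     # sample
        code = round(max(-1.0, min(1.0, x)) * levels)
        codes.append(max(-levels, min(levels - 1, code)))  # quantize
    return codes

def dac(codes):
    """Scale integer codes back to analog-style values in [-1, 1)."""
    return [c / 2 ** (BITS - 1) for c in codes]

tone = lambda t: math.sin(2 * math.pi * 1000 * t)  # 1 kHz test tone
print(adc(tone, 8))  # [0, 6, 7, 6, 0, -6, -8, -6]
```

Note how the 4-bit quantizer flattens the sine's peaks (1.0 becomes code 7, i.e. 0.875 on the way back out); more bits mean finer steps and less of this quantization error.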

    MICROPHONES

    Microphones are available in many types based upon (a) the way in which sound is transduced—condenser, dynamic, ribbon, crystal and carbon types etc; (b) specific function—concert recording, public address, telephone, underwater etc; and (c) directional characteristics—omnidirectional, directional.

    In general, dynamic microphones—which use a small magnet and coil—are quite robust; condenser microphones—which include electret condenser microphones—require a power supply; ribbon microphones are extremely delicate; and crystal and carbon microphones were used in almost all telephones until a few decades ago.

    Different applications have differing requirements for microphones: concert recording requires extremely wide and flat frequency response; public address microphones need to reproduce voice very clearly while being robust and tending to reject feedback; telephone mics must be clear, robust and very small; underwater microphones must be water-proof.

    The two basic families of directional characteristics are those that (ideally) respond equally well to sounds coming from all directions—omnidirectional, and those that respond better to sounds from one direction (or more) than others—directional.


    Within the directional category, there are two basic types, the unidirectional mic (more sensitive to one side), and the bi-directional mic, or the figure-of-eight. Each has its particular use and applications. Remember that while the microphone pick-up patterns shown below are two dimensional, in fact, microphones respond in three dimensions.

    Simplified view of directional characteristics
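The standard first-order pickup patterns can all be written with one formula, sensitivity(θ) = a + (1 − a)·cos θ, where a = 1 gives omnidirectional, a = 0.5 the cardioid (unidirectional), and a = 0 the figure-of-eight. A quick sketch:

```python
import math

def sensitivity(a, theta_deg):
    """First-order polar pattern: a + (1 - a) * cos(theta)."""
    return a + (1 - a) * math.cos(math.radians(theta_deg))

for name, a in [("omni", 1.0), ("cardioid", 0.5), ("figure-8", 0.0)]:
    front, side, rear = (sensitivity(a, d) for d in (0, 90, 180))
    print(f"{name:9} front={front:+.2f} side={side:+.2f} rear={rear:+.2f}")
```

In this idealized model the cardioid rejects sound from the rear completely, while the figure-of-eight rejects sound from the sides and picks up the rear with inverted polarity.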

    Just as different guitar amplifiers have a different 'sound' quality—a function of their unique frequency response—not all microphones respond equally well to all frequencies. It is sometimes desirable to have this characteristic as it helps 'color' and give a distinctive character to the sound. Microphones range in price from $9.95 to over $7,000. All other things being equal, quality comes with price.

    Inside the housing for the microphone, there may be as many as four capsules, which will have four signal outputs. While most microphones are monophonic, for much live recording, ‘single point’ stereo microphones are common.

    Some microphones are rugged and can be dropped (eg telephone and 'rock vocalist' mics); most are quite delicate. Avoid dropping microphones, for while some may not break, others may cost from $200 to $1000+ to repair.

    LOUDSPEAKERS

    There is no 'perfect' loudspeaker, and as with microphones, a loudspeaker's use will largely determine its preferred characteristics. Size, weight and required frequency response vary from application to application, as for example in sound reinforcement (amplification) for a concert, headphones, music in a cafeteria, recording studio monitors, bus or train station public address systems, or telephones.

    All loudspeakers change the quality of the (electrical) signal that goes to them. The amount of change that is acceptable (or desirable) is a function of many things: the intended use and the inherent limitations of the use, what the designer thinks a sound should sound like, and the amount of money that you want to spend.

    There are physical limitations for a vibrating body, which is what a loudspeaker is. Given this, loudspeakers often contain two or more different speakers inside them, each designed to handle a particular range of frequencies. A two-way speaker system will have a larger woofer to handle the low frequencies, and a tweeter to handle the highs. A three-way system will have three components, the previous two and a mid-range driver.

    It sometimes happens that each of these loudspeaker components will have its own power amplifier, in which case, the system is referred to as a bi-amped or tri-amped loudspeaker system. Sub-woofers for handling very low frequencies are common in 'home-theater' systems, and as low frequencies are not directional, the sub-woofer can be placed almost anywhere.


    Typical loudspeakers.
    (A) Small 'full-range' loudspeaker.
    (B) Two-way loudspeaker system, with a horn (tweeter)
    (C) Three-way loudspeaker, similar to a home stereo loudspeaker

    Loudspeakers cost from $9.95 to over $20,000+ a pair for your home or studio stereo system. The actual quality of the sound you hear is strongly dependent upon the environment and placement of the speakers, especially for low frequencies.

    A loudspeaker which is hung in the middle of a room radiates (more or less) in all directions, particularly at low frequencies. If the speakers seem to lack bass, putting them against a wall will improve the bass output, since the low frequencies radiate through only half a sphere. Putting a speaker in a corner will increase its bass response still more, as it will be radiating the same amount of energy through one-quarter of a sphere. Placing it at the junction of two walls and the floor will increase it even more: the bass is radiated through only one eighth of a sphere.

    You may have also noticed that closed rooms have 'more bass', or better low frequency response, than rooms with open doors or windows. (This is used to great advantage (?) by boom-box / earthquake cars.)
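The boundary effect can be put in numbers: each surface that halves the solid angle the speaker radiates into roughly doubles the low-frequency pressure, a theoretical gain of about +6 dB per halving. A sketch of that arithmetic (an idealized model; real rooms are messier):

```python
import math

def bass_boost_db(fraction_of_sphere):
    """Theoretical low-frequency pressure gain vs. free space."""
    return 20 * math.log10(1 / fraction_of_sphere)

placements = [("mid-room (full sphere)", 1.0),
              ("against a wall", 1 / 2),
              ("wall + floor", 1 / 4),
              ("corner (two walls + floor)", 1 / 8)]
for place, frac in placements:
    print(f"{place:27} +{bass_boost_db(frac):.0f} dB")
```

So in theory the progression runs 0, +6, +12, +18 dB from mid-room to a floor-level corner.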

    Because the loudspeaker is creating sound in a room, if the room has unusual acoustical characteristics, a bad echo, or is particularly absorbent at some frequencies, the sound heard will also have these characteristics. What you will hear will be the original sound, plus the coloration added by the loudspeaker, plus the unusual acoustical characteristics of the room. Well, wouldn't it be better to use headphones then?

    HEADPHONES

    Headphones are, if not carefully used, dangerous. It is very easy to produce very high sound pressure levels with very little power because the transducer is so close to the ear. It is also quite natural to turn up the volume to be able to overcome ambient noise from the outside. In general, a listener needs about 20 dB more signal than is leaking in from noise.

    The danger is that if the ambient outside or surrounding noise is 75 – 80 dB, such as streets downtown, you will need sound pressure levels of 95 – 100 dB in order to hear everything on the CD / radio. Similarly, the métro sometimes has levels even higher than that. These sound pressure levels (90 dB and higher) are dangerous for your hearing, as both long term and short term hearing loss will be an eventual result.
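The arithmetic is simple enough to sketch: take the rule of thumb from above (about 20 dB of signal over the leaking noise) and add it to the ambient level. The ambient figures below are illustrative, in the range the text cites:

```python
HEADROOM_DB = 20  # signal needed above ambient noise (rule of thumb)
HAZARD_DB = 90    # level at which hearing damage becomes a risk

def listening_level(ambient_db):
    return ambient_db + HEADROOM_DB

for place, ambient in [("quiet room", 40), ("downtown street", 78),
                       ("métro platform", 85)]:
    level = listening_level(ambient)
    flag = "  <- hazardous" if level >= HAZARD_DB else ""
    print(f"{place:16} ambient {ambient} dB -> listening at {level} dB{flag}")
```

Only the quiet room keeps the required listening level safely below the hazard threshold.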

    There are times and places for headphone listening, however the electroacoustic studio is NOT one of them. As you will or already may have experienced, it is possible to unexpectedly get very loud sounds in the studio (feedback, fast forward tape on the heads, a loose synthesizer cable that suddenly makes contact, a system beep …). You do not want these sounds right next to your ears. Such sounds have the potential to destroy the speakers. What will they do to your ears?


    There are three general types of headphones: • those that cover the entire ear; • those that sit on the ear (open to the air); and • those that fit into the ear canal. The first type have the advantage of most effectively blocking external noises, but after long periods of use may be somewhat uncomfortable. The second type often need higher sound pressure levels to be effective, and therefore are potentially dangerous. The third type, while small, may need high sound pressure levels, and through physical contact, may irritate the ear canal. The second and third types also have irregular low frequency response, since the bass depends upon the exact placement of the earphone.

    BECAUSE OF SPEAKER COLORATION, WHY NOT MIX SOUNDS WITH HEADPHONES?

    Speakers are the weakest link in the audio chain. But they do reproduce the sound in air, and you will have acoustical mixing. As shown in the diagram below, sounds from the left speaker will reach the right ear (delayed slightly), as well as reflected sounds, and similarly with the right speaker. This does not happen with headphones. To gain a sense of what something will sound like in a real room, it is necessary to hear it in a real room.

    Headphones vs. loudspeakers: direct sound; sound 'leaking' to the other ear; reflected sound.

    The headphone directs sound into one ear, while with loudspeakers in a room, each ear receives sound from both loudspeakers, and at least two reflections from nearby surfaces.

    The ear then converts the sound back into electrical impulses for the brain. There are many speculative views on trying to develop a method where the acoustical element of sound transmission would be entirely by-passed, that is, plugging the brain directly into the source (usually another brain).

    FEEDBACK

    The general concept of feedback is that the 'output' is returned to the 'input'. With positive feedback, there will be an increase in the effect; with negative feedback, there will be a reduction.

    In a situation with microphones and loudspeakers, a signal from the loudspeaker that gets back to the microphone (and amplifier) could build up into a howl, whistling or roar.
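The build-up is easy to model: each trip around the loop multiplies the signal by the loop gain. If the magnitude of the loop gain reaches 1 or more, the level grows without bound (the howl); below 1 it dies away. A minimal sketch:

```python
def loop_levels(impulse, loop_gain, trips):
    """Level of one impulse after each trip around the feedback loop."""
    levels = [impulse]
    for _ in range(trips):
        levels.append(levels[-1] * loop_gain)
    return levels

print(loop_levels(1.0, 1.2, 5))   # grows each trip: feedback howl
print(loop_levels(1.0, 0.5, 5))   # halves each trip: stable system
```

This is why the practical cure for howling is to lower the loop gain: turn down the amplifier, move the microphone, or point it away from the loudspeaker.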


    QUESTIONS

    1. Given that the acoustics and sound reproducing systems of the creator and the listener are not the same, what can be done to assure the 'integrity' of the artist's sonic conception?

    2. Is it possible to have electroacoustic pieces that do not involve sound?

    3. Is the ‘studio’ dead? What are the advantages / drawbacks to having / not having knobs, buttons and sliders on equipment?

    4. Popular music recording is all processed and assembled. As foods list the ingredients and additives, should recordings list their 'non-living' additives?


    READING — IV

    JUNGIAN MODELS FOR COMPOSITIONAL TYPES

    OverView
    This reading approaches compositional and analytic concerns by adapting a four-part model proposed by (among others) Carl Jung. The proposition is presented largely through a single diagram at the end, which may be familiar to those who have examined palmistry or astrology.

    Carl Jung, in some of his writing, postulated four general personality types which are present in everyone, with one or more in domination from birth. The individual eventually achieves balance and completion through — 'knowledge' / 'realization' / 'contact' / 'sense' — and full utilization of all of them. Jung describes them as two pairs, a rational pair and an irrational pair, and they are (roughly speaking): thought and feeling (rational), sensation and intuition (irrational).

    • Thought: which relates to the intellectual processes—the application of the mind (and analytical processes) to problems, processes and situations (thoughts, ideas, form, structure).

    • Feeling: which relates to the emotional processes—the (immediate / gut) response of the individual to problems, processes and situations (like, dislike, mood).

    • Sensation: which relates to the immediate sensory processes—the here and now of the physical sensation without reference to anything beyond the absolute, immediate present (absolute perception of stimulus).

    • Intuition: which relates to the processes of the past and future as reflected through the present—the interpretation of the present almost metaphorically (through the relationship of symbols). The present is only a set of symbols about other things (this is not printing on a piece of paper).

    It is possible to place these four points in a two-dimensional space, and apply them to electroacoustic compositional types. Just as it is very rare (if not impossible) to have a 'pure' personality type, works usually have elements of two (or more) of the compositional types.

    Let us (for the moment) slightly rename the categories for our purposes, as: structural (thought), emotional (feeling), sonorous (sensation), and metaphorical (intuition).

    Much of the work of the acousmatic tradition (new French concrete school) appears to be centered in the metaphorical and emotional domains, with strong support from the sonorous region (Dhomont, Calon, Normandeau, Harrison, Wishart). The sensation aspect is so important in the acousmatic tradition that it is often repeated that the original source of the sounds should remain hidden from the listeners' perception.

    Much algorithmic composition and computer-based synthesis appears to draw upon the structural and sonorous areas (Truax, Degazio, Xenakis). This is a good reason why these two types of composition could seem to come from different worlds.

    Much of Stockhausen's work seems to fall into all four categories, being structurally conceived, emotional in impetus, interesting and challenging in terms of sonority, and metaphorical in meaning.


    These, like all models, are not absolute realities, but potentially useful points of reference.

    [Diagram (ka 98 - ix - 14): the four types — SENSATION, EMOTION, INTELLECT, INTUITION — arranged as rational / irrational axes on a circle, overlaid with correspondences drawn from palmistry and astrology: the elements (earth, air, fire, water); the planets (Sun, Saturn, Mars, Jupiter, Moon, Mercury); the thumb (logic / will); body / mind / self / spirit / soul; conscious / subconscious; active / passive; real / artificial; objectives / others / communication. Two mottos anchor the sensation and intuition poles: 'Music is the sound' (sensory / sonorous, immediate, the self and the outside world) and 'Music is what remains when the sound is gone' (metaphorical, fundamental, the self and the inner world).]

    Of interest also may be Jung’s proposition of the anima and animus;