Lively, S., Pisoni, D., & Goldinger, S. (1994). Spoken word recognition: Research and theory. In M. A. Gernsbacher (Ed.), Handbook of Psycholinguistics (Chapter 8, pp. 265-301). San Diego: Academic Press. Either library or my office (no electronic version).
Tanenhaus, M., Spivey-Knowlton, M., Eberhard, K., & Sedivy, J. (1996). Using eye movements to study spoken language comprehension: Evidence for visually mediated incremental interpretation. In T. Inui & J. McClelland (Eds.), Attention & Performance XVI: Integration in perception and communication (pp. 457-478). Cambridge, MA: MIT Press. My office (no electronic version); this is the extended version of the 1995 Science paper.
Spoken word recognition takes place in time: words are not heard all at once but from beginning to end.
Written words are available to readers as a whole (depending on length).
Typically there is no chance to reconsider the spoken input.
In printed text we typically can re-read words or passages.
Spoken words are rarely heard in isolation but rather within longer utterances, yet there is no reliable cue in speech to mark word boundaries.
In printed text white space unambiguously marks word beginnings.
In spoken words phonemes are realized differently in different contexts (coarticulation). “Sweet girl” is often pronounced as “sweek girl” (coarticulation also occurs within syllables -> compare the tongue position in /ki/ versus /ku/).
No such variability is usually found in printed text.
First contact with the lexicon after hearing some speech
Different theories assume different forms of contact:
Spectrographic: the LAFS model (lexical access from spectra) assumes direct lexical access; the frequency (speed at which air particles vibrate) plus intensity (loudness) in a sound wave form a pattern that is recognizable by the lexicon.
Motor theory assumes the extraction of articulatory gestures (e.g., lip rounding, tongue position); the brain constructs a model of the intended articulatory movements.
Phonemic theories (or theories assuming bigger units such as the syllable) assume a prelexical representation level.
Causes certain lexical representations to “activate”
“staple” could be identified at the /p/ because no other English word would match the string of phonemes in the mental lexicon. This point in a word is called the uniqueness point. Word recognition can occur before all phonemes of the word are available.
However, quite regularly words do not become unique prior to word offset: /steI/ could not only be the word “stay”, it could also be the beginning of a longer word such as “stake”.
More phonemic input is needed to identify “stay”: /steIku/ is not a word, and there is no English word beginning with /u/, thus there must be a word boundary between /I/ and /k/ (“stay cool”).
Uniqueness point
Early uniqueness point: “strawberry” (there are no other English words beginning with /strɔːb/)
Late uniqueness point: “blackberry” (not unique at the /b/ of “berry”: blackbird, blackbeetle, …)
What affects lexical access time?
Faster responses to words with earlier uniqueness points (e.g. “strawberry” vs. “blackberry”)
Marslen-Wilson, W. (1990). Activation, competition, and frequency in lexical access. In G. Altmann (Ed.), Cognitive Models of Speech Processing (pp. 148-172). Cambridge, MA: MIT Press.
Again, this effect does not necessarily tell us anything about the organization of the mental lexicon.
High frequency words = common words (“cat, mother, house”)
Low frequency words = uncommon words (“accordion, compass”)
What affects lexical access time?
High frequency words are faster to access than low frequency words even when they’re balanced on other features (e.g. length)
E.g. pen vs. pun
Marslen-Wilson, W. (1990). Activation, competition, and frequency in lexical access. In G. Altmann (Ed.), Cognitive Models of Speech Processing (pp. 148-172). Cambridge, MA: MIT Press.
In priming studies the actual target word is preceded by the presentation of a prime word; the prime word can be related to the target in different ways.
Slowiaczek, L., & Hamburger, M. (1992). Prelexical facilitation and lexical interference in auditory word recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition, 18, 1239-1250.
What affects lexical access time?
Meyer, D. E., & Schvaneveldt, R. W. (1971). Facilitation in recognizing pairs of words: Evidence of a dependence between retrieval operations. Journal of Experimental Psychology, 90, 227-234.
What affects lexical access time?
Even though both lexical decision and word spotting are considered online paradigms, in both cases measurements are taken after the complete words have been presented.
The presentation of nonsense sequences might not be considered totally natural.
Reaction time measurements do not tell us much about the time course ofprocessing.
Eye-tracking offers the possibility to investigate processes during actual word recognition.
Tanenhaus, M., Spivey-Knowlton, M., Eberhard, K., & Sedivy, J. (1995). Integration of visual and linguistic information in spoken language comprehension. Science, 268, 1632-1634.
See also Tanenhaus, M., Spivey-Knowlton, M., Eberhard, K., & Sedivy, J. (1996). Using eye movements to study spoken language comprehension: Evidence for visually mediated incremental interpretation. In T. Inui & J. McClelland (Eds.), Attention & Performance XVI: Integration in perception and communication (pp. 457-478). Cambridge, MA: MIT Press.
Tanenhaus and colleagues used a video-based eye tracker (one sample every 33 ms).
They analyzed the video record from target word onset until target word offset.
They measured the onset time of the first saccade to the target object.
[Figure: onset latency of the first saccade to the target (400-540 ms), with competitor vs. without competitor]
Eye movements are tightly locked in time with the spoken utterance and thus can inform us about the ongoing comprehension process.
Retrieving lexical information begins prior to word offset (it takes about 200 ms to launch a programmed eye movement).
The names of possible referents in the display influenced the speed of word recognition (this argues for an incremental interpretation of the speech signal in combination with visual information).
First, it was important to show that the results are not task-specific and are not simply caused by the visual presentation of the four objects (does the observed competition effect reflect competition in real life?)
Distract from phonological overlap
Results don’t change when pictures are repeatedly shown
Participants, when asked, are not aware of phonological overlap
But more importantly, it has been shown that fixations are influenced by properties of the language system (this would not be the case if the results just reflected participants using strategies, by-passing the normal speech comprehension system)
For instance, effects of lexical frequency were replicated (Dahan, Magnuson, & Tanenhaus, 2001). We know about frequency effects from other paradigms (high-frequency words are recognized faster than low-frequency words).
Using eye tracking, it was shown that high-frequency competitors are fixated more often and earlier than low-frequency competitors.
Also, the time course and probabilities of eye movements closely correspond to response probabilities derived from TRACE simulations (Allopenna, Magnuson, & Tanenhaus, 1998).
Computationally implemented models of spoken-word recognition exist (e.g., TRACE, Shortlist).
Such models are based on ample empirical results and can be used to simulate and predict (quantitatively) human behavior during spoken-word recognition.
TRACE was used to calculate predictions of response probabilities for a certain set of items.
The same items were presented to participants during an eye-tracking study.
The close match between predicted and observed fixation patterns allowed the following linking hypothesis (link between lexical activation and eye movements):
The activation of the name of a picture determines the probability that a subject will shift attention to that picture and thus make a saccadic eye movement to fixate it.
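This linking hypothesis can be made concrete: Allopenna, Magnuson, and Tanenhaus (1998) converted lexical activations into predicted fixation probabilities with the Luce choice rule. A rough sketch of that mapping follows; the activation values, the item names, and the value of the scaling constant k are all made up for illustration (in the original study k was a parameter fit to the data):

```python
import math

def fixation_probabilities(activations: dict[str, float], k: float = 7.0) -> dict[str, float]:
    """Map lexical activations onto predicted fixation probabilities via the
    Luce choice rule: each activation a_i is transformed to exp(k * a_i) and
    normalized over the items in the display. The scaling constant k is a
    free parameter; the default here is purely illustrative."""
    strengths = {name: math.exp(k * a) for name, a in activations.items()}
    total = sum(strengths.values())
    return {name: s / total for name, s in strengths.items()}

# Hypothetical activations early in the word: the target and an onset
# competitor are equally active, so predicted fixations split between them.
early = fixation_probabilities({"beaker": 0.5, "beetle": 0.5,
                                "carriage": 0.1, "dolphin": 0.1})

# After the disambiguating phonemes: the target dominates, the competitor fades.
late = fixation_probabilities({"beaker": 0.9, "beetle": 0.2,
                               "carriage": 0.0, "dolphin": 0.0})
```

On this toy input the early probabilities are identical for target and onset competitor, mirroring the early competition effect, while the late probabilities are dominated by the target, mirroring the resolution of the competition as more of the word is heard.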