Phonetic Processing in Children With Cochlear Implants: An Auditory Event-Related Potentials Study

Yael Henkin,1,2 Paul R. Kileny,3 Minka Hildesheimer,1,2 and Liat Kishon-Rabin1

Objectives: To investigate the effect of increasing acoustic-phonetic difficulty in children with cochlear implants (CI) by means of auditory event-related potentials (AERPs).

Design: AERPs were recorded from a group of ten 9- to 14-year-old prelingually deafened children who exhibited open-set speech recognition and had used the Nucleus 22 CI for at least 5 years. AERPs were recorded in sound field while the children performed oddball discrimination tasks of increasing acoustic-phonetic demand. The tasks consisted of pairs of naturally produced stimuli that differed by one phonetic feature: vowel place (/ki/ versus /ku/), vowel height (/ki/ versus /ke/), voicing (/ka/ versus /ga/), and place of articulation (/ka/ versus /ta/). Using a repeated-measures design, the effect of increasing acoustic-phonetic difficulty was evaluated on P3 latency, amplitude, and scalp distribution, as well as on the simultaneously obtained behavioral measures: performance accuracy and reaction time.

Results: AERPs elicited within approximately 350 msec poststimulus onset were contaminated by the CI stimulus artifact, enabling reliable identification of the P3 component only. Increasing acoustic-phonetic difficulty was manifested in all measures in a hierarchical manner: P3 latency and reaction time increased, whereas P3 amplitude and performance accuracy decreased. The correlations between behavioral and electrophysiological measures, however, were not significant. Further support for P3 sensitivity to increasing acoustic-phonetic demand was its absence in four of the 10 children, but only in the most difficult place of articulation task. P3 amplitude was maximal at the midline parietal site, with equal amplitudes over the right and left scalp regardless of side of implant.
Conclusions: The results underscore the significant value of the P3 potential as a sensitive neural index of speech-sound processing in children with CI. The similar hierarchy of acoustic-phonetic demand manifested in both behavioral and electrophysiological measures suggests that speech perception performance relates to neurophysiologic responses at cortical levels of the auditory system. Thus, recording the P3 potential to distinct phonetic contrasts may be useful for studying accessibility and neural encoding at the cortical level in CI recipients.

(Ear & Hearing 2008;29:239-249)

INTRODUCTION

Auditory event-related potentials (AERPs) are measures of electrical brain activity that reflect high-level cognitive processing (Hillyard & Kutas, 1983; Kutas & Dale, 1997; Polich & Herbst, 2000; Stapells, 2002). AERPs signal perceptually or behaviorally relevant differences in neural activity evoked by a specific stimulus versus previous stimuli. Some AERP components are known to reflect the presence of audible stimulus energy (e.g., N1), whereas others are associated with stimulus discrimination [e.g., mismatch negativity (MMN), P3] (Eggermont & Ponton, 2002). Previous studies have shown that AERPs to speech stimuli provide insight into the brain processes underlying the perception of speech (Henkin et al., 2002; Kaplan-Neeman et al., 2006; Kraus, 1999; Martin et al., 1997). As such, they are well suited to explore how the auditory system makes use of the information provided by a cochlear implant (CI) and how it adapts to process different stimuli. Furthermore, with the significant reduction in age at implantation, such electrophysiological measures, which do not necessarily require cooperation from the infant or child, have a potentially significant role in the evaluation of speech perception.
1 Department of Communication Disorders, Sackler Faculty of Medicine, Tel-Aviv University, Tel Aviv, Israel; 2 Speech and Hearing Center, Sheba Medical Center, Tel Hashomer, Israel; and 3 Division of Audiology and Electrophysiology, University of Michigan Health System, Ann Arbor, Michigan.

0196/0202/08/2902-0239/0 • Ear & Hearing • Copyright © 2008 by Lippincott Williams & Wilkins • Printed in the U.S.A.

In adult CI recipients, the MMN and P3 potentials have been studied using pulsatile stimuli presented directly through the implant (Ponton & Don, 1995; Wable et al., 2000) or tones presented acoustically (Beynon et al., 2002; Kelly et al., 2005; Kileny et al., 1997; Okusa et al., 1999). These studies showed that as target discriminability decreased, the P3 latency of CI recipients increased (Kelly et al., 2005; Okusa et al., 1999), presumably reflecting a decreased number of activated neurons per unit of time and hence diminished neural synchrony. The parietal P3 (P3b) potential,


elicited by the traditional active two-stimulus target discrimination task, has been shown to be generated by posterior bilateral temporal and temporoparietal lobe regions in addition to lateral prefrontal areas in normal-hearing (NH) subjects (Calhoun et al., 2006; Polich, 2007). To date, the putative neural generators of the P3 potential in CI recipients have not been reported.

While studies of AERPs using tones provide insight into auditory processing of simple stimuli, it is difficult to generalize this information to more complex auditory stimuli such as speech. In adult CI recipients, speech stimuli have been used to elicit late auditory potentials (i.e., the acoustic change complex) (Friesen & Tremblay, 2006; Martin, 2007) and AERPs such as the P3 and MMN (Beynon et al., 2005; Groenen et al., 2001; Kraus et al., 1993; Micco et al., 1995). Using one specific speech task (/da/ versus /di/), it was shown that the P3 and MMN of "good" CI performers did not differ from those of NH subjects, whereas the P3 and MMN were degraded or absent in "poor" CI performers (Micco et al., 1995). Furthermore, while a vowel discrimination task elicited the P3 potential in the majority of cases, a consonant discrimination task yielded a prolonged or absent P3 in adult CI recipients (Beynon et al., 2005; Groenen et al., 2001). This accords with previously published data suggesting that P3 elicited by difficult-to-discriminate contrasts has a longer latency than P3 elicited by easy-to-discriminate contrasts (Henkin et al., 2002; Kutas et al., 1977; Maiste et al., 1995).

The use of AERPs in pediatric CI populations has been limited (Beynon et al., 2002; Kileny et al., 1997; Singh et al., 2004). Beynon et al. (2002) generated synthetic stimuli and used them in two speech discrimination tasks [vowel place (/i/ versus /a/) and place of articulation (/ba/ versus /da/)] to elicit AERPs in a group of children using the Nucleus 22 CI. In children with poor speech recognition (mean open-set monosyllabic word score of 20%), P3 was either prolonged or absent. In contrast, in children with good speech recognition (mean open-set monosyllabic word score of 76%), P3 latencies were comparable with those of NH children. Recently, Singh et al. (2004) investigated the correlation between the MMN and behavioral speech perception abilities of a group of children implanted with the Nucleus 22 or 24 CIs. A synthetic pair of stimuli, /ba/ and /da/, was used in a passive oddball task. Results indicated that the MMN was elicited in 80 to 85% of star performers, but identified in only 16 to 20% of the poor performers. The authors concluded that, given that the MMN is an indicator of normal auditory sensory memory and auditory discrimination, which are prerequisites for normal central auditory processing, their results suggest a sensitivity and specificity of 80 to 85% in objectively detecting good central processing skills using the MMN. Another study, however, found no significant correlation between MMN latency and amplitude and speech perception measures (Wable et al., 2000).

Although useful for the precise control of temporal and spectral characteristics of the stimulus, the synthetic speech stimuli that have been used in most studies are only an approximation of natural speech. The use of natural speech stimuli, however, may lead to more accurate inferences regarding the process of speech perception in children with CI. To date, AERPs have not been used in children with CI to investigate the perception of natural phonetic contrasts that differ by the salience of their acoustic cues (i.e., temporal, spectral, or spectrotemporal). The main goal of the present study was therefore to characterize AERPs, elicited by a hierarchical set of oddball discrimination tasks consisting of natural phonetic stimuli, in a group of children with CI. Theoretically, such data may shed light on the neural correlates of speech-sound processing in hearing-impaired individuals who gained some of their auditory capabilities via the CI. If AERPs reflect auditory discrimination of small phonetic differences, then such data can provide information regarding accessibility and neural encoding at the cortical level of the auditory system of children with CI.

MATERIALS AND METHODS

Subjects

Ten prelingually deafened children participated in the study. Individual background information is shown in Table 1. All had profound hearing loss; before implantation, all used high-power hearing aids bilaterally and scored category 1 (no pattern perception) on the Early Speech Perception test (Moog & Geers, 1990) adapted to Hebrew. All children were implanted with the Nucleus 22 CI and used the Spectra speech processor, programmed with the SPEAK strategy (frequency allocation Table 7 with 6 Maxima), in bipolar +1 mode of stimulation. Eight children were implanted in the left ear and two in the right ear, and all had rejected the use of a contralateral hearing aid because of limited benefit. The mean age at implantation was 4.5 yrs (range, 2.5-6.3), the mean age at testing was 11.5 yrs (range, 9.7-13.2), and the mean duration of implant use was 7.4 yrs (range, 5-9.5). All were healthy children with no medical history other than deafness, were considered good performers, exhibited open-set word recognition, and were mainstreamed in oral rehabilitation programs.



Stimuli

A hierarchical set of four tasks was constructed to elicit AERPs to increasing acoustic-phonetic difficulty (Kishon-Rabin et al., 2002). The tasks consisted of naturally spoken nonmeaningful consonant-vowel (CV) syllables that differed by one speech contrast. Two of the tasks required vowel discrimination and consisted of the CV pairs /ki/ and /ku/, which differed by vowel place, and /ki/ and /ke/, which differed by vowel height. The remaining two tasks required consonant discrimination and consisted of the CV pairs /ka/ and /ga/, which differed by the voicing of the initial plosive, and /ka/ and /ta/, which differed by the place of articulation of the initial plosive. The selection of stimuli was based on a previous study of auditory phonetic contrast perception in Hebrew-speaking children with CI, which showed that perception of vowel place yielded the highest scores; place of articulation was the most difficult, yielding the lowest scores; vowel height perception was reduced compared with vowel place; and voicing scores, although difficult to perceive postimplantation, were predicted to plateau at 90% after 9 yrs of implant use (Kishon-Rabin et al., 2002).

An adult female produced all speech stimuli. They were digitally recorded at a 44 kHz sampling rate and 16-bit quantization in a soundproof room via a JVC MV40 microphone, using Sound-Forge software (Version 4.5a). From a very large sample of naturally produced stimuli, the final set of syllables was selected so that all had vowels and consonants similar in intensity and duration. Syllables were normalized using the RMS function of the Sound-Forge software and were equal in duration (250 msec). Stimuli were presented at 75 dB SPL every 2 sec through a loudspeaker placed 1 m in front of the CI recipient.
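The RMS equalization step can be sketched as follows. This is an illustrative reconstruction in NumPy, not the authors' Sound-Forge procedure; the target RMS level and the stand-in sine-wave "syllables" are arbitrary assumptions.

```python
import numpy as np

def rms(signal):
    """Root-mean-square level of a waveform."""
    return float(np.sqrt(np.mean(np.square(signal))))

def normalize_rms(signal, target_rms=0.1):
    """Scale a waveform so that its RMS level equals target_rms."""
    current = rms(signal)
    if current == 0.0:
        raise ValueError("cannot normalize a silent signal")
    return signal * (target_rms / current)

# Two hypothetical 250-msec "syllables" at the paper's 44 kHz sampling rate
fs = 44000
t = np.arange(int(0.25 * fs)) / fs
loud = 0.30 * np.sin(2 * np.pi * 700 * t)   # louder stand-in syllable
soft = 0.05 * np.sin(2 * np.pi * 1800 * t)  # softer stand-in syllable

loud_eq = normalize_rms(loud)
soft_eq = normalize_rms(soft)
```

After normalization the two tokens have identical RMS levels, so any remaining perceptual difference is carried by the phonetic contrast rather than by overall intensity.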

AERP Recordings

Brain electrical activity was recorded from 19 sites on the scalp using Electro-Cap tin electrodes. The electrodes were placed according to the 10-20 system (Jasper, 1958) at the following locations: FP1, FP2, F7, F3, Fz, F4, F8, T3, C3, Cz, C4, T4, T5, P3, Pz, P4, T6, O1, and O2, referenced to the chin. A ground electrode was placed on the mastoid contralateral to the CI. Eye movements were monitored by electrodes above and below the right eye. The impedance measured for each electrode was lower than 5 kOhm.

A Brain Performance Measurement System (Orgil™, Ein-Ayala, Israel) sampled brain electrical activity at 256 Hz. The resolution of the analog-to-digital converter was 0.1 μV per bit, and the bandpass filter was 0.1 to 100 Hz.

Procedure

After electrode application, subjects were seated in a comfortable armchair in a sound-treated room. They were instructed to fixate their eyes on a colored spot on the wall in front of them while listening to the prerecorded stimuli, and to avoid excessive eye movements. All sets of stimuli were presented in an "oddball" paradigm. The target syllables were /ki/ and /ka/ in the vowel and consonant discrimination tasks, respectively. All tasks consisted of 200 stimuli with a target probability of 20%. Children were instructed to respond to the infrequent events (targets) with a button press, placing equal emphasis on accuracy and speed.
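A 200-trial sequence with 20% targets can be sketched as follows. This is a hypothetical reconstruction: the paper does not describe how stimulus order was generated, so the seed and the unconstrained shuffle are assumptions.

```python
import random

def make_oddball_sequence(n_trials=200, target_prob=0.2,
                          target="/ki/", standard="/ku/", seed=7):
    """Return a randomized trial list with the given target probability."""
    n_targets = round(n_trials * target_prob)
    trials = [target] * n_targets + [standard] * (n_trials - n_targets)
    random.Random(seed).shuffle(trials)  # reproducible randomization
    return trials

sequence = make_oddball_sequence()
```

For the vowel place task, for example, this yields 40 /ki/ targets interspersed among 160 /ku/ standards.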

Before testing, it was confirmed that the child understood the requirements of the task. The presentation order of the different sets of stimuli was counterbalanced across subjects to prevent an effect of fatigue on the experimental measures. Brief intermissions were provided between stimulus sets. The recording session lasted 2 hrs. In addition to AERP recordings, reaction time was recorded and performance accuracy was measured.

The Institutional Review Board of the Sheba Medical Center approved the experiment, and parents signed an informed consent form for the participation of their children in the study.

TABLE 1. Background information of the 10 cochlear implant recipients

Subject   Age at implantation (yrs)   Implant use (yrs)   Age at testing (yrs)   Thresholds (dB HL)*   Word recognition (%)†   Etiology of deafness
1         2.5                         7.2                 9.7                    16.7                  75                      Waardenburg
2         3                           7.3                 10.3                   15                    75                      Usher
3         3                           9.5                 12.5                   16.7                  45                      Waardenburg
4         3.7                         6.5                 10.2                   21.7                  55                      Genetic
5         4                           7.4                 11.4                   30                    75                      Unknown
6         4.1                         8.5                 12.6                   25                    60                      Unknown
7         4.2                         9                   13.2                   23.3                  75                      Genetic
8         5.5                         5.2                 10.7                   15                    35                      Unknown
9         6.2                         7                   13.2                   28.3                  60                      Unknown
10        6.3                         5.1                 11.4                   28.3                  55                      Unknown

* Average thresholds at 0.5, 1, and 2 kHz with the CI.
† Open-set one-syllable word recognition (AB).

Data Analysis

As the active two-stimulus oddball discrimination task was used, our analysis focused on the P3 (P3b) potential. Record-by-record inspection was used to identify and exclude single records contaminated with CI or myogenic artifacts during the P3 time window. An eye movement correction procedure was performed off-line using the Eye Movement Correction algorithm of the system (for details see Henkin et al., 2002). After baseline adjustment and selective averaging of single records elicited by the target stimulus (the minimal number of single records averaged was 30), the latency and amplitude of the P3 potential were measured for each subject and task. Peak latency was defined as the time from stimulus onset to the most positive peak in a preselected time window of 350 to 800 msec. P3 amplitude was defined as the voltage difference between the peak and the average voltage over the 200 msec preceding stimulus onset.
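The peak-picking rule described above can be sketched as follows. The 256 Hz rate, the 350 to 800 msec window, and the 200 msec baseline come from the paper; the synthetic averaged epoch is an illustrative assumption.

```python
import numpy as np

def measure_p3(waveform, fs=256, window=(0.350, 0.800), baseline=0.200):
    """Peak latency (sec) and baseline-referenced amplitude of P3.

    `waveform` is a 1-D averaged epoch whose first `baseline` seconds
    precede stimulus onset, matching the paper's 200-msec baseline.
    """
    onset = int(baseline * fs)                   # sample index of stimulus onset
    lo = onset + int(window[0] * fs)             # 350 msec post-onset
    hi = onset + int(window[1] * fs)             # 800 msec post-onset
    peak = lo + int(np.argmax(waveform[lo:hi]))  # most positive point in window
    latency = (peak - onset) / fs
    amplitude = waveform[peak] - waveform[:onset].mean()
    return latency, amplitude

# Synthetic averaged epoch: flat baseline plus a positive peak near 600 msec
fs = 256
t = np.arange(-0.2, 1.0, 1 / fs)
epoch = 12.0 * np.exp(-((t - 0.6) ** 2) / (2 * 0.05 ** 2))
lat, amp = measure_p3(epoch)
```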

Reaction time analysis included record-by-record analysis of behavioral responses (button presses) to the predefined target stimulus. Reaction time was defined as the time (in milliseconds) from stimulus onset to the button press. A correct response was defined as a button press to the target stimulus within a predefined epoch of 0 to 1500 msec poststimulus onset. The percentage of correct responses was calculated and defined as performance accuracy. In addition, the percentage of false positive responses (i.e., button presses to the nontarget stimulus) was calculated.
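These scoring rules can be sketched as follows; the event-list representation is a hypothetical format, not the recording system's actual output.

```python
def score_run(events):
    """Score one oddball run.

    `events` is a list of (is_target, press_time_ms_or_None) tuples, with
    press time measured from stimulus onset.  A hit is a press to a target
    within 0-1500 msec, as in the paper; a press to a nontarget counts as
    a false positive.
    """
    hits, rts, false_pos = 0, [], 0
    n_targets = sum(1 for is_t, _ in events if is_t)
    n_nontargets = len(events) - n_targets
    for is_target, press in events:
        if is_target and press is not None and 0 <= press <= 1500:
            hits += 1
            rts.append(press)
        elif not is_target and press is not None:
            false_pos += 1
    accuracy = 100.0 * hits / n_targets          # performance accuracy (%)
    fp_rate = 100.0 * false_pos / n_nontargets   # false positive rate (%)
    mean_rt = sum(rts) / len(rts) if rts else None
    return accuracy, fp_rate, mean_rt

# Tiny hypothetical run: 3 targets, 5 nontargets
demo = [(True, 620.0), (True, None), (False, None), (False, 900.0),
        (True, 700.0), (False, None), (False, None), (False, None)]
acc, fp, rt = score_run(demo)
```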

Multivariate analysis of variance (ANOVA) for repeated measures, with four levels of task (vowel place, voicing, vowel height, and place of articulation), three levels of laterality [right (F4, C4, and P4), midline (Fz, Cz, and Pz), and left (F3, C3, and P3)], and three levels of frontality [frontal (F4, Fz, and F3), central (C4, Cz, and C3), and parietal (P4, Pz, and P3)], was performed on the latency and amplitude of the P3 component. Analysis of contrasts was then used to detect significant differences among task, laterality, and frontality levels. An additional multivariate ANOVA for repeated measures was performed on the behavioral measures, reaction time and performance accuracy.

The relationships between P3 latency/amplitude and the behavioral measures [performance accuracy, reaction time, and open-set one-syllable word recognition scores (AB test)] were studied using Pearson correlation coefficient analysis.

RESULTS

Behavioral Measures

The means (±1 SE) of performance accuracy (%), false positive rates (%), and reaction times (msec) in the different tasks are presented in Table 2. A significant main effect of task on the arcsine-transformed performance accuracy data was found [F(3,7) = 7.2, p = 0.015]. Contrast analysis revealed that scores in the vowel place task were significantly higher than those in all other tasks (p < 0.015). Scores in the place of articulation task were significantly poorer than those in all other tasks (p < 0.025). No significant differences were found between the voicing and vowel height scores. Because of the nature of the oddball discrimination task, in which subjects are instructed to press a button only to the target stimuli (and to ignore the nontarget stimuli), the percentage of false positives/false alarms was considerably low. A hierarchical trend was evident, however: the vowel place task yielded the lowest rates (0.25%), and the place of articulation task yielded the highest rates (11.2%).
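The arcsine transform applied to the accuracy data is commonly the arcsine-square-root transform of the proportion, a variance-stabilizing step used before ANOVA on percentage scores; the exact variant used in the paper is not specified, so the form below is an assumption.

```python
import math

def arcsine_transform(percent):
    """Arcsine-square-root transform of a percentage score (0-100).

    Assumed form: asin(sqrt(p)); the paper does not state which variant
    of the arcsine transform was applied.
    """
    proportion = percent / 100.0
    return math.asin(math.sqrt(proportion))

# Task means from Table 2 (performance accuracy, %)
means = {"vowel place": 99.4, "voicing": 90.2,
         "vowel height": 90.2, "place of articulation": 69.3}
transformed = {task: arcsine_transform(m) for task, m in means.items()}
```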

A significant main effect of task on reaction time was also found [F(3,7) = 18.9, p < 0.001]. Contrast analyses revealed that reaction time in the vowel place task was significantly shorter than that in all other tasks (p < 0.01). Reaction time in the place of articulation task was significantly longer than that in all other tasks (p < 0.01). No significant difference was found between the reaction times in the voicing and vowel height tasks.

TABLE 2. Reaction time, performance accuracy, and false positive results in the phonetic tasks

Behavioral measure          Vowel place   Voicing        Vowel height   Place of articulation
Reaction time (msec)        630.7 (24)    760.3 (41.2)   781.1 (26.4)   859.9 (36.4)
Performance accuracy (%)*   99.4 (0.5)    90.2 (2.4)     90.2 (3.5)     69.3 (7.3)
False positive (%)†         0.25 (0.4)    1.3 (2.6)      2.8 (4.5)      11.8 (11.2)

All values are given as mean (SE).
* Percentage of correct responses to target stimuli (n = 40).
† Percentage of false positive responses (button presses to nontarget stimuli) (n = 160).

AERPs

The recording of AERPs was confounded by contamination of the responses by the CI artifact. As the CI coil functions as an antenna for data and power transmission, it generates an electromagnetic field (i.e., the CI artifact). Figure 1 depicts the CI artifact from a single subject to the target stimulus /ka/ in the place of articulation task. Note that for all presented waveforms, positivity is plotted downward and negativity upward. The artifact was evident in all recording electrodes and resulted in saturation of electrode T5, which is in close vicinity to the CI (left). Although variation among subjects was evident, the midline electrode sites were less contaminated in some cases. The artifact was synchronized with the auditory stimulus, and its duration was stimulus dependent (stimulus duration, 250 msec). As a result, it overlapped and masked AERPs (i.e., P1, N1, P2, and N2) elicited within approximately 350 msec poststimulus onset. Different procedures aimed at diminishing the CI artifact, such as filtering, subtraction, and correction by linear regression using an estimate of the artifact from an electrode placed under the CI coil during the recording session, yielded unsatisfactory results. Consequently, only the P3 could be identified reliably, in a time window free of artifact.
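The regression-based correction the authors attempted can be sketched in minimal form. The single-reference, gain-only formulation below is an assumed reading of their description, with synthetic data standing in for real recordings; in practice the artifact's nonstationarity and channel saturation are what made such corrections unsatisfactory.

```python
import numpy as np

def regress_out_artifact(eeg, artifact_ref):
    """Remove the component of `eeg` linearly predicted by `artifact_ref`.

    Ordinary least squares estimates a single gain relating the artifact
    reference (e.g., an electrode placed under the CI coil) to a scalp
    channel; the scaled reference is then subtracted.
    """
    eeg = np.asarray(eeg, dtype=float)
    ref = np.asarray(artifact_ref, dtype=float)
    gain = np.dot(eeg, ref) / np.dot(ref, ref)  # OLS slope, no intercept
    return eeg - gain * ref

# Synthetic example: a neural signal plus a scaled copy of the artifact
rng = np.random.default_rng(0)
artifact = rng.standard_normal(1024)
neural = np.sin(np.linspace(0, 8 * np.pi, 1024))
contaminated = neural + 3.0 * artifact
cleaned = regress_out_artifact(contaminated, artifact)
```

On this idealized example the regression recovers the neural signal almost exactly; the point of the sketch is the formulation, not a claim that it worked on the real data.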

The P3 component was elicited in all children in all tasks, except for the place of articulation task, in which it was absent in four of the children.

P3 Latency

A significant main effect of task on P3 latency was found [F(3,7) = 13.8, p = 0.002]. Figure 2 shows that as acoustic-phonetic difficulty increased, P3 latency increased. Contrast analysis revealed that P3 latency in the vowel place task was significantly shorter than the latencies in all other tasks (p < 0.002). P3 latency in the voicing task was shorter than that in the vowel height and place of articulation tasks, with borderline significant probability values (p = 0.06 and p = 0.07, respectively). No significant differences were found between the P3 latencies elicited by the vowel height and place of articulation tasks.

Fig. 1. Averaged waveform from a CI recipient to target stimuli (/ka/) in the place of articulation task. Displayed is a large stimulus artifact from the CI device, evident in all recording electrodes (electrode T5 not shown due to saturation), including the eye movement channel (X2), where the artifact was inverted in polarity. The waveform recorded from Cz is enlarged and shows the duration of the artifact with the arrow above the abscissa (approximately 350 msec). Stimulus onset is indicated by the upward arrow, and stimulus duration was 250 msec. Positivity is plotted downward, and the P3 potential is depicted at 703 msec.

Fig. 2. Mean (±SE) P3 latency at the Pz electrode in the different phonetic tasks: vowel place, 542.9 msec; voicing, 644.0 msec; vowel height, 690.6 msec; place of articulation, 718.3 msec.

P3 Amplitude

A significant main effect of task on P3 amplitude was found [F(3,7) = 4.4, p = 0.048]. Figure 3 shows that as acoustic-phonetic difficulty increased, P3 amplitude at the Pz electrode (where amplitude was maximal) gradually decreased. Contrast analysis revealed that P3 amplitude in the vowel place task was significantly greater than that in the place of articulation task (p = 0.007). Furthermore, while P3 amplitude in the vowel place task was greater than that in the vowel height task (borderline significant effect, p = 0.08), it was similar to that in the voicing task. P3 amplitude in the voicing task was significantly greater than that in the place of articulation task (p = 0.02), yet was similar to that in the vowel height task.

The effect of increasing acoustic-phonetic difficulty on P3 latency and amplitude is shown in Figure 4, which presents superimposed waveforms of a single subject elicited by the vowel place and vowel height tasks (4A) and by the voicing and place of articulation tasks (4B).

P3 Scalp Distribution

Figure 5 depicts the significant main effect of frontality on P3 amplitude at the midline electrodes [F(2,35) = 72.2, p < 0.0001]. Shown are P3 amplitudes at the midline electrodes (Fz, Cz, and Pz). ANOVA revealed greater P3 amplitudes at the parietal electrodes (P3, Pz, and P4) compared with the central ones (C3, Cz, and C4) (p = 0.0004), which, in turn, were greater than those at the frontal ones (F3, Fz, and F4) (p < 0.0001). The frontality by task interaction was not significant.

Figure 6 shows the significant main effect of laterality on P3 amplitude [F(2,35) = 12.2, p < 0.0001]. Shown are P3 amplitudes at the parietal sites (P3, Pz, and P4). ANOVA revealed greater amplitudes at the midline electrodes (Fz, Cz, and Pz) compared with the right (F4, C4, and P4) (p = 0.03) and left (F3, C3, and P3) (p < 0.0001) electrodes. No significant differences were found between P3 amplitudes at the right and left electrodes. The laterality by task interaction was not significant.

Relationship Between Behavioral Measures and P3 Latency/Amplitude

Pearson correlation coefficient analysis was used to study the relationship between the behavioral measures (i.e., performance accuracy and reaction time) and P3 latency/amplitude. When calculated for each of the phonetic tasks, no significant correlations were found. Furthermore, the correlations between P3 latency/amplitude and speech perception scores, as measured by the (Hebrew) AB test, were not significant.

DISCUSSION

The results of the present study demonstrated that in prelingually deafened children with CI, increasing acoustic-phonetic difficulty was manifested in all measures in a hierarchical manner: P3 latency and reaction time increased, whereas P3 amplitude and performance accuracy decreased. Further support for P3 sensitivity to increasing acoustic-phonetic demand was its absence in four of the 10 children, but only in the most difficult place of articulation task. Thus, with the reduction in the accessibility of essential temporal and/or spectral cues, our CI recipients exhibited delayed and less efficient (i.e., less synchronized) auditory processing. These results underscore the significant impact of increasing acoustic-phonetic difficulty on the neural activity underlying the processes involved in the discrimination of natural speech stimuli in prelingually deafened children using CI. Although the present study was not designed to compare AERP data of CI recipients and NH children, data regarding the perception of Hebrew phonetic contrasts show that already at an early age (4 yrs) performance levels of NH children are at 100% for all phonetic contrasts (Kishon-Rabin et al., 2002). Furthermore, converging electrophysiological data show that increasing acoustic-phonetic demand did not alter the latency, amplitude, and scalp distribution of AERPs of NH children (Henkin et al., 2002; similar age range) and adults (Henkin, Reference Note 1; same stimuli as those used in the present study).

Fig. 3. Mean (±SE) P3 amplitude at the Pz electrode in the different phonetic tasks (plotted values: 11.09, 13.25, 14.64, and 18.43 μV).

P3 elicited by the vowel place task was the shortest and was prolonged by 101, 147, and 175 msec for voicing, vowel height, and place of articulation, respectively. It is suggested that as the discrimination task became more difficult, speed of processing slowed in our CI recipients. In the present context, it is plausible that P3 latency reflects the subtlety and accessibility of the acoustic cues to the perception of the significant phonetic contrasts and, subsequently, the listener's ease of listening. Previous studies have also provided support for P3 latency sensitivity to the amount of information available on which a subject is required to base a conscious phonetic decision in the case of sensorineural hearing loss (Oates, et al., 2002) and in the presence of background noise (Kaplan-Neeman, et al., 2006). The results of the present study are in agreement with the classical reports (Hillyard & Kutas, 1983; Kutas, et al., 1977) as well as with more recent ones (Henkin, et al., 2002; Kutas & Dale, 1997; Polich, 2007; Polich & Herbst, 2000) implying that P3 latency is an index of stimulus classification speed, representing the closure or completion of the stimulus evaluation process that precedes the overt behavioral response (Verleger, 1997).

Fig. 4. Superimposition of averaged waveforms to target stimuli of a single subject elicited by (A) the vowel place (gray line) and vowel height (black line) discrimination tasks, and (B) the voicing (gray line) and place of articulation (black line) discrimination tasks. All recording electrodes are shown (including eye movement channel, X1), and positivity is illustrated downward. The waveform recorded from Pz is enlarged and shows the P3 latency prolongation and reduced amplitude in the vowel height (A) and place of articulation (B) tasks. Also depicted are stimulus onset (indicated by an upward arrow) and the CI stimulus artifact.

The finding of a progressive reduction in P3 amplitude following the hierarchy of contrast perception found for P3 latency (i.e., P3 amplitude was largest in the vowel place task and decreased by 3.8, 5.2, and 7.3 µV in the voicing, vowel height, and place of articulation tasks, respectively) is in agreement with previously published data showing that as acoustic and/or phonetic differences between target and nontarget were minimized, P3 amplitude decreased (Henkin, et al., 2002; Okusa, et al., 1999; Polich, 2007). P3 amplitude is considered an index of central nervous system activity that reflects the processing of incoming information when it is incorporated into memory representations of the stimulus in the context in which it occurs. Changes in P3 amplitude are therefore believed to reflect the degree or quality (i.e., synchronization) with which that information is processed (Polich, 2007; Polich & Herbst, 2000).

Further support for P3 sensitivity to increasing acoustic-phonetic demand was its absence in four of the 10 children, but only in the most difficult place of articulation task. The absence of P3, resulting from the central auditory system's inability to generate a synchronous response, may reflect failure of the discrimination process. The results of the simultaneously obtained behavioral measures may provide some support for this notion. Specifically, the performance of two of the four children for whom P3 was not detected/generated was just about chance level (47.5 and 50%, compared with the mean performance accuracy of 71% for children with measurable P3), and they exhibited relatively high false positive rates (20 and 35%, compared with the mean false positive rate of 5.4% for children with measurable P3). In addition, their reaction times were prolonged (989 and 1024 msec, compared with the mean reaction time of 839 msec for children with measurable P3). The behavioral results of the other two children (performance accuracy 92.5 and 65%; false positive rates 25 and 5.6%; reaction times 735.6 and 817 msec) do not coincide with the notion that P3 was absent as a result of failure of the discrimination process, but may rather reflect a nonsynchronized and therefore undetectable neural response. It should further be noted that greater individual variance in reaction time was evident in all four children in whom P3 was absent (mean standard deviation, 282.3 msec) compared with that of children with measurable P3 (mean, 197.7 msec). Overall, the prolonged or absent P3 in the place of articulation task is in agreement with previously reported data in adult CI recipients (Beynon, et al., 2005; Groenen, et al., 2001).

Similar to previous reports, the correlations between behavioral and electrophysiological measures were not significant (Wable, et al., 2000; Firszt, et al., 2002); however, the similar hierarchy of acoustic-phonetic demand manifested in both measures suggests that speech perception performance relates to neurophysiologic responses at cortical levels. This hierarchy can be explained by the acoustic information and its transmission by the CI. Acoustically, vowel place is known to be cued by the frequency of the second formant (F2) (Kent, 1997). For the specific vowel pair used in the present study, /i/ and /u/, the first formant (F1) of the Hebrew speaker is similar (341 and 369 Hz for /i/ and /u/, respectively), whereas the second formant (F2) varies greatly (2798 and 779 Hz for /i/ and /u/, respectively). It is therefore expected that in CI recipients, stimulation of two different and widely separated electrodes (10 and 18 in frequency allocation Table 7, which all children were using) cues the perception of these vowels. Evidently, the discrimination of this phonetic contrast yielded the highest performance accuracy, shortest P3 latency and reaction time, and largest P3 amplitude.

Fig. 5. Scalp distribution of P3 amplitude (means ± SE): the effect of frontality. Shown are P3 amplitudes at frontal (Fz), central (Cz), and parietal (Pz) electrodes in the different tasks.

Fig. 6. Scalp distribution of P3 amplitude (means ± SE): the effect of laterality. Shown are P3 amplitudes at left (P3), midline (Pz), and right (P4) electrodes in the different tasks.

In contrast, the perception of vowel height is known to depend on the ability to perceive the frequency of F1. Inspection of frequency-to-electrode allocation Table 7 reveals that for the vowels /i/ and /e/, F1 stimulated the same electrode, 21 (F1: 341 Hz and 423 Hz for /i/ and /e/, respectively). As F2 stimulated adjacent electrodes, 10 (2798 Hz) and 11 (2403 Hz) for /i/ and /e/, respectively, it is not surprising that our CI users had difficulty perceiving this contrast, resulting in lower performance accuracy, longer P3 latency and reaction time, and reduced P3 amplitude compared with those found for the vowel place task. Interestingly, and in contrast to our expectations, vowel height was perceived with high recognition scores. This suggests that our CI recipients were using cues other than F1 and F2. Discrimination based on differences in intensity is ruled out, as the vowels were normalized to the same levels. It is possible, however, that our CI users were able to extract temporal cues that may have assisted in the perception of the vowel height contrast. This issue clearly requires further investigation.
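The electrode-separation argument in the two preceding paragraphs can be made concrete with a small sketch. Only the electrode numbers reported above are used; the dictionaries are illustrative fragments of frequency allocation Table 7, not the full Nucleus 22 allocation table.

```python
# Electrode assignments stated in the text for the formants of the
# Hebrew vowels (frequency allocation Table 7, Nucleus 22).
f2_electrode = {"i": 10, "e": 11, "u": 18}  # second-formant (F2) electrodes
f1_electrode = {"i": 21, "e": 21}           # first-formant (F1) electrodes for /i/, /e/

def f2_separation(v1, v2):
    """Number of electrodes separating the F2 stimulation sites of two vowels."""
    return abs(f2_electrode[v1] - f2_electrode[v2])

# Vowel place (/i/ vs /u/): F2 sites are widely separated along the array.
print(f2_separation("i", "u"))  # 8 -> salient spatial (place) cue

# Vowel height (/i/ vs /e/): F2 sites are adjacent and F1 falls on the
# same electrode, leaving little spatial contrast to discriminate.
print(f2_separation("i", "e"))  # 1
print(f1_electrode["i"] == f1_electrode["e"])  # True -> no F1 place cue
```

The eight-electrode F2 separation for the vowel place pair, versus a one-electrode F2 separation and identical F1 electrode for the vowel height pair, summarizes why the former contrast was the easiest and the latter was expected to be harder.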

Of the consonant discrimination tasks, processing of the voicing contrast yielded higher performance, shorter P3 latency and reaction time, and larger P3 amplitudes compared with those resulting from the place of articulation task. Data in Hebrew suggest that voicing is perceived based on voice onset time, the burst of the stop, and the transition to the following vowel (Taitelbaum-Swead, et al., 2003). Studies of the perception of Hebrew voicing in implanted and hearing children showed that voicing is the last of the contrasts to develop, probably because of the lack of visual cues and the need to rely exclusively on acoustic information (Kishon-Rabin & Henkin, 2000; Kishon-Rabin, et al., 2002). After a longer duration of implant use, however, voicing can be fully perceived (scores between 90 and 100%; Kishon-Rabin, et al., 2002). It is therefore plausible that the greater acoustic accessibility of this contrast, together with the considerably long period of implant use (≥5 yrs) in the present cohort, were the main contributors to these results.

The significant difficulty in the perception of place of articulation was manifested in both electrophysiological and behavioral measures. Compared with all other tasks used in the present study, performance was the poorest, P3 latency and reaction time were the longest, and P3 amplitude was the smallest. Acoustically, place of articulation is cued by rapid spectral changes in the second formant transition, representing the articulatory movements from the place of production of the consonant to the position of the following vowel (the average duration of the transition is 47 msec). Although the perception of this contrast develops earlier than the perception of voicing, it plateaus at approximately 70% in Hebrew-speaking children with CI (Kishon-Rabin, et al., 2002). We therefore suggest that the acoustic cues to the perception of this contrast were only partially accessible to our children using the Nucleus 22 programmed with the SPEAK coding strategy. Although SPEAK delivers stimulation at a moderate rate (250 Hz), more advanced speech coding strategies such as ACE that use higher pulse rates (≥800 Hz) may better represent the rapidly changing temporal variations that are known to be important for consonant recognition (Psarros, et al., 2002). Behaviorally, significant improvements in speech perception abilities have been reported in children who were converted from SPEAK to ACE (Pasanisi, et al., 2002; Psarros, et al., 2002). Furthermore, in adult CI recipients who exhibited higher speech recognition scores with ACE compared with the SPEAK strategy, shorter P3 latencies to a vowel discrimination task (i.e., /i/ versus /a/) were evident (Beynon, et al., 2004).

Although AERPs do not contain direct information regarding the location of their sources, scalp distribution characteristics may show which hemisphere is more involved in the processing of a specific stimulus (Kraus & Cheour, 2000). The parietal P3 (P3b) elicited by our active oddball discrimination tasks was of maximal magnitude at the midline electrodes (Fz, Cz, and Pz) in 33 of the 36 recordings. Furthermore, no differences were found between amplitudes at the right (F4, C4, and P4) and left (F3, C3, and P3) electrode sites in all children, eight implanted in the left ear and two in the right ear. In contrast, in a group of 20 NH children of similar age to that of the present study cohort (i.e., range, 11–16.6 yrs), phonetic discrimination tasks presented binaurally yielded significantly greater P3 amplitudes over the left hemisphere (Henkin, et al., 2002). These lateralization patterns were compatible with enhanced synchronous activity over the left hemisphere, which is more involved in speech processing, as shown in studies using AERPs (Kayser, et al., 1998) and neuroimaging methods (Kent, 1998; Zatorre, et al., 1992). Thus, it is possible that the patterns of brain activation during auditory processing of CI recipients differ from those of NH subjects; however, further investigation is necessary.

In the present study, the components P1, N1, P2, and N2 were compromised because of the large CI stimulus artifact. Although some attempts have been made to eliminate the CI artifact during the recording of AERPs (Gilley, et al., 2006; Martin, 2007; Ponton & Don, 2003), solutions are not yet satisfactory, especially when using natural speech stimuli. Using very short stimuli to isolate the stimulus artifact from the epoch containing the AERPs of interest, however, is a compromise, as it may affect intelligibility, especially for CI recipients. To follow the time course of the perception of natural speech stimuli from their arrival at primary auditory cortex (i.e., P1-N1), future methods that abolish the CI artifact will significantly contribute to the study of the brain processes underlying the perception of speech in CI recipients.

In summary, the results of the present study underscore the significant value of the P3 potential as a sensitive neural index of speech-sound processing in children with CI. The similar hierarchy of acoustic-phonetic demand manifested in both behavioral and electrophysiological measures suggests that speech perception performance relates to neurophysiologic responses at cortical levels of the auditory system. Thus, recording the P3 potential to distinct phonetic contrasts may be useful for studying accessibility and neural encoding at cortical levels in CI recipients.

ACKNOWLEDGMENTS

The authors gratefully acknowledge Simona Tetin-Schneider for preparation of stimuli, Esther Shabtai for statistical analysis, and two anonymous reviewers for their insightful comments and critique on a previous version of this manuscript.

Address for correspondence: Yael Henkin, PhD, Department of Communication Disorders, Sheba Medical Center, Tel Hashomer, Ramat Gan, 52621 Israel. E-mail: [email protected].

Received August 2, 2006; accepted August 29, 2007.

REFERENCES

Beynon, A. J., & Snik, A. F. (2004). Use of the event-related P300 potential in cochlear implant subjects for the study of strategy-dependent speech processing. Int J Audiol, 43(Suppl 1), S44–S47.

Beynon, A. J., Snik, A. F., Stegeman, D. F., et al. (2005). Discrimination of speech sound contrasts determined with behavioral tests and event-related potentials in cochlear implant recipients. J Am Acad Audiol, 16, 42–53.

Beynon, A. J., Snik, A. F., & van den Broek, P. (2002). Evaluation of cochlear implant benefit with auditory cortical evoked potentials. Int J Audiol, 41, 429–435.

Calhoun, V. D., Adali, T., Pearlson, G. D., et al. (2006). Neuronal chronometry of target detection: fusion of hemodynamic and event-related potential data. Neuroimage, 30, 544–553.

Eggermont, J. J., & Ponton, C. W. (2002). The neurophysiology of auditory perception: from single units to evoked potentials. Audiol Neurootol, 7, 71–99.

Firszt, J. B., Chambers, R. D., & Kraus, N. (2002). Neurophysiology of cochlear implant users II: comparison among speech perception, dynamic range, and physiological measures. Ear Hear, 23, 516–531.

Friesen, L. M., & Tremblay, K. L. (2006). Acoustic change complexes recorded in adult cochlear implant listeners. Ear Hear, 27, 678–685.

Gilley, P. M., Sharma, A., Dorman, M., et al. (2006). Minimization of cochlear implant stimulus artifact in cortical auditory evoked potentials. Clin Neurophysiol, 117, 1772–1782.

Groenen, P. A. P., Beynon, A. J., Snik, A. F. M., et al. (2001). Speech-evoked cortical potentials and speech recognition in cochlear implant users. Scand Audiol, 30, 31–40.

Henkin, Y., Kishon-Rabin, L., Gadoth, N., et al. (2002). Auditory event-related potentials during phonetic and semantic processing in children. Audiol Neurootol, 7, 228–239.

Hillyard, S. A., & Kutas, M. (1983). Electrophysiology of cognitive processing. Annu Rev Psychol, 34, 33–61.

Jasper, H. H. (1958). The ten-twenty electrode system of the International Federation. Electroencephalogr Clin Neurophysiol, 10, 371–375.

Kaplan-Neeman, R., Kishon-Rabin, L., Henkin, Y., et al. (2006). Identification of syllables in noise: electrophysiological and behavioral correlates. J Acoust Soc Am, 120, 926–933.

Kayser, J., Tenke, C. E., & Bruder, G. E. (1998). Dissociation of brain ERP topographies for tonal and phonetic oddball tasks. Psychophysiology, 35, 576–590.

Kelly, A. S., Purdy, S. C., & Thorne, P. R. (2005). Electrophysiological and speech perception measures of auditory processing in experienced adult cochlear implant users. Clin Neurophysiol, 116, 1235–1246.

Kent, R. D. (1997). The speech sciences. San Diego: Singular Publishing Group.

Kent, R. D. (1998). Neuroimaging studies of brain activation for language, with an emphasis on functional magnetic resonance imaging: a review. Folia Phoniatr Logop, 50, 291–304.

Kileny, P. R., Boerst, A., & Zwolan, T. (1997). Cognitive evoked potentials to speech and tonal stimuli in children with implants. Otolaryngol Head Neck Surg, 117, 161–169.

Kishon-Rabin, L., & Henkin, Y. (2000). Age-related changes in the visual perception of phonologically significant contrasts. Br J Audiol, 34, 363–374.

Kishon-Rabin, L., Taitelbaum, R., Muchnik, C., et al. (2002). Development of speech perception and production in children with cochlear implants. Ann Otol Rhinol Laryngol, 189, 85–90.

Kraus, N. (1999). Speech sound perception, neurophysiology, and plasticity. Int J Pediatr Otorhinolaryngol, 15, 123–129.

Kraus, N., & Cheour, M. (2000). Speech sound representation in the brain. Audiol Neurootol, 5, 140–150.

Kraus, N., Micco, A. L., Koch, D. B., et al. (1993). The mismatch negativity cortical evoked potential elicited by speech in cochlear-implant users. Hear Res, 65, 118–124.

Kutas, M., & Dale, A. (1997). Electrical and magnetic readings of mental functions. In M. D. Rugg (Ed.), Cognitive neuroscience (pp. 197–242). United Kingdom: Psychology Press.

Kutas, M., McCarthy, G., & Donchin, E. (1977). Augmenting mental chronometry: the P300 as a measure of stimulus evaluation time. Science, 197, 792–795.

Maiste, A. C., Wiens, A. S., Hunt, M. J., et al. (1995). Event-related potentials and the categorical perception of speech sounds. Ear Hear, 16, 68–90.

Martin, B. A. (2007). Can the acoustic change complex be recorded in an individual with a cochlear implant? Separating neural responses from cochlear implant artifact. J Am Acad Audiol, 18, 126–140.

Martin, B. A., Segal, A., Kurtzberg, D., et al. (1997). The effects of decreased audibility produced by high-pass noise masking on cortical event-related potentials to speech sounds /ba/ and /da/. J Acoust Soc Am, 101, 1585–1599.

Micco, A. G., Kraus, N., Koch, D. B., et al. (1995). Speech-evoked cognitive P300 potentials in cochlear implant recipients. Am J Otol, 16, 514–520.

Moog, G. S., & Geers, A. E. (1990). Early speech perception test for profoundly hearing impaired children. St. Louis: Central Institute for the Deaf.

Oates, P. A., Kurtzberg, D., & Stapells, D. R. (2002). Effects of sensorineural hearing loss on cortical event-related potentials and behavioral measures of speech-sound processing. Ear Hear, 23, 399–415.

Okusa, M., Shiraishi, T., Kubo, T., et al. (1999). Effects of discrimination difficulty on cognitive event-related brain potentials in patients with cochlear implants. Otolaryngol Head Neck Surg, 121, 610–615.

Pasanisi, E., Bacciu, A., Vincenti, V., et al. (2002). Comparison of speech perception benefits with SPEAK and ACE coding strategies in pediatric Nucleus CI24M cochlear implant recipients. Int J Pediatr Otorhinolaryngol, 17, 159–163.

Polich, J. (2007). Updating P300: an integrative theory of P3a and P3b. Clin Neurophysiol, 118, 2128–2148.

Polich, J., & Herbst, K. L. (2000). P300 as a clinical assay: rationale, evolution, and findings. Int J Psychophysiol, 38, 3–19.

Ponton, C. W., & Don, M. (1995). The mismatch negativity in cochlear implant users. Ear Hear, 16, 131–146.

Ponton, C. W., & Don, M. (2003). Cortical auditory evoked potential recorded from cochlear implant users: methods and applications. In H. E. Cullington (Ed.), Cochlear implants: Objective measures (pp. 187–230). Philadelphia: Whurr Publishing.

Psarros, C. E., Plant, K. L., Lee, K., et al. (2002). Conversion from the SPEAK to the ACE strategy in children using the Nucleus 24 cochlear implant system: speech perception and speech production outcomes. Ear Hear, 23, 18S–27S.

Singh, S., Liasis, A., Rajput, K., et al. (2004). Event-related potentials in pediatric cochlear implant patients. Ear Hear, 25, 598–610.

Stapells, D. (2002). Cortical ERPs to auditory stimuli. In J. Katz (Ed.), Handbook of clinical audiology (5th ed., pp. 378–407). Baltimore: Lippincott Williams & Wilkins.

Taitelbaum-Swead, R., Hildesheimer, M., & Kishon-Rabin, L. (2003). Effect of voice onset time (VOT), stop burst and vowel on the perception of voicing in Hebrew stops: preliminary results. J Basic Clin Physiol Pharmacol, 14, 165–176.

Verleger, R. (1997). On the utility of P3 latency as an index of mental chronometry. Psychophysiology, 34, 131–156.

Wable, J., Van den Abbeele, T., Gallego, S., et al. (2000). Mismatch negativity: a tool for the assessment of stimuli discrimination in cochlear implant subjects. Clin Neurophysiol, 111, 743–751.

Zatorre, R. J., Evans, A. C., Meyer, E., et al. (1992). Lateralization of phonetic and pitch discrimination in speech processing. Science, 256, 846–849.

REFERENCE NOTE

1. Henkin, Y. (2005). Insights and hurdles in the recording and interpretation of auditory event-related potentials in cochlear implant recipients. Keynote lecture presented at the 4th International Symposium: Objective Measures in Cochlear Implants, Hannover, Germany, 1–4 June.