
SCHOOL NEUROPSYCHOLOGY Linking Assessment to Intervention

CHAPTER 4

Linking Assessment to Intervention

THE COGNITIVE HYPOTHESIS-TESTING MODEL

Prereferral Issues

In our cognitive hypothesis-testing (CHT) model, emphasis is placed on helping a majority of children through systematic prereferral services. As a psychologist, you must intervene to assess: You must develop an effective prereferral intervention program, using a team approach such as an intervention assistance team (see Ross, 1995) and problem-solving consultation, to reduce the number of referrals for formal evaluation. A large majority of children can be helped via an indirect service delivery model, and consultative approaches can effectively reduce the number of referrals for formal standardized evaluation. This is the only way in which the comprehensive CHT evaluations we argue for will be feasible; reducing referrals means gaining more time to conduct both interventions and more comprehensive evaluations.

Of course, there have been calls for more emphasis on prereferral interventions, or a move to interventions instead of referrals, for many years. Since Public Law 94-142 originally mandated serving children with disabilities rather than excluding them, school psychology has tried to emphasize interventions. The National Association of School Psychologists issued a volume titled Alternative Educational Delivery Systems (Graden, Zins, & Curtis, 1988), which called for more consultation, more teacher assistance teams, and more interventions. The 25th-anniversary issue of the School Psychology Review (Harrison, 1996) called for the same, as did Best Practices in School Psychology IV (Thomas & Grimes, 2002). Despite these numerous calls for professional change, however, school psychologists continue to spend the majority of their time in determining eligibility for special education (Hosp & Reschly, 2002). Why is this? There are probably several reasons. Intervention resources often depend on special education eligibility. Also, the funding to pay school psychologists may come from special education money. High student–psychologist ratios, as well as a high number of required assessments, may contribute to a lack of time to spend in alternative roles (e.g., Wilczynski, Mandal, & Fusilier, 2000). How can we increase the perceived value of interventions? In many schools, special education is seen as the only way to get help for a child who is experiencing difficulties. Systems change efforts must include resource allocation for supporting children in general education. Then the school psychologist’s ability to help design and monitor those interventions will be seen as a valuable role, and the consultation role can increase. And when evaluations are truly useful for intervention design, rather than focusing entirely on eligibility, they will be valued as well. Only with a mix of both these roles can school psychologists completely fulfill the promise of their training.

This is a chapter excerpt from Guilford Publications. School Neuropsychology: A Practitioner's Handbook, James B. Hale and Catherine A. Fiorello. Copyright © 2004.

As noted earlier, a child’s behavior and his or her environment are inextricably related. The environment—including teachers, peers, the curriculum, and the classroom structure and routine—exerts a great influence on the child’s behavior. But the child’s characteristics—biological constraints, temperament, past learning history, and current skills—also influence both the child’s behavior and, in turn, the environment. Practitioners can use information about both parts of the cycle to intervene and develop individualized interventions that will work with the environment to meet the child’s unique needs. We suggest paying attention to both sides of the equation, since we feel that an exclusive focus either on external, environmental factors or on within-child factors neglects half the picture. This approach combines the two most powerful strands of the school psychology profession: individual psychoeducational assessment (e.g., Kamphaus, 2001; Sattler, 2001), and intervention development and monitoring from the behavioral intervention and problem-solving consultation models (e.g., Erchul & Martens, 2002; Thomas & Grimes, 2002).

CHT in Assessment and Intervention

Initially, standard problem-solving consultation is used in CHT to develop data-based interventions at the prereferral level. But a child who does not benefit from these initial interventions is referred for a formal CHT evaluation. The referral question, history, and previous interventions are examined to develop a theory of the problem (see #1 under “Theory” in Figure 4.1). If cognitive functioning is thought to be related to the academic or behavioral deficit areas in question (see #2 under “Hypothesis” in Figure 4.1), the intelligence/cognitive test is used as one of the first-level assessment tools (see #3 under “Data Collection/Analysis”). Via demands analysis, the findings are interpreted (see #4 under “Data Interpretation”) to determine possible cognitive strengths and weaknesses (#5 under “Theory”). This is where many psychologists stop the process. Because of time demands, psychologists in the schools typically write their reports and present their findings in a team meeting; they have little contact with the child, parents, or teacher thereafter (unless individual therapy is offered). But our CHT model goes beyond this to choose additional measures (#6 under “Hypothesis”) to confirm or refute the intellectual test data (#7 under “Data Collection/Analysis”). The results are examined in light of the record review/history, systematic observations, behavior ratings, and parent/teacher interviews to gain a good understanding of the child (#8 under “Data Interpretation”).

FIGURE 4.1. The cognitive hypothesis-testing (CHT) model.

Completing the initial assessment is where the CHT process begins, not ends. Interventions are subsequently developed using the understanding of the child and the environment during collaborative consultative follow-up meetings with teachers and/or parents. Possible intervention strategies are explored in consultation with the teacher (#9 under “Theory”), and an intervention plan likely to succeed is developed (#10 under “Hypothesis”). The systematic intervention is then undertaken (#11 under “Data Collection/Analysis”) and evaluated to determine intervention efficacy (#12 under “Data Interpretation”). If the intervention does not appear to be effective, it is revised or recycled until beneficial results are obtained (#13 under “Theory”). Like brief experimental analysis (Chafouleas, Riley-Tillman, & McGrath, 2002), the CHT model we describe uses a problem-solving approach and single-subject methodology to examine child performance over time. We are strong advocates for behavioral technology and single-subject methodology. The difference between our model and other behavioral approaches is that we use information about cognitive functioning in developing our interventions.
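For readers who think procedurally, the recycling logic of steps #9–#13 can be sketched as a loop. The function and argument names below (`plan_intervention`, `run_intervention`, `evaluate`) are our own illustrative labels, not part of the CHT materials; the sketch only captures the cycle of hypothesize, implement, interpret, and revise.

```python
# Minimal sketch of the CHT intervention recycling loop (steps #9-#13),
# assuming the caller supplies three functions; these names are
# illustrative, not from the authors' forms or materials.

def cht_intervention_cycle(child_profile, plan_intervention,
                           run_intervention, evaluate, max_cycles=5):
    for cycle in range(max_cycles):
        plan = plan_intervention(child_profile)   # #9-#10: theory -> intervention hypothesis
        outcome_data = run_intervention(plan)     # #11: systematic data collection
        if evaluate(outcome_data):                # #12: interpret efficacy data
            return plan                           # beneficial results -> keep the plan
        # #13: revise the theory using what failed, then recycle
        child_profile = dict(child_profile, last_failed_plan=plan)
    return None  # no effective plan within the allotted cycles
```

The point of the sketch is that evaluation data, not intuition, decides whether the plan is kept, revised, or recycled.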

Conducting Demands Analysis

Demands analysis is a core component of the CHT model. It is the key both to accurate identification of childhood disorders and to development of interventions that are sensitive to individual needs. The demands analysis process that we present here is derived from two assessment traditions. The first tradition is the “intelligent testing” approach, which examines global, factor, and subtest scores based on clinical, psychometric, and quantitative research (e.g., Flanagan & Ortiz, 2001; Kamphaus, 2001; Kaufman, 1994; McGrew & Flanagan, 1998; Sattler, 2001). When formulating your clinical demands analysis, you must be careful to examine all relevant technical and cross-battery subtest information. Heavily influenced by the Luria (1973) approach to neuropsychological assessment, the second tradition consists of the developmental and process-oriented neuropsychological assessment approaches (e.g., Bernstein, 2000; Kaplan, 1988; Lezak, 1995). Although demands analysis may seem similar to other versions of profile analysis (e.g., Kaufman, 1994), the major difference is the emphasis on the neuropsychological and cognitive processes necessary for task completion. We have noted previously that the input and output demands are straightforward; they are the observable and measurable test stimuli and behavioral responses. However, research is clearly demonstrating that the underlying neuropsychological processing demands are essential for understanding and helping many children with their learning and behavior problems.

For many children and most tests/subtests, a brief demands analysis should be sufficient to examine and test hypotheses about brain–behavior relationships. We have provided you with two forms (Appendix 4.1 and Appendix 4.2) to guide your interpretive efforts. The form in Appendix 4.2 may be even more helpful as you become more accustomed to demands analysis, because it allows you to add constructs as necessary to reflect the neuropsychological processes underlying a particular subtest, or to capture an idiosyncratic response style. To conduct the demands analysis, identify tests/subtests that represent the child’s strengths and weaknesses. Enter them in the appropriate spaces in Appendix 4.2, and for each measure conduct a task analysis of the input, processing, and output demands. Input refers to the stimulus materials as well as the directions, demonstrations, and teaching items. Think about what modality or modalities are needed for the input—for example, whether there are pictures or verbal directions, whether the content is meaningful or abstract, and what other aspects of the content are relevant (e.g., level of English language used or amount of cultural knowledge required). Processing refers to the actual neuropsychological processing demands of the task, as discussed in Chapters 2 and 3. Think about the primary requirement (often suggested by the test’s developers), but also secondary requirements, such as the executive and working memory skills needed to keep a stimulus in mind while processing it. Output refers to the modalities and skills required for responding to the task. Is the output a complex verbal response, a simple pointing response, or a complex motoric response? If oral expression is needed, is syntax important, and is word choice an issue? These are some of the questions you must answer in demands analysis. The form we provide in Appendix 4.1 is merely a tool for you to begin thinking about underlying psychological processes. We have included blanks in the last column for you to provide additional subtest input, processing, and output demands. Once you have listed the input, processing, and output demands for all of the child’s strengths and weaknesses, it is important to look for commonalities and contradictions among the data.

After completing the sheets for each subtest, you attempt to identify patterns in the child’sperformance. If you find that one particular processing demand is required on all low-score tests,and it is not needed for the high-score tasks, you would hypothesize that this demand is a weak-ness for the child. Information from your observations of the child during testing, as well as infor-mation provided by the teacher, should also be consistent with any hypotheses. The weakness maybe a cognitive processing weakness, but it may also be a sensory or motor weakness, a result ofemotional interference, or a consequence of limited exposure or background. Enter this informa-tion on the worksheet provided in Appendix 4.3. Although these sheets and interpretive texts(e.g., Groth-Marnat, Gallagher, Hale, & Kaplan, 2000; Kamphaus, 1993; Kaufman, 1994; McGrew& Flanagan, 1998; Sattler, 2001) can be helpful in conducting demands analysis, you should not belulled into a “cookbook” approach when interpreting subtest data—a tendency that often resultsin erroneous interpretation. To guard against this and to foster accurate interpretation, we haveprovided a checklist in Appendix 4.4. This checklist is primarily for you to complete to aid inclinical judgment, but it could possibly be used as an informant rating scale as well.
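The pattern-finding step lends itself to a simple set operation: a candidate weakness is a demand required by every low-score subtest and absent from every high-score subtest. A minimal sketch, with made-up subtest names and demand labels that are not taken from any actual demands analysis worksheet:

```python
# Hypothetical demands analysis data: each subtest maps to the set of
# input/processing/output demands identified on the worksheet.
demands = {
    "Subtest A": {"visual-spatial", "motor speed", "executive"},
    "Subtest B": {"visual-spatial", "executive"},
    "Subtest C": {"verbal", "retrieval"},
    "Subtest D": {"motor speed", "verbal"},
}

low_scores = ["Subtest A", "Subtest B"]    # the child's weak performances
high_scores = ["Subtest C", "Subtest D"]   # the child's strong performances

# Demands required by ALL low-score subtests...
shared_by_weak = set.intersection(*(demands[s] for s in low_scores))
# ...minus any demand the child handled well elsewhere.
used_by_strong = set.union(*(demands[s] for s in high_scores))

candidate_weaknesses = shared_by_weak - used_by_strong
print(sorted(candidate_weaknesses))  # ['executive', 'visual-spatial']
```

The computation only generates hypotheses; as the text stresses, testing observations and teacher reports must still corroborate them, since a flagged demand could equally reflect sensory, motor, emotional, or exposure factors.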

Let’s walk through a demands analysis of the Wechsler Intelligence Scale for Children—Fourth Edition (WISC-IV) Block Design subtest to see what the process looks like. First, consider the input. The task has oral directions, and the task is modeled for younger children and those who have difficulty on the first item. The stimulus materials (booklet with visual model and two-color blocks) are abstract colored shapes, so that verbal encoding is difficult. The task will be novel for most children (although perhaps not on reevaluation or as the testing progresses). The processing demands are quite complex and involve both hemispheres and executive/frontal demands. Primarily, Block Design is a right-hemisphere task, since it is visual–spatial (i.e., involves the dorsal stream), is novel, and does not depend on crystallized prior knowledge. However, there is some bilateral processing because of the bimanual sensory and motor coordination, as well as the part (directional orientation of the blocks—left parietal) and whole (gestalt/spatial—right parietal) coordination (see Kaplan, 1988). There is a heavy frontal component, due to the executive and motor requirements of the task. The frontal demands include planning and organization, self-monitoring and evaluation of the response, inhibition of impulsive responding, and fine motor and bimanual coordination. This is particularly true if the child uses a trial-and-error approach. Note particularly whether the child has more difficulty after the lines are removed from the stimulus book, as this may suggest right posterior (i.e., configuration problems) or frontal (delayed responding due to novelty) difficulties. Considering the output, Block Design requires fine motor and bilateral motor coordination, and adequate processing speed. Bilateral sensory–motor coordination requires the corpus callosum, so look for midline problems or a tendency to use just one hand. Slow responding may be due to difficulties in frontal–subcortical circuits (i.e., prefrontal–basal ganglia–cingulate) or the sensory–motor system (constructional praxis); it may also reflect inattention/disorganization (symptoms resembling attention-deficit/hyperactivity disorder [ADHD]), low cortical tone or lethargy (motivation problems or depression-like symptoms), or perfectionistic tendencies (symptoms resembling obsessive–compulsive disorder [OCD] or other anxiety disorders).

Although conducting demands analysis may be helpful in understanding patterns of performance, remember that multifactorial tasks can be solved in more than one way, so that the demands analysis may differ from child to child. For instance, a child who uses good executive and psychomotor skills to compensate for a right posterior spatial problem may still do well on Block Design, but you would err if you concluded that the child had adequate visual–spatial–holistic processing skills. This is where we psychologists have often gone wrong in the past: concluding that the same subtest measures the same thing for all children. For instance, concluding that poor WISC-IV Information subtest performance is due to a limited “fund of information” may not be correct if a child has retrieval problems or difficulty due to limited knowledge in just one area, such as science. Concluding that a child has adequate attention, working memory, and executive function because he or she has an average WISC-IV Digit Span scaled score, but a Digits Forward score of 10 and a Digits Backward score of 2, would clearly be inappropriate (see Hale, Hoeppner, & Fiorello, 2002). Table 4.1 provides you with some sample demands analyses on a few additional subtests, so you can see how the process works. As you become more familiar with using demands analysis to task-analyze subtests, you will eventually become quite comfortable with determining the demands on any subtest. In my (Hale’s) graduate child neuropsychology assessment class, I have a final exam item that requires students to do a “mystery test” demands analysis on a test they have not been exposed to in class. Though students find this challenging, they typically find that they can identify the key input, processing, and output demands on the test. Try this activity yourself. Generalizing these skills to other measures will allow you to expand your use of demands analysis to just about any instrument you are trained to administer.
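The Digit Span caution above reduces to a simple guard: do not interpret a composite when its components diverge sharply. The sketch below makes that check explicit; the 3-point spread threshold is an arbitrary illustration, not a published clinical cutoff.

```python
def composite_interpretable(component_scores, max_spread=3):
    """Return False when component subtest scores diverge so much that
    interpreting their composite would mask a meaningful discrepancy.
    The max_spread threshold here is illustrative, not a clinical standard."""
    return max(component_scores) - min(component_scores) <= max_spread

# Digits Forward 10 vs. Digits Backward 2: an "average" composite would
# hide an 8-point split, so the composite should not be interpreted alone.
print(composite_interpretable([10, 2]))   # False
print(composite_interpretable([10, 9]))   # True
```

The same guard applies to any aggregate score built from dissociable components, which is the general point of the passage.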

We now turn to a discussion of neuropsychological tests for use in the CHT model. Although many of these tests may be new to you, it is important to realize that the demands analyses you perform on cognitive and intellectual measures apply to neuropsychological measures as well. Do not let yourself be overly concerned that these measures are “neuropsychological”; many of them are easier to administer and score than the measures you are used to. For instance, the Stroop Color–Word Test requires approximately 5 minutes to administer (a stopwatch is needed to time 45 seconds for each subtest), and it has brief, simple instructions. Even though it is easy to administer, it is highly sensitive to executive functions and to frontal–subcortical circuit dysfunction.

ASSESSMENT TOOLS FOR CHT

Fixed versus Flexible Batteries in Hypothesis Testing

One of the biggest debates in neuropsychological assessment is whether to use a fixed test battery (a standard set of tests) or a flexible battery (a set of tests chosen for an individual child) (Bornstein, 1990). Fixed batteries predominated early in the field’s history, but flexible batteries have become increasingly popular, especially since they tend to be more time- and cost-effective. Fixed batteries tend to lead to more testing than is needed to address unique child characteristics.




TABLE 4.1. Sample Demands Analysis of Selected Subtests

WISC-IV Block Design

Input
• Models and abstract visual pictures
• Oral directions—moderate English-language knowledge
• Demonstration/modeling
• Low cultural knowledge and emotional content

Processing
• Visual processing (spatial relations, visualization)
• Perception of part–whole relationships
• Discordant/divergent processing (analysis)
• Constructional praxis
• Bimanual coordination/corpus callosum
• Concordant/convergent processing (synthesis)
• Attention and executive demands: Moderate
• Planning and strategy use
• Inhibition of impulsive/wrong responding
• Novel problem solving: Low to moderate

Output
• Fine motor response, arrangement of manipulatives
• Timed score with speed bonus; process score without time bonus
• Visual–motor integration

SB5 Picture Absurdities (Levels 4, 5, and 6—Nonverbal Knowledge)

Input
• Large color pictures
• Oral directions
• Sample item
• High cultural and English-language knowledge

Processing
• Visual scanning
• Perception of objects (ventral stream)
• Crystallized ability for prior knowledge (left temporal)
• Discordant/divergent processing (analysis)
• Attention and executive demands: Low to moderate
• Persistence/inhibition of impulsive responding
• Novel problem solving/reasoning

Output
• Brief oral or pointing response
• One right answer (convergent responding)

WJ-III Visual–Auditory Learning

Input
• Brief oral directions, teaching items, feedback
• Semiabstract figures/symbols
• Moderate cultural and English-language knowledge

(continued)


In addition, a fixed battery gives examiners the impression that the battery assesses all relevant neuropsychological domains (Lezak, 1995). We too prefer a flexible-battery approach in the CHT model, because different measures and techniques can be used to address hypotheses developed after initial data gathering. You may need one or more measures that look at a particular domain in depth. For instance, if you’re interested in an apparent visual–sensory–motor integration deficit, you really need to pick and choose measures that tap each of these four possible causes to get a better understanding of the problem and direction for intervention.

This is not to say that a fixed-battery approach should be completely avoided. Some neuropsychologists prefer such approaches, because all the children tested are administered the same tests in the same order. This can serve both research and practice needs. Obviously, many children who receive the same measures would be needed for a group-design research project. For clinicians, fixed-battery approaches not only help standardize performance expectations across children, but also allow practitioners an opportunity to develop “head norms” about child


TABLE 4.1. (continued)

WJ-III Visual–Auditory Learning (continued)

Processing
• Visual perception of figures/symbols (dorsal and ventral streams)
• Sound–word/symbol–rebus association
• Working memory/learning
• Encoding and retrieval of associative/semantic memory
• Benefiting from feedback
• Inhibition of impulsive/wrong responding
• Syntax knowledge: Helpful
• Attention and executive demands: Moderate to high
• Memory: primary; novel problem solving: secondary

Output
• Brief oral response
• Oral formulation/retrieval

CAS Nonverbal Matrices

Input
• Brief oral directions; sample and teaching items
• Abstract/nonmeaningful figures
• Low cultural and English-language knowledge

Processing
• Visual scanning and discrimination
• Color processing
• Visual–spatial processing (dorsal stream)
• Part–whole relationships
• Discordant/divergent processing (perceptual analysis)
• Novel problem solving and inductive reasoning/fluid abilities
• Attention and executive demands: Moderate
• Inhibition of impulsive/wrong responding

Output
• Pointing response
• Multiple-choice format (can solve by elimination/match to sample)


performance. It is much easier to interpret a measure after dozens of regular administrations than if it is used sparingly to test hypotheses for individual children. In addition, once demands analyses have been done on the fixed-battery subtests, they may only need to be changed slightly for children who perform them in a unique way. Finally, the use of a fixed battery does not preclude additional hypothesis testing with other instruments. Actually, using an intellectual/cognitive measure (e.g., the Woodcock–Johnson III [WJ-III]), a fixed battery (e.g., the Halstead–Reitan), and additional hypothesis-testing measures (e.g., subtests from the Comprehensive Test of Phonological Processing [CTOPP]) might be the ultimate approach for conducting CHT. However, it is important to remember that as the number of measures increases, the likelihood of child performance variability and of Type I error increases as well.
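The Type I error point can be made concrete with a back-of-the-envelope calculation. Assuming each score comparison is tested at alpha = .05 and, unrealistically, that the measures are independent, the chance of at least one spurious “significant” finding grows quickly with the number of measures:

```python
ALPHA = 0.05  # nominal per-comparison Type I error rate

def familywise_error(n_measures, alpha=ALPHA):
    """P(at least one false positive) across n independent comparisons."""
    return 1 - (1 - alpha) ** n_measures

for n in (1, 5, 10, 20):
    print(f"{n:2d} measures -> {familywise_error(n):.2f}")
# 1 -> 0.05, 5 -> 0.23, 10 -> 0.40, 20 -> 0.64
```

Because subtest scores are correlated in practice, the true inflation is smaller than these figures suggest, but the direction of the effect holds: every added measure raises the odds of a chance finding.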

Intellectual Tests for Hypothesis Testing

You may be surprised to find that you are already familiar with many of the tools available for CHT—including the intelligence/cognitive tests discussed in Chapter 1, such as the subtests found on the Differential Ability Scales (DAS), Stanford–Binet Intelligence Scale: Fifth Edition (SB5), WISC-IV, and WJ-III. Although intelligence test subtests are typically factorially complex (McGrew & Flanagan, 1998), there is often a wealth of information published about these measures; their technical quality can be thoroughly evaluated; and you are familiar with their scoring and interpretation. The manuals for these measures come with many statistics to support interpretation, such as reliability, standard deviations, standard error of measurement, correlations, factor analyses, and validity studies.

To aid in your demands analysis of these and other measures, it is worthwhile to consult The Intelligence Test Desk Reference (McGrew & Flanagan, 1998), which specifies subtest technical characteristics from a Gf-Gc cross-battery perspective, and Sattler’s (2001) Assessment of Children: Cognitive Applications text. Similarly, CHT of the skills necessary for academic performance can utilize subtests from several achievement batteries. For instance, on the WJ-III Tests of Achievement, the Story Recall subtest can be used to assess long-term memory encoding and retrieval of semantic information in addition to receptive and expressive language. Another valuable text for use in CHT is The Achievement Test Desk Reference (Flanagan, Ortiz, Alfonso, & Mascolo, 2002), which provides readers with technical information and guidance in administering and interpreting many achievement measures.

Although these intellectual and achievement instruments are useful in CHT, let us now examine several test batteries that are often considered “neuropsychological” instruments. It is important to realize that many neuropsychological tests are easy to administer and score, and that they tap many of the constructs already discussed in this book. We do not claim to present an exhaustive list of measures, just those that we have found to be useful in our practice of CHT. We do not suggest that these measures are better than others, or that measures not included here cannot be adopted in the CHT model. However, recall that it is your responsibility to evaluate whether a measure has adequate technical quality for use in CHT. In addition, you should complete a demands analysis for each measure you use and review the extant literature on new tests before you use them. Do not automatically assume that a test measures what we suggest, or what the test authors report in the manual. Although our interpretive information is limited, you can consult the test manuals and other interpretive texts to aid in your understanding of the measures (e.g., Groth-Marnat, 2000b; Golden, Espe-Pfeifer, & Wachsler-Felder, 2000; Reynolds & Fletcher-Janzen, 1997; Spreen & Strauss, 1998). Your background, training, and experience will determine your need for individual training and supervision on these measures.



Traditional Neuropsychological Test Batteries

We begin our review of instruments by discussing two of the most commonly used neuropsychological test batteries (NTBs): the Halstead–Reitan NTB (Reitan & Wolfson, 1993) and the Luria–Nebraska NTB (Golden, Purisch, & Hammeke, 1985). Though we aren’t advocating that every school psychologist use one of these batteries, a brief description follows to familiarize you with them. These batteries are often used as “fixed” batteries, and both have a long tradition of use in neuropsychological assessment and research, so there are many supplemental resources and publications to aid in their interpretation.

Halstead–Reitan NTB

Table 4.2 provides an overview of the constructs tapped by the Halstead–Reitan NTB (Reitan &Wolfson, 1993) subtests, and of possible brain areas responsible for performance. The CategoryTest requires the child to view simple objects on a screen and press a button coinciding with thenumbers 1 to 4. The child is not told how to perform the task, but instead receives feedback aftereach response. (A more recent version of the Category Test is mentioned later in Table 4.10.) Forthe Tactile Performance Test, the child is blindfolded and presented with an upright formboardand shapes. The child places the different shapes in the corresponding holes as quickly as possi-ble, first with the dominant hand, then with the nondominant hand, and then with both. The TrailMaking Test is a connect-the-dots task, where the child draws a line connecting numbers in order(Trails A), and then alternating between numbers and letters (Trails B), as quickly as possible. Forthe Sensory-Perceptual Examination, a brief screening of visual, auditory, and somatosensoryfunctioning is followed by three somatosensory tasks: finger touching, writing of numbers (olderchildren) or symbols (young children) on fingers/hands, and recognition of shapes, all hidden fromthe child’s view. The Finger Tapping test is a simple measure of motor speed and persistence. TheHalstead–Reitan provides an Impairment Index of brain dysfunction/damage, which ranges from0 to 10. Although the original norms may have been limited, more recent normative data and in-

136 SCHOOL NEUROPSYCHOLOGY

TABLE 4.2. Characteristics of Halstead–Reitan Neuropsychological Test Battery (NTB) Subtests

Category Test
  Constructs purportedly tapped: Concept formation, fluid reasoning, learning skills, mental efficiency
  Brain areas involved: Prefrontal area, cingulate, hippocampus, temporal lobes (?) (associative and categorical thinking)

Tactile Performance Test
  Constructs purportedly tapped: Tactile sensitivity, manual dexterity, kinesthetic functions, bimanual coordination, spatial memory, incidental learning
  Brain areas involved: Lateralized sensory and motor areas, parietal lobes, corpus callosum, hippocampus

Sensory-Perceptual Examination
  Constructs purportedly tapped: Simple and complex sensory functions
  Brain areas involved: Lateralized sensory areas (more complex, bilateral?)

Finger Tapping
  Constructs purportedly tapped: Simple motor speed
  Brain areas involved: Lateralized motor areas

Trail Making Test, Parts A and B (Trails A and B)
  Constructs purportedly tapped: Processing speed, graphomotor coordination, sequencing, number/letter facility (Trails B also requires working memory, mental flexibility, set shifting)
  Brain areas involved: Trails A: dorsal stream, premotor area, primary motor area, corpus callosum; Trails B: also prefrontal–basal ganglia–cingulate

Page 10: Linking Assessment to Intervention (Guilford Press). FIGURE 4.1. The cognitive hypothesis-testing (CHT) model.

For a recent review of the Halstead–Reitan NTB, see Nussbaum and Bigler (1997).

Luria–Nebraska NTB

The Luria–Nebraska NTB (Golden et al., 1985) consists of 12 scales derived from Luria's (1973, 1980a, 1980b) approach to neuropsychological assessment, which emphasizes flexible administration and interpretation of measures. Therefore, it is not a true fixed battery per se, but practitioners may have a tendency to administer it as such. The 12 Luria–Nebraska subscales are labeled Motor, Rhythm, Tactile, Visual, Receptive Language, Expressive Language, Writing, Reading, Arithmetic, Memory, Intelligence, and Delayed Memory. Because the traditional examination may take up to 2 days to complete (Golden, Freshwater, & Vayalakkara, 2001), this instrument may not be practical for use in the schools. As we will see in the next section, several contemporary neuropsychological assessment tools are available to assess skills similar to those tapped by the Luria–Nebraska domains, and many were designed solely for use with children. For a recent review of the Luria–Nebraska NTB, see Golden (1997).

Neuropsychological/Cognitive Tests for Hypothesis Testing

We now review instruments that assess multiple as well as specific areas of neuropsychological functioning. You may wish to use an entire test at times, but for the most part, you will pick and choose subtests from these batteries for CHT. They are listed in alphabetical order.

Children’s Memory Scale

Since we are often asked to give an indication of a child's capability of learning in the classroom, it is somewhat surprising that more educational administrators don't mandate assessment of learning and memory skills. Designed for use with children aged 5–16, the Children's Memory Scale (CMS; Cohen, 1997) is an excellent clinical measure of learning and memory. It was carefully standardized on a representative sample. It is not surprising that the CMS demonstrates adequate internal consistency for a memory measure, and comprehensive validity studies support the instrument's construct validity. It has six core subtests, two each in the Auditory/Verbal, Visual/Nonverbal, and Attention/Concentration domains; the last domain is probably the least useful in CHT. In addition, there are three supplemental subtests, one for each domain. The subtests we typically use are presented in Table 4.3. The reported subtests all have delayed portions for further examination of long-term memory retrieval—an advantage of this measure. A disadvantage is the battery's reliance on the Auditory/Verbal–Visual/Nonverbal dichotomy as an organizing scheme.

Cognitive Assessment System

Designed for ages 5–17, the Cognitive Assessment System (CAS; Naglieri & Das, 1997) is a relatively new measure of cognitive functioning that represents the authors' planning, attention, simultaneous, and successive (PASS) model (Das, Naglieri, & Kirby, 1994). It is purportedly based on Luria's model of neuropsychological processing and assessment, but as we have seen in Chapter 2, there is no PASS acronym in Luria's model. In addition, although the authors' confirmatory

Linking Assessment to Intervention 137


factor analysis has been used to support a four-factor model, cross-battery analyses have raised doubt about the model, with findings suggesting that the Planning and Attention factors should be combined (Carroll, 1995; Keith, Kranzler, & Flanagan, 2001; Kranzler & Keith, 1999). This would certainly fit with Luria's (1973) model, as attention and executive functions are intimately related to the integrity of the third functional unit or frontal lobes (except for cortical tone, which would be the responsibility of Luria's first functional unit). Of course, for an individual child, planning and attention may differ and lead to different recommendations, so their separation may still be relevant in individual cases.

Another issue has to do with purported relationships between the hemispheres and CAS measures. Whereas the association between simultaneous processes and right-hemisphere functions makes sense, the association of the left hemisphere with successive processes needs further examination. As we have seen in Chapters 2 and 3, this representation is not entirely correct—leaving the construct validity of the PASS model in question, at least as a neuropsychological test. Two of the successive tasks rely heavily on grammatical structure, and all use verbal information, so they are not truly tests of successive processing. It is interesting to note that the test authors' own predictive validity study, using the WISC-III, CAS, and WJ-R achievement scores, revealed that the WISC-III Verbal scale consistently predicted achievement domains better than the CAS factors.

Given these criticisms, why do we advocate use of the CAS in CHT? We like to use several of the CAS subtests for hypothesis testing. The scale was adequately normed, and most subtests show good technical characteristics. In addition, the test authors have provided us with the first substantial treatment validity studies of any cognitive measure, presented in the PASS Remedial Program (PREP; see Das, Carlson, Davidson, & Longe, 1997). The PREP has focused primarily on reading, with training of successive and simultaneous skills leading to improved word recognition and decoding skills. There is also evidence that strategy-based instruction can improve math achievement in students with poor planning skills. We do not think, however, that the CAS should be used to measure global intellectual functioning, even though it provides a Full Scale standard score (SS).


TABLE 4.3. Characteristics of Children’s Memory Scale (CMS) Subtests

Subtest Constructs purportedly tapped

Auditory/Verbal

Stories Auditory attention, semantic long-term memory encoding and retrieval, sequencing/grammar, verbal comprehension, expressive language

Word Pairs Paired-associate task; auditory attention, learning novel word pairs

Word Lists Selective reminding task; long-term memory encoding, storage, and retrieval of unrelated words

Visual/Nonverbal

Dot Locations Visual–spatial memory encoding and retrieval (dorsal stream), susceptibility to interference

Faces Visual–facial memory encoding and retrieval (ventral stream)

Attention/Concentration

Sequences Rote recall of simple information followed by mental manipulation/executive function items


Absent from the CAS, moreover, is a measure of crystallized intelligence (Gc). Although the lack of Gc measurement makes the CAS a fair test for people of linguistic and cultural difference, it doesn't adequately tap left-hemisphere processes as a result. Therefore, though we feel that the CAS is not adequate as a baseline measure of global functioning, it is a good tool for hypothesis testing. Given these caveats and criticisms, we present the CAS subtests we typically administer in Table 4.4. Please note that our interpretation is somewhat different from that presented by the test authors.

Comprehensive Test of Phonological Processing

The CTOPP (Wagner, Torgesen, & Rashotte, 1999) is a unique measure of the cognitive constructs most commonly associated with reading and language disorders. Designed for use with children and youth aged 5–24, it measures phonological awareness, phonological memory, and rapid automatized naming, which have been linked with word recognition, word attack, and other basic reading skills (Wolf, 2001). The CTOPP is composed of 13 subtests, several of which we find useful in CHT. It was recently normed on a fairly large representative sample, and subtests have good to excellent technical characteristics. Validity studies show the phonological awareness and rapid naming tasks have strong relationships with reading skills.


TABLE 4.4. Characteristics of Cognitive Assessment System (CAS) Subtests

Subtest Constructs purportedly tapped

Planning

Matching Numbers Sustained attention, visual scanning, psychomotor speed

Planned Connections Substitute for Halstead–Reitan Trails A and B (see Table 4.2), but no separation

Attention

Expressive Attention Substitute for Stroop Color–Word Test (see Table 4.10); inhibition of automatic response (reading words) to name ink color of printed word

Number Detection Cancellation task; sustained attention, visual scanning, visual discrimination, inhibition, psychomotor speed

Simultaneous Processing

Nonverbal Matrices Typical Gf measure of inductive reasoning; multiple-choice format

Verbal/Spatial Relations Similar to Token Test for Children (see Table 4.10); receptive language, verbal working memory, grammatical relationships, visual scanning/discrimination

Figural Memory Similar to DAS Recall of Designs (see Chapter 1, Table 1.1); visual perception, spatial relationships, visual memory, graphomotor reproduction, constructional skills, figure–ground relationships (?)

Successive Processing

Word Series Word span; rote recall of unrelated words

Sentence Repetition Rote recall of meaningless sentences; grammatical structure important

Sentence Questions Similar sentence stimuli to Sentence Repetition, but child answers questions (e.g., "The brown is purple. What is purple?" Answer: "The brown.")


Table 4.5 outlines the CTOPP subtests and what they measure. The Nonword Repetition subtest is an interesting task that taps phonemic processing and expression skills for nonsense words (e.g., "lidsca"), similar to other visually presented pseudoword tasks. However, it includes an auditory model (so the child hears the nonword first) and an auditory working memory component (because the child has to recall what he or she heard). This task can be combined with the Blending Nonwords (e.g., "raq" + "di") subtest to help determine whether the phonological breakdown is occurring at the individual-phoneme level or the assembly level. An additional concern with the CTOPP is the limited assessment of rapid naming. Including rapid naming of more complex letter combinations (e.g., digraphs, diphthongs) and simple words presented two grades below reading level would have been helpful. Although phonological processes have been linked to left temporal lobe functions, and rapid naming is typically associated with frontal structures, you should recognize that several areas are involved in reading competency, as discussed in Chapters 2 and 5.

Delis–Kaplan Executive Function System

The Delis–Kaplan Executive Function System (D-KEFS; Delis, Kaplan, & Kramer, 2001) is a measure of key components of executive function, mediated primarily by the frontal lobes. It was recently developed and normed on a large representative national sample to assess ages 8–89. Unlike many neuropsychological measures, the D-KEFS has extensive information about technical quality presented in the manual, which facilitates interpretation. Any of the specific tests can be administered separately, making it ideal for use in CHT. Many of the tasks have rich histories in neuropsychological assessment, and research is likely to support the validity of these measures. Table 4.6 describes the individual D-KEFS tests and the constructs purportedly assessed by each.

Kaufman Adolescent and Adult Intelligence Test

Although the Kaufman Adolescent and Adult Intelligence Test (KAAIT; Kaufman & Kaufman, 1993) provides good measures for hypothesis testing of Gc and fluid intelligence (Gf), it is


TABLE 4.5. Characteristics of Comprehensive Test of Phonological Processing (CTOPP) Subtests

Subtest Constructs purportedly tapped

Phonological Awareness

Elision Phonological perception, segmentation, individual phonemes

Blending Words Phonological assembly; similar to WJ-III Sound Blending (see Chapter 1, Table 1.4)

Phonological Memory

Nonword Repetition Phonemic analysis, assembly, auditory working memory

Rapid Naming

Rapid Object Naming Object recognition, naming automaticity, processing speed, verbal fluency

Rapid Digit Naming Number automaticity, processing speed, verbal fluency

Rapid Letter Naming Letter automaticity, processing speed, verbal fluency


primarily designed for children 11 years of age and older, so this limits its use in CHT to older children. For Gc, the Word Knowledge subtest is a measure of word knowledge and verbal concept formation; Auditory Comprehension taps understanding of oral information; and Double Meanings measures categorical responding (i.e., the child must determine the word that best fits two different meanings). For Gf, the Rebus Learning subtest is similar to the WJ-III Glr task; Logical Steps taps logical reasoning and problem solving; and Mystery Codes requires detecting relationships and applying them to solve novel problems. It also has four extended-battery subtests: Famous Faces, Memory for Block Designs, Rebus Delayed Recall, and Auditory Delayed Recall. Although the KAAIT cannot be used for younger children, it is easy to administer and score, and has fairly good technical characteristics (Sattler, 2001). Consider using this battery in CHT if you work with older children, as we feel it is a much more theoretically sound instrument than the Kaufman Assessment Battery for Children (Kaufman & Kaufman, 1983), which suffers from the same problem as the CAS (the simultaneous/right-hemisphere–successive/left-hemisphere dichotomy).

NEPSY

The NEPSY (Korkman, Kirk, & Kemp, 1998) is the first truly developmental neuropsychological measure designed for children aged 3–12. There are 27 subtests designed to provide a comprehensive evaluation of five functional domains: Attention/Executive Functions, Language, Sensorimotor Functions, Visuospatial Processing, and Memory and Learning. The NEPSY subtests and flexible administration format are primarily based on Luria's (1973, 1980a, 1980b) model. However, like similar measures, the test does not break tasks down into primary, secondary, or tertiary skills; nor does the manual readily identify the relationships between subtest performance and the first, second, and third functional units. With many years in development, the NEPSY has all the advantages of being published by a major test developer, including an adequate normative sample, subtest technical quality, and ample validity studies. Not all of the NEPSY subtests show comparable technical quality, however, so Table 4.7 presents the subtests we have found to be most beneficial in CHT. In addition, though the Language subtests serve as a measure of Gc, the NEPSY does not adequately measure Gf or novel problem-solving skills.


TABLE 4.6. Characteristics of Delis–Kaplan Executive Function System (D-KEFS) Subtests

Subtest Constructs purportedly tapped

Sorting Test Problem solving, verbal and spatial concept formation, categorical thinking, flexibility of thinking on a conceptual task

Trail Making Test Mental flexibility, sequential processing on a visual–motor task, set shifting

Verbal Fluency Test Verbal fluency

Design Fluency Test Visual fluency

Color–Word Interference Test Attention and response inhibition

Tower Test Planning, flexibility, organization, spatial reasoning, inhibition

20 Questions Test Hypothesis testing, verbal and spatial abstract thinking, inhibition

Word Context Test Deductive reasoning, verbal abstract thinking

Proverb Test Metaphorical thinking, generating versus comprehending abstract thoughts


Process Assessment of the Learner: Test Battery for Reading and Writing

To look in more detail at the processes involved in reading and writing, the Process Assessment of the Learner: Test Battery for Reading and Writing (PAL; Berninger, 2001) is available to complement regular standardized achievement testing. Individual subtests can be administered and interpreted, making this test ideal for CHT. There are also intervention materials available for both individual and classroom implementation. The PAL includes measures of phonological processing; orthographic coding; rapid automatized naming; and integration of listening, note taking, and summary writing skills. Although the PAL is used for examining academic skills, it focuses on processes associated with these skills, making it especially useful for linking assessment to intervention.


TABLE 4.7. Characteristics of NEPSY Subtests

Subtest Constructs purportedly tapped

Attention/Executive Functions

Tower Planning, inhibition, problem solving, monitoring, and self-regulation

Auditory Attention and Response Set Sustained auditory attention, vigilance, inhibition, set maintenance, mental flexibility

Visual Attention Visual scanning, self-organization, processing speed

Design Fluency Visual–motor fluency, mental flexibility, graphomotor responding in structured and unstructured situations

Language

Phonological Processing Similar to WJ-III Ga subtests (see Chapter 1, Table 1.4); auditory attention, phonological awareness, segmentation, assembly

Comprehension of Instructions Similar to Token Test for Children (see Table 4.10); receptive language, sequencing, grammar, simple motor response

Repetition of Nonsense Words Auditory presentation of nonsense words; phonemic awareness, segmentation, assembly, sequencing, simple oral expression

Verbal Fluency Similar to Controlled Oral Word Association Test (see Table 4.10); rapid long-term memory retrieval in structured (semantic cue) and unstructured (phonemic cue) situations

Sensorimotor Functions

Fingertip Tapping Simple motor speed, perseverance

Imitating Hand Positions Visual perception, memory, kinesthesis, praxis

Visuomotor Precision Visual–motor integration, graphomotor coordination without constructional requirements

Finger Discrimination Simple somatosensory perception, finger agnosia

Visuospatial Processing

Design Copying Visual perception of abstract stimuli, visual–motor integration, graphomotor skills

Arrows Spatial processing, visualization, line orientation, inhibition, no graphomotor demands

Block Construction Similar to WISC-III Block Design (see Tables 1.3 and 4.1)


Test of Memory and Learning

The Test of Memory and Learning (TOMAL; Reynolds & Bigler, 1994) is in many ways a more comprehensive measure of learning and memory than the CMS. Designed for children aged 5–19, the TOMAL consists of 10 core subtests, 4 supplemental subtests, and 4 delayed-recall subtests. It was carefully standardized, and the norms are representative of the 1990 U.S. census population. Reliabilities tend to be quite strong across ages, especially for the composite scores. Unfortunately, the validity studies are not as comprehensive as those for the CMS. However, further support for its use in memory assessment can be found in subsequent studies reported in the literature. Table 4.8 provides an overview of the TOMAL subtests we find useful in CHT. The Delayed Recall Index includes delayed recall from the Memory for Stories, Word Selective Reminding, Facial Memory, and Visual Selective Reminding subtests. As with the CMS, one of the difficulties with the TOMAL is its breakdown into verbal and nonverbal memory domains.

Wide Range Assessment of Memory and Learning

The Wide Range Assessment of Memory and Learning (WRAML; Sheslow & Adams, 1990) was the first child memory scale on the market, having been developed in the 1980s. Like the other measures reviewed here, it examines verbal and visual memory, and includes a learning index score. Additional examination of delayed recall is possible. For verbal memory, rote, sentence, and story memory are tapped. For visual memory, both abstract and meaningful memory are assessed, and visual–sequential memory is assessed via an interesting Finger Windows subtest, which is difficult to mediate with verbal skills. There are also a list-learning task, a memory-for-designs task (in which the child tries to find matching designs), and a sound–symbol association task. These tasks are challenging yet interesting for children, making the WRAML a possible alternative to the CMS and TOMAL. It is fairly easy to administer and score. It has a large normative sample


TABLE 4.8. Characteristics of Test of Memory and Learning (TOMAL) Subtests

Subtest Constructs purportedly tapped

Verbal Memory Index

Memory for Stories See CMS Stories (Table 4.3 lists this and other CMS subtests)

Word Selective Reminding Similar to CMS Word Lists, but no interference task

Paired Recall See CMS Word Pairs

Digits Backward Similar to WISC-III/WJ-III versions; more demands on attention, working memory, executive functions

Nonverbal Memory Index

Facial Memory See CMS Faces; good ventral stream measure

Visual Selective Reminding Visual analogue to word selective reminding, with dots; dorsal stream, visual–motor coordination, praxis without visual discrimination

Abstract Visual Memory Visual discrimination of abstract symbols, recognition memory

Visual–Sequential Memory Visual discrimination of abstract symbols, sequencing, praxis

Memory for Location See CMS Dot Locations; good dorsal stream measure

Manual Imitation Short-term visual–sequential memory, praxis


and adequate technical characteristics. However, some have questioned the construct validity and structure of the test (for a review, see Spreen & Strauss, 1998).

WISC-IV Integrated/WISC-III Processing Instrument

We conclude this section with the unique WISC-IV Integrated and its predecessor, the WISC-III Processing Instrument (WISC-III PI; Kaplan, Fein, Kramer, Delis, & Morris, 1999). These instruments are unique because they help examiners test the limits and derive both qualitative and quantitative data for interpretive purposes. Designed to objectify many of the qualitative neuropsychological interpretation methods posited by Kaplan and colleagues in their Boston process approach (see Chapter 3), these measures are easily incorporated into your assessments, especially if you use the WISC-III or WISC-IV as your initial intellectual assessment tool. Because the WISC-IV and WISC-III subtests are factorially complex, tapping several cognitive processes, further evaluation is often needed to pinpoint individual strengths and weaknesses. Designed primarily for children aged 8–16, the WISC-IV Integrated and WISC-III PI have additional measures and procedures for hypothesis testing, so it may be helpful to administer only the sections that are relevant to the areas of concern (Sattler, 2001). Some procedures are administered during the WISC-IV or WISC-III, while others are administered immediately following the assessment. Several of the WISC-IV Integrated/WISC-III PI procedures are designed to provide a more comprehensive way to look at scoring WISC-IV/WISC-III responses, while several other stand-alone subtests can aid in CHT. Although some reliabilities are low and more validity information would be helpful, the WISC-IV Integrated and WISC-III PI have satisfactory concurrent validity, and they are unique instruments for obtaining qualitative and quantitative information about a child's cognitive functioning (Sattler, 2001). An overview of the WISC-IV Integrated/WISC-III PI measures and procedures we prefer, and the constructs purportedly measured by each, is presented in Table 4.9.

Supplemental Neuropsychological Measures for Hypothesis Testing

Table 4.10 presents a number of other neuropsychological measures we have found useful in CHT. Although some are specifically for use with children, others listed in this table have a long history of use in neuropsychological assessment of adults, and most have been adequately extended downward for use with children. These instruments measure a variety of cognitive or neuropsychological constructs, and many have been found to be sensitive to brain functions and dysfunctions. They can be used to test initial hypotheses or validate hypotheses derived from previously discussed measures. Some measures, such as the Rey–Osterrieth Complex Figure (a visual–spatial–graphomotor task) and the California Verbal Learning Test (a language task), could be listed under other table subheadings. However, we have put the measures in the domains that are most likely to serve our CHT purposes.

BEHAVIORAL NEUROPSYCHOLOGY AND PROBLEM-SOLVING CONSULTATION

Utilizing Assessment and Consultation Skills

Now that we have reviewed the assessment part of our model, let's integrate it with consultation technology. Notice the heading above. Isn't behavioral neuropsychology an oxymoron? No, because we believe that these two technologies should become one, not be seen as antithetical.




TABLE 4.9. Characteristics of WISC-III Process Instrument (WISC-III PI) Subtests and Procedures

Verbal scale/Verbal Comprehension and Working Memory Indices

1. Information Multiple Choice; 2. Vocabulary Multiple Choice (WISC-IV Integrated also includes Similarities and Comprehension Multiple Choice)
  Long-term memory retrieval of prior learning (1) and word knowledge (2); compares free-recall and recognition memory

Picture Vocabulary
  Taps receptive word knowledge for comparison with expressive word knowledge in #2 above

1. Arithmetic Addendum; 2. Written Arithmetic
  1. Mental problem solving of items read simultaneously with examiner; paper/pencil for failed items; reduces attention/executive/working memory demands, and eliminates auditory processing requirements
  2. Presents equations on paper; helps determine math skills in absence of Arithmetic processing demands

Sentence Arrangement (WISC-III PI)
  Verbal analogue to WISC-III Picture Arrangement; semantic/grammatical knowledge and sequencing, but not temporal relationships

Digit Span Forward/Backward
  Separates rote auditory memory (Forward) from attention, working memory, and executive functions (Backward)

1. Letter Span Rhyming and Non-Rhyming; 2. Letter Number Sequencing—Embedded Words
  1. Letters of sequence rhyme or do not rhyme, with the former resulting in phonological/auditory processing demands, reducing rote aspect of encoding and retrieval
  2. Letters form words, which helps encoding, but working memory still relevant; may be more difficult breaking known word into alphabetical order

Performance scale/Perceptual Reasoning and Processing Speed Indices

1. Block Design PI; 2. Block Design Multiple Choice
  1. Part A (Unstructured): six additional designs; Part B (Structured): failed Part A designs; helps determine configuration (right-hemisphere) vs. orientation (left-hemisphere) errors
  2. Visual discrimination, spatial perception; removes visual–motor integration and processing speed demands

1. Visual Span Forward/Backward; 2. Spatial Span Forward/Backward
  Visual–spatial analogues to Digit Span Forward/Backward; with Digit Span, can compare auditory with visual sensory and working memory, sequencing, mental flexibility, and ability to shift cognitive sets
  1. Visual attention, numeric memory, and verbal response
  2. Spatial–holistic or visual–sequential memory, praxis

1. Coding Incidental Learning Recall; 2. Coding–Symbol Copy
  1. Paired-associate symbol recall, free recall, paired-associate digit recall (visual memory and graphomotor reproduction of symbols and numbers, retrieval of number–symbol associations)
  2. Visual–motor integration, graphomotor skills, and processing speed

Symbol Search
  Child marks matching symbol in array or the "no" box; ensures "guessing" is not occurring; better measure of discrimination and sustained attention

Elithorn Mazes
  Maze-like task; assesses executive functions such as planning, organization, monitoring, working memory, and inhibition better than WISC-III Mazes; still requires graphomotor skills



TABLE 4.10. Supplemental Measures for Hypothesis Testing

Attention/memory/executive function

Children's Category Test (Boll, 1993)
  See Halstead–Reitan Category Test (Table 4.2)

Wisconsin Card Sorting Test (Heaton, Chelune, Talley, Kay, & Curtiss, 1993)
  Executive functions, problem solving, set maintenance, goal-oriented behavior, inhibition, ability to benefit from feedback, mental flexibility, perseveration

Tower of London (Shallice, 1982)
  See NEPSY Tower (Table 4.7)

Stroop Color–Word Test (Golden, 1978)
  See CAS Expressive Attention (Table 4.4)

Rey–Osterrieth Complex Figure (Meyers & Meyers, 1995)
  Visual–motor integration, constructional skills, graphomotor skills, visual memory, planning, organization, problem solving

Conners Continuous Performance Test II (CPT; Conners & MHS Staff, 2000)
  Computerized measure of sustained attention, impulse control, reaction time, persistence, response variability, perseveration, visual discrimination

Gordon Diagnostic System (Gordon, 1991)
  Similar to Conners test (see above) for vigilance task; delay task includes problem solving, learning temporal relationships, impulse control, self-monitoring, ability to benefit from feedback

California Verbal Learning Test—Children's Version (Delis, Kramer, Kaplan, & Ober, 1994)
  Verbal learning, long-term memory encoding and retrieval, susceptibility to interference

Comprehensive Trail-Making Test (CTMT; Reynolds, 2002)
  Attention, concentration, resistance to distraction, cognitive flexibility/set shifting

Behavior Rating Inventory of Executive Function (BRIEF; Gioia, Isquith, Guy, & Kenworthy, 2000)
  Parent and teacher rating scales of behavioral regulation, metacognition; includes clinical scales assessing inhibition, cognitive shift, emotional control, task initiation, working memory, planning, organization of materials, and self-monitoring; includes validity scales assessing inconsistent responding and negativity

Sensory–motor/nonverbal skills

Developmental Test of Visual–Motor Integration (Beery, 1997)
  Visual-perceptual skills, fine motor skills, visual–motor integration

Grooved Pegboard (Kløve, 1963)
  Complex visual–motor–tactile integration, psychomotor speed (compare to simple sensory–motor integration)

Judgment of Line Orientation (Benton & Tranel, 1993)
  See NEPSY Arrows (Table 4.7)

Language measures

Oral and Written Language Scales (Carrow-Woolfolk, 1996)
  Listening comprehension, oral expression, written expression; not limited to single-word responses, as the PPVT-III and EVT (see below) are

Comprehensive Assessment of Spoken Language (Carrow-Woolfolk, 1999)
  Language processing in comprehension, expression, and retrieval in these categories: lexical/semantic, syntactic, supralinguistic, pragmatic; the supralinguistic and pragmatic categories show promise in the assessment of right-hemisphere language skills

(continued)


Consultation is often described as something a school psychologist will do before a standardized assessment, or instead of a standardized assessment. However, data collection is important in consultation too, and the fact that you are doing standardized assessments doesn't mean you can't do problem-solving consultation. All we are suggesting is that these two functions of school psychologists can be combined to make both stronger. You can bring assessment data into the consultation data-gathering phase when this is appropriate, linking interventions to the child's strengths and needs. And instead of being the mysterious "WISC jockey" who borrows a child for a couple of hours and then produces a useless report, you can ensure that the assessment you do is linked to the teacher's concerns and the child's performance in the classroom. The CHT emphasis on ecological validity and treatment validity is what sets our model apart from other test interpretation models. Most referrals for consultation concern academic problems, and most of those academic problems are reading difficulties (Bramlett, Murphy, Johnson, Wallingsford, & Hall, 2002). Although general consultation on reading instruction may be helpful, combining this knowledge with information about the multiple determinants of the child's problem can have important effects on the intervention you and the teacher choose, and on the success the child experiences as a result of your efforts.

Consultation is intended to be collaboration between equals, but the fact that the consultant is there to help the consultee solve a problem has the potential to make the power relationship unequal. Consultants tend to make requests at a high rate, and consultees are generally likely to respond by agreeing with these requests (Erchul & Chewning, 1990). It seems that many consultees agree with consultants during meetings, but don't really feel ownership of the interventions developed during consultation, because many of these interventions are not fully implemented (Wickstrom, Jones, LaFleur, & Witt, 1998). We believe that the power issues within the consultative relationship must be acknowledged and dealt with directly. Both school psychologists and teachers feel that expertise and informational power are essential in making changes with teachers (Erchul, Raven, & Whichard, 2001). You are using your expertise and knowledge to help solve a problem, influence a teacher to make changes, and support and develop the teacher's skills (Erchul & Martens, 2002). You can be directive and informative, such as telling a teacher about intervention research, without being coercive (Gutkin, 1999).

TABLE 4.10. (continued)

Subtest: Constructs purportedly tapped

Language measures (continued)

Clinical Evaluation of Language Fundamentals—Fourth Edition (CELF-4; Semel, Wiig, & Secord, 2003): Assesses receptive and expressive language with the core subtests, but also allows assessment of language structure, language content, and memory; includes standardized observations in the classroom and assessment of pragmatic language skills, in addition to individual assessment

Test of Language Development—Third Edition (TOLD-3, Primary and Intermediate; Newcomer & Hammill, 1997): Primary version assesses phonology, semantics, and syntax; Intermediate version assesses semantics and syntax

Receptive auditory/verbal skills

Wepman Auditory Discrimination Test—Second Edition (Wepman & Reynolds, 1987): Auditory attention, phonemic awareness, phonemic segmentation, phoneme position (primary/medial/recent)

Peabody Picture Vocabulary Test—Third Edition (PPVT-III; Dunn & Dunn, 1997): Receptive vocabulary (visual scanning/impulse control); conormed with EVT (see below)

Token Test for Children (DiSimoni, 1978): See NEPSY Comprehension of Instructions (Table 4.7)

Expressive auditory/verbal skills

Controlled Oral Word Association Test (Spreen & Benton, 1977): See NEPSY Verbal Fluency (Table 4.7)

Boston Naming Test (Goodglass & Kaplan, 1987): Expressive vocabulary, free-recall retrieval from long-term memory versus cued-recall retrieval (semantic/phonemic)

Expressive Vocabulary Test (EVT; Williams, 1997): Expressive vocabulary (picture naming); conormed with PPVT-III (see above)

Consultation begins with the premise that the consultant works with the consultee (usually the classroom teacher) to solve a client's (the teacher's student's) problem. Although the two professionals are presumed to be equals, working together to help the child, it is also assumed that both professionals have specific expertise to bring to bear on the problem. In our view, your knowledge of neuropsychological and cognitive functions, neuropsychological assessment, the academic and behavioral intervention literature, and intervention-monitoring methodology should be the core of expertise that you as the consultant bring to the relationship. The teacher's knowledge of the student's classroom performance, awareness of effective and ineffective teaching techniques for this child, and professional expertise as a teacher form the core of his or her expertise as the consultee. Fully acknowledging the expertise of the consultee is one part of building rapport, but this knowledge is also necessary if an appropriate problem solution is to be found. An intervention plan that takes into account what resources are available and what interventions the teacher is already trying in the natural environment should have greater applicability and effectiveness (Riley-Tillman & Chafouleas, 2002). The following problem-solving consultation model is a summary of models presented by Erchul and Martens (2002) and Kratochwill, Elliott, and Callan-Stoiber (2002), combined with our CHT model.

Stages of Problem-Solving Consultation

Problem Identification

During the initial interview, the consultant (you) and the consultee identify a target behavior for intervention. The behavior must be defined in an observable, measurable way. In addition, information is needed about how often and when the behavior occurs, and a data collection method should be devised. Baseline data should begin to be collected. Problem identification in CHT is somewhat more complex than in other problem-solving models, as it includes data collected from prereferral interventions, permanent products, observations, interview, and preliminary assessment results that you need to check out with the teacher to ensure that your findings have ecological validity. This stage covers the initial theory and hypothesis steps in the CHT model (refer back to Figure 4.1).
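One way to make the operational definition and the data collection plan concrete is a minimal event-recording log. This is only an illustrative sketch; the class, its names, and the tallies are ours, not from the text:

```python
from dataclasses import dataclass, field

@dataclass
class BehaviorLog:
    """Event-recording log for one operationally defined target behavior."""
    behavior: str                                # observable, measurable definition
    counts: list = field(default_factory=list)   # one tally per observation session

    def record_session(self, count: int) -> None:
        """Add the tally from one observation session."""
        self.counts.append(count)

    def mean_rate(self) -> float:
        """Average occurrences per session (e.g., the baseline level)."""
        return sum(self.counts) / len(self.counts)

# Hypothetical baseline week: tallies of call-outs per day
log = BehaviorLog("calls out an answer without raising hand")
for day_count in [18, 22, 20, 19, 21]:
    log.record_session(day_count)
print(log.mean_rate())  # baseline mean -> 20.0
```

The point of the sketch is that the target behavior is stated as an observable definition and every session produces a number, so the baseline level falls out automatically.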

Problem Analysis

During the second interview, a more in-depth study of the target behavior is made, including a functional assessment. An excellent resource for conducting a functional assessment is a handbook written by O'Neill and colleagues (1997) that includes reproducible forms. This assessment should include a review of the baseline data that have already been collected, and an interview with the teacher to identify possible causes, establish events, and determine consequences of the behavior. Most functional assessments focus on obvious causes for the behavior, such as seeking attention or escaping from a task. The CHT process will provide information about the student's cognitive processing strengths and weaknesses to use in developing hypotheses, such as processing difficulties, memory problems, language deficits, or difficulty with unstructured situations. As part of the problem analysis, a review of interventions that have already been attempted and their effectiveness is also helpful. Although CHT includes functional analysis in this stage, it relies on much more information from numerous data sources. Hierarchical ordering of preferred target behaviors is undertaken at this stage, but the nature of CHT may require more than one intervention for a particular child (e.g., reading fluency intervention, speech–language therapy for expressive language, occupational therapy for graphomotor skills). This stage is covered in the initial data collection/analysis and data interpretation steps of CHT, which provide a more detailed theory as to why the child is having difficulty.

Plan Development/Implementation

After the problem analysis, the theory is used by the consultant and the consultee to develop an intervention plan together (i.e., an effective intervention hypothesis). This plan takes into account not only the student's characteristics and behavior, but also the classroom ecology and the teacher's style and preferences. Working together, they brainstorm all possible interventions, then choose the intervention that is likely to be effective and can be plausibly implemented. Goals will be set, participants will be determined, and data collection will be initiated and continued during implementation of the intervention.

Plan Evaluation/Recycling

After an agreed-upon period of time, the consultant and consultee meet to review the collected data and evaluate the intervention (i.e., data collection/analysis and data interpretation). There are numerous methods for evaluating interventions via within-subject experimental designs, several of which we will review later in the chapter. If the intervention is successful, either the intervention is extended, or it is discontinued if the target has been reached. If minor revisions appear necessary, the consultee makes them at this time, and they decide on an additional meeting to evaluate the revised intervention. If different or more intensive interventions appear necessary (i.e., a new theory or hypothesis), a new intervention can be attempted, or additional special education support services may be needed. This process is also important as the instructional supports begin to be removed and the child begins to function completely within his or her natural environment with natural consequences. The theory–hypothesis–data collection/analysis–data interpretation cycle continues until the problem appears to be under natural stimulus–consequence control. As you can see, the CHT model is not really about testing per se; it is about a way of practice that combines the best technologies of problem-solving consultation with comprehensive evaluations.

Practicing Behavioral Neuropsychology

Since we are suggesting that you combine neuropsychological assessment with behavioral methods, In-Depth 4.1 and Table 4.11 review the basics of behavioral interventions for those readers who may not recall the details. As part of the problem-solving model, you need to recognize that antecedent and consequent actions affect the child's learning and behavior, and that cognitive processes interact with these determinants. Having this understanding allows you to use what cognitive psychologists have called stimulus–organism–response (S-O-R) psychology, in which stimulus and response are still important, but the organismic variables (i.e., child neuropsychological processes) help you determine what the best intervention is and how to carry it out. The behavior technologies become especially useful in designing the intervention, determining intervention efficacy, and managing contingencies.

IN-DEPTH 4.1. Review of Behavioral Psychology Principles

RESPONDENT CONDITIONING TECHNIQUES

Respondent conditioning is a method of eliciting behavior by manipulating a stimulus. An example of a conditioned stimulus is the teacher's turning on and off the light to cue a child's transition behavior. Behavioral examples might include anxiety about tests or speaking in class, or fear when the teacher raises his or her voice. Common interventions, including relaxation training and systematic desensitization, may be used to treat anxiety responses in students. However, more broadly conceived, variations in stimuli can lead to different behaviors (e.g., varying spacing or size of letters during reading, using simultaneous visual and auditory teacher instructions, using an adapted pencil for sensory problems for writing, tapping on a desk to cue on-task behavior, etc.). Modeling and discriminative stimuli designed to elicit operant behaviors, though not considered respondent techniques, can both be related to stimulus–response psychology.

OPERANT CONDITIONING TECHNIQUES

Operant conditioning is a method of affecting behavior by manipulating the consequences of that behavior. Behaviors that are followed by reinforcing consequences (either presentation of something positive or removal of something negative) will tend to recur. Behaviors that are followed by punishing consequences (either presentation of something negative or removal of something positive) will be less likely to recur, as indicated in Table 4.11. One of the best uses of operant technology is the "Premack principle," in which a less reinforcing behavior is reinforced by a more reinforcing one (e.g., providing computer time after a certain level of reading accuracy is obtained). Positive reinforcement can include natural consequences (these are preferable) or secondary ones (e.g., tokens, points). A good use of negative reinforcement is reducing the workload if a child demonstrates mastery on an assignment.

People are often confused about the difference between positive reinforcement (presenting something positive) and negative reinforcement (removing something negative). Why do children have tantrums? Not only because they are positively reinforced for having tantrums, but their parents are negatively reinforced as well—they get peace and quiet by giving in to the children. Most interventions in school should use positive reinforcers, and these can even be used to teach children not to do something, so (we hope) you don't have to use punishment. You identify an alternative behavior, preferably one that is incompatible with the negative behavior, and reinforce that behavior (i.e., differential reinforcement of other/alternative/incompatible behavior). For example, Taniqua is always running in the halls. Instead of punishing her for running, reinforce her for walking. In some cases, a child may not be able to do the target behavior. In these situations, reinforcing successive approximations of target behaviors, or "shaping," is what we have to do with academic and behavioral deficits.

Is there a place for punishment in the schools? If a child is always being punished at school, it becomes aversive, something to avoid; it may even eventually lead him or her to drop out. A particular teacher who, or a subject that, is punishing may also be seen as aversive. There is another problem with punishment, though: The child isn't actually learning a replacement behavior. We prefer to use school interventions to teach children how to do something, rather than just to suppress negative behavior. If you must use punishment, we recommend that you use negative punishment that involves taking away something positive (either time out from reinforcement or response cost) combined with differential reinforcement. For example, if Kyle is aggressive on the playground, you can use negative punishment by having him sit on the sidelines and miss 5 minutes of recess, but you must also use positive reinforcement when you see Kyle playing nicely.

As you will recall from training, the schedules of reinforcement influence how a skill will be learned and maintained. Continuous reinforcement is good for skill acquisition, but a skill acquired this way will also be extinguished quickly, so intermittent reinforcement on a variable-ratio or variable-interval schedule is more appropriate for maintenance. Think about slot machines; infrequent payoffs can maintain betting behavior for a long time! The same thing can happen in a classroom. If a teacher slips and accidentally reinforces an unwanted behavior, that behavior will be maintained longer.

Developing and Evaluating Interventions

After cycling through the first four steps of CHT, and refining a theory as to what will help the child, you and the consultee need to use behavioral strategies combined with specific instructional methods to help the child learn—through remediation, accommodations, or both. In Chapters 5–7, we offer a number of interventions for academic skills problems. Some problems transcend academic domain boundaries, and the comorbidity among academic learning disorders is quite high. To help you understand the relationship between neuropsychological functioning and academic domains, we have provided a worksheet in Appendix 4.5. This worksheet may be useful in your examination of the academic issues associated with a child's neuropsychological functioning. This ensures that when you identify the cognitive pattern of performance, you are relating it to the academic pattern of performance seen on testing and in the classroom, which should help guide intervention planning and implementation. Taking what you know about the child's current level and pattern of performance, academic interventions, problem-solving consultation, and behavioral technologies, you can design, implement, and evaluate an intervention for him or her. In CHT, we recommend using single-subject (within-subject or single-case) research designs to evaluate the effectiveness of interventions. We believe that practitioners should collect child performance data on a regular basis to ensure that interventions are effective (Fuchs & Fuchs, 1986; Lindsley, 1991; Skinner, 1966; Ysseldyke, 2001). We recommend that similar models be used to evaluate any intervention, whether it is behavioral, academic, cognitive, or socioemotional. In this section, we review the most useful designs for evaluating school-based interventions, illustrating each intervention model with hypothetical examples.

All of the research designs we discuss require two basic concepts. One is that you must have some way of measuring the outcome you want. Behaviorists generally call this "taking data," but you can think of it as "evaluating progress" or "checking up on the intervention." You can't simply say, "Yep, Jimmy's doing better"; you must have some way to show that the child is doing better. The outcome measure you choose depends on the target behavior and the goal of the intervention. You can use information that the teacher already collects (i.e., authentic data—homework completed, spelling test score, office referrals or detentions, absences, etc.). You can collect information as part of the intervention itself (e.g., math worksheets, curriculum-based measurement [CBM] probes of reading fluency, flashcards placed in correct and incorrect piles). You can also develop a data collection plan that interferes very little with the teacher's routine (e.g., child self-monitoring, using a wrist counter, completing end-of-the-period or end-of-the-day checklists). Finally, you can use systematic observation to observe the target behavior directly, using event, duration, latency, partial-interval, or whole-interval recording. With observational data collection, it is important to use a randomly selected peer at baseline to establish a discrepancy with the target child. Table 4.12 presents some suggestions for outcome measures that can be useful in the classroom.

TABLE 4.11. Reinforcement and Punishment

                       Provide                  Remove
Positive consequence   Positive reinforcement   Negative punishment (response cost)
Negative consequence   Positive punishment      Negative reinforcement

Note. Reinforcement (the positive and negative reinforcement cells) increases the preceding behavior; punishment (the other two cells) decreases the behavior.
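The taxonomy in Table 4.11 reduces to two binary distinctions: whether a stimulus is provided or removed, and whether that stimulus is positive or negative. As a toy illustration (the function name and string labels are ours), it can be written as a lookup:

```python
def classify_consequence(action: str, valence: str) -> str:
    """Classify an operant contingency, following the 2x2 layout of Table 4.11.

    action:  'provide' or 'remove' (what happens after the behavior)
    valence: 'positive' or 'negative' (the kind of stimulus involved)
    """
    table = {
        ("provide", "positive"): "positive reinforcement",                # increases behavior
        ("remove",  "positive"): "negative punishment (response cost)",   # decreases behavior
        ("provide", "negative"): "positive punishment",                   # decreases behavior
        ("remove",  "negative"): "negative reinforcement",                # increases behavior
    }
    return table[(action, valence)]

print(classify_consequence("remove", "positive"))  # -> negative punishment (response cost)
print(classify_consequence("remove", "negative"))  # -> negative reinforcement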

The second basic concept is that you must have a baseline measurement, in addition to measuring the behavior during the intervention. Teachers are generally used to just measuring the outcome of teaching, such as giving a test at the end of a chapter. But to evaluate how effective an intervention is, you have to measure the child's performance at the start (without the interventions), and then keep measuring as you implement the intervention to see how the child's performance changes. Without having a baseline for comparisons, you won't know whether the child's improvement is really due to the intervention. In describing some of the intervention models below, we use the letter A to refer to the baseline condition. The other letters (i.e., B, C) represent whatever interventions you implement.
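Sketched with invented numbers, the A/B comparison amounts to summarizing the level of behavior in each phase; without the A phase, the intervention-phase level would have no comparison point:

```python
# Hypothetical daily counts of a target behavior (lower is better);
# A = baseline, B = intervention, following the text's lettering.
phases = {
    "A (baseline)":     [12, 14, 13, 15, 12],
    "B (intervention)": [10, 8, 7, 6, 5],
}

for label, counts in phases.items():
    print(f"{label}: mean = {sum(counts) / len(counts):.1f}")
# A (baseline): mean = 13.2
# B (intervention): mean = 7.2
```

A mean of 7.2 during intervention means something only next to the baseline mean of 13.2; the same number with no baseline would tell you nothing about whether the child improved.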


TABLE 4.12. Examples of Outcome Measures for School-Based Interventions

Outcome area: Possible measures

Several behaviors: Pre- and postratings on a brief behavior rating form. Daily report card with ratings for the day. Systematic observation using event, duration, latency, partial-interval, or whole-interval recording.

Negative classroom behavior (e.g., calling out, getting out of seat, yelling, aggression): Measurement of rate via tally marks, golf wrist counter, or pennies/paper clips transferred from pockets. Student self-monitoring of behavior on sheet or card.

Serious negative behavior: Count of office referrals or detentions.

Positive classroom behavior (e.g., raising hand, giving correct answers): Measurement of rate or student self-monitoring as above. Observational data as above.

Attention, on-task behavior: Periodic classroom observations. Child self-monitoring of skills.

Academic work completion: Worksheets or other permanent products. Measurement of accuracy, rate, or both.

Homework completion: Completed homework. Daily report card signed by parent and/or teacher.

Academic skills accuracy: Correct–incorrect flashcards kept in separate piles by student or peer. Worksheets graded in percentages correct and recorded in grade book.

Academic skills fluency (speed and accuracy): CBM probes (Shinn, 1989).

Academic skills comprehension: Pre- and posttest with alternate forms.
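Interval-based observation data such as the partial-interval recording listed above are usually summarized as the percentage of intervals in which the behavior occurred, with a peer observed at baseline for comparison. A minimal sketch with invented interval data:

```python
def percent_intervals(observations: list) -> float:
    """Percentage of observation intervals in which the target behavior occurred.

    Each entry is True if the behavior occurred at any point in that interval
    (partial-interval recording), False otherwise.
    """
    return 100.0 * sum(observations) / len(observations)

# Hypothetical data for ten observation intervals in one session
target_child = [True, True, False, True, True, True, False, True, True, True]
peer         = [False, True, False, False, True, False, False, False, True, False]

print(percent_intervals(target_child))  # -> 80.0
print(percent_intervals(peer))          # -> 30.0
```

The gap between the target child (80% of intervals) and a randomly selected peer (30%) is the discrepancy that justifies intervention, and it can be recomputed from each later session to monitor progress.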


ABAB/ABAC Designs

The ABAB design is used when you have picked one intervention and you want to see if it works better than the baseline condition (i.e., better than what the teacher would normally do). It is also sometimes called a "reversal design," because you do the intervention, then reverse to baseline for a short while, then do the intervention again. It's a good way to show that the intervention is really what's affecting the child's performance, but it doesn't work well for a situation where your intervention actually teaches the child something new. For example, if you teach a child to break a word into syllables to sound it out, you can't "unteach" that for the reversal phase. It also is not appropriate to do a reversal phase if the behavior you are trying to reduce is harmful to the child or others. For instance, if you are using time out to reduce hitting, it would be unethical to do a reversal phase. As a result, this design is best for situations where you want to change the rate at which a child does something that he or she already knows how to do. For an example of an ABAB design, please see Case Study 4.1 and Figure 4.2.

The ABAC design allows you to compare two different interventions to see whether they are different from the baseline, and to see which is better at changing the child's behavior. Similar to the ABAB design, you first collect baseline data, then implement the first intervention (B), then reverse to baseline, and finally implement the second intervention (C). For instance, after taking baseline data on multistep math addition item accuracy (A), you can determine whether a child is more accurate if he or she draws lines between columns (B), or follows a step-by-step algorithm sheet on how to complete the problems (C). Case Study 4.2 and Figure 4.3 provide an example of an ABAC design.

CASE STUDY 4.1. Jared's Impulsive Calling Out

An 8-year-old boy diagnosed with ADHD, Jared, was described by his teacher as extremely impulsive. The behavior that she identified as most problematic was Jared's calling out in class. Systematic observation data suggested that the teacher typically accepted Jared's answer when he called out, but then she often reminded him to raise his hand the next time. After discussing the baseline data with the teacher, we decided that she would use a wrist counter to count whenever Jared called out during whole-group instruction.

Figure 4.2 presents the results for the ABAB intervention designed to reduce his inappropriate call-out behaviors. During the first week, the teacher collected the baseline data. She counted Jared's call-outs without doing anything different about them, and this information was charted. The next week, the teacher continued to count Jared's call-outs, but she ignored him immediately after each call-out, practicing negative punishment. She only acknowledged Jared if he raised his hand first and did not call out, which was differential reinforcement. Notice that at first, Jared's call-outs increased. This is called an extinction burst—a very common finding when a previously rewarded activity is being ignored. After that, Jared's call-outs began to decline. The teacher then returned to baseline for a short time (accepting call-out answers and reminding him to raise his hand), and the call-outs became frequent again. After a few days of this, the intervention was reintroduced. As you look at Figure 4.2, you should notice a few things. Each phase is separated by lines and labeled, so the baseline and intervention phases are clear. Within the baseline phase, Jared was calling out very frequently; the average was about 20 times per day. During the first intervention phase, his call-outs increased at first and then began to decline. As soon as the reversal to baseline took place, they increased again to about 20 times per day. During the second (and final) intervention phase, call-outs declined to an average of only 8 times per day. You can clearly see that the intervention was what was affecting Jared's behavior (this is called establishing functional control), because every time the intervention was implemented, he changed his behavior.
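With numbers like Jared's (the daily counts below are invented to match the case description, including the extinction burst), demonstrating functional control in an ABAB design comes down to showing that the level drops in each intervention phase and recovers in the return to baseline:

```python
# Hypothetical daily call-out counts for each phase of an ABAB design
phases = [
    ("A1 (baseline)",     [20, 21, 19, 20, 20]),
    ("B1 (intervention)", [24, 18, 14, 11, 9]),   # 24 reflects the initial extinction burst
    ("A2 (reversal)",     [17, 20, 21, 20]),
    ("B2 (intervention)", [12, 9, 7, 6, 6]),
]

for label, counts in phases:
    print(f"{label}: mean = {sum(counts) / len(counts):.1f}")
# A1 (baseline): mean = 20.0
# B1 (intervention): mean = 15.2
# A2 (reversal): mean = 19.5
# B2 (intervention): mean = 8.0
```

The behavior returns toward roughly 20 per day whenever the intervention is withdrawn and falls when it is reinstated; that alternating pattern across phase means, not any single drop, is what establishes functional control.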

Multiple-Baseline Design

A multiple-baseline design is useful when you expect the child's learning to be cumulative, so you don't want to reverse success. This design can teach children to display target behaviors across settings, people, or behaviors. For instance, if staying on task is the target behavior, you first seek on-task behavior in one class, then another, and so forth. In this design, you collect baseline data in two or more subjects or at two or more times during the day. Then you start the intervention in one subject or at one time during the day, while continuing to take baseline data at the other time(s). Later, you introduce the intervention in the other subject or at the other time. If the child's performance changes in each setting only when the intervention is in place, you will know that the intervention is responsible for the change. An example of this design can be found in Case Study 4.3 and Figure 4.4.
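The staggered-start logic can be sketched with invented data for two school subjects: the intervention begins on a different day in each, and the check is that the level shifts only after each start point:

```python
# Hypothetical percent-correct by day; the intervention start day differs per subject,
# so the second subject's baseline continues while the first is already treated.
data = {
    "spelling": {"scores": [40, 45, 42, 80, 85, 88, 90, 87], "start": 3},
    "math":     {"scores": [38, 41, 40, 39, 42, 85, 88, 90], "start": 5},
}

for subject, d in data.items():
    pre = d["scores"][:d["start"]]    # baseline days for this subject
    post = d["scores"][d["start"]:]   # days after the intervention began here
    print(f"{subject}: baseline mean = {sum(pre) / len(pre):.1f}, "
          f"intervention mean = {sum(post) / len(post):.1f}")
```

The key feature is that math stays at its baseline level (high 30s to low 40s) through day 5, while spelling has already jumped; because each series changes only when its own intervention begins, the change can be attributed to the intervention rather than to time or maturation.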

Pre- and Posttest Design

A pre- and posttest design is useful when you can't collect data every day, but you want to measure the effectiveness of an intervention via direct observation, test, or rating scale. Although it is more difficult to establish functional control, it is an easier method of data collection and is more likely to be acceptable to teachers. For this design, it is important to choose a test (preferably one with alternate forms) or a rating scale that can be given repeatedly with minimal practice effects. The


FIGURE 4.2. Jared’s calling out.



CASE STUDY 4.2. Increasing Marcie’s Reading Speed

Marcie was a 9-year-old girl who was pleasant, cooperative, and hard-working. However, she was a slow,choppy reader, and her teacher sought support in helping Marcie to read more fluently. Marcie was in asmall reading group with three other children, and the teacher worked individually with Marcie for 15minutes every day, but she was still struggling. The teacher now had an aide in class and wanted toknow what the aide could do with Marcie. Based on the CHT evaluation information, I found thatMarcie had good phonemic awareness skills, and her phonemic segmentation and blending were notproblems, but her word finding and rapid naming skills were quite poor. I met with the teacher, and wethought of two possible interventions for Marcie: one where the aide would use flashcards to improveMarcie’s speed at identifying words, and one where the aide would read orally with Marcie to increasethe fluency of her reading. We decided that CBM of reading fluency, using daily 1-minute probes, wouldbe a good outcome measure. As can be seen in Figure 4.3, her fluency was quite low at baseline (A).During the first intervention phase (B), the aide pronounced each word for Marcie; Marcie repeated it;Marcie and the aide then practiced with the flashcards for about 10 minutes; and they finished with an-other 1-minute CBM probe. After this intervention, the teacher returned Marcie to the baseline condi-tion (A), but the aide continued to take CBM probes during this time. Finally, the second interventionphase was introduced (C). This intervention involved the aide’s reading the passage to Marcie one timewith expression and fluency, and then their reading it together in tandem for about 10 minutes. Again,the sessions ended with another 1-minute CBM probe. 
As you can see from looking at Marcie's chart, the flashcard drill improved her fluency over baseline, but the tandem reading was much more effective. This is not to say that tandem reading is a better intervention for all children; it just appeared to be better for Marcie.
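The phase-by-phase comparison in an A-B-A-C design like Marcie's can be summarized numerically by averaging the CBM probes within each phase. A minimal sketch, using hypothetical words-correct-per-minute values rather than Marcie's actual probes:

```python
# Compare mean words-correct-per-minute (WCPM) across phases of an
# A-B-A-C single-case design. All probe values are hypothetical.
from statistics import mean

phases = {
    "A1 (baseline)":           [18, 20, 19, 21],
    "B (flashcard drill)":     [24, 26, 27, 28],
    "A2 (return to baseline)": [22, 23, 22],
    "C (tandem reading)":      [33, 36, 38, 41],
}

for label, probes in phases.items():
    print(f"{label}: mean = {mean(probes):.1f} WCPM")
```

With data like these, the B phase mean sits modestly above baseline while the C phase mean is clearly higher, which is the visual pattern described in Marcie's chart.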

FIGURE 4.3. Marcie’s reading fluency.


CASE STUDY 4.3. Ellen’s Accuracy Problem

Ellen was a 7-year-old girl who presented as a fast, careless worker. She reportedly completed her seatwork as fast as possible, without worrying about the accuracy of her responses. I (Fiorello) met with Ellen's teacher, and we decided to try to increase Ellen's accuracy by using rewards for correct responding. The teacher used Ellen's number correct on her seatwork papers to measure the outcome. She made sure that there were exactly 10 questions on each worksheet in math and spelling, and noted in her grade book the number correct for each day. For the first week, the teacher collected baseline data in both subjects for each day, and these data were charted on a multiple-baseline graph (see Figure 4.4). After collecting a week of baseline data, the teacher explained to Ellen that she could earn 1 point for each spelling word she copied correctly during seatwork, and the points could be traded for free time at the end of the morning classes. At the same time, Ellen's math work was kept in the baseline condition, with no rewards offered. As you can see from Ellen's chart, her spelling accuracy improved when rewards were added, but her math remained inaccurate. The next Monday, the teacher explained that the point system would apply to math as well, and as you can see from the figure, Ellen's accuracy in math improved thereafter.

FIGURE 4.4. Ellen’s accuracy.


pretest results become your baseline, and then you test again after implementation of the intervention to judge its effectiveness. Observations and brief rating scales can be used repeatedly if you choose to gather multiple data points during the intervention. Case Study 4.4 and Figure 4.5 provide an example of how to use a pre- and posttest design.

CBM Progress Monitoring

CBM is useful for evaluating the effectiveness of instructional interventions on reading, mathematics, and writing. A brief probe is completed for several days during baseline, and then repeated every 1–2 days following the intervention session. These data are plotted to gauge progress over time. An aimline shows the goal that has been set for the student. The beginning of the line is determined by the child's baseline performance or behavior; the end of the line is determined by where the child should be, compared to his or her peers, and how long it will take for the child to "catch up" once the intervention is in place. Unfortunately, there are no explicit guidelines for "how long it should take." For instance, if the child is 2 years behind, saying that he or she will make it up in a month is unrealistic. Conversely, it is inappropriate to give a child too long to catch up. After you establish an aimline, a trendline is drawn, which shows the rate of improvement in the skill. If the trendline is below the aimline for several days, the intervention should be adjusted or changed, or possibly you have set too high a goal for the child. Case Study 4.5 and Figure 4.6 highlight the use of CBM progress monitoring.
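The aimline and trendline mechanics described above can be sketched in a few lines of code. The baseline level, goal, time horizon, and probe values below are all hypothetical:

```python
# Sketch of CBM progress monitoring: an aimline rising linearly from the
# baseline level to the goal, and a least-squares trendline fit to the
# probes collected so far. All numbers are hypothetical.

def aimline(baseline, goal, weeks):
    """Expected score at each week, rising linearly from baseline to goal."""
    step = (goal - baseline) / weeks
    return [baseline + step * w for w in range(weeks + 1)]

def trend_slope(scores):
    """Ordinary least-squares slope: rate of improvement per probe."""
    n = len(scores)
    xs = range(n)
    x_bar = sum(xs) / n
    y_bar = sum(scores) / n
    num = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, scores))
    den = sum((x - x_bar) ** 2 for x in xs)
    return num / den

expected = aimline(baseline=20, goal=50, weeks=10)  # needs 3 WCPM/week
probes = [20, 21, 23, 22, 25, 27]                   # one probe per week
slope = trend_slope(probes)
needed = (50 - 20) / 10
if slope < needed:
    print(f"Trend ({slope:.2f}/week) is below the aimline rate "
          f"({needed:.2f}/week): adjust the intervention or the goal.")
```

Comparing the trendline's slope against the aimline's slope operationalizes the decision described in the text: a flatter trend means the intervention (or the goal) needs to change.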

Multiple-Intervention Design

Before we leave our section on behavioral neuropsychology and problem-solving consultation, it is important to recognize that not all intervention designs discussed will fit nicely with the needs of a child, teacher, or parent. Certainly you want experimental control and good outcome data, but beyond that, you have to be sensitive to the needs of all parties, or the intervention effort will not be effective. Interventions that are easy are preferred, but they may not be effective. Others may be labor-intensive and have good experimental control, but because they are so cumbersome, treatment adherence or integrity is limited. This is where you, as the consultant, must work with the consultee to take into account the nature of the problem, the environmental determinants of the problem, and the resources available to effect behavior change. Case Study 4.6 and Figure 4.7 provide an example of alternative treatments for a child who does not respond easily to interventions.


CASE STUDY 4.4. Herman’s Auditory Processing

Herman was a boy with a common problem: a history of frequent ear infections (otitis media) and poor auditory processing. He was having difficulty learning the letter sounds in his kindergarten class. His teacher referred him to the reading specialist, who arranged for Herman to complete a 6-week computer-based auditory processing and phonics program. Before Herman began the program, I (Fiorello) was called in to develop a method for monitoring the efficacy of the program. We agreed that I would administer the CTOPP and CBM of the alphabet sounds and would chart his scores, as depicted in Figure 4.5. After 6 weeks, I administered both tests again. Since the CTOPP has age-based standard scores (SSs), you can see that Herman's auditory processing improved over the course of the program. In addition, charting his improvement in letter sound knowledge helped the teacher compare Herman to other children, to guide her expectations for his curricular progress.


FIGURE 4.5. Herman’s auditory processing and letter sound knowledge.

CASE STUDY 4.5. Beverly’s Limited Expressive Language

When I (Fiorello) was called in to consult with Beverly's teacher, Beverly was having considerable difficulty with expressive language, primarily because she spoke very little during conversations with her teacher and peers. CHT results revealed difficulty with word retrieval, oral fluency, and expressive syntax. Data collection with an audiotape recorder began, and Beverly's oral fluency at baseline was found to be only 23 words per minute on average (see Figure 4.6). Her teacher set a goal of 45 words per minute, and we decided that a peer tutoring program would be implemented. The teacher picked a child who was not only friendly with Beverly, but also talkative, social, caring, and supportive. Each time the two children would get together, they would discuss a topic of interest. To facilitate this process, the teacher brainstormed possible topics with them before the intervention. As you can see, the peer tutoring improved Beverly's oral fluency at first, but on Days 10, 11, and 12, Beverly's fluency scores fell below the aimline. When three data points fall below the aimline, a decision point is reached. This means that it is time either to adjust or change the intervention, or to readjust the aimline. In Beverly's case, this ensured that goals would be set at a level where they could realistically be attained, while still ensuring that Beverly was making appropriate progress. It was decided that Beverly's goal might have been a little ambitious; however, she was making progress in the program and was developing a good relationship with the peer.
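The decision rule applied to Beverly's chart — three consecutive data points below the aimline — can be expressed as a small check. The probe and aimline values here are illustrative, not Beverly's actual data:

```python
# Flag a decision point when `run` consecutive probes fall below the aimline.
def decision_point(probes, aimline, run=3):
    """Return the index of the probe that completes the run, or None."""
    below = 0
    for i, (score, expected) in enumerate(zip(probes, aimline)):
        below = below + 1 if score < expected else 0
        if below >= run:
            return i  # time to adjust the intervention or the aimline
    return None

# Illustrative data: fluency climbs at first, then stalls under the aimline.
aim = [23 + 2 * day for day in range(13)]  # 23 wpm rising toward 45+
probes = [23, 26, 29, 32, 34, 36, 37, 38, 40, 39, 40, 41, 42]
print(decision_point(probes, aim))  # → 11 (the third consecutive day below)
```

Encoding the rule this way makes the "adjust or readjust" decision explicit and consistent, rather than an eyeball judgment from the chart.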


FIGURE 4.6. Beverly’s CBM chart.

CASE STUDY 4.6. Coping with Gary’s OCD

Gary was a student diagnosed with OCD. His classroom teacher's main concern was Gary's incessant questioning about assignments during seatwork. Gary typically asked for clarification of the directions, and the meaning of individual items. The teacher wanted to decrease Gary's questioning and increase his on-task behavior. She agreed to count Gary's questions with a wrist counter during the seatwork period in her class. As can be seen in Figure 4.7, Gary's baseline average was a little over 10 questions per period. We decided to try a number of interventions, starting with the easiest to implement and gradually adding more intrusive ones. This called for a variation on the ABAC design, where the interventions were cumulative (it might be called an A-B-BC-BCD design). First, the teacher developed a checklist for completing seatwork, and she taught Gary to use it to answer his own questions. She then laminated it and let him check off each item for himself. During this intervention, Gary's questions decreased slightly, to an average of about eight per period. The next intervention added was a set of five tokens that Gary had to use to ask questions. He would turn in one token every time he asked a question; once his tokens were gone, further questions would not be answered. Gary's questions decreased again, eventually settling at five per period. At this point, the teacher added one more intervention: She provided Gary a reward—a choice of activity during the last 5 minutes of class—if he had one token left at the end of the period. This lowered Gary's questions to four immediately. If the teacher had felt that even fewer questions should be allowed (based on what was normally acceptable in class, perhaps one or two), she could have gradually increased the number of tokens necessary for a reward.
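The cumulative A-B-BC-BCD design can be summarized by phase averages. The daily counts below are hypothetical, chosen only to be consistent with the approximate phase means reported in the case narrative:

```python
# Phase summary for a cumulative single-case design (A-B-BC-BCD):
# each phase keeps the previous components and adds one more.
# Daily question counts are hypothetical values consistent with the
# approximate phase means described in the case study.
from statistics import mean

phases = [
    ("A (baseline)",                       [11, 10, 12, 10]),
    ("B (checklist)",                      [9, 8, 8, 7]),
    ("BC (checklist + tokens)",            [7, 6, 5, 5, 5]),
    ("BCD (checklist + tokens + reward)",  [4, 4, 4, 4]),
]

for label, counts in phases:
    print(f"{label}: mean = {mean(counts):.1f} questions/period")
```

Because each phase adds a component rather than withdrawing the previous one, the comparison of interest is each phase mean against the phase immediately before it.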


LINKING ASSESSMENT TO INTERVENTION: A CASE STUDY

Considerations and Caveats

Now that we have given you a good understanding of assessment practices and measures, brain–behavior relationships, and consultation and intervention technologies, the next step is to bridge the gap between these apparently disparate areas of psychology. We provide you with one more case study, and detailed information in Chapters 5–8, in an attempt to make assessment information meaningful for individualized interventions for children with unique assets and deficits. As noted previously in this book, this is a tall order; it is a path that some have chosen, but few have found success in their quest. You may be disappointed to find that we don't offer you diagnostic–prescriptive advice in the following chapters. We feel that this is where the early researchers on aptitude–treatment interactions went astray: Not all children learn the same way, even if they show similar neuropsychological profiles, so we don't oversimplify things by saying, "If you have this disorder, then do this intervention."

To paraphrase an old adage, some interventions work for some children some of the time, but no interventions work for all children all of the time. You may feel confident that you have a good understanding of a child's neuropsychological strengths and weaknesses, but if you don't have ecological and treatment validity, then your results are of questionable value. Even if you have a good handle on the problem and the findings have ecological validity, the intervention you and the teacher choose may be ineffective. Don't dismiss the original findings; rather, try to understand why the intervention you thought would be effective was not, and try to modify it or try another intervention. This recycling of interventions is necessary, whether you use a CHT approach or a regular behavioral consultation method. We provide you with assessment and intervention information about various learning and behavior disorder subtypes, but it is up to you to use CHT


FIGURE 4.7. Gary’s teacher questions.


with the technologies presented in this chapter to individualize interventions for the children you serve.

Cognitive Hypothesis Testing for Scott’s Motor Problem

Case Study 4.7 and Figure 4.8 present the completed CHT worksheet (see Appendix 4.3) for Scott, a student referred for "motor problems" in the classroom. We have purposely picked Scott's case because it highlights the use of CHT without the use of "neuropsychological" tests. We do this so that you can become familiar with the CHT procedure while using tests you already know. This also demonstrates that CHT and neuropsychological analysis of the data can occur with typical cognitive/intellectual measures. In Chapters 5–8, we will provide you with several reading, mathematics, written language, and emotional/behavior disorder case study examples that use CHT and the neuropsychological tests described earlier in the chapter.

As you can see from Scott’s case, the original “theory” about motor problems was not quiteright, as the deficit appeared to be related to visual–spatial dorsal stream functions, or poor per-ceptual feedback to the motor system. The process would have continued with this case had all re-sults come back negative. For instance, we may have wanted to check out left parietal somato-sensory functions, but Scott didn’t show differences in writing pressure. He could have also haddifficulty with integration of information across the midline or bimanual functions, suggestingproblems with the corpus callosum. We could have done additional neuropsychological tests tolook at these, but found enough testing and ecological validity evidence to support our hypothesis.

Although Case Study 4.7 and Figure 4.9 suggest that Scott's intervention was effective, it should be noted that Scott was receiving occupational therapy during this time, so the positive results could have been related to this intervention. Obviously, as time went on, both interventions may have had a positive and complementary effect. This is not a good empirical practice per se, as we don't want two interventions going on at the same time. However, our experience suggests that the experimental rigor required of articles published in, say, the Journal of Applied Behavior Analysis may not always be feasible in the field. The bottom line is that we need to help children, and if they get better and we have data that show it, we are better off as a result. Now that we have the methods to link assessment to intervention, the remainder of this book will focus on the neuropsychological aspects of specific academic and behavior problems experienced by the children we serve.



CASE STUDY 4.7. CHT for Scott’s Motor Problem

Scott, aged 9-9, had attention, social, and handwriting problems. The teacher referred him for "fine motor problems," because his work was always messy, and there were many erase marks and smudges on the work he turned in. His poor alignment of columns resulted in many math calculation errors on multistep problems. After prereferral strategies were unsuccessful at improving the quality of his work, he was referred for a comprehensive CHT evaluation. As can be seen in Figure 4.8, the initial assessment with the WISC-IV suggested strengths in auditory working memory, and three possible weaknesses: spatial visualization, visual–motor coordination, and/or visual memory. Having developed a theory as to what was difficult for Scott, I (Fiorello) needed to test my hypotheses one by one to see which ones were correct.

To examine these possible problems, I wanted to use untimed visual processing tasks that did not require motor output. I picked the WJ-III Spatial Relations and Picture Recognition subtests to look at spatial visualization and visual memory. Then I decided to choose a task measuring motor coordination and speed without significant visual processing to look at motor functioning. For this, I picked the motor portion of the Beery Developmental Test of Visual–Motor Integration (VMI). Based on the overall profile and results of these hypothesis-testing subtests, only the Spatial Relations subtest was impaired; this suggested that Scott's difficulty was more of a dorsal stream problem than a ventral stream or frontal motor problem.

However, these findings would be considered tentative until I checked to make sure that the results had ecological validity. The information from the teacher interview and ratings, classroom behavior observations, and work samples provided the necessary confirmation that Scott had difficulty with spatial processing and perceptual feedback to the motor system. At this stage, it is important to remember to check for possible alternative explanations to the hypothesis, in order to avoid confirmation bias. For Scott, you will notice that work samples showed problems with spatial organization on the page and poor column alignment in math. In addition, the teacher interview indicated that Scott had problems during recess and gym with respecting peers' personal space. As a result, these apparently disparate findings were entered in the ecological validity section.

At this point, I felt I had a fairly clear understanding of Scott's strengths and weaknesses. My understanding of neuropsychology helped to clarify why Scott was having attention and social problems as well, since right parietal lobe dysfunction can lead to neglect of self and environment. I now had a "theory" as to why Scott was having problems with learning and social functioning, and I could now meet with the teacher to discuss interventions, developing hypotheses about what interventions might work, implementing the most probable one, and determining whether it was successful.

To begin this process, I completed the assessment of academic skill problems and cognitive weaknesses. Next, I examined resources available and cognitive strengths for possible use in the intervention. For Scott, the team referred him for occupational therapy, and I made classroom recommendations to improve his current academic functioning. I met with the teacher, and we decided to focus on his messy work/handwriting problem. The teacher liked the idea of using graph paper, and we decided that Scott would be rewarded for staying within the lines on his writing assignments first, and on his math assignments second. After completing his assignments, Scott completed a checklist that indicated how many times his writing went outside the prescribed lines. For each word or problem Scott stayed within the lines on, he received a token reinforcer that could be traded in at the end of the day for a computer time reward. This setup called for a multiple-baseline design as described earlier. In Figure 4.9, notice how Scott showed some improvement in writing, but math difficulties were still prominent. After the intervention was implemented during math class, Scott began to improve in both areas.


Student’s name: Scott Age: 9-9 Grade: 4

Reason for referral: Messy written work, poor handwriting

Preliminary hypotheses—Based on presenting problem and initial assessment, the following cognitive strengths and weaknesses are hypothesized:

Strengths:

Auditory working memory

Possible weaknesses:

Spatial visualization

Motor coordination

Visual memory

Hypothesis testing—Follow up with related construct tests:

Areas of suspected weakness:

Spatial visualization

Motor coordination

Visual memory

Follow-up tests:

WJ-III Spatial Relations

Beery VMI inc. motor section

WJ-III Picture Recognition

Strengths/weaknesses:

Spatial relations on WJ-III well below average—Weak spatial visualization

Motor coordination on VMI average—No motor weakness

Picture recognition on WJ-III average—No visual memory weakness

Associated with academic and/or behavior problems?:

Yes—Spatial visualization weakness can lead to poor handwriting (spacing and letter formation) and messy work layout on page.

(continued)

FIGURE 4.8. Completed Cognitive Hypothesis-Testing Worksheet (see Appendix 4.3) for Scott.


Ecological validity—Information from observations and teacher ratings:

Strengths:

Participating in class discussion

Possible weaknesses:

Spatial organization—layout on page, trouble aligning columns in math, difficulty with peers in recess and gym re: "space"

Evaluation summary—Based on analysis of all evaluation information, the following cognitive strengths and weaknesses are identified, and concordance or discordance is calculated if necessary:

Cognitive strengths:

Oral language

Auditory memory

Concordant with academic and/or behavioral strengths?:

Yes—class discussion relies on oral language and auditory skills.

Weaknesses:

Spatial visualization and organization in space

Concordant with academic and/or behavioral weaknesses?:

Yes—spatial visualization is related to work layout, handwriting, and interpersonal space issues.

Discordant with cognitive strengths?:

Yes—spatial visualization is mediated by right occipital lobe, while oral language and auditory memory are primarily mediated by left hemisphere.

(continued)

FIGURE 4.8. (continued)


Summary of evaluation information for intervention development

Academic/behavioral presenting problems:

Messy work

Poor handwriting

Cognitive weaknesses:

Spatial organization and processes

Resources for intervention in environment:

Consultant available

Special education and OT consult and materials

Cognitive strengths:

Oral language

Auditory memory

Potential interventions:

Use paper with raised lines and graph paper for written work.

Allow dictation for lengthy written assignments.

Teach keyboarding skills.

Work with psychologist on interpersonal space issues.

FIGURE 4.8. (continued)

FIGURE 4.9. Scott’s graph paper intervention.


APPENDIX 4.1. Demands Analysis

Student's name:   Age:   Grade:

Test/subtest:

Input (check all that apply):

Instructions
❏ Demonstration/modeling
❏ Gesture/pantomime
❏ Brief oral directions
❏ Lengthy oral directions

Timing
❏ Overall time limit
❏ Speed bonus

Teaching
❏ Sample item
❏ Teaching item(s)
❏ Dynamic assessment
❏ Feedback when correct
❏ Querying

Visual stimulus
❏ Pictures/photos
❏ Abstract figures
❏ Models
❏ Symbols (letters, numbers)
❏ Written language
❏ Large–small
❏ Color important

Auditory stimulus
❏ Brief verbal
❏ Lengthy verbal
❏ Spoken
❏ Tape/CD (headphones used? Y N)
❏ Background noise

Content
Cultural knowledge: L M H
English-language knowledge: L M H
Emotional content: L M H

Processing (check all that apply):

Left hemisphere
❏ Concordant/convergent ("explicit")

Right hemisphere
❏ Discordant/divergent ("implicit")

Executive functions (frontal–subcortical circuits)
❏ Sustained attention/concentration
❏ Inhibition/impulsivity
❏ Working memory (specify)
❏ Flexibility/modify/shift set
❏ Performance monitoring/benefit from feedback
❏ Planning/organization/strategy use
❏ Memory encoding/retrieval
❏ Novel problem solving/reasoning
❏ Temporal relationships/sequential processing
❏ Prior learning/long-term memory
❏ Sensory–motor coordination
❏ Multimodal integration

Neuropsychological functional domains
❏ Sensory attention (TOP) (L R)
❏ Primary zones (TOP) (L R)
❏ Secondary/tertiary zones (TOP) (L R)
❏ Dorsal stream (occipital–parietal)
❏ Ventral stream (occipital–temporal)
❏ Receptive language (L R)
❏ Expressive language (L R)

CHC abilities and narrow abilities
Higher-level processing
❏ Gf—fluid reasoning
❏ Glr—long-term storage and retrieval
❏ Gv—visual processing
❏ Ga—auditory processing
Lower-level processing
❏ Gs—processing speed
❏ Gsm—short-term memory
Acquired knowledge and achievement
❏ Gc—crystallized intelligence
❏ Grw—reading/writing
❏ Gq—quantitative ability

Output (check all that apply):

Oral
❏ Brief oral
❏ Lengthy oral
❏ Report of strategy use

Motor
❏ Fine motor—point
❏ Fine motor—graphomotor
❏ Fine motor—manipulatives (e.g., blocks, pictures)
❏ Visual–sensory–motor integration
❏ Gross motor

Written language
❏ Brief written response
❏ Lengthy written response

Response format
❏ Open/free-response
❏ Constrained/multiple choice

Other
Input:
Processing:
Output:

Comments:

In the "Input" column: Y N, yes or no; L M H, low, medium, or high. In the "Processing" column: L R, left or right; TOP, temporal, occipital, or parietal; CHC, Cattell–Horn–Carroll.

From School Neuropsychology: A Practitioner's Handbook by James B. Hale and Catherine A. Fiorello. Copyright 2004 by The Guilford Press. Permission to photocopy this appendix is granted to purchasers of this book for personal use only (see copyright page for details).


APPENDIX 4.2. Brief Demands Analysis

Student's name:   Age:   Grade:

Test/subtest:

Input:

Processing:

Output:

Strengths:

Weaknesses:

From School Neuropsychology: A Practitioner's Handbook by James B. Hale and Catherine A. Fiorello. Copyright 2004 by The Guilford Press. Permission to photocopy this appendix is granted to purchasers of this book for personal use only (see copyright page for details).


APPENDIX 4.3. Cognitive Hypothesis-Testing Worksheet

Student’s name: Age: Grade:

Reason for referral:

Preliminary hypotheses—Based on presenting problem and initial assessment, the following cognitive strengths and weaknesses are hypothesized:

Strengths:

Possible weaknesses:

Hypothesis testing—Follow up with related construct tests:

Areas of suspected weakness:

Follow-up tests:

Strengths/weaknesses:

Associated with academic and/or behavior problems?:

(continued)

From School Neuropsychology: A Practitioner's Handbook by James B. Hale and Catherine A. Fiorello. Copyright 2004 by The Guilford Press. Permission to photocopy this appendix is granted to purchasers of this book for personal use only (see copyright page for details).


APPENDIX 4.3. (page 2 of 3)

Ecological validity—Information from observations and teacher ratings:

Strengths:

Possible weaknesses:

Evaluation summary—Based on analysis of all evaluation information, the following cognitive strengths and weaknesses are identified, and concordance or discordance is calculated if necessary:

Cognitive strengths:

Concordant with academic and/or behavioral strengths?:

Weaknesses:

Concordant with academic and/or behavioral weaknesses?:

Discordant with cognitive strengths?:

(continued)


APPENDIX 4.3. (page 3 of 3)

Summary of evaluation information for intervention development

Academic/behavioral presenting problems:

Cognitive weaknesses:

Resources for intervention in environment:

Cognitive strengths:

Potential interventions:


APPENDIX 4.4. Neuropsychological Assessment Observations Checklist

Student’s Name: Age: Grade:

1. Pays close attention to task. 1 2 3 4 5 Has difficulty with selective or sustained attention.

2. Attention is consistent despite distraction. 1 2 3 4 5 Is easily distracted by external stimuli.

3. Shows good impulse control. 1 2 3 4 5 Is overly impulsive.

4. Shows appropriate activity level. 1 2 3 4 5 Has inappropriate activity level (specify: low or high).

5. Affect/mood is appropriate. 1 2 3 4 5 Affect is not appropriate (specify: ).

6. Works quickly when appropriate. 1 2 3 4 5 Pace is too slow.

7. Can hold information in working memory to respond to questions. 1 2 3 4 5 Has difficulty retaining information in working memory to answer questions.

8. Can switch easily from one task to another. 1 2 3 4 5 Has difficulty switching tasks.

9. Plans/organizes before responding. 1 2 3 4 5 Responds without planning or organization.

10. Evaluates performance/modifies behavior. 1 2 3 4 5 Does not evaluate performance or modify behavior.

11. Comprehends orally presented information. 1 2 3 4 5 Does not comprehend orally presented information.

12. Follows directions or answers questions without repetition. 1 2 3 4 5 Requires frequent repetition of directions and questions.

13. Has adequate syntax and grammar. 1 2 3 4 5 Has difficulty with syntax and grammar.

14. Completes directions with one or more steps. 1 2 3 4 5 Has difficulty with sequential processing of directions.

15. Expresses self fluently. 1 2 3 4 5 Has difficulty expressing self fluently.

16. Does not exhibit word-finding difficulty. 1 2 3 4 5 Has word-finding difficulty.

17. Verbalizations are logical and organized. 1 2 3 4 5 Verbalizations are rambling and tangential.

18. No difficulty with nonliteral, metaphoric, or figurative language. 1 2 3 4 5 Language is overly literal and concrete.

19. Articulation is clear. 1 2 3 4 5 Has poor articulation or phonemic paraphasias.

20. Can easily recall information from long-term memory. 1 2 3 4 5 Has difficulty recalling information from long-term memory.

(continued)

From School Neuropsychology: A Practitioner’s Handbook by James B. Hale and Catherine A. Fiorello. Copyright 2004 by The Guilford Press. Permission to photocopy this appendix is granted to purchasers of this book for personal use only (see copyright page for details).



APPENDIX 4.4. (page 2 of 2)

21. Learns new material without repetition. 1 2 3 4 5 Needs many repetitions to learn new material.

22. Can learn new associations with few errors. 1 2 3 4 5 Makes frequent errors when learning new associations.

23. Can perceive and differentiate colors. 1 2 3 4 5 Appears to be partially or completely color-blind.

24. Easily discriminates/perceives visual stimuli. 1 2 3 4 5 Has poor visual acuity or visual perception.

25. Perceives visual stimuli throughout visual fields. 1 2 3 4 5 Has visual neglect. (Side? )

26. Easily understands body language. 1 2 3 4 5 Has difficulty understanding body language.

27. Perceives spatial/holistic/global relationships. 1 2 3 4 5 Does not readily identify spatial/holistic/global relationships.

28. Shows no spatial configuration breaks. 1 2 3 4 5 Shows configuration breaks.

29. Shows no directional confusion. 1 2 3 4 5 Has directional confusion/orientation problems/reversals.

30. Perceives objects and faces. 1 2 3 4 5 Has difficulty perceiving objects and faces.

31. Can easily perceive auditory stimuli. 1 2 3 4 5 Has difficulty perceiving auditory stimuli in the R and/or L ear.

32. Hears and uses prosody effectively. 1 2 3 4 5 Has difficulty with receptive or expressive prosody.

33. Perceives tactile stimuli well. 1 2 3 4 5 Has difficulty discriminating tactile stimuli.

34. Handles materials smoothly. 1 2 3 4 5 Is clumsy when handling materials.

35. Has good pencil control/graphomotor skills. 1 2 3 4 5 Has poor pencil control/graphomotor skills.

36. Has established handedness (side?). 1 2 3 4 5 Has not established handedness.

37. Has good bimanual control. 1 2 3 4 5 Has difficulty with bimanual control or crossing the midline.

38. Has good gross motor skill. 1 2 3 4 5 Has poor gross motor skill (clumsy or awkward).

39. Has good balance. 1 2 3 4 5 Has poor balance.

40. Has good muscle tone. 1 2 3 4 5 Has tone problems (too floppy, too rigid).



APPENDIX 4.5. Psychological Processes Worksheet

Client’s name: Date of birth:

Clinician’s name: Date:

Identify the psychological processes associated with the student’s identified learning deficits with a (–) sign, and the strengths with a (+) sign. Remember that more than one psychological process should be involved for identified deficits.

Attention and Executive Frontal Lobe Processes

Basic reading   Reading comp.   Basic math   Math reasoning   Spelling   Written lang.   Oral exp.   Listening comp.

Sustained attention

Selective attention

Overall tone

Planning

Strategizing

Sequencing

Organization

Monitoring

Evaluation

Inhibition

Shifting/flexibility

Maintenance

Change

Motor overactivity

Motor underactivity

Constructional apraxia

Ideomotor apraxia

Ideational apraxia

Visual scanning

Sensory–motor integration

Expressive language

Long-term memory retrieval

Working memory

Perseveration

Grammar

Syntax

Math algorithm

Problem solving

Fluency/nonfluent aphasia

Dysnomia

(continued)




APPENDIX 4.5. (page 2 of 3)

Attention and Executive Frontal Lobe Processes (continued)

Basic reading   Reading comp.   Basic math   Math reasoning   Spelling   Written lang.   Oral exp.   Listening comp.

Paraphasia

Circumlocution

Confabulation

Concept formation

Comments:

Concordant/Convergent Left-Hemisphere Processes

Basic reading   Reading comp.   Basic math   Math reasoning   Spelling   Written lang.   Oral exp.   Listening comp.

Sensory memory

Discrimination

Perception (meaningful)

Phonemic awareness

Phonemic segmentation

Phonemic blending

Sound–symbol association

Morpheme comprehension

Lexicon/word comp.

Sentence comprehension

Literal/concrete/explicit comp.

Math fact automaticity

Long-term memory

Declarative memory

Automaticity

Simple/rote sensory–motor integration

Detail perception

Sight word recognition

Local/part/fine processing

Dysphonetic

Convergent thought

Concordant thought

Fluent aphasia

Paraphasia

Neologism

Left–right confusion

Comments:

(continued)



APPENDIX 4.5. (page 3 of 3)

Discordant/Divergent Right-Hemisphere Processes

Basic reading   Reading comp.   Basic math   Math reasoning   Spelling   Written lang.   Oral exp.   Listening comp.

Sensory memory

Discrimination

Perception (abstract)

Spatial processing

Perceptual analysis

Visualization

Ambiguity

Asomatognosia

Prosopagnosia

Agnosia

Neglect

Object visual perception

Spatial visual perception

Grapheme awareness

Sensory integration

Complex sensory–motor integration

Constructional apraxia

Prediction

Inference

Metaphor/idiom/humor

Nonliteral/figurative/implicit comp.

Social perception/judgment

Prosody

Word choice

Holistic/global/gestalt processing

Whole/coarse processing

Novelty/new learning/encoding

Pragmatics

Facial/body gestures

Problem solving

Dyseidetic

Divergent thought

Discordant thought

Fluent aphasia

Paraphasia

Neologism

Comments:

Copyright © 2004 The Guilford Press. All rights reserved under International Copyright Convention. No part of this text may be reproduced, transmitted, downloaded, or stored in or introduced into any information storage or retrieval system, in any form or by any means, whether electronic or mechanical, now known or hereinafter invented, without the written permission of The Guilford Press.

Guilford Publications
72 Spring Street
New York, NY 10012
212-431-9800
800-365-7006

www.guilford.com