The Florida State University
DigiNole Commons
Electronic Theses, Treatises and Dissertations
The Graduate School
11-13-2009
Linguistic Feature Development in Elementary Writing: Analysis of Microstructure and Macrostructure in a Narrative and an Expository Genre
Shannon S. Hall-Mills
Florida State University
Follow this and additional works at: http://diginole.lib.fsu.edu/etd
This Dissertation - Open Access is brought to you for free and open access by The Graduate School at DigiNole Commons. It has been accepted for inclusion in Electronic Theses, Treatises and Dissertations by an authorized administrator of DigiNole Commons. For more information, please contact [email protected].
Recommended Citation
Hall-Mills, Shannon S., "Linguistic Feature Development in Elementary Writing: Analysis of Microstructure and Macrostructure in a Narrative and an Expository Genre" (2009). Electronic Theses, Treatises and Dissertations. Paper 4317.
The members of the committee approve the dissertation of Shannon S. Hall-Mills
defended on November 13, 2009.
__________________________________ Kenn Apel Professor Directing Dissertation
__________________________________ Barbara Foorman University Representative
__________________________________ Lisa Scott Committee Member
__________________________________ Shurita Thomas-Tate Committee Member
Approved:
____________________________________________________
Juliann Woods, Director, School of Communication Science and Disorders
____________________________________________________
Lawrence C. Dennis, Dean, College of Communication and Information
The Graduate School has verified and approved the above-named committee
members.
Dedicated to: My family
for their love and support throughout this process
and for teaching me to value education.
ACKNOWLEDGEMENTS
This dissertation is dedicated to my family, especially my parents, husband, and sister, for their limitless love and steadfast support. Many thanks go to my major professor, Dr. Kenn Apel, for sharing his expertise, demonstrating his passion for research, teaching, and service, and guiding me through this process with patience and enthusiastic encouragement. I am forever grateful to my entire committee, including Dr. Apel, Dr. Foorman, Dr. Scott, and Dr. Thomas-Tate, for their interest and expertise devoted to enhancing this project. I appreciate the mentorship of Drs. Howard Goldstein, Juliann Woods, Amy Wetherby, and Barbara Palmer throughout my doctoral education. I am grateful to Drs. Michelle Bourgeois and Julie Stierwalt for their mentorship throughout my teaching endeavors. Additionally, I owe thanks to several individuals from Volusia County Schools and the Florida Department of Education for their role in my education (you know who you are!).

I am thankful to the students and teachers who participated in this project and gave our work meaning and value. I could not have done this study without the support of doctoral students Elizabeth and Danielle, and I am thankful for their assistance with data collection and numerous logistics. The education field will continue to benefit over the long haul from your enthusiasm and commitment to children. I am indebted to graduate research assistants Sherilyn, Jennifer, and Tiffany for their dedication and long hours coding SALT files, and for our well-timed chats. I appreciate your interest in written language development, and hope this experience was something you can build on. Also, thanks to their classmate Sara, who consistently offered support in juggling the course I was teaching while analyzing the data. The future is bright for all four of you! And not far behind are four wonderful undergraduate volunteers, Liz, Amanda, Mary, and Katie, whose involvement in the post hoc analysis is greatly appreciated.
Your enthusiasm for the field is encouraging!

Numerous people supported my doctoral endeavors financially. My sincerest gratitude goes to Drs. Goldstein and Woods for assistantship funding through their language and literacy leadership training grant. Furthermore, the Kappa Kappa Gamma Foundation graciously provided financial support through scholarships, and I am grateful to the local chapter and alumni as well for their moral support. Many thanks go to the FSU Congress of Graduate Students for presentation grants for multiple conferences, and to the Graduate School for the FSU Dissertation Research Grant that afforded the researcher version of SALT and the GRADE measure.

To my fellow doc students: you made the journey an absolute pleasure! I can hardly wait to see what you do next! Your passion for what you do, and your faith and perseverance, are truly inspirational. I'm especially grateful for Alisha, who strengthens my faith and has been a constant supporter; Elizabeth, who helps me see the forest through the trees and knows precisely when happy hour is in order; Rachel, for her never-ending smile and contagious laughter; Naomi, for her work ethic; and David, for his fresh perspective. Thanks also to Danielle, Lori, and Janine specifically for their helpful feedback on the presentation. From early on in the program, Jessika, Kimberly, Leila, and Kerry have been fantastic cheerleaders. I also appreciate friends outside of the program who didn't give up on me, especially Jennifer and Christa.

Last, but certainly not least, thank you to each teacher in my past who took the time to support my learning and cheer me on. The world is a better place because of you!
TABLE OF CONTENTS

List of Tables
List of Figures
Abstract
1. Introduction
   Written Product
      Microstructure
      Macrostructure
      Microstructure and Macrostructure in a Single Genre
      Microstructure and Macrostructure in Multiple Genres
   Research Questions and Hypotheses
2. Method
   Participants
   Measures
   Procedure
   Data Analysis
      Dependent Measures
   Research Assistant Training
   Inter-Rater Reliability
   Research Design
   Power Analysis
3. Results
   Preliminary Analyses
   Multiple Analyses of Covariance (MANCOVA)
   Relations among Measures of Microstructure and Macrostructure
   Findings Related to Ethnicity, Gender and SES
   Post Hoc Analysis
4. Discussion
   Effect of Grade Level on Microstructure
   Effect of Genre on Microstructure
   Effect of Grade Level on Macrostructure
   Effect of Genre on Macrostructure
   Relations among Measures of Microstructure and Macrostructure
   Effects of Ethnicity, Gender, and SES
   Limitations and Future Research
   Educational Implications
   Conclusion
APPENDICES
   A. Consent Form and IRB Approval
   B. Writing Instructions and Prompts
   C. SALT Protocol for Microstructure Variables
   D. Protocol for Macrostructure Variables
REFERENCES
BIOGRAPHICAL SKETCH
LIST OF TABLES
Table 1: Demographic Characteristics of Participants by Grade Level
Table 2: Means and Standard Deviations for Independent Measures
Table 3: Dependent Writing Variables
Table 4: Cohen’s Kappa Coefficients and Percent Agreement
Table 5: Four Factor Solution of Microstructure
Table 6: Factor Loadings
Table 7: One Factor Solution of Macrostructure
Table 8: Factor Loadings
Table 9: Factors and Respective Dependent Variables
Table 10: Descriptive Statistics for Dependent Measures, Narrative
Table 11: Descriptive Statistics for Dependent Measures, Expository
Table 12: Adjusted Means and Standard Deviations; Narrative
Table 13: Adjusted Means and Standard Deviations; Expository
Table 14: Overall and Grade Level Correlations
Table 15: Correlation Matrices; Narrative and Expository
Table 16: Grade Level Correlation Matrices; Narrative and Expository
Table 17: Descriptive Statistics for 5 Writing Factors by Ethnicity, Gender, and SES
LIST OF FIGURES
Figure 1: Productivity; Narrative and Expository
Figure 2: Grammatical Complexity; Narrative and Expository
Figure 3: Grammatical Accuracy; Narrative and Expository
Figure 4: Lexical Diversity; Narrative and Expository
Figure 5: Macrostructure; Narrative and Expository
ABSTRACT
The purpose of this study was to examine multiple dimensions of written language
produced by eighty-nine children in grades 2, 3, and 4 in narrative and expository writing
samples. Two written composition samples were collected from students exhibiting typical
development in second, third, and fourth grades using one narrative and one expository writing
prompt via a scripted, generated elicitation method. Additionally, participants completed group-
administered, norm-referenced measures of receptive vocabulary, word level reading, and
reading comprehension. The writing samples were transcribed into Systematic Analysis of
Language Transcripts (SALT; Miller & Chapman, 2005), coded, and analyzed for developmental
progression of linguistic elements represented by the five factors of productivity, grammatical
complexity, grammatical accuracy, lexical diversity, and macrostructure. Reading
comprehension scores were used as covariates in the multivariate analyses of variance.
Results indicated that levels of productivity and macrostructure increased steadily with
age. Across the narrative and expository samples examined, levels of productivity were highly
correlated and nearly equivalent within each grade, whereas a trend was noted for levels of
macrostructure in the expository genre to increase more sharply from second to third grade than
in the narrative genre. There was a grade effect for grammatical complexity in the expository
genre, whereas there were no significant differences between grade levels for narrative
grammatical complexity. Interestingly, the second graders scored higher than the third and
fourth graders on measures of grammatical complexity (especially MLTu) in their expository
samples. Comparison of grammatical complexity levels across genres revealed a small,
negative correlation across all three grade levels. No grade level differences were detected for
grammatical accuracy and lexical diversity in either genre, although there was a trend for fourth
graders to produce a higher number of grammatical errors than second and third graders.
Students in each grade performed similarly regardless of genre type on measures of
grammatical accuracy and lexical diversity. Relations among measures of microstructure and
macrostructure were revealed between productivity and macrostructure in both genres and
between macrostructure and grammatical accuracy in the expository genre. Inter-correlations of
measures within grade level are discussed. There were no significant effects of ethnicity,
socioeconomic status, or gender on writing outcomes. Interestingly, trade-offs in performance
on certain linguistic features appeared to occur for second and fourth graders.
Results of this study suggest that variables of written microstructure and macrostructure
were sensitive to grade and genre level differences, that productivity (a measure of
microstructure), and macrostructure were related in both genres for all three grade levels, and
that one cannot assume the older students will outperform younger students on all measures.
This latter finding was thought to be due to a trade-off between linguistic and cognitive demands
for second and fourth graders. Consequently, future research needs to establish whether these trade-off trends occur in larger samples and to examine the effects of different academic contexts (e.g.,
variable elicitation techniques, discourse structures, content specific assignments) on this
phenomenon. The findings of this investigation are discussed in light of grade level standards
for writing and the identification of students with writing difficulties. Multiple suggestions are
presented for educational implications of the results, and specific directions are provided for future research.
CHAPTER 1
INTRODUCTION
The arrival of the “information age” illuminates the importance of literacy in a constantly
changing world. Throughout the first decade of the 21st century, we have seen a tremendous increase
in the intensity of the literacy demands needed to function in the world, demands that largely were
anticipated a decade prior (Kennedy, 1993). Literacy encompasses both reading and writing
proficiency. To address gaps in students’ reading proficiency, programs such as Reading First (US
Department of Education, 2008) and reports such as Reading Next (Biancarosa & Snow, 2006) have
highlighted the reading difficulties of elementary and adolescent students, summarized relevant
research, and expressed recommendations for the provision of quality reading instruction.
Like reading, writing is central to the quality of education from early childhood to postsecondary
schooling; it is an essential skill for literacy success in school and beyond (Troia, 2009). In the United
States, in an age of standards-based educational reform, writing is used to monitor adequate yearly
progress (AYP) as required by the No Child Left Behind Act (NCLB, 2001), and to determine grade
promotion and high school graduation. Writing proficiency is a significant predictor of reading
performance (Fitzgerald & Shanahan, 2000; Jenkins, Johnson, & Hileman, 2004) and is required for
college entrance and graduation. Furthermore, writing proficiency enables an adult individual to fully
participate in civic life and the economy (Graham & Perin, 2007). Writing is integrated in all aspects of
society for purposes of communication (e.g., medium for conveying knowledge and ideas, exploring self
expression, preserving history, achieving order based on written law, facilitating communication across
distances and time; MacArthur, Graham, & Fitzgerald, 2006). Now more than ever before, writing
proficiency is crucial for obtaining and maintaining employment (Smith, 2000).
More than 30 years ago, Lerner (1976) conjectured that "poor facility in expressing thoughts
through written language is probably the most prevalent disability of the communication skills" (p. 266).
Unfortunately, poor writing performance continues to plague the majority of students in our nation’s
schools. The National Center for Education Statistics (NCES) indicates that the majority of students (70% of 4th graders [NCES, 2003], 68% of 8th graders, and 76% of 12th graders [NCES, 2007])
performed at or below the basic achievement level in writing based on results of the 2002 and 2006
National Assessment of Educational Progress writing assessment (NCES, 2003, 2007). This
prevalence of writing difficulties has fueled a surge of interest in recent years to further investigate
various aspects of writing development and writing proficiency to improve the writing outcomes of
students.
In the past, investigators have employed a variety of measures to study two general aspects of
writing: the writing process, and the writing product. When examining the writing process, researchers
have evaluated children’s ability to plan, generate, and revise text (Graham & Harris, 2003; Nelson &
Van Meter, 2002; Singer & Bashir, 2004). Those who have studied the writing product have examined
compositions for specific linguistic components such as productivity, grammatical complexity, lexical
diversity, text structure, organization, and coherence (Nelson & Van Meter, 2007; Puranik, Lombardino,
& Altmann, 2008; Scott & Windsor, 2000). The purpose of the present paper is to look more specifically
at the current knowledge regarding development of the written product in the elementary years.
Written Product
When examining the written product, investigators have analyzed children’s writing generated
under certain variations of the rhetorical task (Singer & Bashir, 2004). For example, children have been
instructed to generate text using a writing prompt that is expected to elicit a particular text structure for
a given discourse genre (Scott, 2009). Discourse genres represent different forms and styles of writing
and reflect a range of purposes and contexts for writing (Graham & Harris, 2003; Graham & Perin,
2007). In the school environment, narrative and expository genres are the most commonly encountered
discourse genres (Donovan & Smolkin, 2006), and therefore, the most commonly used writing prompts
in the elementary grades are intended to elicit either narrative or expository texts. Narrative discourse
involves telling a story, often about personal events or other life experiences (e.g., novels, personal
letters, and short stories). In contrast, expository discourse involves conveying facts or describing
procedures, sharing basic information, relating cause-effect relationships, or arguing a point of view
(e.g., essays, editorials). The ability to write proficiently in both narrative and expository genres is linked
to academic success (Nelson, Bahr, & Van Meter, 2004; Singer, 2007).
Knowledge of discourse genres is acquired in a developmental progression and is related to
reading comprehension and writing achievement (Englert & Thomas, 1987). Awareness and use of
narrative discourse in written language typically develops first, often through storytelling experiences
(Nelson, et al. 2004). Compared to narrative discourse, the structure of expository discourse is typically
mastered later in the school years and, as a consequence, is more difficult to produce and comprehend
for many students (Berman & Verhoeven, 2002). Much of the recent research regarding discourse
genres in written language has centered on text comprehension; in contrast, fewer studies have
focused on text production (i.e., writing). Furthermore, when researchers have examined linguistic
features at the discourse level in written language, their investigations often have been limited to
narrative discourse. There is a need, then, to examine students’ writing skills across additional
discourse genres, such as expository genre, especially considering that by the 4th grade, 60% of writing
assignments are expository in nature (Graham & Perin, 2007; Persky et al., 2003).
When examining the written product in different discourse genres, investigators have conducted
analyses of elements of microstructure and macrostructure. Analysis of linguistic elements at the
microstructural and macrostructural levels of a text has great potential for capturing the development of linguistic features and for describing the challenges faced by students who struggle with the generation
of written language.
Microstructure
Analysis of elements of microstructure in a written product can occur on multiple levels,
including examination of linguistic elements at the word, sentence, and/or discourse levels.
Microstructure analysis generally examines a writer’s conveyance of meaning at these levels and
typically includes measures of productivity (e.g., number of words, T-units, or ideas), grammatical
complexity (e.g., mean length of T-unit, clause density), and lexical diversity (e.g., type-token ratio,
number of different words) (Nelson et al., 2004; Puranik, Lombardino, & Altmann, 2007, 2008). T-units,
or “minimal terminable syntactic units” (Hunt, 1966), are defined as one main clause and any
subordinate clauses and are the most common unit of segmentation for written language transcripts.
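These measures are straightforward to compute once a sample has been segmented into T-units. As a rough illustration (not the coding protocol used in this study, which relied on SALT conventions and trained raters), the following Python sketch derives productivity, grammatical complexity, and lexical diversity indices from a list of pre-segmented T-units; the tokenizer and the sample sentences are simplifying assumptions:

```python
import re

def microstructure_measures(t_units):
    """Compute common microstructure measures from one writing sample.

    `t_units` is a list of strings, each one T-unit (a main clause plus any
    subordinate clauses). Segmentation itself requires human judgment and is
    assumed to have been done already.
    """
    # Tokenize each T-unit into lowercase words (a simplification; published
    # studies follow detailed transcription conventions, e.g., SALT).
    tokens = [w for t in t_units for w in re.findall(r"[a-z']+", t.lower())]
    types = set(tokens)
    return {
        "total_words": len(tokens),          # productivity
        "num_t_units": len(t_units),         # productivity
        "mltu": len(tokens) / len(t_units),  # complexity: mean length of T-unit
        "ndw": len(types),                   # lexical diversity: number of different words
        "ttr": len(types) / len(tokens),     # lexical diversity: type-token ratio
    }

sample = [
    "The dog ran home because it was raining",
    "It hid under the porch",
]
print(microstructure_measures(sample))
```

Note that TTR is sensitive to sample length, which is one reason researchers also report the number of different words within a fixed-length window.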
Narrative Microstructure
There is a paucity of investigations focusing solely on the development of elements of
microstructure in products written in a narrative genre. Houck and Billingsley (1989) analyzed the
development of microstructure in narrative samples of 16 students with typical development in grades
4, 8, and 11. Participants were allowed 20 minutes to write a story about a trip. Their written narratives
were analyzed for various measures of productivity, grammatical complexity, and lexical density (e.g., number of words, sentences, words per sentence, sentence fragments, words with more than 7 letters, T-units, and mean morphemes per T-unit), percentage of correct capitalization, and percentage
of correct spelling. Results indicated that the 4th graders in the sample produced an average of 152.75
words, 10.63 sentences, 16.02 words per sentence, 13.44 T-units, 12.54 mean morphemes per T-unit,
1.13 sentence fragments, 10.06 words with 7 or more letters, 91.5% correct capitalization, and 94.9%
correct spelling. Significant grade effects were found for 3 of the 9 variables: number of T-units,
spelling, and lexical density, indicating that increases in productivity, lexical diversity, and spelling
proficiency could be detected among groups at the elementary, middle, and high school levels.
Expository Microstructure
In contrast with the narrative genre, a greater number of previous studies exist that have
examined elements of microstructure in products written in an expository genre. Morris and Crump
(1982) compared the expository writing development of 72 students with typical development ranging in
ages from 9 to 15 years on measures of syntactic and vocabulary development. Participants were
instructed to watch a video and subsequently composed an essay. The written products were analyzed
for mean length of T-unit (MLTu), Syntactic Density Score (SDS), type/token ratio (TTR), and
Vocabulary Intensity Index (VII). The SDS was calculated based on a formula that considered the T-unit
length as well as clause length, number of subordinate clauses, embeddings, and verb expansions and
had previously been shown to detect increases in syntactic density between adjacent elementary
grades (Blair & Crump, 1984). The Vocabulary Intensity score was calculated using the Vocabulary
Intensity computer program (Kidder, 1974). The results indicated that MLTu, TTR, and SDS increased
consistently across the age levels. Eighteen students with typical development at age level 9.0-10.5
years (similar to students in grades 4 and 5 of other investigations) produced an average MLTu of
7.45, mean SDS of 2.02, and mean TTR of 3.27. The mean scores on the Vocabulary Intensity Index
were not reported, and no differences were found between age levels on this measure. These findings
indicated that commonly employed measures such as MLTu and TTR were sensitive to differences in
expository grammatical complexity and lexical diversity between successive age levels, beginning with
age 9 years and up, and that SDS provided additional qualitative information about the quality of syntax
reflected in the written expository products of students in the varying age level groups.
In a later study conducted by Puranik, Lombardino, and Altmann (2008), the development of the
expository writing of 120 children exhibiting typical development in grades 3 (mean age = 8.7 yrs.), 4
(mean age = 9.7 yrs.), 5 (mean age = 10.8 yrs.), and 6 (mean age = 11.7 yrs.) was examined. One
writing sample per participant was collected using an expository text-retelling paradigm (to reduce
working memory load). Participants were instructed to listen to an expository passage and then write
what they remembered about the passage. No time restrictions were enforced, but the majority of
participants completed their writing in 10 minutes. The T-unit was used as the unit of segmentation. In
perhaps the most in-depth analysis of expository microstructure development to date, Puranik et al. examined 13 variables of microstructure at the word, T-unit, sentence, and discourse levels, including
number of words, ideas, T-units, clauses, sentences, and different words, MLTu, clause density, errors
per T-unit, percentage of grammatically correct sentences, sentence complexity, percentage of spelling
errors, and writing conventions.
Puranik et al.'s results indicated that measures of productivity and grammatical complexity increased with age. A significant multivariate effect of grade and a significant main effect for total number of words were found, and pairwise comparisons indicated significant differences in total number of words between participants in grades 3 and 4 (d = 1.09, p < .001). Significant differences were evident
between the 3rd and 4th grade groups for the variables of total words, total ideas, number of T-units,
number of clauses, number of sentences, sentence complexity, and number of different words. The
group mean performance of participants in grade 3 was as follows for the 13 microstructure variables:
total words (61.0), total ideas (6.8), number of T-units (6.5), MLTu (9.6), clause density (1.78), number
of clauses (11.2), errors per T-unit (0.30), number of sentences (5.9), % grammatical sentences (73%),
sentence complexity rating (8.8), number of different words (33.8), percentage of spelling errors (7.2%),
writing conventions (88.6). The following descriptive statistics provide the group mean scores for the
participants in the 4th grade group: total words (89.2), total ideas (9.4), number of T-units (8.5), MLTu
(10.5), clause density (1.77), number of clauses (15.0), errors per T-unit (0.21), number of sentences
(7.9), % grammatical sentences (81%), sentence complexity rating (14.7), number of different words
(41.7), percentage of spelling errors (5.5%), writing conventions (90.1). Additionally, results of a factor
analysis confirmed that the 13 microstructure variables examined clustered into 4 dimensions of written
language microstructure: productivity, complexity, accuracy, and mechanics.
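The logic behind such a factor analysis can be illustrated with simulated data. The sketch below is a hedged illustration, not Puranik et al.'s actual procedure: the latent dimensions, variable labels, and loadings are invented. It generates scores driven by two latent dimensions and then applies the common Kaiser criterion, retaining factors whose correlation-matrix eigenvalues exceed 1:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 120  # same order as Puranik et al.'s sample size

# Simulate two latent dimensions (say, productivity and accuracy), each
# driving a few observed measures; the comments naming the measures are
# illustrative only.
productivity = rng.normal(size=n)
accuracy = rng.normal(size=n)
observed = np.column_stack([
    productivity + 0.3 * rng.normal(size=n),  # e.g., total words
    productivity + 0.3 * rng.normal(size=n),  # e.g., number of T-units
    productivity + 0.3 * rng.normal(size=n),  # e.g., number of clauses
    accuracy + 0.3 * rng.normal(size=n),      # e.g., % grammatical sentences
    accuracy + 0.3 * rng.normal(size=n),      # e.g., errors per T-unit (reversed)
])

# Eigenvalues of the correlation matrix, largest first; the Kaiser
# criterion retains factors with eigenvalues greater than 1.
eigvals = np.linalg.eigvalsh(np.corrcoef(observed, rowvar=False))[::-1]
n_factors = int((eigvals > 1).sum())
print(n_factors)
```

With clean clustering like this, the eigenvalue rule recovers the two simulated dimensions; real writing data are noisier, which is why factor solutions must also be checked against interpretability of the loadings.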
Cross Genre Microstructure
Occasionally, investigators have been interested in microstructure performance across more
than one discourse genre. Scott and Windsor (2000) studied the spoken and written language samples
of 60 students, including a group of 20 students with typical development (mean age = 11:5) across two
discourse genres (narrative, expository). Participants were instructed to write a story in response to a
19-minute narrative video, and to write a summary of a 15-minute expository video. The participants
were shown a model paper of expected length and allowed 20 minutes to write. The written samples
were transcribed and coded in SALT with the T-unit as the unit of segmentation, and analyzed for
elements of productivity/fluency (e.g., total number of T-units, words, and time, T-units per minute, and
words per minute), lexical diversity (e.g., number of different words based on first 100 words in the
sample for narrative, first 50 words for expository), grammatical complexity (e.g., words per T-unit,
clauses per T-unit), and grammatical error (e.g., errors per T-unit).
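The rate-based productivity/fluency measures used in such timed-sample studies are simple ratios of counts to writing time. A minimal sketch follows (illustrative only; note that a ratio of group means need not exactly equal the group mean of per-participant ratios, which is how such statistics are typically computed):

```python
def fluency_measures(total_words, total_t_units, minutes):
    """Rate-based productivity/fluency and complexity indices for one timed writing sample."""
    return {
        "t_units_per_minute": total_t_units / minutes,
        "words_per_minute": total_words / minutes,
        "words_per_t_unit": total_words / total_t_units,  # an index of grammatical complexity
    }

# Illustrative values of the same order as the narrative group means
# reported by Scott and Windsor (341 words, 32.3 T-units, 23.6 minutes).
print(fluency_measures(total_words=341, total_t_units=32.3, minutes=23.6))
```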
Across the written narrative products, students (mean age = 11:5) exhibiting typical
development (TD) wrote for an average of 23.6 minutes and produced an average of 32.3 T-units, 341
words, 1.4 T-units per minute, 14.2 words per minute, 60.6 different words, 10.4 words per T-unit, 1.94
clauses per T-unit, and .12 errors per T-unit. For the expository writing products, the TD participants
wrote for an average of 21 minutes and produced an average of 18.5 T-units, 216 words, 0.9 T-units
per minute, 10.3 words per minute, 61.6 different words, 12.1 words per T-unit, 1.74 clauses per T-unit,
and .15 errors per T-unit. Significant differences were found between genre means; all 5
productivity/fluency measures showed higher values in written narrative than in written expository
products. Regarding grammatical complexity, the direction of the effect of genre differed; there were
more clauses per T-unit produced in narrative products but more words per T-unit produced in the
expository products. There were no statistically significant genre effects for grammatical error. For all
microstructure variables except words per T-unit, the main effects for genre indicated higher levels of
performance in the narrative genre. The authors concluded that more fine-grained analyses of the
influence of genre on clause types are warranted. Differences were not examined across grade levels
or for age effects.
In a cross-linguistic study of seven languages, including English, comparing four age levels (grades 4, 7, 11, and adult), two genres (narrative and expository), and two modalities (spoken and written), Berman
and Verhoeven (2002) examined multiple aspects of development of narrative and expository
microstructure. The age range of the 20 English participants in grade 4 was 9-10 years. Participants
were shown a 3-minute video that included scenes of conflict between people. To elicit narrative
writing, the participants were then instructed to write a story about a conflict or a problem they
experienced with someone. To elicit an expository composition, participants were asked to write on the
topic of problems between people and to express their thoughts on the subject, not to write a story.
Participants completed the spoken and written samples over two sessions, with genre order
counterbalanced across groups of participants. The writing samples were produced via a traditional
paper and pencil task, and were subsequently transcribed into a computer database following standard
Codes for the Human Analysis of Transcripts (CHAT) conventions (MacWhinney, 1995), with the
clause as the segmentation unit. Measures included lexical diversity (vocabulary density, VOCD,
measured by a ratio of word types per token), total number of words, and mean clause length.
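As a rough illustration of the "word types per token" idea underlying VOCD, a plain type-token ratio (TTR) can be computed as below. This is a hedged sketch, not the procedure used in the study: VOCD itself is more elaborate, fitting a parameter D to the TTRs of repeated random subsamples to reduce the measure's dependence on text length.

```python
# Hedged sketch of a plain type-token ratio (TTR); VOCD, as used by
# Berman and Verhoeven, instead fits a parameter D to the TTRs of
# repeated random subsamples of the text.

def type_token_ratio(text):
    tokens = text.lower().split()   # naive whitespace tokenization
    return len(set(tokens)) / len(tokens)

# 6 word types across 9 tokens:
print(type_token_ratio("the cat saw the dog and the dog ran"))
```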
Specific means and standard deviations for the 20 English-speaking fourth graders in the study
were not reported. However, visual analysis of graphic illustrations indicates that the mean score for
total words in the narrative and expository genres was approximately 90 and 60, respectively. These
group means for narrative and expository productivity were markedly lower than for the participants in
the Scott and Windsor (2000) investigation. However, the mean age of the participants in Scott and
Windsor was higher (11.5 yrs) than those in the Berman and Verhoeven sample (age range 9-10
years), suggesting age level differences may exist between ages 9-10 and 11 years for both narrative
and expository productivity. Compared to the 4th grade sample in Puranik’s (2006) study (mean = 89.2
words), the mean number of words in the expository products of the 4th grade English sample in
Berman and Verhoeven’s study (approximately 60) was lower, despite similar elicitation methods.
Regarding mean clause length, the 4th graders in Berman and Verhoeven’s sample produced an
approximate mean clause length of 5.7 in the narrative genre products and 5.25 in the expository
genre products. Main effects across the entire sample for genre and age were indicated for total words,
number of clauses, and vocabulary density, all qualified by significant interactions between genre and
age. These results indicate developmental trends as exhibited in increasing levels of productivity,
grammatical complexity, and lexical diversity across genres (generally favoring the narrative genre) and
age levels (greatest levels of performance for oldest participants). However, visual analysis of the
results for the 4th grade English group’s VOCD scores indicate little difference between narrative and
expository lexical density at the 4th grade level (approximately mean VOCD of 50 and 52.5,
respectively). This result can be further explained by the significant interaction effect, for the entire
sample, between genre and age for lexical density. In other words, age differences may play a more
substantial role in one genre than in another for these particular variables.
Like Morris and Crump (1982) and Houck and Billingsley (1989), Berman and Verhoeven
(2002) found that measures of microstructure were sensitive to developmental change across
elementary, middle, and high school age levels. Furthermore, when considering the potential influence
of genre in development of microstructure, the Berman and Verhoeven results indicated little to no
effect of genre on 4th grade narrative and expository lexical diversity, and thus provide further support to
the similar findings of Scott and Windsor (2000), who reported the total number of different words as a
measure of lexical diversity.
In summary, previous investigations have examined the development of microstructure features
in one or more discourse genres. The findings have indicated that measures of productivity,
grammatical complexity, and lexical diversity can be sensitive to age and grade level. Furthermore,
Puranik (2006) detailed how various measures of microstructure cluster into 3-4 dimensions of written
language. This information is collectively important for explaining what children do linguistically with
their written products. However, multiple gaps remain regarding writing development of typically
developing students in grades 2-4. For example, more needs to be known about children's
development of certain microstructure elements (e.g., productivity, grammatical complexity, lexical
diversity) of the written product across multiple genres (e.g., narrative, expository) and between
subsequent grade levels within the elementary years (e.g., comparisons between performance in
grades 2, 3, 4).
Furthermore, there are additional levels of text that can be examined beyond the microstructure.
For example, knowledge of text structure, and the organization and coherence of the text, can
provide a different perspective on typical writing development in young children in the
2nd through 4th grades.
Macrostructure
In contrast to microstructure analysis, which usually involves comparison of features at the
word, T-unit, sentence and/or discourse levels, macrostructure analysis occurs mainly at the discourse
level (Scott, 2009). With the microstructure as a text base, macrostructure is the “abstract
representation of the global meaning structure…” which represents the “gist” of the text (Sanders &
Schilperoord, 2006, p. 387). Macrostructure analysis examines a writer’s conveyance of meaning at the
discourse level and may include measures of organization, cohesion, and genre-specific text structure.
Elements of macrostructure are often included in qualitative writing analyses, such as in holistic or
analytic scoring systems, or can be depicted quantitatively by counting cohesive ties or genre-specific
text structure elements present in a written product (e.g., counting story grammar elements in a
narrative text, or marking whether an introduction, body, and conclusion are present in an expository
text).
Narrative Macrostructure
Researchers have examined development of elements of macrostructure in products written in a
narrative genre. Laughton and Morris (1989) compared stories generated by 96 students with typical
development in grades 3 through 6 (n=24 per grade; age range 9-12 years) for inclusion of story
grammar elements. Students viewed a filmstrip within their classrooms and were asked to write a story
about the film. No time limits were imposed and the writing was generated using paper and pencil. The
narrative writing samples were scored for presence or absence of the major story components:
exposition (introduction of main character and supporting characters, relationship between characters,
and scene set), complication (definition of a problem or conflict), causal and temporal relationship
statements, and resolution (statements focused on solving the problem or achieving the goal).
Results revealed that 54% of students in 3rd and 4th grades produced complete stories. Results
were reported for 9 story grammar components. Percentages of students including each component, for
grades 3 and 4 respectively, were: main and other characters, 100% in both grades; character
relationship, 46% and 38%; location, 50% and 63%; and time, 63% and 63%. For the components of
complication, the percentages were: defining the problem, 75% and 63%; causal statements, 46% and
63%; and temporal statements, 50% and 67%. Fifty-four percent of both third and fourth graders
included the component of resolution. While these
percentages were not compared statistically, the results are visibly suggestive of developmental effects
on the inclusion of story grammar components between these two grade levels. (A general
developmental trend was noted to occur from third to sixth grade for causal and temporal relationships).
It is plausible that a more fine-grained analysis of story grammar structure features, such as through an
analytic scoring technique with multiple levels of performance possible for each variable, would reveal
additional information about the intra-grade development of these features in the written product.
Montague, Maddux, and Dereshiwsky (1990) measured the development of narrative
macrostructure in 36 students with typical development in grades 4/5, 7/8, and 10 (12 participants per
grade level). The researchers compared stories generated under 2 conditions: oral story retell (a story
conforming to canonical story grammar framework) and writing in response to a story starter (a paper
and pencil task). Both tasks were generated individually in a single session in a counterbalanced
fashion to control for order effects. No time limits were imposed, and the participants completed both
tasks in a total of 45 minutes. The oral retell samples were scored for presence or absence of 25
propositions from the story. The propositions were categorized into major setting (introducing the
protagonist), initiating event (change in state of affairs requiring protagonist response), attempt (goal-
related actions of protagonist), internal response (affective, emotional responses), direct consequence
(indicating whether goal is attained and any resulting changes), and reaction (character’s feelings,
thoughts related to outcome). The raters also identified the intercategory errors (temporal reversal of
two statements from different categories), intracategory (reversal of two statements within the same
category), and single statement reversal errors. Furthermore, the oral retell samples were scored for
substitutions, additions, and deletions of material. The written products produced with a story starter
were scored using two procedures: the first for parsing and categorizing propositions, and the second
for a holistic rating of the cohesion, organization, and episodic structure of the story on a 5-point Likert
scale. Results indicated that no developmental differences existed between or within tasks. However,
the small sample size per grade level (n=12) may have masked potential developmental differences.
Expository Macrostructure
As with the narrative genre, investigators have been interested in the development of expository
macrostructure in children's written products. Englert, Raphael, Anderson, Gregg and Anthony (1989)
measured metacognitive processes and use of expository text structure features in 138 students in
grades 4 and 5 (age range 9-11 years) who were grouped according to their reading achievement
levels (high achievement, low achievement, learning disabilities; 46 students per ability group).
Participants completed 2 expository compositions using 2 different expository sub-genre text structures
(e.g., comparison/contrast, explanation), read and recalled expository texts with the same text structure
as the writing products, and wrote summaries of the expository text they read. No time limits were
imposed for completion of the tasks. Written products were given a primary trait score (degree to which
the product incorporated the required organizational pattern for a specific text structure and appropriate
key words and phrases) and a holistic quality score (degree to which the product was interesting and
effectively communicated a particular text structure form). Products reflecting the explanation
expository text structure were scored for the following traits: introduction, comprehensive sequence of
steps, key words, and adherence to explanation organization (introduction, sensible sequence,
closure). Compare/contrast structure products were scored for the following traits: identification of 2
items for compare/contrast, description of similarities, explanation of differences, use of key words, and
adherence to compare/contrast organization (introduction, similarities/differences, conclusion). The trait
and holistic scores were combined to reflect the product's overall organizational score. Productivity was
rated based on the number of ideas in the product. Multivariate analysis of variance (MANOVA) revealed
significant effects for group and text structure; performance on organizing compare/contrast expository
compositions was significantly greater than that for explanation products. In contrast, the total number
of ideas was higher for the explanation products. Mean scores were not examined across grade levels
4-5.
Cross Genre Macrostructure
Fewer investigators have been interested in the development of macrostructure features across
more than one discourse genre. In an investigation of the relation between reading performance and
use of cohesion in writing, Cox, Shanahan, and Sulzby (1990) examined cohesion in the narrative and
expository writing products of 48 students in grades 3 and 5. The participants were grouped based on
reading performance (i.e., low achieving readers and high achieving readers). Each student completed
2 narrative and 2 expository writing tasks. Narrative writing was elicited following a discussion of 2 sets
of 3 pictures; expository writing was elicited following discussion of 2 researcher-made expository
articles. The narrative discussion focused on story grammar categories (e.g., setting, event, reaction).
The expository discussion included activation of prior knowledge and focused on the organizational
structure of the articles. The participants completed the writing tasks in groups, counterbalanced for
genre and task. The written products were segmented into T-units and analyzed for appropriate or
inappropriate use of simple coreferential and coclassificatory cohesive ties (e.g., pronoun reference,
use of "the" as a specific determiner, comparatives, demonstratives, ellipsis). The raw counts of
these cohesive ties were divided by the total number of T-units to yield 2 proportional scores: appropriate
and inappropriate cohesive ties. The products also were analyzed for cohesive harmony (another
proportional score) and overall quality using a holistic rating scale.
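The proportional scoring just described is a simple normalization by text length in T-units. A minimal sketch (hypothetical names, not the authors' code):

```python
# Minimal sketch (hypothetical names): cohesive-tie counts normalized by
# the total number of T-units, yielding the two proportional scores.

def cohesion_proportions(appropriate_ties, inappropriate_ties, total_t_units):
    return {
        "appropriate_per_t_unit": appropriate_ties / total_t_units,
        "inappropriate_per_t_unit": inappropriate_ties / total_t_units,
    }

# e.g., 12 appropriate and 3 inappropriate ties in a 10-T-unit product:
print(cohesion_proportions(12, 3, 10))
```

Normalizing by T-units, rather than comparing raw counts, allows longer and shorter products to be compared on the same scale.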
Results of repeated measures ANOVAs for each genre with grade and reading comprehension
levels as independent variables indicated varying and significant effects of genre, grade, and reading
levels. For example, in both genres, there were significant main effects for grade and reading levels for
appropriate cohesive ties, indicating that fifth grade students used appropriate cohesive ties more
frequently than third graders, and good readers used appropriate cohesive ties more frequently than
poorer readers. (There were no significant interactions between grade and reading levels.) However,
the results differed for harmonic cohesion. In the narrative genre, main effects for grade and reading
levels indicated that 5th graders and stronger readers used cohesive harmony more frequently (no
significant interactions). However, in the expository genre, there was not a significant effect for grade
level, only for reading level, indicating that good readers used cohesive harmony more often than lower
achieving readers, but 5th graders did not do so any more frequently than third graders. Furthermore,
results indicated that in the narrative genre, third graders and poor readers used significantly more
inappropriate cohesive ties (no interaction effects). However, in the expository genre, the only main
effect was for reading level, indicating that the poor readers used significantly more inappropriate
cohesive ties, but no difference existed based on grade level alone. Interestingly, a significant
interaction between grade and reading level existed. Post hoc analyses revealed the poor readers in 5th
grade more frequently used inappropriate cohesive ties. These findings collectively demonstrated a link
between reading development and knowledge of cohesion.
Most recently, Crawford, Helwig, and Tindal (2004) examined written products produced in the
narrative, imaginative, persuasive, and expository genres by 169 students with typical development in
5th grade and 134 in 8th grade, using a trait scoring system for ideas, organization, sentence fluency,
and conventions. Participants completed two assessment tasks, one lasting 30
minutes, the other occurring over a 3-day period. The trait scores from each assessment task were
compared. The results of an analysis of variance indicated there was no significant effect of discourse
genre on the writing trait performance scores of the participants in 5th grade. Paired t-tests revealed a
significant difference in the individual trait performance of the fifth graders for the traits of sentence
fluency and conventions, favoring the 3-day assessment task. Explicit comparisons of the students'
performance between grade levels were not reported. However, visual analysis of the data suggests a
possible age effect for mean composite trait scores favoring the 8th grade sample, at least for the 3-day
assessment (mean composite trait scores for the 30-minute assessment = 31.84 and 35.83; for the 3-day
assessment = 34.76 and 35.38; fifth and eighth grades, respectively).
The Crawford et al. (2004) study was unique in the number and type of discourse genres within which
student compositions were elicited and analyzed for macrostructure elements. A comprehensive
literature search did not reveal any previous investigations involving this type of cross-genre analysis
with the written products of younger students, such as those in grades 2-4, the primary focus of the
present investigation.
Microstructure and Macrostructure in a Single Genre
In many instances, investigators have sought to document developmental trends across both
microstructure and macrostructure variables within a single genre. Nodine, Barenbaum, and Newcomer
(1985) measured the extent to which 31 children with typical development in grades 5 and 6 (mean age
= 11:6) used story schema in their narrative writing products. Their performance was compared to one
group of 30 students with learning disabilities (LD; mean age = 11:7) and another comparison group of
31 students with reading disabilities (RD; mean age = 11:6). The researchers assessed the children's
macrostructure elements of story schema, cohesion, and microstructure elements of productivity using
the creative writing component of the Diagnostic Achievement Battery (DAB; Newcomer & Curtis,
1984), a standardized test designed to evaluate multiple aspects of writing. Participants were shown a
series of pictures and instructed to write a story about them using paper and pencil. The students were
encouraged to plan their writing during the first five minutes and then were allowed 20 minutes to write.
The writing samples were scored in three areas: writing categories (the extent to which a story was
produced; story, story-like, descriptive, expressive), measures of productivity/fluency (total words, mean
length of T-unit), and a measure of cohesion (authors noted whether stories were incoherent,
confusing, or included unclear referent).
Nodine et al.’s results indicated that the majority of the participants with typical development (TD)
generated compositions determined to represent stories (n = 22), while fewer students produced
story-like (6) and descriptive (3) compositions, and none produced expressive compositions. The TD
group produced an average of 104.4 words and a mean T-unit length of 8.6. Four students generated
writing that was rated as confusing and four students included at least one unclear referent. The
findings suggest that 11-year-old children exhibiting typical development have mastered story schema
well enough to write a short story successfully, meeting the basic requirements. The researchers
discussed the possibility that productivity (microstructure) and use of story schema (macrostructure)
were related; however, direct inferential methods of analysis to this effect (i.e., correlation analyses)
were not reported.
Barenbaum, Newcomer, and Nodine (1987) analyzed the narrative written products of students
in grades 3, 5, and 7 for elements of both microstructure (productivity: total words) and macrostructure
(story categories, composition consistency). Similar to Cox et al. (1990), participants were grouped
based on reading achievement levels (low achieving, typically achieving, and learning disabled). The
typically achieving participant group included 19 third graders (age range = 8-10 years), 19 fifth graders
(age range = 10-12 years), and 17 seventh graders (age range = 12-15 years). Two stories were
elicited per participant using a picture prompt in individual sessions. In both sessions, participants were
encouraged to plan, and for the second story, participants were instructed to draw a picture prior to
writing their story. Stories were classified into one of the following categories: story, primitive story,
action sequence, descriptive, or expressive. Significant differences were noted in composition
category between tasks, grade levels, and ability groups. The written products of the third graders as an
age group, regardless of ability level, included fewer "stories" proportionally than the 5th and 7th grade
groups. Interestingly, the 5th graders produced more "stories" than both the 3rd and 7th grade groups.
Possible explanations for this included the increased knowledge of story schema in the 5th grade
beyond what is typically experienced at the third grade level, or other internal factors such as motivation
to write about topics that may not have been grade-appropriate for the older students. The researchers
concluded that text structure and productivity related to the story category.
In a follow-up investigation to Barenbaum et al. (1987) and using the same participants,
Newcomer, Barenbaum, and Nodine (1988) compared the productivity (total words) and coherence in
the oral and written narratives produced by the students. As in Barenbaum et al., stories were classified
as either a story, primitive story, action sequence, descriptive, or expressive. To measure coherence,
written products also were measured for whether they included unclear referents, or were confusing or
incoherent. Results revealed that the typically achieving group across grade levels produced more
stories in the oral modality than in the written, but that there was no difference in story production
between modalities for the other achievement groups. Coherence was better in the oral modality for all
groups. As a general trend, the number of coherence errors increased as the complexity of the story
increased. Regarding productivity, the third graders demonstrated the lowest levels, consistent with
developmental expectations.
Also comparing oral and written narratives, Gillam and Johnston (1992) examined elements of
both microstructure (number and mean length of T-unit, percentage of complex T-units, percentage of
grammatically unacceptable T-units, number of predicate types per T-unit) and macrostructure (number
of connectives per T-unit, cohesion, number of constituents) in the stories produced by 40 students in 4
groups: language/learning impaired, chronological age-matched comparison group, spoken language
age-matched group, and reading-matched group (n=10 per group; age range 9-12 years). Sets of
picture prompts were used to elicit 2 oral and 2 written stories per participant. Participants selected a
picture from a set of three and were instructed to create a story based on the picture they selected. For
each participant, the longest oral and longest written narratives were then transcribed into SALT (length
determined by story constituents). The unit of segmentation was the T-unit. Narratives were examined
for complexity of linguistic form (morphemes per T-unit, number of T-units, percent complex T-units,
number of connectives per T-unit including causal, conditional and temporal connectors), and
underlying content (propositions per T-unit, constituents per story, predicate types per T-unit, and
percent of dyadic constituents). Results indicated that for all groups, written narratives were more
difficult to produce than were oral narratives, as demonstrated by fewer morphemes and propositions
and increased error rate in the written products. However, for the typically developing participants, their
written products exhibited greater grammatical complexity than their oral narratives. Differences were
not examined across grade levels or for age effects.
Continuing the line of investigations comparing the production of oral and written narratives,
Mackie and Dockrell (2004) examined one written and one spoken narrative produced by 33 children in
three groups: children with specific language impairment (mean age = 11 years), chronological age-
matched peers (CA), and language age-matched peers (LA; mean age = 7.3 years; n=11 per group).
The written products were elicited using a picture prompt in a 30-minute writing session. The writing
samples were analyzed for elements of microstructure (number of words, number of words per minute,
proportion of syntax errors, spelling errors), and macrostructure as measured by the Picture Story
Language Test (PSLT) scales for Content and Abstract-Concrete (a quality rating in which 5 levels are
used to rate the level of abstract thought or ideation in the sample). Results revealed that the typically-
developing CA group and the younger group of LA-matched peers produced an average of 91 and 53.2
total words, 8 and 4 words per minute, .02 and .04 proportion of syntax errors, .03 and .16 proportion of
spelling errors, and had a mean Content score (PSLT) of 15 and 12.4, respectively. These findings
indicate a developmental effect on performance of the older CA group, reflected in greater overall
productivity, fluency, and syntactic accuracy, and fewer spelling errors in their narrative written products.
Furthermore, correlations among writing, reading, and oral language measures were examined.
For the two groups of typically-developing students (CA and LA groups), no statistically significant
correlations existed among reading, oral language, and writing measures, although trends were noted
indicating a possible relationship between the content measure and total words produced. These trends
suggest a relationship between measures of macrostructure and microstructure. However, the small
sample size and the sensitivity of the measures chosen may have limited the findings and prohibit further
interpretation for the data derived from the Mackie and Dockrell sample.
In a large-scale longitudinal investigation, Fey, Catts, Proctor-Williams, Tomblin, and Zhang
(2004) analyzed the spoken and written narratives of 538 students produced in grades 2 and 4,
including 238 students exhibiting typical development, for elements of microstructure (lexical diversity:
number of different words; grammatical complexity and accuracy: mean length of C-unit, number of C-
The free and reduced lunch rate for the elementary school was 24.1%. The following demographic
information was collected for each participant: gender, age, ethnicity, status in exceptional student
education (if applicable) and whether the student was a recipient of free/reduced lunch (as a measure
of socioeconomic status).
Participants were recruited in conjunction with a larger investigation examining an experimental
spelling intervention. Approval was obtained from the Florida State University Institutional Review
Board (IRB) for the procedures and consent forms for this study (see Appendix A). Additionally,
permission to conduct research was obtained from the school. Consent forms were sent home to all
second, third, and fourth grade students. Participants had to be monolingual English-speaking, enrolled
in general education, with no history of sensory impairments as determined by school records.
Consultation between the PI and research director at the school confirmed whether participants with
parental consent met the inclusionary criteria. Writing samples were obtained from, and
group-administered measures were conducted with, all participants who returned a signed parental
consent form. Attempts were made to recruit equal numbers of male and female students, and for the sample to
be ethnically- and socioeconomically-diverse.
A total of 93 participants were recruited for the spelling intervention study and writing samples
were collected from 89 participants. Four of the consented participants did not complete the writing
samples for the present study and were therefore excluded. The final sample for the present
investigation (n = 89) included 37 males (41.6%) and 52 females (58.4%). The participants ranged in
age from 7 years, 0 months to 10 years, 11 months (M = 8 years, 6 months; SD = 10.9 months). The
participants represented a range of ethnic backgrounds, including 55% Caucasian, 20.2% African
American, 11.2% Hispanic, 3.4% Asian American, 7.9% multiethnic, and 2.2% unreported ethnic
backgrounds. School records indicated that 13 students in the sample were receiving support services
in special education. However, no specific information was provided regarding the students' primary
exceptionalities or details regarding services received. All of the students were enrolled in general
education. Examination of the descriptive statistics on these students' performance on the independent
measures indicated mean scores within the average range (Group Reading Assessment and
Diagnostic Evaluation, Reading Comprehension Composite M = 90.5, SD = 8.51; PPVT-IV M = 101.10,
SD = 14.14). Demographic characteristics are depicted per grade level in Table 1.
Table 1
Demographic Characteristics of Participants by Grade Level.
Characteristic                   Grade 2       Grade 3       Grade 4       Total n
Age (M, SD)                      7:9 (7.35)    8:5 (3.52)    9:8 (5.77)    --
Gender
  Male                           17 (61%)      14 (42%)      12 (43%)      43
  Female                         11 (39%)      19 (58%)      16 (57%)      46
Ethnicity
  Caucasian                      16 (57%)      20 (61%)      13 (46%)      49
  African American                4 (14%)       5 (15%)       9 (32%)      18
  Asian American                  2 (7%)        1 (3%)        0 (0%)        3
  Hispanic                        3 (11%)       5 (15%)       2 (7%)       10
  Other                           3 (11%)       2 (6%)        4 (14%)       9
Free/Reduced Lunch
  Receiving                       5 (18%)       8 (24%)      10 (36%)      23
  Not Receiving                  23 (82%)      25 (76%)      18 (64%)      66
Number of participants           28            33            28

Note. Standard deviation for age given in parentheses, measured in months. Total sample size = 89.
Measures
Reading
The Group Reading Assessment and Diagnostic Evaluation (GRADE; Williams, 2001) was
administered to obtain participants' reading levels. The GRADE is a norm-referenced, research-based
reading assessment which may be administered in groups. In general, the GRADE measures individual
reading skills in the areas of comprehension, vocabulary, and oral language. Levels 2, 3, and 4, which
correspond to grade levels 2, 3, and 4, were administered to participants within large groups (e.g.,
classrooms). The grade-level appropriate versions of the Sentence Comprehension and Passage
Comprehension subtests were administered and standard scores obtained; for Levels 2 and 3, Word
Reading was administered as well. The GRADE measures were scored
according to standardized test protocol. The test was used to corroborate reports of participants’ general
reading skills within typical limits. The GRADE measure was selected as it allowed direct comparison of
student performance from one grade to another (Moore-Brown, Montgomery, Bielinski, & Shubin,
2005).
The GRADE was standardized using a sample of 33,432 students from 46 states in preschool
through postsecondary grades. According to the test manual, internal consistency of the GRADE was
determined for each subtest, composite, and test level using coefficient alpha and split-half methods
based on classical test theory, and ranged from .95 to .99, indicating high levels of internal
reliability for each form, level, and grade-enrollment group. Test-retest reliability coefficients for levels 2
through 4 ranged from .89 to .98. Concurrent validity was established at .71 with the Iowa Test of Basic
Skills (ITBS), .82 to .87 with the California Achievement Test (CAT), and .86 to .90 with the Gates-
MacGinitie Reading Tests (Gates).
Vocabulary
In addition to the GRADE measure, the Peabody Picture Vocabulary Test - Fourth Edition
(PPVT-IV; Dunn & Dunn, 2007) was administered as part of a battery of assessments to ascertain
participants’ receptive vocabulary levels, and to corroborate teacher report of receptive language skills
within typical limits. The PPVT-IV is a measure commonly utilized in language and literacy research.
Internal consistency of the PPVT-IV by age and grade is .94 to .95. Test-retest reliability by age is .93.
The PPVT-IV has average correlations of .72 with the CELF-4 Core Language scale, .71 with the
CELF-4 Receptive Language scale, and .70 with the CELF-4 Expressive Language scale. Additionally,
the PPVT-IV has an average correlation of .60 with the GRADE Total Test Score for levels 2-4.
GRADE and PPVT-IV scores for the participants by grade level are reported in Table 2. One-
way analysis of variance (ANOVA) indicated that the mean group standard scores for the PPVT-IV did
not significantly differ between grade levels, F(2,86) = 2.413, p>.05, partial η2 = .07. ANOVA results for
comparison of grade level means for the GRADE Comprehension Composite scores indicated a
significant main effect for grade level, F(2,86) = 5.86, p = .004, partial η2 = .12. Follow-up tests to
evaluate pairwise differences among the means indicated no significant differences between second
and third grades or between second and fourth grades, but a significant difference between third and
fourth grades, favoring the third grade group (mean difference = 9.65, p = .003). The 95% confidence
intervals for the pairwise differences were -9.95 to .364 for the second-third grade comparison, -.57 to
13.57 for the second-fourth grade comparison, and 2.85 to 16.45 for the third-fourth grade comparison.
Concerns regarding the difference in GRADE Comprehension Composite scores between the third and
fourth grade groups were allayed by the consideration that the fourth grade scores, while slightly lower
than those of the third grade, still fell well within the average range (M = 95.90).
Examination of the fourth grade scores indicated a small group of scores clustered at the lower portion
of the average range, thereby influencing the group mean and standard deviation. Therefore, it was
concluded that the GRADE scores fulfilled their main purpose in the present investigation, to verify
reading scores within normal limits across all three grade levels, and could also serve a second
purpose as a covariate in the main analyses. Previous literature supports the choice of reading ability
as a covariate in studies examining written microstructure and macrostructure (Cox et al., 1990;
Montague et al., 1990).
Table 2. Means and standard deviations for independent measures.

Measure                            Grade 2          Grade 3          Grade 4
Vocabulary
  PPVT-IV raw score                136.86 (11.61)   144.70 (17.43)   150.12 (14.82)
  PPVT-IV standard score           110.68 (9.17)    109.03 (14.31)   102.61 (12.42)
Reading (GRADE measure)
  Word Reading raw score           26.18 (2.51)     28.82 (1.36)     N/A
  Sentence Comprehension raw       14.11 (3.94)     17.55 (1.70)     12.46 (3.82)
  Passage Comprehension raw        16.04 (6.08)     18.00 (5.16)     13.14 (6.27)
  Comprehension Composite raw      32.29 (11.97)    35.52 (6.17)     25.61 (9.01)
  Comprehension Composite SS       102.40 (12.15)   105.55 (9.75)    95.90 (11.50)
Note. Standard deviations are reported in parentheses. Level 4 of the GRADE does not include the Word Reading subtest.
Writing
Two writing samples were collected per participant upon presentation of one expository and one
narrative writing prompt (see Appendix B) during a single writing session. Two pieces of paper were
provided for each writing sample to be completed. Participants used paper and pencil and were allowed
15 minutes per composition. In addition to reading the prompts, the evaluator wrote the prompts on the
board for the students to view. The elicitation procedure was modeled on the Florida Writing
Assessment Program (Florida Writes) and its progress monitoring program, Writes upon Request
(FLDOE, 2009), so that the method mirrored writing assessments that take place within the context of
normal education practices for students in grades one through four. The writing scale designed for this
study, consisting of nine items for microstructure and three for macrostructure, had good internal
consistency, with a Cronbach alpha coefficient of .80.
Procedure
After informed consent was obtained, students were assessed during the school day on a group or
classroom-wide basis, at a time deemed appropriate by the teachers, to minimize interruptions to
instruction. All participants completed the appropriate level of the GRADE assessment
and one narrative and one expository writing sample. The GRADE required that the listening
comprehension subtest be administered first. The GRADE assessment lasted 45-60 minutes; writing
samples lasted 15 minutes each. Testing was conducted by the primary investigator and trained
research assistants. Data collection took place during the 2008-2009 school year.
Data Analysis
Dependent Measures
Multiple dependent measures were analyzed from the writing samples to describe the elements
of microstructure and macrostructure present in the samples. The primary investigator
transcribed the writing samples into a computer database according to Systematic Analysis of
Language Transcript conventions (SALT, Version 8; Miller & Chapman, 2005). The unit of
segmentation was the T-unit, as suggested by Nelson et al. (2004) and consistent with previous
investigations (Nelson & Van Meter, 2007; Puranik et al., 2007, 2008; Scott & Windsor, 2000). Items
that were not part of the sample text, such as “The end,” “Sincerely,” and “That’s all,” were not counted
within the dependent measures. After practicing and establishing coding guidelines, the dependent
measures related to microstructure variables were scored following the SALT coding protocol in
Appendix C. Two trained graduate student assistants assisted with the reliability coding: one rater was
primarily responsible for coding the microstructure variables, and a second for scoring the
macrostructure variables using the rubric.
Microstructure: Productivity
The microstructure productivity measures, total number of words (TNW) and total number of
T-units (TNT), were calculated automatically in SALT. TNW, one of the most frequently used measures
of productivity, consists of the number of words produced in a given writing sample (Berman &
Verhoeven, 2002; Mackie & Dockrell, 2004; Nelson & Van Meter, 2007; Puranik, 2006; Scott &
Windsor, 2000). TNT is also widely used; SALT calculates it as the number of utterances in the
transcript (because the transcript is segmented at the level of the T-unit).
Microstructure: Grammatical Complexity
Numerous measures exist for examining the grammatical complexity of the written product.
Mean length of T-unit (MLTu) is a commonly employed measure that is automatically calculated in
SALT (Berman & Verhoeven, 2002; Nelson & Van Meter, 2007; Puranik, Lombardino, & Altmann, 2007,
2008; Scott & Windsor, 2000) by dividing the total number of words by the total number of T-units in a
sample. Additionally, multiple writing variables have been examined in writing research using the unit of
the clause. A clause consists of a group of related words including a subject and a verb (Puranik,
2006). In this study, the total number of clauses (TNC) was calculated in order to compute clause
density (CLD). Both measures have been used in previous examinations of the written product
(Puranik, Lombardino, & Altmann, 2007, 2008; Scott & Windsor, 2000). CLD (Scott & Stokes, 1995;
Scott & Windsor, 2000; Puranik et al., 2007, 2008) was calculated by dividing the total
number of clauses (main and subordinate) in the sample by the total number of T-units across the
sample. In addition, the number of clauses per sentence (CPS) was measured to capture grammatical
complexity at the sentence level. Two sets of sample files were created in SALT to calculate clauses
per T-unit separately from clauses per sentence.
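The productivity and complexity calculations described above can be sketched in a few lines. This is an illustrative reconstruction, not the SALT program itself, and the sample T-units and clause counts below are invented for demonstration:

```python
# Sketch of the SALT-style microstructure measures, assuming the sample has
# already been segmented into T-units and each T-unit's clause count coded.

def microstructure_measures(t_units, clause_counts):
    """t_units: list of T-unit strings; clause_counts: clauses per T-unit."""
    tnw = sum(len(t.split()) for t in t_units)   # total number of words (TNW)
    tnt = len(t_units)                           # total number of T-units (TNT)
    mltu = tnw / tnt                             # mean length of T-unit (MLTu)
    tnc = sum(clause_counts)                     # total number of clauses (TNC)
    cld = tnc / tnt                              # clause density (CLD)
    return {"TNW": tnw, "TNT": tnt, "MLTu": round(mltu, 2),
            "TNC": tnc, "CLD": round(cld, 2)}

# Two invented T-units: one simple, one with a subordinate clause.
sample = ["the dog ran home", "he was happy because he found a bone"]
print(microstructure_measures(sample, clause_counts=[1, 2]))
```

Because the study created separate SALT files for clauses per T-unit and clauses per sentence, the same tallying would simply be repeated over sentence-segmented input to obtain CPS.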
Transcripts within the SALT program were coded for sentence type (complex vs. simple, correct
vs. incorrect), and presence of grammatical errors. A simple sentence consisted of one main clause
and only one verb, while a complex sentence included one main clause plus one or more
embedded/subordinate clauses, two main clauses, or one main clause and verb phrase joined by a
coordinating conjunction. Grammatical errors were defined as errors occurring in verb or pronoun
tense, agreement or case, omitted or incorrect inflection, omitted or substituted grammatical elements,
and violated word order. A sentence without any grammatical errors was considered correct, while a
sentence with one or more errors was deemed incorrect. Occasionally, a grammatical error occurred
across sentences: each individual sentence was grammatically accurate, but verb tense shifted from
one sentence to another or across the body of the paper. Such cross-sentence errors were not counted
as sentence-level errors and were indicated in the SALT transcript as GEX. Consistent with previous
investigations, the total number of grammatical errors
(TNGE) and percentage of grammatically correct sentences (%GS) were calculated (Mackie &
Dockrell, 2004; Nelson & Van Meter, 2007; Puranik, 2006; Puranik, Lombardino, & Altmann, 2007).
GEX were counted within the TNGE but did not affect %GS, as only within-sentence errors entered that
calculation. Considerations were made regarding the potential influence of participant dialect on the
calculation of grammatical errors; the results of those considerations are discussed later.
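The accuracy tallies described above, with within-sentence errors feeding both TNGE and %GS while cross-sentence GEX errors feed only TNGE, can be sketched as follows; the sentence codes here are invented for illustration:

```python
# Hedged sketch of the grammatical-accuracy measures, assuming each
# sentence has already been coded with its within-sentence error count
# and cross-sentence (GEX) errors have been tallied separately.

def grammatical_accuracy(sentence_error_counts, gex_count=0):
    """sentence_error_counts: within-sentence errors per sentence."""
    tnge = sum(sentence_error_counts) + gex_count        # total errors (TNGE)
    correct = sum(1 for n in sentence_error_counts if n == 0)
    pct_gs = 100 * correct / len(sentence_error_counts)  # % grammatical sentences
    return tnge, pct_gs

# Five sentences: three error-free, one with 2 errors, one with 1; one GEX.
tnge, pct_gs = grammatical_accuracy([0, 2, 0, 1, 0], gex_count=1)
print(tnge, pct_gs)   # 4 errors in TNGE; 60.0% grammatically correct sentences
```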
Microstructure: Lexical Diversity
Lexical diversity, which is thought to indicate vocabulary size and control, is most commonly
measured directly through either the number of different words (NDW) in the written text, or the type-
token ratio (TTR; ratio of different word types to overall words); although, indirect measures of lexical
diversity are often evident in holistic rubrics used for writing assessment (e.g., word choice) (Scott,
2009). In recent years, NDW has become the preferred measure of lexical diversity among those
23
researching writing development as it can be automatically calculated in SALT, is sensitive to
developmental changes, (Nelson, Bahr, and Van Meter, 2004; Puranik, 2006), and is considered to be
relatively free of culture and socioeconomic bias (e.g., Zevenbergen, Whitehurst, & Zevenberger,
2003). Previous researchers have suggested that NDW or TTR are most accurately interpreted when
sample size is controlled for (Scott, 2009; Scott & Windsor, 2000). For this reason, an additional related
measure of lexical properties was incorporated in the present investigation that is not confounded by
writing sample size. Lexical density (LXD) was calculated as the proportion of content words (e.g., nouns, verbs,
adjectives) to total words (Scott, 2009). By taking the proportion of content words to total words, each
sample was then measured for lexical density on the same scale regardless of overall sample length,
thereby reducing the impact of sample size.
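As a hedged sketch of these lexical measures: the tiny sample and its content-word flags below are invented (in the study, content words were identified by coders in SALT), but the arithmetic matches the definitions above:

```python
# Illustrative computation of NDW, TTR, and lexical density from a token
# list with pre-identified content words.

def lexical_measures(words, content_flags):
    """words: tokens in the sample; content_flags: True if token is a content word."""
    ndw = len({w.lower() for w in words})        # number of different words (NDW)
    ttr = ndw / len(words)                       # type-token ratio (TTR)
    lxd = sum(content_flags) / len(words)        # lexical density (LXD)
    return ndw, round(ttr, 2), round(lxd, 2)

words = ["the", "dog", "ran", "and", "the", "dog", "barked"]
flags = [False, True, True, False, False, True, True]
print(lexical_measures(words, flags))   # (5, 0.71, 0.57)
```

Note how LXD, as a proportion, stays on the same 0-1 scale regardless of sample length, which is the property the text highlights.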
Macrostructure
The dependent measures related to macrostructure variables (organization, genre-specific text
structure, cohesion) were scored according to an analytic scoring system (see Appendix D for
operational definitions and protocol). The operational definitions for examining levels of organization,
text structure, and cohesion were formed based on key features of informal writing inventories used in
previous investigations (Crawford et al., 2004; Moats, Foorman, & Taylor, 2006; Nelson et al., 2004;
Singer & Bashir, 2002). Organization was examined within the introduction, body, and conclusion of the
product. Products also were examined for use of an appropriate genre-specific text structure and
overall cohesion. Each item received a score ranging from 1 to 4. The individual trait scores were combined for
an overall macrostructure composite score. In summary, there were 13 writing variables (9 for
microstructure, 4 for macrostructure) examined, as depicted in Table 3 below:
Table 3: Dependent Writing Variables
Level Dependent Measure
Microstructure
Productivity Total number of words (TNW)
Total number of T-units (TNT)
Grammatical Complexity Mean Length T-unit (MLTu)
Clauses per sentence (CPS)
Clause density (# clauses per T-unit) (CLD)
Percentage of grammatical sentences (%GS)
Total grammatical errors (TNGE)
Lexical Diversity Number of different words (NDW)
Lexical density (LXD)
Macrostructure
Organization Trait score (1-4)
Text Structure Trait score (1-4)
Cohesion Trait score (1-4)
Macrostructure Composite Combination of the above 3 trait scores
Research Assistant Training
Three graduate students enrolled in the Department of Communication Sciences and Disorders
were recruited as research assistants. Research assistants met specific requirements to participate in
research activities, including 1) use of English as primary language, 2) prior experience working with
school-age populations, 3) completion of 5 hours training before participating in data analysis, and 4)
availability to assist with data analysis during the spring and summer semesters.
Three training sessions were conducted. The first training session served three purposes: 1) the PI
provided an overview of the problem being examined and the purpose of the investigation; 2) the
PI and research assistants discussed their respective roles and responsibilities; and 3) the PI and
research assistants discussed scheduling and each assistant’s availability for the semester. The second
training session consisted of an introduction to the SALT program. Each assistant received a data
coding training manual. The PI provided an overview of the manual and the SALT program, and
explained the procedure for assistants to follow. The training manual included an overview of the
problem being addressed and the research questions, an overview of how the samples were transcribed into
SALT by the PI, the scoring protocol for SALT coding of microstructure and macrostructure elements, anchor
samples for coding practice, procedure for establishing reliability with the PI for coding, and proposed
schedule for completion of data transcription and coding. The third session involved guided practice
coding writing samples into the SALT program. For training purposes, the PI and each reliability coder
independently coded practice samples and were required to reach 90% agreement before moving on to
coding the reliability set. Additional training sessions were conducted as needed.
Inter-Rater Reliability
Reliability of the dependent measures was established using a randomly selected sub-sample
comprising 25% of the total number of writing samples collected. Samples selected for reliability
coding represented approximately 25% of the samples produced within each genre and
within each grade level. Percent agreement and Cohen's Kappa coefficients were calculated for the
following variable characteristics that required coding in SALT to produce the scores for each of the
dependent variables: T-unit segmentation, clauses per T-unit, clauses per sentence, sentence codes to
indicate grammatical complexity (simple vs. complex) and accuracy (correct vs. incorrect) of the
sentence structure, identification of content words, and identification of grammatical errors. The actual
values for the following variables were automatically calculated in SALT, and therefore did not require
reliability estimates: total words, total T-units, MLTu, total clauses per sentence, clause density,
percentage of grammatical sentences, total grammatical errors, and number of different words.
Percent agreement was calculated for each measure by dividing the number of agreements by the
sum of agreements and disagreements, and multiplying by 100. Percent
agreement ranged from 83% to 98% for the microstructure variables, and from 84% to 93% for
macrostructure variables. Percent agreement is a commonly reported measure of inter-observer
agreement; however, a specific disadvantage is that it does not account for the possibility of chance
agreements. Therefore, Cohen's Kappa coefficients also were calculated, by considering the
proportions of observed and chance agreement. Kappa coefficients of >.6 were required to establish
adequate reliability. Kappa values may be interpreted as follows: 0.41-0.60 fair, 0.61-0.80 good, and
over 0.80 very good reliability among raters (Warner, 2008). Kappas ranged from 0.80 to 0.98 for
microstructure variables and from .72 to .90 for macrostructure variables. Thus, reliability was
established for all coded dependent measures and judged adequate for all subsequent analyses. The
kappa coefficients and percent agreement levels are presented in Table 4.
Table 4
Cohen's Kappa coefficients and percent agreement.
Coded Variable                 Kappa statistic*   Percent agreement
Microstructure
  1. T-unit segmentation       .95                96%
  2. Clause density            .80                83%
  3. Clauses per sentence      .98                98%
  4. Total content words       .95                96%
  5. Total grammatical errors  .91                94%
  6. Sentence codes            .78                87%
Macrostructure
  1. Organization              .81                84%
  2. Text structure            .90                93%
  3. Cohesion                  .72                84%
Note. *All reported Kappa coefficients were significant at p < .001.
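The two reliability statistics reported above can be sketched for a pair of raters assigning categorical codes; the rater sequences below are invented, not the study's data:

```python
# Percent agreement and Cohen's kappa for two raters' categorical codes.
from collections import Counter

def percent_agreement(r1, r2):
    agreements = sum(a == b for a, b in zip(r1, r2))
    return 100 * agreements / len(r1)

def cohens_kappa(r1, r2):
    n = len(r1)
    p_obs = sum(a == b for a, b in zip(r1, r2)) / n
    c1, c2 = Counter(r1), Counter(r2)
    # Chance agreement: sum over categories of the product of each rater's
    # marginal proportions for that category.
    p_chance = sum(c1[k] * c2[k] for k in set(r1) | set(r2)) / n ** 2
    return (p_obs - p_chance) / (1 - p_chance)

rater1 = ["simple", "complex", "simple", "simple", "complex", "simple"]
rater2 = ["simple", "complex", "simple", "complex", "complex", "simple"]
print(round(percent_agreement(rater1, rater2), 1))   # 83.3
print(round(cohens_kappa(rater1, rater2), 2))        # 0.67
```

This illustrates the point made in the text: percent agreement alone (83.3%) overstates reliability relative to kappa (0.67), because kappa discounts agreements expected by chance.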
Research Design
This investigation employed multivariate descriptive quantitative research methods to answer
the proposed research questions. In general, the overarching goals of quantitative methods are to test
theories and hypotheses, identify correlational and/or causal relations, and determine group differences
or patterns (Kazdin, 2003). More specifically, descriptive quantitative methods serve to identify
characteristics of observed phenomena, and explore possible correlations among phenomena without
changing them (Leedy & Ormond, 2005). This investigation employed a cross-sectional quantitative
design in that it considered development across grade levels. With multiple dependent variables and
grade level serving as a between-subjects factor, and the z scores of the GRADE Comprehension
Composite as a covariate, two separate multivariate analyses of covariance (MANCOVA) were
conducted, one per genre. To compare microstructure and macrostructure
performance across genres, repeated measures analysis of variance (RM-ANOVA) methods were
utilized. Collectively these analyses were designed to answer the first and second research questions.
To answer the third research question, multiple correlations were conducted. Additionally, preliminary
analyses involving exploratory factor analysis techniques for data reduction were employed.
Power Analysis
Data were analyzed using the Predictive Analytics Software for Windows (PASW), version 17.0
(SPSS, 2009; formerly known as the Statistical Package for Social Science, SPSS). To answer the first
and second research questions, data for the dependent writing measures were analyzed using
multivariate techniques (MANOVA) with grade level as a between-subjects factor, and genre as a
within-subjects factor. Adjustments were made for multiple pairwise comparisons to compare
performance of participants in the 3 grades. Correction for Type I error occurred via Bonferroni
correction. Measures of effect size were reported using partial eta squared (using an alpha level of .05,
and ability to detect a large effect size with power equal to or exceeding .70). The maximum number of
variables included in a single MANOVA was five. As such, in order to detect a large effect for a
MANOVA with 5 variables, 3 grade-level groups, α = .05, and power = .70, at least 25 participants per
group were recommended (Stevens, 1997). Consequently, a total of 75-100 participants was required to achieve
sufficient power to conduct the MANOVAs.
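The Bonferroni correction described above works by dividing the familywise alpha across the pairwise comparisons. A small sketch with three grade levels (the two nonsignificant p values are placeholders; only p = .003 for the third-fourth grade comparison is reported in this chapter):

```python
# Bonferroni-adjusted pairwise comparisons for three grade-level groups.
from itertools import combinations

grades = [2, 3, 4]
pairs = list(combinations(grades, 2))   # (2,3), (2,4), (3,4)
alpha = 0.05
alpha_per_test = alpha / len(pairs)     # 0.05 / 3, roughly .0167 per comparison

# Illustrative p values; a test is significant only past the adjusted threshold.
p_values = {(2, 3): 0.06, (2, 4): 0.07, (3, 4): 0.003}
for pair, p in p_values.items():
    print(pair, "significant" if p < alpha_per_test else "n.s.")
```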
CHAPTER THREE
RESULTS
The analyses of data and results are presented in five sections. The first section details the
preliminary analyses that were performed to survey the data, check assumptions for the planned
analyses, and utilize data reduction techniques (e.g., exploratory factor analysis). The second section
presents the data on the participants’ performance on the dependent writing measures, along with the
results of the two MANCOVAs conducted to address the first and second research aims: determining
the progression of microstructure and macrostructure elements in the narrative and expository writing
of children in second, third, and fourth grades. The third section includes the results of the correlational
analyses to determine the relations among measures of microstructure and macrostructure, addressing
the third research aim. Findings related to certain demographic characteristics (e.g., ethnicity, SES,
gender) of the sample are explored in the fourth results section. Finally, a post hoc analysis is
presented.
Preliminary Analyses
Data were surveyed for normality and outliers by grade. To detect outliers in the data, all values
were converted to z scores. Outliers were identified using a criterion of the mean plus or minus two
standard deviations. This method revealed a total of 41 univariate outliers across both genres sampled:
total words (3 outliers; 2 narrative, 1 expository), lexical density (2 outliers; 1 narrative, 1 expository),
total T-units (3 narrative outliers), clause density (4 outliers; 2 narrative, 2 expository), clauses per

Factors and Respective Dependent Variables Analyzed via MANCOVA
Factor Dependent Measure
Productivity Total words
Total T-units
Number of Different Words
Grammatical Complexity Mean Length T-unit
Clauses per sentence
Clause density (# clauses per T-unit)
Grammatical Accuracy Percentage of grammatical sentences
Total grammatical errors
Lexical Diversity Lexical density
Macrostructure Organization Trait Score
Text Structure Trait Score
Cohesion Trait Score
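The z-score outlier screening described at the start of this section (values beyond two standard deviations of the mean) can be sketched as follows; the score list is invented for illustration:

```python
# Flag univariate outliers: convert values to z scores and flag any value
# falling outside the mean plus or minus two standard deviations.
from statistics import mean, stdev

def flag_outliers(values, criterion=2.0):
    m, sd = mean(values), stdev(values)
    z_scores = [(v - m) / sd for v in values]
    return [v for v, z in zip(values, z_scores) if abs(z) > criterion]

scores = [24, 27, 22, 25, 26, 23, 25, 24, 26, 80]   # one extreme value
print(flag_outliers(scores))   # [80]
```

In practice, screening would be run within each grade and genre, as the text describes, since a value typical for fourth grade could be extreme for second grade.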
Table 10. Descriptive statistics for dependent measures; narrative genre.

                               Grade 2          Grade 3          Grade 4
Measure                        M      SD        M      SD        M      SD
Productivity
  Total Words                  24.27  12.22     53.24  25.57     77.59  34.56
  Total T-units                3.46   1.75      6.45   3.28      9.59   4.42
  Number of Different Words    18.31  7.05      36.88  14.58     47.33  17.06
Grammatical complexity
Table 11. Descriptive statistics for dependent measures; expository genre.

                               Grade 2          Grade 3          Grade 4
Measure                        M      SD        M      SD        M      SD
Productivity
  Total Words                  27.77  12.00     53.97  22.11     73.46  29.55
  Total T-units                3.73   1.22      6.62   2.96      8.84   4.20
  Number of Different Words    20.85  7.59      37.56  12.71     47.85  16.92
Grammatical complexity
The influence of cultural-linguistic factors on writing performance, such as ethnicity and dialect,
is important to consider. First, one must acknowledge that ethnicity and dialect are not equivalent, and
each factor warrants separate consideration. Further, in the present study, no measure of dialect was
administered. No effect of ethnicity was detected in the present sample. However, regardless of
reported ethnicity, it is possible that dialectal influences may exist for individual children, and could
affect the outcome measures of grammatical accuracy. The dialect shifting-reading achievement
hypothesis suggests that students who successfully shift from dialectal forms reflective of non-
mainstream English dialects (e.g., AAE) to Standard American English (SAE) forms in different literacy
tasks (including writing) demonstrate better reading outcomes than students who do not make the shift
as adequately (Craig et al., 2009). Investigations employing larger samples of participants with
ethnically diverse backgrounds, and incorporating distinct a priori measures of dialect (e.g., dialect
density measures), may have better chances of detecting possible differences. If differences are indeed
detected in this manner, investigators can recode the SALT files to capture features of a specific dialect
that has been observed in the sample (e.g., AAE). It would be worthwhile to compare dialectal
speakers' results for written grammatical and lexical microstructure variables, as well as text structure,
to capture the influence that dialectal differences may exert on dependent writing
measures for both microstructure and macrostructure (Terry, 2006; Thompson, Craig, & Washington,
2004).
Limitations and Future Research
A variety of potential limitations to the present investigation have been presented throughout
this discussion. However, some additional considerations for future research are necessary to note.
One consideration is the method used to elicit writing samples in the current study. A single
elicitation technique was employed (i.e., response to a writing prompt). It is important to consider that
grade and genre effects may vary as a result of differences in prompting procedures and targeted
genre structure (Scott, 1994). Moreover, time constraints may limit productivity, and possibly any
variables strongly correlated with productivity. The timeframe allowed for students to produce their
writing samples was consistent with regular classroom practices guided by statewide assessments.
Whether dissimilar results would be obtained with writing samples produced via different sampling
techniques is unknown. More work is needed in this area to compare the value of various elicitation
techniques to capture the possible relations between elicitation method and writing outcomes.
Furthermore, the degree of the relations among elements of microstructure and macrostructure may be
shaped by the actual genre structure produced, especially considering the results of the post hoc genre
identification task. Thus, investigators planning future studies in this area may elect to first establish the
reliability of selected prompts to elicit the intended genre, and to plan in advance a post hoc analysis to
verify that reliability within their sample.
Sensitivity of dependent measures to detect grade and genre differences is another crucial
element of this investigation to note in light of the findings. For example, potential floor and ceiling
effects were indicated for individual grades on a few dependent measures. As discussed at the
beginning of the results section, several variables, for second grade in particular, exhibited skewness
and/or kurtosis reflective of floor and ceiling effects. This may have limited those measures' sensitivity
to detect differences between grade levels and negatively impacted the power of the parametric
analyses utilized. However, MANOVA is reportedly robust against mild to moderate violations of normality
(Field, 2005). Furthermore, these data points were considered to be accurate depictions of the range
of performance and true reflections of the variability inherent in young children’s performance on writing
measures. Larger samples may help allay this concern.
Regarding the sensitivity of the macrostructure measure, the finding that it was unidimensional
(according to the EFA results) would seem to contradict some authors’ recommendations against the
use of holistic score ratings of writing performance to inform instruction and monitor growth (Nelson &
Van Meter, 2004). However, as noted
in the present investigation, a “holistic” rating scale for macrostructure was a useful method to compare
a particular student’s or grade level’s performance in comparison to peers or comparison groups. In
contrast, EFA results indicated that the microstructure measure consisted of four distinct factors. As
such, microstructure, in contrast to macrostructure, would be best examined with an analytic scoring
method, utilizing more than one factor or score. Either way, the purpose for the writing assessment, as
well as the reliability of a particular scale to fulfill that purpose, should be the focus at the outset. In
some states, including Florida, the statewide progress monitoring tool for writing in the elementary
grades (i.e., Writes Upon Request) is administered multiple times per school year and yields only a
holistic score. The raters consider four factors in their ratings of student text: focus, organization,
support, and conventions (FLDOE, 2009). Educators are first cautioned against using this single score
as the sole determinant of a student's writing proficiency, and encouraged to interpret this score in light
of the student's performance in other writing tasks and contexts (FLDOE, 2009).
Additional considerations for research design are warranted. The data examined in this
investigation represented the pre-test writing performance of students from three grade levels within
one school, who also were recruited to participate in a spelling intervention study. Even though the
design of the school sampled is intended to be representative of statewide student demographics,
analyses based on samples of convenience still may not be wholly representative of the general
population. Caution is therefore called for when applying these interpretations to samples other than
that included in this study. However, this investigation has demonstrated the utility of these measures
for detecting grade and genre differences, and the measures can be utilized when developing local norms.
There are two notable concerns about the use of a covariate in the main analyses. Grade level
differences on the GRADE Comprehension Composite scores indicated the need to include the scores
as a covariate in the main analyses. However, even though there was a significant group difference
between third and fourth grades on this measure, the magnitude of this difference, as indicated by the
effect size, was small (.12). The clinical significance of this small difference is unclear, especially
considering that the mean scores for the fourth grade group remained well within the average range.
Furthermore, as some authors warn against employing ANCOVA with nonrandom groups, researchers
should consider possible alternatives to address the limitations involved and improve their designs
(Miller, 2001).
Data derived from educational research often have a nested structure (e.g., students are nested
in classes, classes are nested within schools, and schools are nested within school districts). With
consideration of the potential impact of quality of writing instruction on writing outcomes (Mehta,
Foorman, Branum-Martin, & Taylor, 2005; Moats et al., 2006), future researchers are encouraged to
utilize ANOVA designs with nested factors to detect within-class effects. The current design did not
allow for this level of analysis. An increased sample size, with a larger number of classrooms examined
at each grade level, and additional schools, would support nested designs to look at class/teacher
effects more specifically.
Results of the present investigation extend findings from previous studies, and add to the
existing literature regarding development of and relations among written microstructure and
macrostructure features within and across grade levels and genre types. The dependent measures
utilized in the present study have been suggested as not only useful tools for differentiating groups of
students on some factor of achievement (e.g., reading level, learning disability, language impairment;
see Nelson & Van Meter, 2007; Puranik et al., 2007), but also as useful progress monitoring tools for all
students. To date, there have been few longitudinal studies designed to examine the utility of these
measures to monitor student progress across multiple genres and grade levels.
Educational Implications
Educators are encouraged to consider the lack of differences between grade levels for some of
the dependent measures in light of established grade level expectations that are reflected in state
standards for writing. For example, Florida standards (FLDOE, 2009) require second graders to write in
a variety of informative and expository forms, and produce narratives based on real or imagined events
(including a main idea, characters, sequence of events, and descriptive details). Beginning in third
grade, students are expected to write short persuasive texts, write in additional varieties of informative
and expository forms, and produce more complex narratives (including additional items over and above
those listed for second grade: setting, plot, sensory details, and logical sequence of events).
Additionally, third graders are expected to produce a minimal expository structure containing at least
three paragraphs, and including a topic sentence, supporting details, and relevant information. By
fourth grade, the expectations are higher for persuasive (including use of persuasive techniques,
supporting arguments and detailed evidence), informative and expository (essays including
introductory, body, and concluding paragraphs), and narrative writing (all of the previously mentioned
components plus a context to enable the reader to imagine the world of the event or experience).
Based on review of the present data, it is clear that not all of the writing samples collected reflected
mastery of the previous grade level’s standards for writing. This raises the question: if the established
grade level expectations are considered reasonable, how well are current instructional practices
designed to support student achievement of these standards? Future research needs to determine the
extent to which writing instruction, assessment, and progress monitoring adhere to grade level
standards for writing performance. In the meantime, state writing standards that are being developed or
revised need to be research-based, and educators, researchers, and policy-makers need to work
collaboratively to design instruction that is reflective of research-based standards for writing.
In general, the importance of timely detection of students at risk for writing problems, and the
provision of early intervention for identified problems, has garnered increased attention in the educational
system in recent years (Singer & Bashir, 2004). Federal and state educational policy reflects this
movement to improve the writing proficiency of all students (Troia, 2009). However, writing research is
an extremely complex undertaking, and as such, requires careful planning and attention to
methodological implications and limitations of previous research. Much remains to be learned regarding
the development of linguistic features in children’s writing, the effects of multiple factors at the child,
family, classroom, and school levels, and development of reliable and valid writing assessment and
progress monitoring tools. Once writing problems are detected, and instruction or intervention is
planned and provided, reliable progress monitoring tools are necessary to document the student’s
response to the interventions implemented. When considering development of appropriate progress
monitoring tools for writing, one should consider that some measures are more sensitive for capturing
developmental progression within and between grade levels than others.
Conclusion
This study examined multiple dimensions of written language produced by children in grades 2,
3, and 4 in narrative and expository writing samples. The samples were analyzed for developmental
progression of linguistic elements of microstructure and macrostructure represented by the five factors
of productivity, grammatical complexity, grammatical accuracy, lexical diversity, and macrostructure.
Results of this study suggest that variables of written microstructure and macrostructure were sensitive
to grade-level and genre differences, that productivity and macrostructure were related in both genres
for all three grade levels, and that one cannot assume the older students will outperform younger
students on all measures. This latter finding was thought to be due to a trade-off between linguistic and
cognitive demands. Consequently, future research needs to establish these trade-off trends in larger
samples and examine the effects of different academic contexts (e.g., variable elicitation techniques,
discourse structures, content specific assignments) on this phenomenon.
If writing is truly an essential component of literacy (broadly defined), and therefore plays a
substantial role in the national literacy crisis, then “poor facility in expressing thoughts through written
language” may persist as the “most prevalent disability of communication skills” (Lerner, 1976, p. 266).
Given the importance of writing, researchers and practitioners have a responsibility to
persevere and continue meeting the demands of this challenge directly, through continual exploration of
various aspects of writing development and writing proficiency, with the ultimate goal to improve the
writing outcomes of all students.
APPENDIX A
CONSENT FORM & IRB APPROVAL
Office of the Vice President For Research
Human Subjects Committee
Tallahassee, Florida 32306-2742
(850) 644-8673 · FAX (850) 644-4392

APPROVAL MEMORANDUM

Date: 9/18/2008
To: Kenn Apel [[email protected]]
Address: 1200
Dept.: COMMUNICATION DISORDERS
From: Thomas L. Jacobson, Chair
Re: Use of Human Subjects in Research
The Effect of a Multiple-Linguistic Factor Spelling Approach on Spelling, Reading, and Writing Abilities

The application that you submitted to this office in regard to the use of human subjects in the research proposal referenced above has been reviewed by the Human Subjects Committee at its meeting on 09/10/2008. Your project was approved by the Committee.

The Human Subjects Committee has not evaluated your proposal for scientific merit, except to weigh the risk to the human participants and the aspects of the proposal related to potential risk and benefit. This approval does not replace any departmental or other approvals, which may be required.

If you submitted a proposed consent form with your application, the approved stamped consent form is attached to this approval notice. Only the stamped version of the consent form may be used in recruiting research subjects.

If the project has not been completed by 9/9/2009 you must request a renewal of approval for continuation of the project. As a courtesy, a renewal notice will be sent to you prior to your expiration date; however, it is your responsibility as the Principal Investigator to timely request renewal of your approval from the Committee.

You are advised that any change in protocol for this project must be reviewed and approved by the Committee prior to implementation of the proposed change in the protocol. A protocol change/amendment form is required to be submitted for approval by the Committee. In addition, federal regulations require that the Principal Investigator promptly report, in writing, any unanticipated problems or adverse events involving risks to research subjects or others.

By copy of this memorandum, the Chair of your department and/or your major professor is reminded that he/she is responsible for being informed concerning research projects involving human subjects in the department, and should review protocols as often as needed to insure that the project is being conducted in compliance with our institution and with DHHS regulations.

This institution has an Assurance on file with the Office for Human Research Protection. The Assurance Number is IRB00000446.

Cc: Juliann Woods, Chair [[email protected]]
HSC No. 2008.1364
APPENDIX B
WRITING INSTRUCTIONS AND PROMPTS
“Today you are going to do two pieces of writing on topics I give you. I want you to do everything you
know how to do as a writer to complete this assignment. You may use any strategies you know that
help you. Let me read the first prompt to you. (Read prompt aloud.) When you are finished, you may
re-read your paper and make any changes you want.”
Narrative: "Tell me about a time that someone surprised you and what happened."
Expository: "Pretend you are a super hero and you are being interviewed on the news. Tell
everyone what special powers you would have. Also, explain what you would do with them to
help the world."
APPENDIX C
SALT PROTOCOL FOR MICROSTRUCTURE VARIABLES
Entering utterances (sentences from a writing sample) as T-units in SALT
• A T-unit is one main clause plus any subordinate (dependent) clause or nonclausal structure (such as a prepositional or verbal phrase) that is embedded in the main clause.
• A T-unit is an independent clause (a subject and a predicate) along with any phrases or clauses embedded in it.
• All coordinated clauses are separated out into T-units, unless they contain a co-referential subject deletion in the second clause.

Examples:
1. If people live in the city they don’t have to drive. (1 T-unit)
Enter in SALT as one line: If people live in the city they don’t have to drive.
2. There are people that live in the city and people that live in the country. (2 T-units)
Enter in SALT as two lines:
There are people that live in the city.
and people that live in the country.

Examples of main clauses with embedded clauses:
3. Reading books is my favorite thing to do. (1 T-unit)
Enter in SALT as one line: Reading books is my favorite thing to do.
4. I like to read books. (1 T-unit)
Enter in SALT as one line: I like to read books.
5. The book, which I forgot to bring, was my favorite. (1 T-unit)
Enter in SALT as one line: The book, which I forgot to bring, was my favorite.

Example of a main clause with a phrase or clause subordinated to it:
6. She thanked me when I gave her the book. (1 T-unit)
Enter in SALT as one line: She thanked me when I gave her the book.
Example of 2 utterances that convey the same information but in different numbers of T-units:
7. I forgot the book but I can bring it tomorrow and I will give it back so I hope that is okay. (4 T-units)
Enter in SALT as four lines:
I forgot the book.
but I can bring it tomorrow.
and I will give it back.
so I hope that is okay.
8. I forgot the book that I need to give back, although I am coming tomorrow and can bring it then. (1 T-unit)
Enter in SALT as one line/utterance: I forgot the book that I need to give back, although I am coming tomorrow and can bring it then.

Example of a relative clause:
9. It was the boy who probably did it. (1 T-unit)
Enter in SALT as one line/utterance: It was the boy who probably did it.

Example of an expanded noun phrase:
10. The large, green hairy monster ate the food. (1 T-unit)
Enter in SALT as one line/utterance: The large, green hairy monster ate the food.

Example of a nonfinite clause:
11. Keeping the room clean was her responsibility. (1 T-unit)
Enter in SALT as one line/utterance: Keeping the room clean was her responsibility.

Example of adverbial fronting:
12. There sat the big king elephant. (1 T-unit)
Enter in SALT as one line/utterance: There sat the big king elephant.

Other examples:
She hid and forgot about it. (1 T-unit) - compound verb phrase
After she hid it, she forgot where. (1 T-unit) - subordinated clause
She decided to look for it. (1 T-unit) - verb as secondary verb, or nonfinite verb as an infinitive
Looking for it proved difficult. (1 T-unit) - verb used as a noun phrase: gerund
Art is fun because we paint but when we come back to our classroom we do our work. (2 T-units)
Enter in SALT as two lines:
Art is fun because we paint.
but when we come back to our classroom we do our work.
I am 8 years old and I am in the third grade. (2 T-units)
Enter in SALT as two lines:
I am 8 years old.
and I am in the third grade.
The boy who is my friend started working after I was done. (1 T-unit)
Enter in SALT as one line: The boy who is my friend started working after I was done.
He saw her and then he decided to turn around. (2 T-units)
Enter in SALT as two lines:
He saw her.
and then he decided to turn around.
She putted the dog’s collar on but he shook it off. (2 T-units)
Enter in SALT as two lines:
She putted the dog’s collar on.
but he shook it off.
Notes:
• Young children tend to string together independent clauses with the coordinating conjunctions and, but, or, so. Use of these conjunctions often signifies a new T-unit.
• Each independent clause in a run-on sentence is counted as a separate T-unit.

SALT RULES:
• Must have a period at the end of each T-unit.
• Always save the file you have segmented into T-units as a new file, different from the “original” SALT file, with the word “Tunit” in the file name; save it in the folder with your name.
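As an illustration only (this is not part of SALT or of the original coding protocol), the coordination rule above can be approximated in a few lines of Python: start a new T-unit at and/but/or/so only when an overt subject follows, so that co-referential subject deletion keeps a compound verb phrase inside one T-unit. The subject-pronoun list is a simplifying assumption; real coding requires human judgment.

```python
# Illustrative sketch only (not part of SALT or of the coding protocol): a
# crude approximation of the coordination rule. A new T-unit starts at
# and/but/or/so only when an overt subject follows; co-referential subject
# deletion ("She hid and forgot about it") stays inside one T-unit.
COORDINATORS = {"and", "but", "or", "so"}
# Simplifying assumption: subjects are recognized only as these pronouns.
SUBJECT_PRONOUNS = {"i", "you", "he", "she", "it", "we", "they", "there"}

def segment_t_units(sentence: str) -> list[str]:
    words = sentence.rstrip(".").split()
    units, current = [], []
    for i, word in enumerate(words):
        # Split when a coordinator is immediately followed by an overt subject.
        if (current
                and word.lower() in COORDINATORS
                and i + 1 < len(words)
                and words[i + 1].lower() in SUBJECT_PRONOUNS):
            units.append(" ".join(current) + ".")
            current = [word]
        else:
            current.append(word)
    if current:
        units.append(" ".join(current) + ".")
    return units

print(segment_t_units("I forgot the book but I can bring it tomorrow "
                      "and I will give it back so I hope that is okay."))
```

The heuristic misses splits where an adverb intervenes between the conjunction and the subject (as in "and then he decided to turn around"), which is one reason the protocol relies on trained human coders rather than automation.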
Coding for Clauses in SALT

Focus A: Coding for number of clauses per T-unit (clause density)

Definition of Clause = a group of related words with a subject and a verb (independent clauses can stand alone; dependent clauses cannot).

1. Using your checklist of participant files to code, open the file under your “Tunits” folder on the projects drive.
2. Enter the following codes at the top of every transcript:
+[1CL]: 1 clause
+[2CL]: 2 clauses
+[3CL]: 3 clauses
+[4CL]: 4 clauses
+[MCL]: > 4 clauses (multiple)
(Tip: you may want to copy and paste these codes from the first transcript to the others.)
3. Count the number of clauses per T-unit and indicate at the end of the line (before the period) how many clauses you counted.
4. After you’ve entered the clause code for each T-unit in the transcript, please save the file under your “Clauses” folder on the projects drive.

Focus B: Counting the number of clauses per sentence (for variables related to sentence complexity)

1. Using your checklist of participant files to code, open the file under your “Clauses” folder on the projects drive.
2. Count the number of clauses per sentence, and indicate at the end of the sentence (before the last period) how many clauses you counted. Remove the clause counts that were previously entered at the T-unit level. Your final transcript document should show only the clause count per sentence.

Note:
• There can be more than 1 T-unit in a sentence.
• A sentence is generally based on the child’s punctuation in the original sample. (Sometimes kids forget to put a period or some other punctuation at the end of a sentence. However, if they begin the next sentence with a capital letter, it is treated in SALT as 2 sentences despite lacking punctuation in the original paper.)

3. After you’ve entered the clause code for each sentence, please save the file under your “Sentence clauses” folder on the projects drive.
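Once clause codes are in place, the clause-density variable is simply the mean of the coded counts. A minimal sketch in Python (the example transcript and its codes are invented; treating [MCL] as 5 clauses is an assumption made purely for this illustration):

```python
import re

# Illustrative only: compute mean clauses per T-unit from the [nCL] codes
# described above. The transcript lines are invented; mapping [MCL] to 5
# clauses is an assumption for this sketch, since [MCL] means "> 4".
CODE_VALUES = {"1CL": 1, "2CL": 2, "3CL": 3, "4CL": 4, "MCL": 5}

def clause_density(coded_t_units: list[str]) -> float:
    """Mean number of clauses per coded T-unit."""
    counts = []
    for line in coded_t_units:
        match = re.search(r"\[(1CL|2CL|3CL|4CL|MCL)\]", line)
        if match:
            counts.append(CODE_VALUES[match.group(1)])
    return sum(counts) / len(counts)

transcript = [
    "If people live in the city they don't have to drive [2CL].",
    "There are people that live in the city [2CL].",
    "and people that live in the country [1CL].",
]
print(round(clause_density(transcript), 2))  # 1.67
```

Uncoded lines are simply skipped, so stray header lines in a transcript would not distort the mean.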
Coding for Grammatical Errors (GE)

Add the following code to the list of codes at the top of every transcript/SALT file:
+[GE]: Grammatical Error

Grammatical errors = verb or pronoun tense/agreement/case errors; omitted or incorrect inflections; omission or substitution of grammatical elements; violations of word order.

Examples:
• The city have a lot of people. (problem = subject/verb agreement)
How it should be coded in SALT: The city have[GE] a lot of people.
• People living in the country are also close to their jobs, which are usually farming. (problem = unclear referent)
How it should be coded in SALT: People living in the country are also close to their jobs, which[GE] are usually farming.
• There are a lot of farmers in suburbs. (problem = missing article before “suburbs”)
How it should be coded in SALT: There are a lot of farmers in[GE] suburbs.
• There is a lot of schools. (problem = subject/verb agreement)
How it should be coded in SALT: There is[GE] a lot of schools.
Coding for Simple/Complex and Correct/Incorrect Sentences

Add the following codes to the list of codes at the top of every transcript/SALT file:
+[SC]: Simple Correct
+[CC]: Complex Correct
+[SI]: Simple Incorrect
+[CI]: Complex Incorrect

• Correct Sentence = no grammatical errors (no GE codes)
• Incorrect Sentence = has one or more grammatical errors (1 or more GE codes)
• Simple Sentence = a sentence with only one main clause.
Examples of Simple Correct Sentences [SC]:
People live in different places [1CL][SC].
Farmers raise cows, pigs, and chickens [1CL][SC].

Examples of Simple Incorrect Sentences [SI]:
There is[GE] a lot of schools, offices, and factories in the cities [1CL][SI].
There are a lot of farmers in[GE] suburbs [1CL][SI].

• Complex Sentence = a sentence with either:
  • one main clause and one or more subordinate/embedded clauses, or
  • two main clauses, or
  • one main clause and a verb phrase joined by a coordinating conjunction (a clause must have a verb).

*Recall that a clause is a group of related words containing a subject with a verb.

Examples of Complex Correct Sentences [CC]:
The suburbs are more crowded than the country but less crowded than the city [2CL][CC].
If people don’t want to drive a long way to their jobs, they live in the city [2CL][CC].

Examples of Complex Incorrect Sentences [CI]:
The farmers grows[GE] crops and give them to their animals [2CL][CI].
If people don’t want to drive a long way to their jobs, they lives[GE] in the city [2CL][CI].
SAVE YOUR CODING FILES WITHIN THE “SENTENCE TYPE” FOLDER
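Given a sentence's clause count and its number of [GE] codes, the four sentence-type codes follow mechanically. A small sketch (illustrative only; it uses a total clause count greater than one as a proxy for the complex-sentence definition, since an embedded clause, a second main clause, or a coordinated verb phrase each add a clause):

```python
# Illustrative only: derive the protocol's sentence-type code from a
# sentence's clause count and its number of [GE] codes. A clause count
# above 1 is used as a proxy for "complex".
def sentence_type(clause_count: int, grammatical_errors: int) -> str:
    complexity = "C" if clause_count > 1 else "S"
    correctness = "C" if grammatical_errors == 0 else "I"
    return complexity + correctness

# "Farmers raise cows, pigs, and chickens": 1 clause, 0 errors
print(sentence_type(1, 0))  # SC
# "The farmers grows crops and give them to their animals": 2 clauses, 1 error
print(sentence_type(2, 1))  # CI
```

Because the two dimensions are independent, the four codes partition every sentence exactly once, which is what makes the counts usable as dependent measures.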
APPENDIX D
PROTOCOL FOR MACROSTRUCTURE VARIABLES
REFERENCES

Andrade, H. L., Wang, X., Du, Y., & Akawi, R. (2009). Rubric-referenced self-assessment and self-efficacy for writing. Journal of Educational Research, 102(4), 287-301.

Ball, A. (2006). Teaching writing in culturally diverse classrooms. In C. A. MacArthur, S. Graham, & J. Fitzgerald (Eds.), Handbook of writing research (pp. 293-310). New York: Guilford Press.

Barenbaum, E., Newcomer, P., & Nodine, B. (1987). Children's ability to write stories as a function of variation in task, age, and developmental level. Learning Disability Quarterly, 10(3), 175-188.

Berman, R. A., & Verhoeven, L. (2002). Cross-linguistic perspectives on the development of text-production abilities: Speech and writing. Written Language and Literacy, 5(1), 1-43.

Biancarosa, G., & Snow, C. E. (2006). Reading next: A vision for action and research in middle and high school literacy: A report to Carnegie Corporation of New York (2nd ed.). Washington, DC: Alliance for Excellent Education.

Blair, T. K., & Crump, W. D. (1984). Effects of discourse mode on the syntactic complexity of learning disabled students' written expression. Learning Disability Quarterly, 7(1), 19-29.

Borman, G. D., Dowling, N. M., & Schneck, C. (2008). A multisite cluster randomized field trial of Open Court Reading. Educational Evaluation and Policy Analysis, 30(4), 389-407.

Cox, B. E., Shanahan, T., & Sulzby, E. (1990). Good and poor elementary readers' use of cohesion in writing. Reading Research Quarterly, 25(1), 47-65.

Craig, H. K., Zhang, L., Hensel, S. L., & Quinn, E. J. (2009). African American English-speaking students: An examination of the relationship between dialect shifting and reading outcomes. Journal of Speech, Language, and Hearing Research, 52, 839-855.

Crawford, L., Helwig, R., & Tindal, G. (2004). Writing performance assessments: How important is extended time? Journal of Learning Disabilities, 37(2), 132-142.

Davis, N., & Compton, D. L. (2008, March). Falling through the cracks: Children who are exceptions to the RtI identification process. Perspectives on Language Learning and Education, 15, 41-45.

Dockrell, J. E., Lindsay, G., Connelly, V., & Mackie, C. (2007). Constraints in the production of written text in children with specific language impairments. Exceptional Children, 73(2), 147-164.

Donovan, C. A., & Smolkin, L. B. (2006). Children's understanding of genre and writing development. In C. A. MacArthur, S. Graham, & J. Fitzgerald (Eds.), Handbook of writing research (pp. 131-143). New York: Guilford Press.

Dunn, L. M., & Dunn, D. M. (2007). Peabody Picture Vocabulary Test (4th ed.). Minneapolis, MN: NCS Pearson Assessments.

Englert, C. S., Raphael, T. E., Anderson, L. M., Gregg, S. L., & Anthony, H. M. (1989). Exposition: Reading, writing, and the metacognitive knowledge of learning disabled students. Learning Disabilities Research, 5(1), 5-24.
Englert, C. S., & Thomas, C. C. (1987). Sensitivity to text structure in reading and writing: A comparison between learning disabled and non-learning disabled students. Learning Disability Quarterly, 10(2), 93-105.

Fey, M., Catts, H., Proctor-Williams, K., Tomblin, B., & Zhang, X. (2004). Oral and written story composition skills of children with language impairment. Journal of Speech, Language, and Hearing Research, 47, 1301-1318.

Field, A. (2005). Discovering statistics using SPSS (2nd ed.). London: Sage Publications.

Fitzgerald, J., & Shanahan, T. (2000). Reading and writing relations and their development. Educational Psychologist, 35(1), 39-50.

Florida Department of Education (FLDOE). (2009). Florida Writes. Retrieved November 1, 2008, from http://www.fldoe.org/asp.

Glenn, C. G., & Stein, N. L. (1980). Syntactic structures and real world themes in stories generated by children (Technical report). Urbana, IL: University of Illinois, Center for the Study of Reading.

Graham, S., & Harris, K. R. (2003). Students with learning disabilities and the process of writing: A meta-analysis of SRSD studies. In H. L. Swanson, K. Harris, & S. Graham (Eds.), Handbook of learning disabilities (pp. 323-344). New York: Guilford.

Graham, S., & Perin, D. (2007). Writing next: Effective strategies to improve writing of adolescents in middle and high schools – A report to the Carnegie Corporation of New York. Washington, DC: Alliance for Excellent Education.

Houck, C., & Billingsley, B. (1989). Written expression of students with and without learning disabilities: Differences across the grades. Journal of Learning Disabilities, 22, 561-565.

Hunt, K. W. (1966). Recent measures in syntactic development. Elementary English, 43, 732-739.

Jenkins, J. R., Johnson, E., & Hileman, J. (2004). When is reading also writing: Sources of individual differences on the new reading performance assessments. Scientific Studies of Reading, 8, 125-151.

Justice, L. (2004). The connection between oral narrative and reading problems: What's the story? TEMPO Weekly Reader, 7.2, 2-9.

Kazdin, A. E. (2003). Research design in clinical psychology (4th ed.). Boston, MA: Allyn and Bacon.

Kennedy, P. (1993). Preparing for the twenty-first century. New York: Random House.

Kidder, C. L. (1974). Using the computer to measure syntactic density and vocabulary density in the writing of elementary school children. Unpublished doctoral dissertation, Pennsylvania State University.
Laughton, J., & Morris, N. (1989). Story grammar knowledge of learning disabled students. Learning Disabilities Research, 4, 87-95.

Leedy, P. D., & Ormrod, J. E. (2005). Practical research: Planning and design. Upper Saddle River, NJ: Pearson Education, Inc.

Lerner, J. W. (1976). Children with learning disabilities: Theories, diagnosis, teaching strategies. Boston: Houghton Mifflin.

Li, Y. (2000). Linguistic characteristics of ESL writing in task-based e-mail activities. System, 28(2), 229-245.

MacArthur, C. A., Graham, S., & Fitzgerald, J. (2006). Handbook of writing research. New York: Guilford Press.

MacWhinney, B. (1995). The CHILDES project. Hillsdale, NJ: Erlbaum.

Mackie, C., & Dockrell, J. E. (2004). The nature of written language deficits in children with SLI. Journal of Speech, Language, and Hearing Research, 47, 1469-1483.

Mehta, P. D., Foorman, B. R., Branum-Martin, L., & Taylor, W. P. (2005). Literacy as a unidimensional multilevel construct: Validation, sources of influence, and implications in a longitudinal study in grades 1 to 4. Scientific Studies of Reading, 9(2), 85-116.

Miller, G. A., & Chapman, J. P. (2001). Misunderstanding analysis of covariance. Journal of Abnormal Psychology, 110(1), 40-48.

Miller, J., & Chapman, R. (2005). Systematic Analysis of Language Transcripts (v. 8). Madison, WI: Waisman Center, University of Wisconsin-Madison.

Moats, L., Foorman, B., & Taylor, P. (2006). How quality of writing instruction impacts high-risk fourth graders' writing. Reading and Writing, 19, 363-391.

Montague, M., Maddux, C., & Dereshiwsky, M. (1990). Story grammar and comprehension and production of narrative prose by students with learning disabilities. Journal of Learning Disabilities, 23, 190-196.

Moore-Brown, B. J., Montgomery, J. K., Bielinski, J., & Shubin, J. (2005). Responsiveness to intervention: Teaching before testing helps avoid labeling. Topics in Language Disorders, 25(2), 148-167.

Morris, N., & Crump, W. (1982). Syntactic and vocabulary development in the written language of learning disabled and non-disabled students at four age levels. Learning Disability Quarterly, 5, 163-172.

National Center for Education Statistics (NCES). (2003). The nation's report card: Writing 2002. Washington, DC: U.S. Department of Education.

Nelson, N. W., & Van Meter, A. M. (2002). Assessing curriculum-based reading and writing samples. Topics in Language Disorders, 22(2), 35-59.
Nelson, N. W., & Van Meter, A. M. (2007). Measuring written language ability in narrative samples. Reading and Writing Quarterly, 23(3), 287-309.

Nelson, N. W., Bahr, C. M., & Van Meter, A. M. (2004). The writing lab approach to language instruction and intervention. Baltimore, MD: Paul H. Brookes.

Newcomer, P. L., Barenbaum, E. M., & Nodine, B. F. (1988). Comparison of the story production of LD, normal-achieving, and low-achieving children under two modes of production. Learning Disability Quarterly, 11(2), 82-96.

Newcomer, P. L., & Curtis, D. (1984). Diagnostic Achievement Battery. Austin, TX: Pro-Ed.

Nippold, M., Ward-Lonergan, J., & Fanning, J. (2005). Persuasive writing in children, adolescents, and adults: A study of syntactic, semantic, and pragmatic development. Language, Speech, and Hearing Services in Schools, 36, 125-138.

No Child Left Behind Act (NCLB) of 2001, P.L. 107-110 [20 U.S.C. 7801].

Nodine, B., Barenbaum, E., & Newcomer, P. (1985). Story composition by learning disabled, reading disabled, and normal children. Learning Disability Quarterly, 8, 167-179.

Pajares, F., Britner, S., & Valiante, G. (2000). Relations between achievement goals and self-beliefs of middle school students in writing and science. Contemporary Educational Psychology, 25, 406-422.

Persky, H. R., Daane, M. C., & Jin, Y. (2003). The nation's report card: Writing 2002 (NCES 2003-529). U.S. Department of Education, Institute of Education Sciences, National Center for Education Statistics. Washington, DC: Government Printing Office.

Peterson, S. (2006). Influence of gender on writing development. In C. A. MacArthur, S. Graham, & J. Fitzgerald (Eds.), Handbook of writing research (pp. 311-323). New York: Guilford Press.

Puranik, C. (2006). Expository writing skills in elementary school children from third through sixth grades and contributions of short-term and working memory. Unpublished doctoral dissertation, University of Florida.

Puranik, C., Lombardino, L. J., & Altmann, L. J. (2007). Writing through retellings: An exploratory study of language-impaired and dyslexic populations. Reading & Writing, 20, 251-272.

Puranik, C., Lombardino, L. J., & Altmann, L. J. (2008). Assessing the microstructure of written language using a retelling paradigm. American Journal of Speech-Language Pathology, 17, 107-120.

Purcell-Gates, V. (2000). Family literacy. In M. L. Kamil, P. B. Mosenthal, P. D. Pearson, & R. Barr (Eds.), Handbook of reading research, volume II (pp. 853-870). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.

Sanacore, J., & Palumbo, A. (2009). Understanding the fourth grade slump: Our point of view. The Educational Forum, 73, 67-74.

Sanders, T. J. M., & Schilperoord, J. (2006). Text structure as a window on the cognition of writing: How text analysis provides insights in writing products and writing processes. In C. A. MacArthur, S. Graham, & J. Fitzgerald (Eds.), Handbook of writing research (pp. 386-402). New York: Guilford Press.
Scott, C. M. (1994). A discourse continuum for school-age students: Impact of modality and genre. In G. Wallach & K. Butler (Eds.), Language learning disabilities in school-age children and adolescents (pp. 219-252). New York: Merrill.

Scott, C. M. (2009). Language-based assessment of written expression. In G. A. Troia (Ed.), Instruction and assessment for struggling writers: Evidence-based practices (pp. 358-385). New York: Guilford Press.

Scott, C., & Stokes, S. (1995). Measures of syntax in school-age children and adolescents. Language, Speech, and Hearing Services in Schools, 26, 309-317.

Scott, C., & Windsor, J. (2000). General language performance measures in spoken and written narrative and expository discourse of school-age children with language learning disabilities. Journal of Speech, Language, and Hearing Research, 43, 324-339.

Shanahan, T. (2006). Relations among oral language, reading, and writing development. In C. A. MacArthur, S. Graham, & J. Fitzgerald (Eds.), Handbook of writing research (pp. 171-186). New York: Guilford Press.

Singer, B. D. (2007). Assessment of reading comprehension and written expression in adolescents and adults. In A. G. Kamhi, J. J. Masterson, & K. Apel (Eds.), Clinical decision making in developmental language disorders (pp. 77-98).

Singer, B. D., & Bashir, A. S. (2002). EmPOWER: A strategy for teaching expository writing. Boston, MA: Innovative Learning Partners, LLC.

Singer, B. D., & Bashir, A. S. (2004). Developmental variations in writing composition skills. In C. A. Stone, E. R. Silliman, B. J. Ehren, & K. Apel (Eds.), Handbook of language and literacy development and disorders (pp. 559-582). New York: Guilford Press.

Smith, M. C. (2000). What will be the demands of literacy in the workplace in the next millennium? Reading Research Quarterly, 35(3), 378-383.

SPSS Inc. (2009). Predictive Analytics Software for Windows, version 17.0. Chicago, IL: Author.

Stevens, J. (1997). Applied multivariate statistics for the social sciences (3rd ed.). Mahwah, NJ: Lawrence Erlbaum Associates.

Terry, N. P. (2006). Relations between dialect variation, grammar, and early spelling skills. Reading and Writing, 19, 907-931.

Thompson, C. A., Craig, H. K., & Washington, J. A. (2004). Variable production of African American English across oracy and literacy contexts. Language, Speech, and Hearing Services in Schools, 35, 269-282.

Troia, G. A. (2009). Instruction and assessment for struggling writers: Evidence-based practices. New York: Guilford Press.

U.S. Department of Education. (2008). Reading First. Retrieved November 1, 2008, from http://www.ed.gov/programs/readingfirst/index.html.

Warner, R. M. (2008). Applied statistics: From bivariate through multivariate techniques. Los Angeles: Sage Publications.

Wasik, B. H., & Hendrikson, J. S. (2004). Family literacy practices. In C. A. Stone, E. R. Silliman, B. J. Ehren, & K. Apel (Eds.), Handbook of language and literacy development and disorders (pp. 154-174). New York: Guilford Press.

Williams, K. T. (2001). GRADE. Circle Pines, MN: AGS Publishing, Inc.

Zevenbergen, A. A., Whitehurst, G. J., & Zevenbergen, J. A. (2003). Effects of a shared-reading intervention on the inclusion of evaluative devices in narratives of children from low-income families. Applied Developmental Psychology, 24, 1-15.
BIOGRAPHICAL SKETCH
Shannon Hall-Mills is a Doctoral Candidate in the School of Communication Science and Disorders at
Florida State University. Shannon was born and raised in Volusia County, Florida. She received her
Bachelor’s and Master’s degrees in Communication Disorders from Florida State University in 1999 and
2001. She obtained the Certificate of Clinical Competence in Speech-Language Pathology (CCC-SLP)
from the American Speech-Language-Hearing Association. She maintains licensure to practice
Speech-Language Pathology with the Florida Department of Health, and certification from the Florida
Department of Education to work with students (K-12) who are speech/language impaired. Before
entering the doctoral program, Shannon worked with students in grades PK-5 as a school-based
speech-language pathologist and language diagnostician. While working on her doctorate, Shannon
was sponsored through a language and literacy leadership doctoral assistantship and graduate
scholarships from the Kappa Kappa Gamma Foundation, including a scholarship sponsored through
the Gates Foundation. Additionally, she served as an independent contracting therapist and Medicaid
provider for an early intervention program in the Tallahassee area. Shannon’s professional interests are
language and literacy development and disorders, school-based speech-language pathology services,
educational policy, and evidence-based practices. She is active in the American Speech-Language-
Hearing Association and the Florida Association of Speech-Language Pathologists and Audiologists.
Shannon currently resides in Tallahassee with her husband, serves as the state education consultant
for school-based SLPs in Florida through the Florida Department of Education, and continues to teach
at FSU in the School of Communication Science and Disorders and collaborate on language and