EME 11 (2) pp. 139–158 Intellect Limited 2012

Explorations in Media Ecology Volume 11 Number 2

© 2012 Intellect Ltd Pedagogy. English language. doi: 10.1386/eme.11.2.139_1

Keywords

feedback, textuality, textual variance, cognitive learning, affective learning, feedback valence

Pedagogy

Keith Massie, Louisiana State University at Alexandria

Feedback: The Impact of Textual Channel Variance on Student Learning

Abstract

The current study examined the impact of the medium and valence of feedback on student learning. Three textual media – handwritten, printed and computer-mediated – were employed, as well as two forms of valence – positive and negative. Using a 3×2 factorial design, a MANOVA was calculated to determine whether there was a main effect for textual medium or valence, as well as whether any interaction effect between the two variables existed. Such work parallels an earlier study by Carpenter and Marshall McLuhan. The results of this study are useful for better understanding textual feedback and have pragmatic value.
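The 3×2 factorial structure described above can be sketched concretely. The following is a minimal, hypothetical illustration in Python – the data are simulated, and the variable names (cognitive and affective learning scores as the two dependent measures) are assumptions based on the article's keywords, not the author's actual dataset.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
media = ["handwritten", "printed", "computer-mediated"]
valences = ["positive", "negative"]
n_per_cell = 20

# Long-format table: one row per simulated participant, with the two
# assumed dependent measures (cognitive and affective learning scores).
rows = [
    {"medium": m, "valence": v,
     "cognitive": rng.normal(70, 10),   # simulated cognitive score
     "affective": rng.normal(5, 1)}     # simulated affective score
    for m in media for v in valences for _ in range(n_per_cell)
]
df = pd.DataFrame(rows)

# Cell means of the 3x2 design: main effects appear as differences across
# the margins; an interaction appears as non-parallel rows/columns. A full
# MANOVA (e.g. statsmodels' MANOVA.from_formula) would test both dependent
# measures jointly, as the article reports doing.
cell_means = df.pivot_table(index="medium", columns="valence",
                            values=["cognitive", "affective"])
print(cell_means.round(2))
```

Each of the six cells holds twenty simulated participants; the MANOVA reported in the article would test whether the joint (cognitive, affective) means differ by medium, by valence, or by their interaction.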

In On Liberty, J. S. Mill ‘advised researchers not only to retain ideas that had been tested and found wanting but to consider new and untested conceptions as well, no matter how absurd their first appearance’.

(Feyerabend 1987: 33)



1. First-order cybernetics observes a given system. In contrast, Klaus Krippendorf (1996) suggests that second-order cybernetics not only observes the system but also includes the observer himself or herself in the system.

2. Wiener would agree with this. ‘Weiner’s [sic] cybernetic theory is a communication theory, concerned with how messages are exchanged between two or more units so that each influences the other’ (Rogers 1997: 397).

‘Great work!’, ‘This needs to be explicated’, and ‘Avoid casual language’ are each forms of teacher-to-student feedback. Feedback is an essential element of teacher-to-student interaction because it encourages student achievement and can induce significant change in performance (Book and Wynkoop-Simmons 1980). Examining levels of feedback by coaches to athletes, George Tzetzis et al. (2008) found that as the amount and type of feedback increases, the self-confidence of the receiver also increases. In addition to increasing confidence, Pamela Cooper and Cheri Simonds suggest, feedback ‘has a powerful effect on a student’s perception of self, school, and instructor’ (2003: 91).

Despite the seemingly high importance of feedback, research on it is scarce. Citations of primary sources are often absent in many communication textbooks that address feedback (Cooper and Simonds 2003; Gamble and Gamble 1987; Hybels and Weaver 1986; Lucas 2000). The following literature review seeks to provide context for ‘feedback’ as an important communication element in teaching and learning by examining how it has been addressed in cybernetics, media ecology and communication education.

Literature review

Though research focused specifically on feedback is scant, one finds three areas where ‘feedback’ is addressed directly. First, one can reference the early work of cybernetics, which places ‘feedback’ at centre stage. Second, given the nature of the current study, one can see ‘feedback’ from a media ecology perspective to understand how ‘feedback’ may differ depending on medium. Finally, one can peruse the typologies, opinions and assertions of those discussing ‘feedback’ in an educational setting.

Feedback: Cybernetics

Francis Heylighen and Cliff Joslyn state, ‘cybernetics as a specific field grew out of a series of interdisciplinary meetings held from 1944 to 1953 that brought together a number of noted post-war intellectuals’ (2001: 2). Cybernetics provided the first systematic approach to understanding feedback. Though the results of cybernetics research make broad claims about the role of feedback, the field establishes a wide foundational base from which to start investigating more specific contexts that incorporate feedback. It begins to illustrate the role feedback plays in changing or maintaining a given system.

Feedback, introduced into general usage by Norbert Wiener, is an essential element of the basic communication model. Using systems theory, Wiener (1954), continuing the work of William Ross Ashby (1952), created a ‘first-order cybernetics’ approach to feedback (Krippendorf 1996; Shalizi 1997).1 Wiener stated, ‘feedback [is] the property of being able to adjust future conduct by past performance’ (1954: 33; emphasis in original).

Ashby also defines feedback in his work, An Introduction to Cybernetics (1956). Ashby stated, ‘feedback exists between two parts when each affects the other’ (1956: 53).2 Ashby makes clear that ‘the nature, degree, and polarity of the feedback usually have a decisive effect on the stability or instability of the system’ (1952: 53, emphasis added). For Ashby, feedback exists in the system because neither value is independent of the other.

If feedback is absent in a system, Cosma Shalizi (1997), using the work of Ashby (1956), suggests the system will seek equilibrium. Here, feedback serves to disrupt stability and promote change. Without some form of feedback, a system reverts to its most stable initial state. Addressing this, Wiener states, ‘It is my thesis that the physical functioning of the living individual and the operation of some of the newer communication machines are precisely parallel in their analogous attempts to control entropy through feedback’ (1954: 26). Here, Ashby and Wiener appear in agreement to some extent.

This project defines instructional feedback as any communication by a teacher intended to maintain, increase or decrease a given act of a student’s work. Feedback may occur from teacher to student without creating an adjustment in the student’s actions. In short, when no adjustment occurs, the feedback is either unsuccessful for a teacher intending to promote change or successful for a teacher intending to maintain a student act. Discussing organizations, David Nadler summarized this: ‘feedback enables system correction rather than automatically bringing it about. This point is important since in organizations potential feedback information can exist but may be ignored’ (1977: 69; emphasis in original). Therefore, feedback does not automatically induce learning in the student but instead aids in increasing the potential for student learning.

Feedback can be seen as information that can create potential change in the original state of a system (Ashby 1956; Nadler 1977; Wiener 1954). Restated, feedback can be seen as information that disrupts equilibrium and moves an individual to act. In contrast, Jane Reed and Louise Stoll suggest another form of feedback (i.e. balancing feedback) as information that ‘reduces change [and] brings the system back into balance’ (2000: 135). Balancing feedback serves as a type of maintenance.
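The contrast between change-inducing and balancing feedback can be illustrated with a toy discrete-time system. This is a hypothetical sketch, not drawn from the article: a positive gain amplifies any deviation from equilibrium (change-inducing feedback), while a negative gain damps it (balancing feedback).

```python
def step(state, equilibrium, gain):
    """One update: feedback acts on the deviation from equilibrium."""
    deviation = state - equilibrium
    return state + gain * deviation

def simulate(initial, equilibrium, gain, steps=10):
    """Run the loop and return the trajectory of states."""
    state = initial
    history = [state]
    for _ in range(steps):
        state = step(state, equilibrium, gain)
        history.append(state)
    return history

# Balancing feedback (gain < 0): the system returns toward equilibrium,
# as in Reed and Stoll's maintenance role.
balancing = simulate(initial=10.0, equilibrium=0.0, gain=-0.5)

# Change-inducing feedback (gain > 0): the deviation grows, disrupting
# the system's stable state.
amplifying = simulate(initial=10.0, equilibrium=0.0, gain=0.5)

print(balancing[-1])   # near 0
print(amplifying[-1])  # far from 0
```

With a damping gain the trajectory converges on the equilibrium value; with an amplifying gain the same rule drives the state ever further from it, mirroring the cybernetic distinction between maintenance and change.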

Feedback: Media ecology

Literature in the media ecology tradition is also relevant to this research. According to the Media Ecology Association website, Neil Postman contends that ‘Media ecology looks into the matter of how media of communication affect human perception, understanding, feeling, and value; and how our interaction with media facilitates or impedes our chances of survival’ (2009: n.p.). Examining pedagogical feedback delivered by different media would appear to fit within this description, since didactic feedback is intended to ‘affect human perception [and] understanding’. In addition, Postman summarizes media ecology as ‘the study of media as environments’ (2009: n.p.). Analysing the impact of various forms of textual feedback, the current study functions as a subset of media ecology because it examines variations within a ‘species’ rather than relationships and interaction between ‘species’.

In 1954, a research group headed by Marshall McLuhan ran an experiment, orchestrated by Carpenter, to test student learning based on the medium by which information was conveyed (Marchand 1998). The test groups consisted of students who viewed a lecture on television, watched it in a television studio, listened to it on the radio and read it in printed form. When each independent group was compared regarding its understanding and retention of the information, the group that watched the television portrayal scored the highest on a test of the information; whereas ‘the print group scored lower than even the radio listeners’ (Marchand 1998: 134).

The current study followed in this tradition. While research has been done to understand the effects of various media on education, like television (Postman 1992, 1985), radio (Schramm 1977) and the Internet (Smith et al. 2001), most studies have made comparisons to the oral tradition of teaching, thereby contrasting quite different media. Previous media ecology scholarship has often examined the impact new media technologies have on the previous media environment (Ong 2002; McLuhan 1994; Plato 1973), but scholarship addressing how various ‘species’ of a medium differ is limited. The current research is more nuanced than previous work exploring the impact of distinctly different media. Appropriating the media ecology metaphor, this study does not examine interactions and differences that occur when a new ‘species’ (i.e. media) is introduced into an environment but rather highlights intra-species differences (i.e. textual media only) within an environment. The present research, while investigating the contrasts between older media like handwritten and printed text and the newer computer-mediated forms, also examined a difference between the two older media. It examined different types of textual feedback that a teacher may provide to students by comparing handwritten, printed and computer-mediated forms. In doing so, it seeks to illuminate differences between not only several text-based forms of feedback, but also positive and negative valences of feedback. How does the medium, keeping the content of the feedback the same, change the way students interpret, understand and benefit from it? Do various media have didactic elements that aid or inhibit the quality of teacher-to-student feedback?

Although the media ecology tradition – dating back to the works of Lewis Mumford (1934), Harold Innis (1951) and McLuhan (1994) – is rich with explanatory value regarding the media environment, almost all research in media ecology has used a cultural, historical, phenomenological and/or rhetorical approach. The current study, however, takes a social scientific approach similar to the aforementioned research orchestrated by Carpenter in 1954. Some media ecologists may be wary of such an approach given their distrust of statistics (Postman 1992).

Pedagogical feedback: Typologies

Synthesizing media, feedback and pedagogy, this final literature review section examines previous findings of the scant research conducted about mediated forms of instructional feedback. Teachers commonly use instructional feedback to inform students as to the correctness of their answers. It often guides students by suggesting ways to improve. Students’ achievement and performance can be significantly changed because of feedback (Book and Wynkoop-Simmons 1980). While various researchers discuss instructional feedback, each creates different typologies to address the topic. It is important to try to find common features among these differing typologies.

Glen Van Andel (Winninger 2003), in a video entitled Feedback in Learning and Performance, states, ‘Feedback is a very effective pedagogical tool and therefore it is a very important process of learning and teaching’. Multiple researchers have categorized pedagogical feedback in a variety of ways, but a majority of the articles written in the area of instructional feedback are based on personal experiences or opinions. Few researchers have attempted to seek empirical results regarding feedback. Even though the categorizations display disparity, numerous classifications are not at odds with each other. The following will illustrate the distinctive ways in which researchers have organized pedagogical feedback into multiple types.

3. Much of the valence of feedback from a teacher to a student depends on the nature of the assignment given to the student. A simple, intellective task with a right or wrong answer (e.g. a True/False test) may receive only a teacher’s agreement with correct responses and disagreement (i.e. marked wrong) for incorrect responses.

The most common way to understand pedagogical feedback may be by the creation of valences – positive and negative. One research website defines ‘positive’ feedback as an element that adds to the characteristics of the agent in the system and views ‘negative’ feedback as ‘subtract[ing] the output from some input’ (‘Feedback loops’ 2003). This distinction allows feedback to be understood as increasing (positive) or decreasing (negative) a specific input by a given agent. Pedagogically, this distinction becomes synonymous with B. F. Skinner’s (1938) idea of reinforcement. Both Charles Bullock (Winninger 2003) and Pear and Crone-Todd (2002) use the terms ‘reinforcement’ and ‘feedback’ interchangeably. In addition, Cooper and Simonds (2003), whose book is used as a text for instructing communication students how to teach, have a brief subsection on the function of non-verbal teacher-to-student feedback titled Feedback and Reinforcement. While reinforcement appears to be a form of feedback, other types of feedback seem to exist. Using the same idea of valence, but from a different perspective, Stephen Lucas states, ‘positive feedback consists of verbal and nonverbal responses intended to affirm the speaker and speaker’s message’, while ‘negative feedback [is] intended to disconfirm’ (2000: FF–77). Appropriating these ideas pedagogically, positive feedback occurs when a teacher confirms a student’s work and negative feedback occurs when an instructor disconfirms the student’s work.3 For the current study, positive feedback (e.g. praise) is intended to encourage the maintenance of a student’s given behaviour, and negative feedback (e.g. criticism) is directed at changing the state of the student’s given behaviour. According to Eleanore Hargreaves et al., younger individuals needed less negative feedback than older individuals and ‘negative feedback serve[s] to make a future occurrence less likely’ (2000: 25). While Hargreaves et al. give no rationale for this claim, one could argue that younger students have more fragile egos and therefore need more ego support (Frymier and Houser 2000; Seiler 1988), which could be accomplished by using positive feedback.
In conjunction with the assertion that negative feedback has utility, Bullock maintains that whether the student is successful or unsuccessful in his or her attempt at a given task, the teacher must provide ‘positive’ feedback (Winninger 2003). In this framing of ‘positive’ feedback, Bullock equates ‘positive’ with ‘motivational’ feedback. Motivational feedback is any teacher comment that encourages the student to complete a task (Winninger 2003). As a functional element of feedback, it will be addressed in greater detail later.

Distinguishing feedback into two forms, evaluative and descriptive, commonly appears in the literature (Cooper and Simonds 2003; Hargreaves et al. 2000; Tunstall and Gipps 1996). According to William Jurma and Deidre Froelich, ‘Evaluative instructor feedback helps students to maintain or modify elements in their communication behavior’ (1984: 178). Hargreaves et al. claim feedback is ‘giving rewards and punishments’ and ‘expressing approval and disapproval’ (2000: 23). In this context, one can see evaluative feedback as consistent with the valence element of feedback, because phrases like ‘great job’ and ‘this is incorrect’ exhibit positive and negative valence, respectively, and as such are expressions of evaluative feedback. Hargreaves et al. hint that neither positive nor negative valences of feedback are inherently better than the other by stating, ‘[t]eachers… display an awareness of the negative impact that both rewards and punishments could have on pupils’ learning’ (2000: 24, emphasis added). Furthermore, Hargreaves et al. argue, ‘[t]eachers stressed the importance of giving negative as well as positive feedback’ (2000: 24).

Teachers are not the only ones expressing the need for negative feedback. According to Dana Reynolds et al. (2004), students anticipate and want negative comments on their work. Students rated negative feedback the most helpful and positive comments the least helpful type of feedback (Book and Wynkoop-Simmons 1980; Reynolds et al. 2004). Quoting the work of Robert Cathcart (1966), Cassandra Book and Katrina Wynkoop-Simmons further contend, ‘It is well established in educational theory that learning cannot take place without [negative feedback]’ (1980: 135).

The act of combining positive and negative feedback is matched by Hargreaves et al.’s attempt to include both evaluative and descriptive forms of feedback. Hargreaves et al. state that ‘[t]eachers recognized that evaluative feedback, which showed their approval or disapproval, needed to be accompanied by descriptive feedback’ (2000: 25). For Hargreaves et al., descriptive feedback is marked by ‘describing why an answer is correct [or incorrect]’ and ‘what they have and have not achieved’ (2000: 23). Given this, descriptive feedback appears congruent with Bullock’s proposition regarding ‘informational’ feedback. Conceptually, the descriptive and informational elements are similar, but in practice, somewhat different. Although both Hargreaves et al. and Bullock agree that descriptive/informational feedback should be used when addressing a student’s unsuccessful attempt, Hargreaves et al. argued that instructors should use descriptive/informational feedback even for students’ successful attempts. In short, students need to know not only why their work is wrong but also why their work is correct. In contrast, Cooper and Simonds list attributes of effective feedback and claim that an instructor should ‘describe other people’s behavior without evaluating’ (2003: 16). This statement suggests that one should use descriptive/informational feedback instead of evaluative feedback, but the primary source that it draws from focuses entirely on situations when a teacher responds to a student’s inappropriate classroom behaviour.

Another common way to delineate forms of pedagogical feedback is by its function. For Bullock, feedback has three forms – contingent, informational and motivational (Winninger 2003). While an explicit definition of these three terms is never offered, one can define them through their contextual use. Contingent simply means that the feedback is related directly to the task at hand. The level of contingent feedback is directly related to the teacher addressing the given content. For example, ‘Placing Hegel chronologically before Marx on your timeline was correct’ would be contingent feedback, while, given the same task, stating ‘Hegel is my favorite philosopher’ would not be contingent. Informational feedback is intended to provide the student with the necessary information to complete a task, and motivational feedback is used to encourage the student to complete the task (Winninger 2003). Given this trifurcation, one sees that only informational and motivational forms of feedback can be considered to have a functional element, because contingent forms only relate to the level of relevance the feedback has to the given task. In short, functional elements of feedback answer the question, ‘what is the feedback intended to do?’. Appearing somewhat in agreement with the two functional elements promoted by Bullock, Nadler discusses the distinctive forms of feedback by saying:

First, feedback serves to create or generate energy. This generation of energy is frequently called the motivating function of feedback. Second, feedback serves to direct behavior where motivation already exists [and this is called the] directing function [of feedback].

(Nadler 1977: 71, original emphasis)

In this light, Nadler’s ‘directing function’ is congruent with Bullock’s ‘informational’ function and Nadler’s ‘motivating function’ is nearly synonymous with Bullock’s ‘motivational’ feedback idea. While the authors appear in agreement, each promotes the role of feedback in the learning situation differently. For Nadler, learning occurs based entirely on the directing function of feedback, and he never mentions learning in light of the motivating function. On the other hand, Bullock maintains that which functional element of feedback should be used is driven by the success level of the student. In both successful and unsuccessful student attempts, instructor feedback should be contingent and motivational, but, in unsuccessful situations, the feedback should also be informational (Winninger 2003). Bullock, however, provides no citation to research that supports this claim. Given these differences between Bullock and Nadler, one finds that Nadler’s perspective is future focused (i.e. highlighting Wiener’s ‘adjust future conduct’ element), but Bullock’s is past focused (i.e. promoting Wiener’s ‘by past performance’ aspect). To clarify this further: while Nadler looks at a student and thinks, ‘What does he/she still need to succeed at the task?’, Bullock would ask, ‘What did the student not learn that limited the success?’. If both instructors were viewing the same student’s work, each would arrive at the same answer, but the questions themselves are of interest. In the case of Bullock, an instructor is viewed as the ‘remover of obstacles’ (in the student’s past), and in Nadler’s case, as a ‘director of actions’ (for future student outcomes).

In summary, evaluative instructional feedback informs a student of the degree to which his or her behaviour is judged ‘right or wrong’, and descriptive/informational feedback tells the student how he or she should act. In short, evaluative feedback responds to the question ‘Is the student’s response correct?’ and descriptive/informational feedback addresses the question ‘Why is the student’s answer correct or incorrect?’.

It is important to note that both evaluative and descriptive/informational feedback subtly imply the other. For example, the evaluative feedback ‘this is right’ implies that the action should be maintained (an element of descriptive/informational feedback) and the descriptive/informational feedback ‘academic writing should have a formal tone’ implies the incorrectness of a piece of student writing (and therefore has an evaluative aspect). Teachers should attempt to incorporate both evaluative and descriptive comments into their feedback (Hargreaves et al. 2000; Reynolds et al. 2004).

Pedagogical feedback: Studies of medium

Terri Jongekrijg and James Russell (1999) published one of the most insightful studies in the area of media use for instructional feedback to students. They explored five common media for delivering feedback – written, conferences, audio, video and computer. The article establishes the positives and negatives of each medium, but it does not directly compare or contrast them. They argue that instructors can ‘be quite confident that written comments on students’ work will continue to be a favorite and often a very appropriate and effective mode of delivering feedback to students due to its ease of use and tradition’ (Jongekrijg and Russell 1999: 54). Their overview of written feedback focuses exclusively on handwritten comments and mentions three negatives: an instructor’s handwriting may be illegible, students may become ‘overwhelmed by the amount of marginalia’, and writing comments can be an ‘extremely time-consuming task’ (Jongekrijg and Russell 1999: 52).

Printed feedback has a standardized form (i.e. unlike handwriting, printed feedback will always have its t’s crossed and i’s dotted). This form may help printed feedback eliminate the legibility problem inherent in handwritten responses. The other two aforementioned negative elements of handwritten feedback addressed by Jongekrijg and Russell (1999) may be decreased by using printed feedback, but much depends on the instructor’s choices and abilities. The amount of marginalia printed feedback displays depends greatly on the instructor’s choice of form. Some of the possible choices of form are: the instructor types comments at the bottom of the student’s electronically submitted work, the instructor generates a new document that expresses his or her feedback, or the instructor uses the ‘track changes’ function of his or her word processing software. With the exception of the third form above, the instructor’s printed feedback would display less marginalia than that of traditional handwritten feedback. Each of the three forms, along with the instructor’s typing speed, creates variations in the amount of time necessary to produce printed feedback. Thus, for some instructors, printed feedback may minimize the negative elements associated with handwritten comments.

Jongekrijg and Russell (1999) praise computer-based feedback for being able to generate far more text in less time. They suggest that the computer with its flexibility of expressing feedback in textual, audio and/or visual ways is a powerful option for instructor feedback to students. Agreeing with this praise of computer-based feedback, Ken White and Bob Weight (2000) argue computer-based feedback may be more motivational and beneficial. Although Jongekrijg and Russell discuss both written and computer-based feedback in a positive tone, they perform no direct comparison between the two.

Computers have developed to a state where they may perform some of the functions of a teacher. Directly relating to instructional feedback, software like Criterion and Writing Roadmap can produce feedback for essays submitted by students (Trotter 2002). While the current research did not use a computer ‘essay-scoring technology’ like the two just named, its use of instructor-generated feedback delivered via computer may aid in better understanding the degree to which computer feedback inhibits or promotes student learning.

Researchers have addressed differences in the content of handwritten versus computer-generated student writing. Stephen Bernhardt et al. (1989) studied 24 college classes where each section had 22 students. The study divided the classes into two groups – handwritten composition and computer-generated composition. Three writing assignment topics were piloted in a previous term and the two most comparable were selected as stimuli. Using a split-halves design, Bernhardt et al. assigned half of the handwriting group Topic 1 and the other half Topic 2. The researchers also distributed Topics 1 and 2 equally within the computer group. The first assignment, which was to be completed in class, was coded using a combined holistic/analytic scale with four variables scored on a 1–6 scale (with 1 being low and 6 being high) and two variables scored on a 1–3 scale (with 1 being low and 3 being high). Reversing the topics between the two groups, the second assignment, serving as a post-test, allowed students to revise the work over a five-day period. This second assignment was again coded. One of the variables that the seven trained readers rated in the pre- and post-test essays was overall impression of quality. The level of student improvement was obtained by subtracting the pre-test score from the post-test score. Though the difference in pre- and post-test scores between groups was not statistically significant, computer users seemingly improved the overall quality of their writing more. This work suggests that, to some extent, student learning is connected to the medium used to produce text.


According to this study, students' written essays improved more when composed on a computer than with pen and paper. The current study sought to determine whether, by extension, computer-based feedback increases student learning more than handwritten feedback.

Other research has investigated handwritten versus computer-based feedback. Bruce Russell (1992) studied 112 university students enrolled in nine different sections of an introductory speech communication course. Russell randomly assigned participants to four groups: handwritten feedback before the student viewed a videotape of his or her speech, handwritten after, computer-mediated before and computer-mediated after. The study found no significant difference between the computer and handwritten treatment groups on their final 'total' speech performance. For Russell, computer-generated feedback was 'feedback received [from] a computer printout of the instructor's comments' (1992: 4). Therefore, his computer-generated feedback is not comparable to the computer-based form used in the current research but more closely related to the typed/printed form. Russell's computer-generated feedback, however, was significantly better at increasing students' vocal quality skills as well as their gestural quality skills. Russell contends 'students who received feedback by the computer method were able to improve most on those speech elements that are easily observable' (1992: 8). He suggests that the formal nature of computer-based feedback may explain the difference: increased formality in a medium, he believes, may decrease students' feelings that the criticism is a personal attack.

Jongekrijg and Russell (1999) claim teachers have a wide range of media through which to provide feedback, and each feedback medium has both positive and negative consequences. Bernhardt et al. (1989) and Russell (1992) have claimed that the medium used within the instructional environment plays a role in students' learning.

Pedagogical feedback: Studies of valence

Previous researchers have debated the relative importance of positive and negative feedback. Barak Rosenshine (1976) conducted one of the only meta-analyses focused on research addressing the relationship between the valence of feedback and student achievement. Of five prior studies on pedagogical feedback examined by Rosenshine, three found a positive and significant correlation between positive feedback and student achievement. Conversely, three of the studies found a negative and significant correlation between negative feedback and student achievement. Such findings would suggest that positive feedback has more utility than negative feedback, since it increases rather than decreases student achievement. The reason for this outcome may lie in the fact that positive feedback is more accurately perceived and more easily recalled than negative feedback (Brinko 1993), and negative feedback can also decrease motivation to improve (Atwater et al. 2002; Brett and Atwater 2001; Fedor et al. 1989). Moreover, Andrew Roxburgh, who analysed the impact of positive and negative feedback on completing the task known as the nine-dots problem, contends that 'problem-solvers are more likely to follow the advice of feedback when it is given in a positive style than when it is given in a negative style' (2004: 37). This claim advances Chris Snyder and Chris Cowles' (1979) findings that positive feedback was more readily accepted by those receiving it than negative feedback.


Some researchers argue that negative feedback is more effective. Reynolds et al. (2004) suggested that students desire more negative feedback. Likewise, F. Gregory Ashby and Jeffrey O’Brien assert that ‘many studies have reported that consistent rule-based learning is possible with either positive-only or negative-only feedback, and… negative-only feedback is more effective than positive-only feedback’ (2007: 872). Conversely, Noriko Iwashita points out that some research ‘findings on learner response to negative feedback are rather mixed’ (2003: 7).

In some studies, researchers make claims about the effectiveness of a specific valence of feedback only to qualify their assertions. Although Roxburgh claims that positive feedback is better because 'problem-solvers are more likely to follow' it, he also states, '[g]iving feedback in either a positive or negative fashion has no significant effect on the rate of solution of people attempting the 9-dots problem' (2004: 36). Similarly, Ashby and O'Brien cite at least four previous studies asserting that negative-only feedback is more effective; however, they conclude their own work by stating, 'it may be necessary to provide both positive and negative feedback' (2007: 877).

Research questions

The research by Jongekrijg and Russell (1999), Bernhardt et al. (1989) and Russell (1992) suggests that researchers should seek a better understanding of which feedback medium increases the potential for student learning. Thus, the following research question was posed:

RQ1: Does the medium of instructor feedback affect cognitive learning, actual learning or attitudes towards the feedback?

Although much of the research on the valence of feedback argues that positive feedback is more effective than negative feedback, some research challenges such claims. Therefore, to better understand the role that valence plays in student learning, the following research question was posited:

RQ2: Does the valence of instructor feedback affect cognitive learning, actual learning or attitudes towards the feedback?

No previous research has examined the possibility of the medium and valence creating an interaction effect on student learning. As such, the following research question was posed:

RQ3: Is there a combined effect of feedback medium and feedback valence on cognitive learning, actual learning or attitudes towards the feedback?

Methods

The following section addresses the methods and procedures employed in the study: the participants, the materials and the procedures used during the research.

Participants

After IRB approval was received, 127 university students were recruited to serve as participants. Six additional students' responses were discarded because they answered either too few or too many questionnaire items. Participants were solicited from various undergraduate courses at a midwestern university, especially introductory public speaking classes and an introductory mass communication class.4

4. Because of a researcher error, demographic questions did not appear on the questionnaire used in the research. Therefore, no demographic data can be reported about the sample.

Materials

Materials consisted of a hypothetical student's three-paragraph story titled 'My Summer Vacation'. To imitate first-year college students' writing, the story was on a general theme and contained intentionally placed common grammatical errors. A general theme was used to limit the possibility of a participant not understanding specific information on a particular topic. Use of a hypothetical student's essay allowed the researcher to control numerous confounding variables (e.g. topic, amount of feedback, etc.). Given that the story was written from the first-person perspective, events within the text were designed to be non-gender-specific. A non-gender-specific text aided the research in two ways. First, it created the opportunity for a participant of either sex to project himself or herself as the writer of the text. Second, it attempted to decrease any biases a participant might hold towards the writer based on sex.

In addition to the three-paragraph story, six versions of the story with feedback were constructed: (1) handwritten with negative feedback, (2) handwritten with positive feedback, (3) printed with negative feedback, (4) printed with positive feedback, (5) computer-mediated with negative feedback and (6) computer-mediated with positive feedback.

Pre-test of handwriting

In order to limit confounding variables, the handwritten feedback on the student's written response was pre-tested for legibility and androgyny. This process was necessary to limit participants' bias towards the hypothetical instructor's feedback (e.g. a male participant may be more critical of a female instructor's comments – see Sandler 1991). Six individuals' handwriting was tested. Since the order in which participants viewed the handwriting could affect the results, the sequence of the six versions was varied. After reading each individual's handwriting, participants were asked two questions. The first asked them to rate how androgynous the handwriting was on a five-point Likert scale where 1 was feminine and 5 was masculine. Two handwriting samples had a mean score near three (individual two=3.47 and individual six=3.70). The second question asked participants to rate each handwriting sample's legibility on a scale of 1 to 10, with 10 being extremely legible. Comparing the legibility scores of the two most androgynous samples, version six (M=6.56) outscored version two (M=5.28). Therefore, individual six wrote the handwritten feedback used in the experiment.

Pre-test of valence

To assure that the valence of the feedback was effective, four versions of feedback were pre-tested. The four versions consisted of two negatively valenced (versions 1 and 3) and two positively valenced (versions 2 and 4) samples. The order in which participants read the versions was varied to limit any sequential bias. After reading each version of feedback, participants were given a questionnaire consisting of eight semantic differential scaled items, which displayed a high reliability coefficient (α=0.92). Using the responses, within-groups paired-sample two-tailed t-tests were conducted on the two negatively valenced (t(45)=-8.91, p<0.001) and two positively valenced (t(45)=-16.06, p<0.001) versions. Version 3 was used as the negative feedback and version 4 as the positive feedback because they showed the greatest difference in mean scores.
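The paired-samples t-test used in this pre-test is straightforward to compute by hand. The sketch below shows the calculation with purely illustrative ratings; the study's raw pre-test data are not reported here, so the numbers are not the study's.

```python
import math

def paired_t(x, y):
    """Two-tailed paired-samples t statistic for two equal-length
    lists of scores from the same raters."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean_d = sum(d) / n
    # Sample standard deviation of the differences (n - 1 denominator).
    sd_d = math.sqrt(sum((v - mean_d) ** 2 for v in d) / (n - 1))
    return mean_d / (sd_d / math.sqrt(n))

# Illustrative ratings of a negative and a positive feedback version
# by the same six raters (NOT the study's data).
negative_version = [2, 3, 2, 4, 3, 2]
positive_version = [5, 6, 5, 6, 5, 6]
t = paired_t(negative_version, positive_version)  # large negative t
```

A large-magnitude t, as in the reported t(45)=-16.06, indicates the paired versions were rated very differently.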

Measures

The research used a single questionnaire with three different measures to determine the effect of feedback medium and valence on student learning. The first measure, actual learning, consisted of a seven-question quiz. The questions related directly to information provided by the feedback. Actual learning was based on the number of correct answers on the quiz, which showed a fairly standard distribution of scores. The individual questions' difficulty and validity levels were used to assess the utility of the quiz as a measurement of actual learning. The seven questions had difficulty levels ranging from 0.51 to 0.93 and validity levels ranging from 0.24 to 0.84.

The difficulty level is a decimal representation of the proportion of individuals answering the question correctly. The validity level is calculated by subtracting the proportion of lower-scoring individuals (i.e. the lower 27 per cent of all scores) who answered the question correctly from the proportion of upper-scoring individuals (i.e. the upper 27 per cent of all scores) who answered correctly. For example, 32 of the 32 upper-scoring individuals answered question four correctly and 15 of the 25 lower-scoring individuals answered it correctly, which creates a validity level of 32/32−15/25=0.40. Since no validity level was negative and the two lowest validity levels accompanied high difficulty levels, the quiz questions are considered a reasonable measurement of actual learning.
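These two item statistics can be expressed directly in code; the sketch below reproduces the worked example for question four from the text.

```python
def item_difficulty(num_correct, num_takers):
    """Difficulty level: the proportion of all test-takers who
    answered the item correctly."""
    return num_correct / num_takers

def item_validity(upper_correct, upper_n, lower_correct, lower_n):
    """Validity (discrimination) level: proportion correct among the
    upper 27 per cent of scorers minus proportion correct among the
    lower 27 per cent."""
    return upper_correct / upper_n - lower_correct / lower_n

# Question four: 32 of 32 upper scorers and 15 of 25 lower scorers
# answered correctly, giving 32/32 - 15/25 = 0.40.
v = item_validity(32, 32, 15, 25)
```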

Joan Gorham's (1988) cognitive learning scale was appropriated as the second measure for the study. The scale was modified from previous research by changing the wording of each question to focus not on course content but on the instructor's feedback, and it had a solid reliability coefficient (α=0.75). This two-question instrument asked participants to gauge the feedback received and compare it to their ideal feedback. The first question asked participants to rate their level of learning (on a 1–10 scale) given the experimental feedback experienced. The second asked them to imagine what their level of learning would be given ideal feedback. Contrasting participants' perceptions of the feedback with their 'ideal' feedback (i.e. subtracting question one from question two) produced the cognitive learning variable.

The third scale measured affective learning using a modified version of Diane Christophel's (1990) affective learning scale, consisting of 24 semantic differential questions. Like the previous scale, this one was adjusted to focus on feedback and assessed participants' attitudes towards the feedback. The affective learning scale was reliable (α=0.90).

Procedure

Consenting participants were randomly assigned to six groups. Random assignment occurred by having participants draw from a deck of note cards. On each note card, the participant saw a letter – H, T or C – and the number 1 or 2. The letter represented which medium of feedback would be employed, and the number corresponded to the valence the feedback would take.

5. The mean for actual learning is the mean grade on the seven-point quiz. On the cognitive learning scale, participants' rankings (1 through 10) of the actual feedback were subtracted from the rankings (1 through 10) they would give to ideal feedback; in this light, as a score approaches 0, the feedback more closely approximates ideal feedback. The affective learning scale ranged from 1 to 7. A lower score on the affective learning scale demonstrates a more positive attitude towards the feedback; a higher score represents a more negative attitude.
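The note-card coding can be sketched as a simple lookup. The letter-to-medium mapping follows the text; which number (1 or 2) denoted which valence is not stated, so that pairing is an assumption here.

```python
# Letter encodes the feedback medium, per the procedure described above.
MEDIUM = {"H": "handwritten", "T": "typed/printed", "C": "computer-mediated"}

# ASSUMPTION: the text does not say whether 1 meant positive or negative;
# the pairing below is illustrative only.
VALENCE = {1: "positive", 2: "negative"}

def condition(card):
    """Map a drawn note card such as ('C', 1) to its cell in the 3x2 design."""
    letter, number = card
    return MEDIUM[letter], VALENCE[number]
```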

To create greater distinction between the media, this project used a printed text like the one described by Russell (1992), but used an actual computer for the computer-mediated feedback (i.e. participants viewed a monitor, not a piece of paper). A research assistant escorted participants assigned to the 'computer-based feedback' group to an adjacent computer lab. The remaining four groups were given an unmarked copy of the student's written response.

After reading the unmarked response, participants received a second copy with feedback relevant to their group assignment. The computer groups read an identical copy of the student's written response but viewed the feedback directly on a monitor; the content matched the pre-printed typed feedback (i.e. the typed groups were given a printout of what the computer groups viewed on-screen). The six cells had a fairly equal distribution of participants.

All participants began by reading the same uncorrected copy of the essay at their own pace. After reading it, each participant raised his or her hand and was given a second copy of the essay, this time with feedback consistent with his or her group assignment. Although the content of the feedback differed between the positive and negative valence groups, the content across mediated forms was identical. After reading the given feedback, participants took a seven-question quiz and a 26-question survey. Together these generated three dependent variables: actual learning (quiz grade), cognitive learning (the first two questions of the survey) and affective learning (the final 24 questions of the survey).
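The derivation of the three dependent variables described above can be sketched as follows. The quiz grade and the ideal-minus-actual difference follow the text directly; averaging the 24 affective items into a 1–7 score is an assumption consistent with the scale range reported in note 5.

```python
def score_participant(quiz_correct, actual_rating, ideal_rating, affect_items):
    """Derive the three dependent variables for one participant.

    quiz_correct  -- correct answers on the seven-question quiz (0-7)
    actual_rating -- perceived learning from the feedback received (1-10)
    ideal_rating  -- imagined learning under ideal feedback (1-10)
    affect_items  -- 24 semantic differential responses (1-7 each)
    """
    actual_learning = quiz_correct
    # Question one subtracted from question two: scores near 0 mean the
    # feedback approached the participant's ideal.
    cognitive_learning = ideal_rating - actual_rating
    # ASSUMPTION: affective learning taken as the mean of the 24 items.
    affective_learning = sum(affect_items) / len(affect_items)
    return actual_learning, cognitive_learning, affective_learning
```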

Results

A 3×2 (mediated form × valence) MANOVA was used to determine whether medium, valence or an interaction of the two affected students' actual, cognitive and/or affective learning. Table 1 presents the mean scores and standard deviations for each of the three dependent variables by medium.

RQ1 enquired whether the medium of instructor feedback had any effect on cognitive, actual and/or affective student learning. There was no significant difference between media on actual learning, F(2, 126)=0.57, p=0.57; cognitive learning, F(2, 126)=2.34, p=0.10; or affective learning, F(2, 126)=2.37, p=0.10. Therefore, in response to RQ1, the medium of feedback appears to have no effect on actual, cognitive or affective learning.

Medium of feedback    Handwritten        Printed            Computer-based

Actual learning5      M=5.41, SD=0.18    M=5.67, SD=0.19    M=5.63, SD=0.19

Cognitive learning    M=1.38, SD=0.26    M=0.88, SD=0.27    M=1.70, SD=0.27

Affective learning    M=3.03, SD=0.12    M=2.69, SD=0.13    M=2.70, SD=0.13

Table 1: Mean scores for three dependent measures by medium.


Valence of feedback    Positive           Negative

Actual learning        M=5.76, SD=0.15    M=5.38, SD=0.15

Cognitive learning     M=1.36, SD=0.22    M=1.28, SD=0.22

Affective learning     M=2.75, SD=0.11    M=2.86, SD=0.10

Table 2: Mean scores of valence for three dependent variables.

RQ2 asked whether the valence of instructor feedback affects cognitive learning, actual learning or attitudes towards the feedback. Means for valence on the dependent measures are presented in Table 2.

There was no significant difference between valences on actual learning, F(1, 126)=3.24, p=0.07; cognitive learning, F(1, 126)=0.08, p=0.78; or affective learning, F(1, 126)=0.54, p=0.46. Therefore, in response to RQ2, the valence of feedback appears to have no effect on actual, cognitive or affective learning.

The final research question, RQ3, asked whether an interaction effect of feedback medium and valence existed on cognitive learning, actual learning or attitudes towards the feedback. Results indicate that there was no interaction between medium and valence on actual learning, F(2, 126)=0.74, p=0.48; cognitive learning, F(2, 126)=0.01, p=0.99; or affective learning, F(2, 126)=0.22, p=0.81.
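For readers unfamiliar with what such F-tests compute, the sketch below gives the one-way ANOVA F statistic in plain Python as a simplified univariate analogue; the study itself used a full 3×2 MANOVA across three outcomes, and the data here are illustrative only.

```python
def one_way_f(groups):
    """One-way ANOVA F statistic: between-group mean square divided
    by within-group mean square."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Three illustrative 'medium' groups with similar means, mirroring the
# pattern of no significant medium effect reported above.
f = one_way_f([[5, 6, 5, 6], [6, 5, 6, 6], [5, 6, 6, 5]])  # F well below 1
```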

Discussion and conclusions

This research sought to determine whether the medium and/or valence of teacher-to-student instructional feedback affected students' actual, cognitive and/or affective learning. Given that instructors have a variety of options regarding both the medium and valence of feedback, it is important to investigate whether any option is more effective at promoting student learning.

Feedback valence

Although no significant effect was found for valence, this research is still valuable. Much of the previous research on didactic feedback and valence posits that positive feedback is preferable to, or more effective for student achievement than, negative feedback (Roxburgh 2004; Brinko 1993; Atwater et al. 2002; Brett and Atwater 2001; Fedor et al. 1989; Snyder and Cowles 1979; Rosenshine 1976). This study's findings suggest that neither positive-only nor negative-only feedback produces a statistically significant difference in student outcomes. Accordingly, the study suggests that the valence of feedback a teacher uses has little, if any, impact on student achievement, and educators should use the valence of feedback with which they feel most comfortable.

Feedback medium

The findings showed no statistically significant relation between the textual medium used and student achievement. Such an outcome has utility. If one reviews what Postman (1995) claims about the Second Law of Thermodynamics, one may observe how the lack of statistical significance is useful. Postman states,

The law tells us that although matter can be neither created nor destroyed (the First Law), it tends toward becoming useless. The name given to this tendency is entropy, which means that everything in the universe moves inexorably toward sameness.

(1995: 77)

Such a claim allows us to consider the ‘variations’ of textual feedback used in the study as actually the same medium. Language, here, may construct a division (e.g. handwritten versus printed) that is useless or meaningless. Textuality, then, would be a unit with no subdivision. This idea would explain why previous studies that addressed distinctly different media had significant findings.

Limitations

This study has both major and minor limitations that may have contributed to the results. Four major and three minor limitations will be addressed.

The four major limitations are (1) sample size, (2) the dichotomy of feedback valence, (3) two elements of the experimental design and (4) the instrument used to measure actual learning. To begin, the sample size of 127 participants (with at least twenty participants per cell) may have been too small to create the necessary conditions for significance to occur.

The dichotomy of using positive-only or negative-only feedback is a second limitation. By not utilizing a third condition that balanced positive with negative feedback (i.e. a neutral valence), the findings could neither confirm nor reject assertions by scholars that a blended approach is best (Reynolds et al. 2004; Roxburgh 2004; Hargreaves et al. 2000).

The third major limitation is reflected in one element of the experimental design. Regardless of group assignment, all participants began by reading a hypothetical student's uncorrected work and then received feedback (the specific type depended on group assignment) from a hypothetical instructor. Use of a hypothetical instructor had utility because it limited confounding variables such as the content of the feedback as well as instructor gender, sex, ethnicity, race and age. A hypothetical student's essay was used to maintain consistency of the stimulus across the various groups. Paul Camic et al. assert 'it is clear that experimental research usually requires a degree of artificial manipulation of control of the key variables' (2003: 8); however, this artificiality may have also created a limitation. Even though the student's unmarked work created consistency across groups, participants may not have connected to the work as if it were their own. Because of this, participants might have been less engaged or emotionally involved in the experiment, which may have affected the results. In the suggestions for future studies below, the researcher attempts to find a means of possibly diminishing this limitation.

Another element of the experimental design created the fourth limitation. Directly after reading the instructor's feedback on the hypothetical student's work, participants were given the quiz. Although this actual learning scale was shown earlier to be a reasonable measure of student learning, it may have measured only short-term recall. Since the quiz began the questionnaire, participants had only a very brief time (less than a minute) between reading the instructor's comments and recalling the information needed to answer the quiz questions correctly.

The research also appears to have three minor limitations. The first is the lack of demographic data, which may be significant, since males and females respond to positive and negative feedback differently (Snyder and Cowles 1979). The second minor limitation concerns the negative form of valence used within the experiment. While one of the pre-tested valence forms (version 3) was used to represent negative feedback, a post-experimental t-test revealed that a different pre-tested form (version 1) should have been utilized. Though version 1 displayed a greater difference from the positively valenced version used in the experiment (t(46)=-20.0) than version 3 did (t(46)=-8.90), both negative versions were significant. Lastly, the grade (i.e. 6/10) placed on the hypothetical assignment may have created a third limitation, as it may have shaped students' expectations of the feedback. Reynolds et al. (2004) argued that higher grades receive more positive feedback from instructors and lower grades receive more negative feedback. Given the low grade, participants may have expected to receive negative feedback from the hypothetical instructor. This violation of expectations for participants receiving positive feedback may have altered the results. Future research may wish to address this by varying the grades accompanying the feedback.

Conclusions and suggestions for further research

Although significance was not obtained, the research can still be evaluated for what it may suggest. The lack of clear findings here, and the inconsistent findings in the few other empirical studies of instructor feedback on student learning, indicate that it remains uncertain whether the format and valence of text-based feedback are important considerations for instructors. Findings from the exploratory tests on several variables suggest that future research may be able to provide instructors with recommendations on delivering effective feedback.

The researcher has identified numerous possible adjustments to the current study that may aid future research. One such adjustment would be to increase the connection participants have to the uncorrected essay. One way to do this would be to keep the current experimental design but place each participant's name at the top of the essay. Though participants would still not have written the essay, the presence of their name may increase their connection to it. Another approach would be to use the participants' own written work. This could be done by creating a large computer database of common feedback phrases given by instructors (see details in Russell 1992). Each feedback comment would have a code, and a computer could track whether any two participants should be given the same codes for their written work (i.e. the same feedback). When this occurs, the participants would be given different mediated forms of the feedback. Though this approach aids in creating a connection to the written work, it would require an extremely large sample. The probable incorporation of multiple classes to attain such a sample would create another limitation: if, within his or her course, a specific participant is always given handwritten feedback but later receives typed feedback on an assignment, he or she may be confused by the shift in format.


Another suggestion for future research would be to create a third level of the valence variable. Since Hargreaves et al. (2000) suggest positive and negative feedback should be combined in a teacher's comments, future research may include feedback that incorporates both negative and positive comments. Thus, a valence-neutral level of the variable might be compared to positive-only and negative-only feedback. Students may desire negative feedback (Reynolds et al. 2004) and may be motivated by positive feedback (Nadler 1977), but how they respond to more neutral forms may be important to examine.

In addition, the study suggests no difference in outcome based on textual medium, and this finding opens numerous potential lines of research. Are the linguistic divisions used to separate textuality into numerous formats and subdivisions useful? For example, is there evidence, within other contexts, that one textual form (e.g. handwriting) is statistically different from another (e.g. computer-mediated text)? The prima facie answer may be 'yes' because of the affective response to a handwritten versus an e-mailed love letter. It should be noted that such a claim would focus on the medium, yet the affective response may, in fact, be due to perceptions of the time taken in composing the prose (i.e. a lengthy e-mail may generate the same affective state). Research addressing differences in textual form, a species within the media ecology, is scant, and future research is needed to understand the micro-ecosystem that is textuality.

References

Ashby, F. Gregory and O’Brien, Jeffrey (2007), ‘The effects of positive versus negative feedback on information-integration category learning’, Perception & Psychophysics, 69:6, pp. 865–78.

Ashby, William Ross (1952), Design for a Brain, New York: John Wiley & Sons, Inc.

____ (1956), An Introduction to Cybernetics, London: Chapman & Hall.

Atwater, Leeanne, Waldman, David and Brett, Joan (2002), ‘Understanding and optimizing multisource feedback’, Human Resource Management, 41:2, pp. 193–208.

Bernhardt, Stephen, Edwards, Penny and Wojahn, Patti (1989), ‘Teaching college composition with computers: A program evaluation study’, Written Communication, 6:1, pp. 108–33.

Book, Cassandra and Wynkoop-Simmons, Katrina (1980), ‘Dimensions and perceived helpfulness of student speech criticism’, Communication Education, 29:2, pp. 135–45.

Brett, Joan and Atwater, Leeanne (2001), ‘360° feedback: Accuracy, reactions, and perceptions of usefulness’, Journal of Applied Psychology, 86:5, pp. 930–42.

Brinko, Kathleen (1993), ‘The practice of giving feedback to improve teaching: What is effective?’, The Journal of Higher Education, 64:5, pp. 574–93.

Camic, Paul, Rhodes, Jean and Yardley, Lucy (2003), ‘Naming the stars: Integrating qualitative methods into psychological research’, in P. Camic, J. Rhodes and L. Yardley (eds), Qualitative Research in Psychology: Expanding Perspectives in Methodology and Design, Washington, DC: American Psychological Association, pp. 3–15.

Cathcart, Robert (1966), Post-Communication: Critical Analysis and Evaluation, Indianapolis: Bobbs-Merrill.


Christophel, Diane (1990), ‘The relationship among teacher immediacy behaviors, student motivation, and learning’, Communication Education, 37:4, pp. 323–40.

Cooper, Pamela and Simonds, Cheri (2003), Communication for the Classroom Teacher, 7th ed., Boston: Allyn and Bacon.

Fedor, Donald, Eder, Robert and Buckley, M. Ronald (1989), ‘The contributory effects of supervisor intentions on subordinate feedback responses’, Organizational Behavior and Human Decision Processes, 44:3, pp. 396–414.

‘Feedback loops’, (2003), Concordia University, Montreal, CA, http://artsciccwin.concordia.ca/edtech/ETEC606/feedback.html. Accessed 12 February 2003.

Feyerabend, Paul (1987), Farewell to Reason, New York: Verso.

Frymier, Ann and Houser, Marian (2000), ‘The teacher–student relationship as an interpersonal relationship’, Communication Education, 49:3, pp. 207–19.

Gamble, Teri and Gamble, Michael (1987), Communication Works, 2nd ed., New York: Random House, Inc.

Gorham, Joan (1988), ‘The relationship between verbal teacher immediacy behaviors and student learning’, Communication Education, 37:1, pp. 40–53.

Hargreaves, Eleanore, McCallum, Bet and Gipps, Caroline (2000), ‘Teacher feedback strategies in primary classrooms – new evidence’, in S. Askew (ed.), Feedback for Learning, New York: RoutledgeFalmer, pp. 21–31.

Heylighen, Francis and Joslyn, Cliff (2001), ‘Cybernetics and second-order cybernetics’, in R. Meyers (ed.), Encyclopedia of Physical Science & Technology, 3rd ed., New York: Academic Press, pp. 155–70.

Hybels, Saundra and Weaver, Richard II (1986), Communicating Effectively, New York: Random House, Inc.

Innis, Harold (1951), The Bias of Communication, Toronto: University of Toronto Press.

Iwashita, Noriko (2003), ‘Negative feedback and positive evidence in task-based interaction’, Studies in Second Language Acquisition, 25:1, pp. 1–36.

Jongekrijg, Terri and Russell, James (1999), ‘Alternative techniques for providing feedback to students and trainees: A literature review with guidelines’, Educational Technology, 39:6, pp. 54–58.

Jurma, William and Froelich, Deidre (1984), ‘Effects of immediate instructor feedback on group discussion participants’, Central States Speech Journal, 35, pp. 178–86.

Krippendorff, Klaus (1996), ‘A second-order cybernetics of otherness’, Systems Research, 13:3, pp. 311–28.

Leavitt, Harold and Mueller, R. (1951), ‘Some effects of feedback on commu-nication’, Human Relations, 4, pp. 401–10.

Lucas, Stephen (2000), The Art of Public Speaking, 7th ed. (custom published for Illinois State University), New York: McGraw Hill.

Marchand, Philip (1998), Marshall McLuhan: The Medium and the Messenger, Toronto: Random House.

McLuhan, Marshall (1994), Understanding Media: Extensions of Man, Cambridge, MA: MIT Press.

Mumford, Lewis (1934), Technics and Civilization, Chicago: University of Chicago Press.

Nadler, David (1977), Feedback and Organization Development: Using Data-Based Methods, Menlo Park, CA: Addison-Wesley Publishing Company.

Ong, Walter (2002), Orality and Literacy: The Technologizing of the Word, 2nd ed., New York: Routledge.


Pear, Joseph and Crone-Todd, Darlene (2002), ‘A social constructivist approach to computer-mediated instruction’, Computers & Education, 38, pp. 221–31.

Plato (1973), Phaedrus and Letters VII and VIII, New York: Penguin Classics.

Postman, Neil (1985), Amusing Ourselves to Death, New York: Viking Penguin Inc.

____ (1992), Technopoly: The Surrender of Culture to Technology, New York: Vintage.

____ (1995), The End of Education: Redefining the Value of School, New York: Vintage.

____ (2009), ‘What is media ecology?’, http://www.media-ecology.org/media_ecology/index.html#What is Media Ecology? (Neil Postman). Accessed 22 June 2012.

Reed, Jane and Stoll, Louise (2000), ‘Promoting organizational learning in schools – the role of feedback’, in S. Askew (ed.), Feedback for Learning, New York: RoutledgeFalmer, pp. 127–43.

Reynolds, Dana, Hunt, Stephen, Simonds, Cheri and Cutbirth, Craig (2004), ‘Written speech feedback in the basic communication course: Are instructors too polite to students?’, in S. Titsworth (ed.), Basic Communication Course Annual, vol. 16, pp. 36–71.

Rogers, Everett (1997), A History of Communication Study: A Biographical Approach, New York: The Free Press.

Rosenshine, Barak (1976), ‘Recent research on teaching behaviors and student achievement’, Journal of Teacher Education, 27:1, pp. 61–64.

Roxburgh, Andrew (2004), ‘The impact of positive and negative feedback in insight problem solving’, http://www.cosc.canterbury.ac.nz/research/reports/HonsReps/2004/hons_0410.pdf. Accessed 2 May 2012.

Russell, Bruce (1992), ‘The effects of computer-generated instructional feedback and videotape on the speaking performance of college students in a basic speech course’, Speech Communication Association Conference, 29 October–1 November, Chicago, IL.

Sandler, Bernice (1991), ‘Women faculty at work in the classroom, or, why it still hurts to be a woman in labor’, Communication Education, 40:1, pp. 6–15.

Schramm, Wilbur (1977), Big Media Little Media: Tools and Technologies for Instruction, Beverly Hills: Sage Publications, Inc.

Seiler, William (1988), Introduction to Speech Communication, Glenview, IL: Scott, Foresman and Company.

Shalizi, Cosma (1997), ‘W. Ross Ashby’, http://cscs.umich.edu/~crshalizi/notebooks/ashby.html. Accessed 10 December 2002.

Skinner, B. F. (1938), The Behavior of Organisms: An Experimental Analysis, New York: Appleton-Century-Crofts.

Smith, Glenn, Ferguson, David and Caris, Mieke (2001), ‘Teaching college courses online vs face-to-face’, http://thejournal.com/Articles/2001/04/01/Teaching-College-Courses-Online-vs-FacetoFace.aspx?Page=1. Accessed 3 November 2012.

Snyder, Chris and Cowles, Chris (1979), ‘Impact of positive and negative feedback based on personality and intellectual assessment’, Journal of Consulting and Clinical Psychology, 47:1, pp. 207–09.

Trotter, Andrew (2002), ‘States testing computer-scored essays’, Education Week, 21:38, pp. 1–14.

Tunstall, Pat and Gipps, Caroline (1996), ‘Teacher feedback to young children in formative assessment: A typology’, British Educational Research Journal, 22:4, pp. 389–404.


Tzetzis, George, Votsis, Evandros and Kourtessis, Thomas (2008), ‘The effect of different corrective feedback methods on the outcome and self confidence of young athletes’, Journal of Sports Science and Medicine, 7:3, pp. 371–78.

White, Ken and Weight, Bob (2000), The Online Teaching Guide: A Handbook of Attitudes, Strategies, and Techniques for the Virtual Classroom, Needham Heights, MA: Allyn & Bacon.

Wiener, Norbert (1954), The Human Use of Human Beings, Boston: Houghton Mifflin.

Winninger, John (2003), Feedback in Learning and Performance Situations (videorecording), Indiana University, Department of Recreation and Park Administration, HPER Building, Room 133, Bloomington.

Suggested citation

Massie, K. (2012), ‘Feedback: The impact of textual channel variance on student learning’, Explorations in Media Ecology, 11:2, pp. 139–58, doi: 10.1386/eme.11.2.139_1.

Contributor details

Keith Massie (Ph.D., University of Utah) is an assistant professor and programme coordinator at Louisiana State University at Alexandria. He has published in a number of peer-reviewed journals, has a book chapter in Online Gaming in Context: The Social and Cultural Significance of Online Games, and is author of Communication Connections: From Aristotle to the Internet (Kendall Hunt, 2013), which provides a brief introduction to media ecology within the chapter on media history. The current article is a revision of his master’s thesis, which was directed by Professor Patrick O’Sullivan at Illinois State University. In addition, it was presented on a media ecology panel at NCA (2006). He would like to thank the reviewers of EME for assisting in crafting it into its present form.

Contact: 103 Pine Lake Drive, Pineville, LA 71360, USA.
E-mail: [email protected]

Keith Massie has asserted their right under the Copyright, Designs and Patents Act, 1988, to be identified as the author of this work in the format that was submitted to Intellect Ltd.
