Page 1: Tutor versus Computer

10/22/22

Tutor versus computer: A prospective comparison of interactive tutorial and computer-assisted instruction in radiology education

Gillian B. Lieberman, M.D.
Richard G. Abramson, M.D.
Kevin Volkan, Ed.D., Ph.D., M.P.H.
Patricia McArdle, Ed.D.

Objective. The purpose of this study is to compare the educational effectiveness of interactive tutorial with interactive computer-assisted instruction (CAI) and to determine the effects of personal preference, learning style, and/or level of training. To our knowledge, this is the first such study comparing these formats and the first in which the CAI being assessed was created by the sole tutor presenting the comparison.

Materials and methods. Fifty-eight participants were prospectively randomized to receive instruction from different sections of an interactive tutorial and an interactive CAI module. In a modified crossover design, students switched formats halfway through, learning different subject matter in the two halves. Students took tests of factual knowledge at the beginning and end of the course, and a test of visual diagnosis at the end. Students completed questionnaires to objectively evaluate their preferred learning styles and to subjectively elicit their attitudes toward the two formats. Mean scores between the tutorial and CAI groups were compared by analysis of covariance (ANCOVA) and by two-tailed repeated measures F-test.

Results. Both the tutorial and CAI groups demonstrated significant improvement in post-test scores (p<.001 for both), with the tutorial group's mean post-test score marginally but significantly higher (32.84 vs. 28.13, p<.001). There were no significant interaction effects with students' year of training (p=.845), objectively-evaluated preferred learning style (p=.312), or subjectively-elicited attitude toward CAI learning (p=.703), and no significant difference on the test of visual diagnosis (tutorial=7.61, CAI=7.75; p=.79).

Conclusion. Tutorial-based learning and optimally tailored CAI are both effective instructional formats. Interactive tutorial was marginally but significantly superior for teaching factual knowledge. This superiority will increase where CAI modules are constructed in commercially expedient "shell" formats. CAI technology should be endorsed selectively, based on the educational goal and on evidence for context-specific value.

Tutor versus computer: A prospective comparison of interactive tutorial and computer-assisted instruction in radiology education

Introduction

Explosive advances in computer technology, along with escalating time and cost pressures on educators, have ignited numerous attempts to integrate computer-assisted instruction (CAI) into the medical curriculum. However, despite the rapid proliferation of instructional software packages, it has been difficult to measure CAI's effectiveness in head-to-head comparisons with more traditional teaching formats. Countless uncontrolled studies have shown that students do learn well with CAI [1, 2], but controlled studies are often subject to an important methodological limitation—if the CAI and "control" modules differ with respect to authors, content, or even style, then the true comparison of teaching formats becomes confounded and therefore invalid [3].

To address this important objection, we designed a prospective, randomized comparison of CAI and interactive tutorial in which both the CAI module and tutorial were designed, produced, and taught by the same teacher in the same style, using the same cases, the same images, and the same explanations. To our knowledge, this is the first comparison of teaching formats in the radiology literature explicitly designed to minimize confounding effects. This is also the first study to compare CAI with an interactive tutorial, rather than with more passive learning formats such as textbooks [4, 5] or lectures [6,7,8].

Methods

This study was performed at Beth Israel Deaconess Medical Center (Boston, MA) from July 1997 through December 1997. A total of 61 students and first-year residents rotated through the radiology department during this period. Three residents were excluded for failure to complete the post-test within the given time frame of 5 days post-intervention, leaving a total of 58 participants (26 3rd-year and 17 4th-year students on core rotation, 11 4th-year students taking an "advanced" radiology rotation, and 4 1st-year residents).

CAI Program

An interactive CD-ROM teaching module was developed by G.B.L. in collaboration with the Harvard Medical School Beth Israel Deaconess Medical Center Mt. Auburn Institute for Research and Education. The topic "Aberrant Air in the Abdomen", which refers to air outside of the tubular gastrointestinal tract, was chosen because its prompt recognition by practicing physicians is deemed essential and potentially life-saving. The CD-ROM was designed to mirror the informational content, educational methods, and presentation style of G.B.L.'s interactive tutorials on aberrant air in the abdomen, which she has conducted for several years as Director of Medical Student Education in the Department of Radiology.

In order to recognize the abnormal, the student needs to be comfortable evaluating the normal. The program therefore covers the basics of plain film radiography and patient positioning and details the principles of and approach to reading plain abdominal films. A brief introduction to ultrasound, CT, and MRI is included, and the basics of contrast agents, both oral and intravenous, are introduced. As radiology is "living anatomy" and visual computer technology facilitates anatomic teaching, extensive anatomy "break-aways" are presented, appropriately correlated to the abnormal radiographs. The radiologic menu of tests available to image suspected aberrant air is introduced, and their efficacious ordering is stressed throughout.

The aberrant air collections are presented as different case-based problems, and differential diagnoses for all are listed. The topics covered include pneumoperitoneum; pneumoperitoneum "fake-outs", e.g., Chilaiditi's syndrome; hydropneumoperitoneum; abdominal abscess; pneumatosis coli; air in the portal venous system; air in the biliary tract; emphysematous cholecystitis; emphysematous nephritis; emphysematous cystitis; vesico-enteric fistula; pneumoretroperitoneum; and subcutaneous emphysema. All are shown on plain films first, and most are also shown on CT. Ultrasound is often performed for right upper quadrant pain, and therefore biliary tract air, emphysematous cholecystitis, and right-sided emphysematous nephritis are shown by ultrasound as well.

The entire program is narrated by the author, and the student is encouraged to learn the material in sequence rather than "skipping around," as the material is designed to build skills frame by frame. It is presented in the form of a tutorial emphasizing many teaching "pearls," and later cases presume knowledge gained from earlier cases. The module is highly interactive, with questions posed throughout. The question format is varied to maintain optimal engagement and includes choosing from a list presented, typing in an answer de novo, or merely waiting for the verbal answer. The students are required to name the exam and point out the abnormal air collections for all cases. All positive findings are then magnified, highlighted, and/or outlined and described verbally. Teaching points are displayed in text alongside the image. This text builds a paragraph at a time to coincide with the film findings; all text relevant to the particular case is displayed by the end of the case. There are 33 case-based problems, grouped into 6 minisections, and 7 video clips of the author: an introduction, five reviews, and a conclusion. A review follows each minisection, helping to consolidate knowledge by enunciating salient similarities and differences and reviewing the principal images. There are approximately 100 new images, in excess of 200 annotated or magnified images, 36 anatomic line drawings, and 6 animated drawings. The module took over a year to produce and was extremely time intensive for both the author and the Institute's computer specialists.

The program was created using Apple Macintosh Power PCs (an 8500, a 9500, and a 9600) and cross-platform tested on an HP Vectra XM Pentium PC. The Macs had 100 MB of RAM and 180 and 200 MHz processors. The PC had 32 MB of RAM with a 133 MHz processor. Primary applications included Macromedia Director 5.0, Adobe Photoshop 3.1, Adobe Premiere 4.2, Macromedia SoundEdit 16, and Adobe Illustrator 6.0. Video capture and digitizing hardware included a Truevision Targa 2000 and an HP ScanJet 4C/T scanner. Basic requirements for running the program are a Mac Power PC with 24 MB RAM and a 100 MHz or faster processor.

The Tutorial

The same content was covered during two 2-hour interactive tutorials. The basics of plain film radiography, patient positioning, and systematic evaluation of the normal abdominal film were covered in detail. Principles of CT, MRI, US, and contrast agents were introduced. Efficacious test ordering was stressed. The exact same X-ray images were used for the 33 case-based problems, magnified and projected with positive findings carefully outlined. These were presented in the same order as in the computer program, grouped in the same minisections, and followed by the corresponding reviews. Identical differential diagnosis lists and anatomic teaching points were highlighted throughout.

Facilities

The Harvard Medical School Beth Israel Deaconess Medical Center Institute for Research and Education has impressive facilities and allocated 4 to 9 computers for student use for this program. These were located in a quiet, spacious computer lab and received maintenance prior to use. The material was available only on these computers for the five-day test period and was then made unavailable until the next test group arrived.

Study Design

Students were randomized to receive instruction first by either G.B.L.'s traditional interactive tutorial (n=28) or by the interactive CD-ROM (n=30). Before receiving instruction, all students completed a written pre-test of factual knowledge. Students also completed a questionnaire (Figure 1) to determine their score on the Preferred Learning Style Index [9], an instrument developed at the University of Wisconsin to categorize students as either "receptive learners" or "discovery learners." A receptive learning style is defined as "characteristic of a learner who prefers to encounter the entire content of what is to be learned in a final form," while a discovery style is "characteristic of a learner who prefers to encounter content unstructured or structured in a limited way, to gather information, and determine how one list of information is related to another by getting accurate responses to what he thinks the meaning is."

In a modified cross-over design, students completed half of the aberrant air course in the format to which they were originally assigned, and then switched teaching formats for the remainder of the course. Students were taught different subject material in each half of the course. At the end of the course, students repeated the same written test to assess factual knowledge. They also took a test of visual diagnosis and completed questionnaires asking, for both the tutorial and CAI formats, "Was this the way you like to learn?"

The written test of factual knowledge was constructed in 2 equal halves, which were not apparent to the learner but allowed for separate grading. In this way, students earned pre-test and post-test scores corresponding to the course halves in which they received either tutorial or CAI instruction.
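The half-by-half bookkeeping described above can be sketched in code. This is a hypothetical illustration, not the study's actual scoring procedure: the function name, format labels, and scores are all invented.

```python
# Sketch of the crossover scoring: each student starts in one format and
# switches halfway, so each half of the written test maps to the format
# the student used for that half. All names and numbers are hypothetical.

def format_scores(first_format: str, half1_score: int, half2_score: int) -> dict:
    """Map the two test-half scores to the format used for each half."""
    second_format = "CAI" if first_format == "tutorial" else "tutorial"
    return {first_format: half1_score, second_format: half2_score}

# A student randomized to tutorial first: half 1 was taught by tutorial,
# half 2 by CAI after the crossover.
scores = format_scores("tutorial", 16, 14)
print(scores)  # {'tutorial': 16, 'CAI': 14}
```

In this way, pooling these per-format scores across all students yields the tutorial and CAI pre-test and post-test means compared in the analysis.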

Statistical analysis

For the test of factual knowledge, a repeated measures F test (ANOVA) with a Bonferroni correction was used to compare pre- and post-test scores. This comparison was performed for the tutorial and CAI groups independently.
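A Bonferroni correction simply divides the overall significance level by the number of comparisons performed. As a minimal sketch (the comparison count here is illustrative, not taken from the paper):

```python
def bonferroni_alpha(alpha: float, n_comparisons: int) -> float:
    """Per-comparison significance threshold under a Bonferroni correction."""
    return alpha / n_comparisons

# E.g., two pre/post comparisons (tutorial group and CAI group) at an
# overall alpha of .05 each get tested against a stricter threshold:
print(bonferroni_alpha(0.05, 2))  # 0.025
```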

In order to make a valid comparison between tutorial and CAI groups, a repeated measures analysis of covariance (ANCOVA) model [10] was employed. This design was chosen over a gain scores approach to better control for prior knowledge and regression effects [11]. Tutorial and CAI post-test scores were used in a general linear model (GLM) repeated measures procedure as the within-subjects variables. Bonferroni correction was applied to the comparison of the main effects. Between-subjects factors included in the model were as follows: year of training (3rd-year, 4th-year, or resident), score on the Preferred Learning Style Index (either "receptive" or "discovery"), expressed enjoyment of tutorial-based learning ("yes" or "no"), and expressed enjoyment of CAI learning ("yes" or "no"). All two-way interactions with the within-subjects variables were considered. Pre-test scores were used as covariates.

For the test of visual diagnosis, a repeated measures F test (ANOVA) was used to examine the relationship between tutorial and CAI formats. A repeated measures ANOVA was used due to the within-subjects correlation between tutorial and CAI scores. A repeated measures ANCOVA design was not used, as there was no visual diagnosis pre-test to use as a covariate.

All analyses were performed using the Statistical Package for the Social Sciences (SPSS) version 9.0 (SPSS, Inc., 1999). Significance levels were set at .05.
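The within-subjects comparisons above were run in SPSS. As a rough, stdlib-only analogue of the core idea, a paired t statistic on each student's two post-test scores (not the paper's repeated-measures ANCOVA, and with invented scores) can be computed as:

```python
import math
from statistics import mean, stdev

def paired_t(xs: list, ys: list) -> float:
    """Paired (within-subjects) t statistic: same students, two formats."""
    diffs = [x - y for x, y in zip(xs, ys)]  # per-student score differences
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))

# Hypothetical post-test scores for five students under each format;
# a positive t favors the tutorial format.
tutorial = [33, 31, 34, 30, 32]
cai = [28, 29, 30, 27, 28]
t = paired_t(tutorial, cai)
```

The ANCOVA used in the study additionally adjusts these within-subjects comparisons for pre-test scores, which a plain paired test does not do.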

Results

Study variables

All continuous variables in the study were evaluated for normality and found to be acceptable. For the ANCOVA, Box's and Levene's tests indicated that the assumption of equality of covariances across groups could not be rejected. The assumption of sphericity also appeared valid. Table 1 shows the univariate breakdown of student characteristics as ascertained by questionnaire.

Test of factual knowledge: Pre-test and post-test scores

Students randomized to tutorial-based instruction first improved their scores on the test of factual knowledge from a mean of 11.89 (sd=.632) to a mean of 32.10 (sd=.576) (F=621.00, p<.001). Students randomized to CAI first improved their scores from a mean of 10.78 (sd=.739) to a mean of 30.50 (sd=.676) (F=559.19, p<.001).

Test of factual knowledge: Tutorial versus CAI

The repeated measures ANCOVA revealed a significant main effect difference between tutorial and CAI post-test scores, with average tutorial scores being higher than CAI scores after adjustment for pre-test scores (32.84 vs. 28.13, p<.001). These results are presented in detail in Table 2.

Table 3 shows interaction effects between teaching format and the between-subjects independent variables. There were no significant interaction effects attributable to students' year of training (p=.845), objectively-evaluated preferred learning style (p=.312), or subjectively-elicited attitude toward CAI learning (p=.703). However, there was a significant interaction effect attributable to students' attitude toward tutorial learning, in that students who expressed a dislike for tutorial learning were more likely to perform worse on the CAI test (p=.005).

Test of visual diagnosis

Repeated measures F test (ANOVA) showed no significant difference in visual diagnosis test scores between the tutorial and CAI groups (mean difference=.136, SE=.515, p=.792). These results are presented in Table 4.

Discussion

References to CAI first appeared in the general medical literature some 25 years ago [12], and in the radiology literature during the early 1980s [13]. Soon thereafter, researchers and educators were calling for diagnostic radiology to assume a leadership role in the development and implementation of CAI [14], noting that radiology education depends on repeated exposure to visual images and therefore lends itself quite naturally to the use of computer aids [15]. Over the past 10 years, diagnostic radiology has developed an extensive literature on CAI, including both descriptive articles on CAI development [2,16,17,18] and attempts to evaluate CAI's educational effectiveness [4,5,6,7,8]. Meanwhile, CAI's popularity has soared, due in part to the proliferation of computers, the rise of the Internet, advancements in technological capabilities, declining hardware and software costs, pressures to economize on faculty teaching time, and the belief among educators that CAI is as good as if not better than more traditional teaching formats [19].

In the face of this trend, however, the general medical literature has begun to reexamine the quality of research into CAI as an educational tool [3,19,20]. While early studies had suggested that CAI was equal or superior to other formats [21], subsequent review revealed significant methodological defects, and much of the early literature was deemed inconclusive. The primary objection to these early studies related to the issue of confounding—in this case, the failure to control for differing informational content, educational methods, or presentation styles when comparing instructional formats [3]. For example, if one were to compare a CAI module with a lecture, one would have to ensure that both interventions presented the same information (e.g., using the same cases), the same methods (e.g., asking questions, giving feedback), and the same style (e.g., using a top-down format for differential diagnoses) in order to conclude that any observed difference in educational outcomes was attributable solely to the choice of instructional format.

We designed our study specifically to address this issue. Our CAI and interactive tutorial modules were designed, produced, and taught by the same teacher, using the same cases, the same images, and the same explanations. With respect to content, methods, and style, the only aspects of the two interventions not exactly matched were those that were inextricably tied to the choice of format and were therefore impossible to duplicate. For example, students in the tutorial group heard other students asking questions and having their questions answered. This exposure could properly be considered an "educational method," but it is one that could not be reproduced in the CD-ROM format. Therefore, one could attribute any effect from this unmatched educational method to the format itself, and it would not be considered a confounding effect [20].

Our most meaningful result was that students learned, no matter which format they used. This finding is consistent with multiple studies in the past, both uncontrolled [1,2] and controlled [6,4,5], which have shown CAI to be a viable educational tool. We conclude from this result that CAI is a reasonable substitute for interactive tutorial learning under most circumstances.

However, we also found upon direct comparison of post-test scores that students randomized to the tutorial group performed marginally but significantly better than students in the CAI group. This finding is consistent with recent comparative trials from other specialties that have shown a slight but significant advantage to human teaching, especially in settings emphasizing problem solving [22] and/or the development of technical skills [23]. In the radiology literature, most trials have suggested that CAI and human teaching are essentially equivalent, but these studies mostly compared CAI with passive learning formats such as textbooks [4,5] or lectures [6,7,8]; it is possible that the tutorial's superiority in our study was due in part to its interactive nature. It is also possible that other studies have lacked sufficient power to detect small differences in outcome between tutor and computer.

The supposed advantages and disadvantages of CAI (Figure 2) have been discussed elsewhere at length [2,6,24,25,26,27]. Since our study design minimized differences in content, methods, and style between the tutorial and the CAI module, our proposed explanations for the discrepancy in outcomes relate to the formats themselves. Possible reasons for the tutorial's superior performance include the following:

Students in the tutorial did not have the option to skip material, and were forced to learn the subject matter in its entirety.

Students in the tutorial received specific answers to all of their own questions, while the CD-ROM gave only answers to programmed questions.

Students in the tutorial benefited from hearing other students' questions and answers.

The mild pressure of the tutorial "hot seat" promoted concentration and peak performance.

The tutorial leader was able to include anecdotes, arising from tutorial comments, that reinforced learning.

Students may have learned better in the tutorial due to a vaguely-defined benefit from human interaction.

Again, we feel these factors represent true potential advantages to the tutorial format itself, rather than confounding effects that could have been controlled by duplication within the CAI module.

The tutorial leader in our study has been formally recognized, via teaching awards, as one of the better teachers at our medical school. We have not included this as a significant factor contributing to the tutorial's superiority, as any such effect should have been minimized by the fact that the same teacher designed and produced the CD-ROM.

Given a difference in educational effectiveness between CAI and tutorial-based learning, it would be helpful to know whether this effect was more pronounced for certain subgroups of students, as such knowledge might facilitate triage of students into appropriate learning environments. We therefore looked for covariance between students' post-test scores and a variety of independent variables, including year of training, objective assessment of students' preferred learning style, and students' subjective preference for CAI and/or tutorial-based learning. We found no statistically significant interaction effects attributable to students' year of training, score on the Preferred Learning Style Index, or stated preference for CAI learning. Therefore, our study does not provide any evidence to support the following hypotheses: (a) that students learn any better or worse from computers as they progress in their clinical education, (b) that "discovery" learners differ from "receptive" learners in how well they learn from computers versus tutorials, (c) that students who subjectively prefer computers learn any better from them than from tutorials, or (d) that students who specifically dislike computers learn any less from them than from tutorials.

We did find one significant interaction effect: students who stated that they did not like to learn in a tutorial format performed somewhat worse on the CD-ROM post-test (Figure 3). This result, at first somewhat counterintuitive, has two possible explanations. First, the students who expressed a dislike for tutorial-based learning may have represented the more confident students in the population: those resenting the tutorial for its slow, methodical pace and wanting to control the pace of their own learning. These students may have rushed through the CD-ROM, missing important material, which lowered their CD-ROM test score; conversely, they would have been forced to complete the entire tutorial, thus learning more and performing better on that part of the test. The second (and perhaps more likely) explanation is that this effect was simply due to chance, as only 5 of the 58 students said they did not like learning in a tutorial format.

It is interesting to note that more students said that they liked tutorial learning than said that they liked CAI learning. This finding contradicts early predictions that CAI learning would be more fun and more enjoyable, and hence more motivating, than traditional learning formats. It is possible that computer-based learning has begun to lose some of its novelty value now that we are firmly entrenched in the Internet Age. Our findings may reflect students who, accustomed to daily computer use, prefer human attention and may even resent being handed off to impersonal computer terminals. Additionally, our students are comfortable with small-group interactive tutorial teaching; it is considered most effective by our medical school and has largely replaced lectures in our "New Pathways" curriculum.

Our study may very well underestimate the tutorial's advantage. First, our study took place at a highly selective medical school that employs a curriculum emphasizing self-motivation and independent learning. It is possible that, by virtue of their familiarity with other similar educational formats, our students extracted more benefit from the CD-ROM than other students would have. This effect, if real, would imply that our results somewhat understate the tutorial's superiority.

Second, the CAI module we developed was highly sophisticated, without a uniform shell but with every portion developed to maximize learning. It took the author and the computer programmer from the Harvard Medical School Beth Israel Deaconess Medical Center Mt. Auburn Institute for Research and Education a year to develop. The Institute is largely education driven rather than solely a commercial venture; thus this project was educationally optimal but not commercially feasible. Standard publishers are unlikely to replicate it: the publisher has an incentive to standardize the CAI program to limit production costs, despite the recognition that the student benefit decreases with loss of variety. We would expect the tutorial's superiority to be even more pronounced in a head-to-head comparison between tutorials and most commercially available programs.

Although our results support the view that tutorial-based learning remains the gold standard, four important points must be made. First, while the difference between tutorial and CAI performance in our study was statistically significant, it was not necessarily pedagogically significant. That is, while the tutorial group performed better on a test, it is unclear whether the discrepancy in knowledge gained would translate into a meaningful difference in clinical skills. We can reasonably conclude, however, that tutorial-based learning was superior to CAI in this setting for transferring factual, testable knowledge to students.

Second, instructional formats are chosen not only to maximize educational benefit but also to fit the practical scheduling and availability of teaching faculty, with an eye to cost containment [27,28]. Our study did not include a detailed cost-effectiveness analysis, but important factors to consider would include the substantial costs of CAI development and implementation [6,29] and any potential efficiency savings from decreased faculty teaching time. Cost-effectiveness considerations have become more important in recent years as financial pressures on academic medical centers have forced the search for less staff-intensive teaching modalities; evidence of CAI's cost-effectiveness would be a boon to centers looking to free up valuable clinical resources without sacrificing the quality of student education.

Third, we found no statistically significant difference between tutorial-based learning and CAI on the test of visual diagnosis; in fact, CAI had a slight, non-significant advantage. This finding supports the intuitive theory that these two formats offer different educational advantages. It is possible that CAI is as good as, or perhaps better than, tutorial-based learning at presenting images and reinforcing associative recall from visual stimuli. Our results support the obvious but important point that the instructional format should be chosen with the ultimate learning goal in mind. Further research will help delineate the specific subcategories of learning (e.g., factual knowledge acquisition versus visual diagnosis skills) where CAI's relative strengths are best utilized. Visual diagnosis relies on excellent image resolution on computers and adequate image projection during tutorials, both of which were optimal in our study.

Fourth, we did not formally measure the time spent learning by students in the two groups, but verbal feedback indicated that they spent less time on the CAI module. This is compatible with other studies suggesting that even if students learn no more with CAI than with other formats, they expend less study time with CAI [30,31]. This efficiency gain would ideally be factored into a comprehensive cost-effectiveness analysis.

Due to the short time horizon of our study, we are unable to draw any conclusions regarding long-term retention rates. Other researchers have produced preliminary data on long-term knowledge retention [7,8], but regression-to-the-mean effects make these studies difficult to interpret [8].

In summary, our study demonstrated that both interactive tutorial and interactive CAI are effective teaching formats. Tutorial was marginally but significantly more effective at teaching factual knowledge, and this effect was unrelated to students' year of training, objective learning style, or stated enjoyment of CAI learning. Since the difference between instructional formats was of unclear pedagogical significance, optimally crafted CAI might still be a valid alternative for teaching core topics; a cost-effectiveness analysis considering the costs of CAI development, potential savings from liberated faculty time, and efficiency gains from decreased student learning time would be beneficial. A test of visual knowledge showed no difference between CAI and tutorial, illustrating that different formats have different advantages in different settings. This last point will be important to remember as CAI technology continues to develop, embracing the powerful capabilities of hand-held computing and the Internet [32]. We must take care to endorse CAI technology selectively, based on evidence of value in different contexts, rather than viewing CAI as an educational panacea.

We are grateful to Ms. Beverlee Turner for her valuable assistance in researching and preparing this manuscript.


References

1. Chew FS, Smirniotopoulos JG. Teaching skeletal radiology with use of computer-assisted instruction with interactive videodisc. J Bone Joint Surg Am 1995; 77(7):1080-1086.
2. Kuszyk BS, Calhoun PS, Soyer PA, Fishman EK. An interactive computer-based tool for teaching the segmental anatomy of the liver: usefulness in the education of residents and fellows. AJR 1997; 169(3):631-634.
3. Clark RE. Dangers in the evaluation of instructional media. Acad Med 1992; 67:819-820.
4. Chew FS, Smirniotopoulos JG. Educational efficacy of computer-assisted instruction with interactive videodisc in radiology. Invest Radiol 1993; 28:1052-1058.
5. Chew FS, Stiles RG. Computer-assisted instruction with interactive videodisc versus textbook for teaching radiology. Acad Radiol 1994; 1(4):326-331.
6. Jacoby CG, Smith WL, Albanese MA. An evaluation of computer-assisted instruction in radiology. AJR 1984; 143(3):675-677.
7. Erkonen WE, D'Alessandro MP, Galvin JR, Albanese MA, Michaelsen VE. Longitudinal comparison of multimedia textbook instruction with a lecture in radiology education. Acad Radiol 1994; 1(3):287-292.
8. D'Alessandro DM, Kreiter CD, Erkonen WE, Winter RJ, Knapp HR. Longitudinal follow-up comparison of educational interventions: multimedia textbook, traditional lecture, and printed textbook. Acad Radiol 1997; 4:719-723.
9. Stone HL, Meyer TC, Schilling R. Alternative medical school curriculum design: the independent study program. Med Teach 1991; 13(2):149-156.
10. Norman GR, Streiner DL. Biostatistics: the bare essentials. Hamilton, Ontario: BC Decker, 1998.
11. Cronbach LJ, Furby L. How we should measure "change" - or should we? Psychol Bull 1970; 74:68-80.
12. Murray TS, Cupples RW, et al. Computer-assisted learning in medical education. Lancet 1976; 1:474-476.
13. Renfrew DL, El-Khoury GY, Jacoby CG. Computer-assisted instruction in radiology. Radiology 1982; 143(2):574.
14. Tessler FN. Computer applications in radiology education: a challenge for the 1990s. AJR 1989; 152(6):1169-1172.
15. Jaffe CC, Lynch PJ. Computer-aided instruction in radiology: opportunities for more effective learning. AJR 1995; 164(2):463-467.
16. D'Alessandro MP, Galvin JR, Erkonen WE, Santer DM, Huntley JS, McBurney RM, Easley G. An approach to the creation of multimedia textbooks for radiology instruction. AJR 1993; 161:187-191.
17. Chew FS. Interactive videodisc for teaching radiology. AJR 1994; 162(4):987-991.
18. Calhoun PS, Fishman EK. Interactive multimedia program for imaging the spleen: concept, design, and development. Radiographics 1994; 14(6):1407-1414.
19. Keane DR, Norman GR, Vickers J. The inadequacy of recent research on computer-assisted instruction. Acad Med 1991; 66:444-448.
20. Friedman CP. The research we should be doing. Acad Med 1994; 69(6):455-457.
21. Cohen PA, Dacanay LS. Computer-based instruction and health professions education: a meta-analysis of outcomes. Eval Health Prof 1992; 15:259-281.
22. Lee CSC, Rutecki GW, Whittier FC, Clarett MR, Jarjoura D. A comparison of interactive computerized medical education software with a more traditional teaching format. Teach Learn Med 1997; 9(2):111-115.
23. Rogers DA, Regehr G, Yeh KA, Howdieshell TR. Computer-assisted learning versus a lecture and feedback seminar for teaching a basic surgical technical skill. Am J Surg 1998; 175(6):508-510.
24. Park C. Computer assisted instruction in medical education. Ann R Coll Physicians Surg Can 1981; 14:374-377.
25. Piemme TE. Computer-assisted learning and evaluation in medicine. JAMA 1988; 260:367-372.
26. Jaffe CC, Lynch PJ. Computer-aided instruction for radiologic education. Radiographics 1993; 13:931-937.
27. Glenn J. A consumer-oriented model for evaluating computer-assisted instructional materials for medical education. Acad Med 1996; 71:251-255.
28. Levin H. Cost-effectiveness of computer-assisted instruction: some insights. Stanford, CA: Stanford Univ. Press, 1986.
29. Bramble JM. Hypermedia techniques for diagnostic imaging instruction. Radiology 1989; 173(3):878.
30. Marion R, Niebuhr BR, Petrusa ER, Weinholtz D. Computer-based instruction in basic medical education. J Med Educ 1981; 57:521-526.
31. Lyon HC, Healy JC, Bell JR, O'Donnell JF, Shultz EK, Moore-West M, Wigton RS, Hirai F, Beck JR. PlanAlyzer, an interactive computer-assisted program to teach clinical problem solving in diagnosing anemia and coronary artery disease. Acad Med 1992; 67:821-828.
32. Scalzetti EM. Radiology teaching-file cases on the World Wide Web: a second-generation viewer. AJR 1997; 168(3):615-617.


Table 1: Student characteristics as assessed by questionnaire

Characteristic                  n (percent)
Year of training
  3rd-year medical student      26 (45)
  4th-year medical student      28 (48)
  1st-year resident              4 (7)
  TOTAL                         58 (100)
Learning style
  Receptive                     29 (50)
  Discovery                     29 (50)
  TOTAL                         58 (100)
Liked learning by CD-ROM?
  Yes                           37 (64)
  No                            16 (28)
  Missing*                       5 (9)
  TOTAL                         58 (100)
Liked learning by tutorial?
  Yes                           51 (88)
  No                             5 (9)
  Missing*                       2 (3)
  TOTAL                         58 (100)

* Unsure or unanswered.


Table 2a: Main effects, tutorial vs. CAI scores (ANCOVA)

Format     Mean(a)   Std. Error   95% CI Lower Bound   95% CI Upper Bound
CD-ROM     28.127    1.308        25.493               30.761
Tutorial   32.842    1.018        30.791               34.893

a Adjusted for between-subjects factors and model covariates.


Table 2b: Main effects, tutorial vs. CAI scores (ANCOVA): pairwise comparisons

(I) Format   (J) Format   Mean Difference (I-J)*   Std. Error   Sig.(a)   95% CI(a) Lower Bound   95% CI(a) Upper Bound
CD-ROM       Tutorial     -4.715                   1.358        .001      -7.450                  -1.980
Tutorial     CD-ROM        4.715                   1.358        .001       1.980                   7.450

Based on estimated marginal means.
* The mean difference is significant at the .05 level.
a Adjustment for multiple comparisons: Bonferroni.
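The interval reported in Table 2b can be sanity-checked from the reported mean difference and standard error. This is a minimal sketch, not part of the original analysis; it assumes the 45 error degrees of freedom reported in Table 3, and notes that with a single pairwise comparison the Bonferroni adjustment leaves the usual two-tailed t interval unchanged.

```python
# Sanity check of Table 2b: 95% CI = mean difference +/- t_crit * SE.
# Assumes df = 45 (the Error df reported in Table 3); with one pairwise
# comparison the Bonferroni correction does not alter alpha.
mean_diff = 4.715   # tutorial minus CD-ROM, from Table 2b
std_err = 1.358     # standard error of the difference, from Table 2b
t_crit = 2.0141     # two-tailed t critical value, alpha = 0.05, df = 45

lower = round(mean_diff - t_crit * std_err, 3)
upper = round(mean_diff + t_crit * std_err, 3)
print(lower, upper)  # agrees with the reported interval (1.980, 7.450)
```

The agreement between the recomputed and reported bounds supports the internal consistency of the table.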


Table 3: Interaction effects between teaching format and student characteristics

Source                     Type III Sum of Squares   df   Mean Square   F       Sig.   Eta Squared
Format                     104.703                   1    104.703       8.277   .006   .155
Format * Liked tutorial    112.046                   1    112.046       8.858   .005   .164
Format * Liked CD-ROM        1.868                   1      1.868        .148   .703   .003
Format * Learning style     13.243                   1     13.243       1.047   .312   .023
Format * Year in school       .488                   1       .488        .039   .845   .001
Format * Pretest score      32.559                   1     32.559       2.574   .116   .054
Error (Format)             569.220                   45    12.649


Table 4a: Comparison of scores on test of visual knowledge

Format     Mean    Std. Error   95% CI Lower Bound   95% CI Upper Bound
CD-ROM     7.750   .439         6.865                8.635
Tutorial   7.614   .388         6.832                8.396


Table 4b: Comparison of scores on test of visual knowledge: pairwise comparisons

(I) Format   (J) Format   Mean Difference (I-J)   Std. Error   Sig.   95% CI Lower Bound   95% CI Upper Bound
CD-ROM       Tutorial      .136                   .515         .792    -.902                1.174
Tutorial     CD-ROM       -.136                   .515         .792   -1.174                 .902


Figure 1: Preferred Learning Style Index questionnaire

Using the following scale, write the number which best indicates your reaction to each of the following thirteen statements.

Response scale: 1  2  3  4  5  6  7

A. Attend classes and activities which involve active discussion by students.
B. Acquire information through the use of textbooks or other printed materials.
C. Plan learning goals and objectives in cooperation with the instructor.
D. Listen to lectures concerning the subject matter to be learned.
E. Utilize a variety of audio-visual materials concerning the material to be learned.


F. Work independently to accomplish the learning objectives for a course.
G. Attend classes which are directed by an instructor to accomplish the learning objectives for a course.
H. Use the library and other reference sources to gather information for a course.
I. Work in a laboratory setting with a great deal of personal activity and involvement.
J. Observe demonstrations by the instructor of the material to be learned.
K. Attend classes where student progress is frequently tested and graded by the instructor.
L. Work in classes that emphasize projects and problem-solving approaches to learning.
M. Be responsible for my own achievement with infrequent testing and grading by the instructor.


Figure 2: Advantages and disadvantages of CAI as compared with traditional teaching formats [2,6,24-27]

Advantages
- CAI can simulate a range of cases outside the institution's patient population
- CAI can teach long-term case management skills by simulating longitudinal follow-up
- Lesson content is tailored to the educational level of the student
- Student can control the pace of the lesson; promotes efficient use of time, as students can choose to skip or review material appropriately
- Absence of other students; learning environment is more uninhibited and creative, with no "performance anxiety"
- Immediate feedback; allows student to perform self-assessment
- Interactive nature promotes concentration and learning
- Novelty value may increase student motivation
- Students become adept at using medical informatics technologies
- May provide better image quality and better diagrams
- Students can choose a convenient time of day or night in which to learn
- Relieves instructor of the necessity of presenting repetitious lectures
- Computers give a consistent best effort every day, whereas a human teacher can be variable

Disadvantages
- Option to skip material may allow students to bypass material they don't know
- Computer gives only preprogrammed information; students may be left with unanswered questions
- Students do not learn from other students' questions and answers
- Students will not experience pressure to perform in front of others, which may decrease learning
- Students need to learn to negotiate the particular computer program's interface and options in order to learn optimally; students who are less facile on the computer may find this burdensome
- No reinforcement from a teacher relating true-life anecdotes and experiences
- Absence of "the human factor"; vaguely defined benefit from human interaction
- Development and implementation of a CAI tool requires substantial time and money
- Commercially expedient CAI modules often offer uniformly structured, limited teaching
- Potential for poor image quality on low-resolution computer screens
- Limited availability of workstations may cause poor access to learning


Figure 3:

Key: Liked tutorial learning; Did not like tutorial learning