10/22/22
Tutor versus computer: A prospective comparison of interactive tutorial and computer-assisted instruction in radiology education

Gillian B. Lieberman, M.D., Richard G. Abramson, M.D., Kevin Volkan, Ed.D., Ph.D., M.P.H., Patricia McArdle, Ed.D.
Objective. The purpose of this study is to compare the educational effectiveness of interactive tutorial with interactive computer-assisted instruction (CAI) and to determine the effects of personal preference, learning style and/or level of training. To our knowledge, this is the first such study comparing these formats and the first where the CAI being assessed was created by the sole tutor presenting the comparison.
Materials and methods. 58 participants were prospectively randomized to receive instruction from different sections of an interactive tutorial and an interactive CAI module. In a modified crossover design, students switched formats halfway through, learning different subject matter in the two halves. Students took tests of factual knowledge at the beginning and end of the course, and a test of visual diagnosis at the end. Students completed questionnaires to objectively evaluate their preferred learning styles and to subjectively elicit their attitudes toward the two formats. Mean scores between the tutorial and CAI groups were compared by analysis of covariance (ANCOVA) and by two-tailed repeated measures F-test.
Results. Both the tutorial and CAI groups demonstrated significant improvement in post-test scores (p<.001 for both), with the tutorial group's mean post-test score marginally but significantly higher (32.84 vs. 28.13, p<.001). There were no significant interaction effects with students' year of training (p=.845), objectively-evaluated preferred learning style (p=.312), subjectively-elicited attitude toward CAI learning (p=.703), or visual diagnosis (tutorial=7.61, CD-ROM=7.75; p=.79). Conclusion. Tutorial-based learning and optimally tailored CAI are both effective instructional formats. Interactive tutorial was marginally but significantly superior for teaching factual knowledge. This superiority will increase where CAI modules are constructed in commercially expedient "shell" formats. CAI technology should be endorsed selectively, based on the educational goal and on evidence for context-specific value.
Tutor versus computer: A prospective comparison of interactive tutorial and computer-assisted instruction in radiology education
Introduction

Explosive advances in computer technology, along with escalating time and cost pressures on educators, have ignited numerous attempts to integrate computer-assisted instruction (CAI) into the medical curriculum. However, despite the rapid proliferation of instructional software packages, it has been difficult to measure CAI's effectiveness in head-to-head comparisons with more traditional teaching formats. Countless uncontrolled studies have shown that students do learn well with CAI [1, 2], but controlled studies are often subject to an important methodological limitation: if the CAI and "control" modules differ with respect to authors, content, or even style, then the true comparison of teaching formats becomes confounded and therefore invalid [3].
To address this important objection, we designed a prospective, randomized comparison of CAI and interactive tutorial in which both the CAI module and tutorial were designed, produced, and taught by the same teacher in the same style, using the same cases, the same images, and the same explanations. To our knowledge, this is the first comparison of teaching formats in the radiology literature explicitly designed to minimize confounding effects. This is also the first study to compare CAI with an interactive tutorial, rather than with more passive learning formats such as textbooks [4, 5] or lectures [6,7,8].
Methods
This study was performed at Beth Israel Deaconess Medical Center (Boston, MA) from July 1997 through December 1997. A total of 61 students and first-year residents rotated through the radiology department during this period. Three residents were eliminated due to failure to complete the post-test within the given time frame of 5 days post-intervention, leaving a total of 58 participants (26 3rd-year students and 17 4th-year students on core rotation, 11 4th-year students taking an "advanced" radiology rotation, and 4 1st-year residents).
CAI Program

An interactive CD-ROM teaching module was developed by G.B.L. in collaboration with the Harvard Medical School Beth Israel Deaconess Medical Center Mt. Auburn Institute for Research and Education. The topic "Aberrant Air in the Abdomen", which refers to air outside of the tubular gastrointestinal tract, was chosen because its prompt recognition by practicing physicians is deemed essential and potentially life-saving. The CD-ROM was designed to mirror the informational content, educational methods, and presentation style of G.B.L.'s interactive tutorials on aberrant air in the abdomen, which she has conducted for several years as Director of Medical Student Education in the Department of Radiology. In order to recognize the abnormal, the student needs to be comfortable evaluating the normal. The program therefore covers the basics of plain film radiography and patient positioning and details the principles of and approach to reading plain abdominal films. A brief introduction to ultrasound, CT, and MRI is included. The basics of contrast agents, both oral and intravenous, are introduced. As radiology is "living anatomy" and visual computer technology facilitates anatomic teaching, extensive anatomy "break-aways" are presented, appropriately correlated to the abnormal radiographs. The radiologic menu of tests available to image suspected aberrant air is introduced, and their efficacious ordering is stressed throughout. The aberrant air collections are presented as different case-based problems, and differential diagnoses for all are listed. The topics covered include pneumoperitoneum, pneumoperitoneum "fake-outs" (e.g., Chilaiditi's syndrome), hydropneumoperitoneum, abdominal abscess, pneumatosis coli, air in the portal venous system, air in the biliary tract, emphysematous cholecystitis, emphysematous nephritis, emphysematous cystitis, vesico-enteric fistula, pneumoretroperitoneum, and subcutaneous emphysema. All are shown on plain films first, and most are also shown on CT. Ultrasound is often performed for right upper quadrant pain, and therefore biliary tract air, emphysematous cholecystitis, and right-sided emphysematous nephritis are shown by ultrasound as well.
The entire program is narrated by the author, and the student is encouraged to learn the material in sequence rather than "skipping around," as the material is designed to build skills frame by frame. It is presented in the form of a tutorial emphasizing many teaching "pearls," and later cases presume knowledge gained from earlier cases. The module is highly interactive, with questions posed throughout. The question format is varied to maintain optimal engagement and includes choosing from a presented list, typing in an answer de novo, or merely waiting for the verbal answer. The students are required to name the exam and point out the abnormal air collections for all cases. All positive findings are then magnified, highlighted, and/or outlined and described verbally. Teaching points are displayed in text alongside the image. This text builds a paragraph at a time to coincide with the film findings. All text relevant to the particular case is displayed by the end of the case. There are 33 case-based problems, which are grouped into 6 minisections. There are 7 video clips of the author: an introduction, five reviews, and a conclusion. A review follows each minisection, which helps consolidate knowledge, with salient similarities and differences enunciated and principal images reviewed. There are approximately 100 new images, in excess of 200 annotated or magnified images, 36 anatomic line drawings, and 6 animated drawings. The module took over a year to produce and was extremely time-intensive for both the author and the Institute's computer specialists.
The program was created using Apple Macintosh Power PCs (an 8500, 9500, and a 9600) and cross-platform tested on an HP Vectra XM Pentium PC. The Macs had 100 MB of RAM and 180 and 200 MHz processors. The PCs had 32 MB of RAM with 133 MHz processors. Primary applications included Macromedia Director 5.0, Adobe Photoshop 3.1, Adobe Premiere 4.2, Macromedia SoundEdit 16, and Adobe Illustrator 6.0. Video capture and digitizing hardware included a Truevision Targa 2000 and an HP ScanJet 4C/T scanner. Basic requirements for running the program are a Mac Power PC with 24 MB RAM and a 100 MHz or faster processor.
The Tutorial

The same content was covered during two 2-hour interactive tutorials. The basics of plain film radiography, patient positioning, and systematic evaluation of the normal abdominal film were covered in detail. Principles of CT, MRI, US, and contrast agents were introduced. Efficacious test ordering was stressed. The exact same X-ray images were used for the 33 case-based problems, and these were magnified and projected with positive findings carefully outlined. They were presented in the same order as the computer program, grouped in the same minisections, followed by the corresponding reviews. Identical differential diagnosis lists and anatomic teaching points were highlighted throughout.
Facilities

The Harvard Medical School Beth Israel Deaconess Medical Center Institute for Research and Education has impressive facilities and allocated 4 to 9 computers for student use for this program. These were located in a quiet, spacious computer lab and received maintenance prior to use. The material was only available on these computers for the five-day test period and was then made unavailable until the next test group arrived.
Study Design
Students were randomized to receive instruction first by either G.B.L.'s traditional interactive tutorial (n=28) or by the interactive CD-ROM (n=30). Before receiving instruction, all students completed a written pre-test of factual knowledge. Students also completed a questionnaire (Figure 1) to determine their score on the Preferred Learning Style Index [9], an instrument developed at the University of Wisconsin to categorize students as either "receptive learners" or "discovery learners." A receptive learning style is defined as "characteristic of a learner who prefers to encounter the entire content of what is to be learned in a final form," while a discovery style is "characteristic of a learner who prefers to encounter content unstructured or structured in a limited way, to gather information, and determine how one list of information is related to another by getting accurate responses to what he thinks the meaning is."
In a modified crossover design, students completed half of the aberrant air course in the format to which they were originally assigned and then switched teaching formats for the remainder of the course. Students were taught different subject material in each half of the course. At the end of the course, students repeated the same written test to assess factual knowledge. They also took a test of visual diagnosis and completed questionnaires asking, "Was this the way you like to learn?" for the tutorial and CAI, respectively.
The written test of factual knowledge was constructed in 2 equal halves, which were not apparent to the learner but allowed for separate grading. In this way, students earned pre-test and post-test scores corresponding to the course halves in which they received either tutorial or CAI instruction.
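As an illustration of how each student's half-scores map onto the crossover design, the sketch below pairs pre- and post-test scores with the format used for each course half. The field names and numbers are hypothetical, not taken from the study:

```python
from dataclasses import dataclass

@dataclass
class Student:
    first_format: str                 # "tutorial" or "cai" (randomized)
    pre_half: tuple[float, float]     # pre-test scores for course halves A and B
    post_half: tuple[float, float]    # post-test scores for course halves A and B

def scores_by_format(s: Student) -> dict[str, tuple[float, float]]:
    """Return {format: (pre, post)}; the student switches format at halftime,
    so half A is graded under the first format and half B under the second."""
    second = "cai" if s.first_format == "tutorial" else "tutorial"
    return {
        s.first_format: (s.pre_half[0], s.post_half[0]),
        second: (s.pre_half[1], s.post_half[1]),
    }

# Example: a student randomized to tutorial first
s = Student("tutorial", (6.0, 5.5), (16.0, 14.0))
print(scores_by_format(s))
```

Because every student contributes one (pre, post) pair per format, each student serves as their own control, which is what makes the within-subjects (repeated measures) analysis possible.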
Statistical analysis
For the test of factual knowledge, a repeated measures F test (ANOVA) with a Bonferroni correction was used to compare pre- and post-test scores. This comparison was performed for the tutorial and CAI groups independently.
In order to make a valid comparison between the tutorial and CAI groups, a repeated measures analysis of covariance (ANCOVA) model [10] was employed. This design was chosen over a gain-scores approach to better control for prior knowledge and regression effects [11]. Tutorial and CAI post-test scores were used in a general linear model (GLM) repeated measures procedure as the within-subjects variables. A Bonferroni correction was applied to the comparison of the main effects. Between-subjects factors included in the model were as follows: year of training (3rd-year, 4th-year, or resident), score on the Preferred Learning Style Index (either "receptive" or "discovery"), expressed enjoyment of tutorial-based learning ("yes" or "no"), and expressed enjoyment of CAI learning ("yes" or "no"). All two-way interactions with the within-subjects variables were considered. Pre-test scores were used as covariates.
For the test of visual diagnosis, a repeated measures F test (ANOVA) was used to examine the relationship between the tutorial and CAI formats. A repeated measures ANOVA was used due to the within-subjects correlation between tutorial and CAI scores. A repeated measures ANCOVA design was not used, as there was no visual diagnosis pre-test to use as a covariate.
All analyses were performed using the Statistical Package for the Social Sciences (SPSS) version 9.0 (SPSS, Inc., 1999). Significance levels were set at .05.
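The original analyses were run in SPSS; as a rough illustration only, the sketch below reproduces the core within-subjects contrast on synthetic data (all numbers are invented, not the study's data). With only two conditions, the repeated measures F test reduces to a paired t-test, with F = t^2:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)
n = 58  # participants

# Synthetic post-test scores; each student contributes one score per format.
baseline = rng.normal(11.0, 2.0, n)                  # stands in for prior knowledge
tutorial_post = baseline + rng.normal(21.0, 3.0, n)  # hypothetical tutorial gain
cai_post = baseline + rng.normal(17.0, 3.0, n)       # hypothetical CAI gain

# Paired (within-subjects) comparison of the two formats.
t, p = stats.ttest_rel(tutorial_post, cai_post)
f = t**2  # equivalent repeated measures F statistic for two conditions
print(f"t={t:.2f}, F={f:.2f}, p={p:.4f}")
```

The full ANCOVA adjusting for pre-test scores, as used in the study, could be fit with a general linear model (for example, a `statsmodels` formula such as `post ~ pre + C(format)`), but the paired contrast above captures the essential within-subjects logic.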
Results
Study variables
All continuous variables in the study were evaluated for normality and were found to be acceptable. For the ANCOVA, Box's and Levene's tests indicated that the hypothesis of equality of covariances across groups could not be rejected. The assumption of sphericity also appeared valid. Table 1 shows the univariate breakdown of student characteristics as ascertained by questionnaire.
Test of factual knowledge: Pre-test and post-test scores
Students randomized to tutorial-based instruction first improved their scores on the test of factual knowledge from a mean of 11.89 (sd=.632) to a mean of 32.10 (sd=.576) (F=621.00, p<.001). Students randomized to CAI first improved their scores from a mean of 10.78 (sd=.739) to a mean of 30.50 (sd=.676) (F=559.19, p<.001).
Test of factual knowledge: Tutorial versus CAI
The repeated measures ANCOVA revealed a significant main effect difference between tutorial and CAI post-test scores, with average tutorial scores being higher than CAI scores after adjustment for pre-test scores (32.84 vs. 28.13, p<.001). These results are presented in detail in Table 2.
Table 3 shows interaction effects between teaching format and the between-subjects independent variables. There were no significant interaction effects attributable to students' year of training (p=.845), objectively-evaluated preferred learning style (p=.312), or subjectively-elicited attitude toward CAI learning (p=.703). However, there was a significant interaction effect attributable to students' attitude toward tutorial learning, in that students who expressed a dislike for tutorial learning were more likely to perform worse on the CAI test (p=.005).
Test of visual diagnosis
The repeated measures F test (ANOVA) showed no significant difference in visual diagnosis test scores between the tutorial and CAI groups (mean difference=.136, SE=.515, p=.792). These results are presented in Table 4.
Discussion
References to CAI first appeared in the general medical literature some 25 years ago [12], and in the radiology literature during the early 1980s [13]. Soon thereafter, researchers and educators were calling for diagnostic radiology to assume a leadership role in the development and implementation of CAI [14], noting that radiology education depends on repeated exposure to visual images and therefore lends itself quite naturally to the use of computer aids [15]. Over the past 10 years, diagnostic radiology has developed an extensive literature on CAI, including both descriptive articles on CAI development [16,17,18,2] and attempts to evaluate CAI's educational effectiveness [6,4,5,7,8]. Meanwhile, CAI's popularity has soared, due in part to the proliferation of computers, the rise of the Internet, advancements in technological capabilities, declining hardware and software costs, pressures to economize on faculty teaching time, and the belief among educators that CAI is as good as if not better than more traditional teaching formats [19].
In the face of this trend, however, the general medical literature has begun to reexamine the quality of research into CAI as an educational tool [19,3,20]. While early studies had suggested that CAI was equal or superior to other formats [21], subsequent review revealed significant methodological defects, and much of the early literature was deemed inconclusive. The primary objection to these early studies related to the issue of confounding: in this case, the failure to control for differing informational content, educational methods, or presentation styles when comparing instructional formats [3]. For example, if one were to compare a CAI module with a lecture, one would have to ensure that both interventions presented the same information (e.g., using the same cases), the same methods (e.g., asking questions, giving feedback), and the same style (e.g., using a top-down format for differential diagnoses) in order to conclude that any observed differences in educational outcomes were attributable solely to the choice of instructional format.
We designed our study specifically to address this issue. Our CAI and interactive tutorial modules were designed, produced, and taught by the same teacher, using the same cases, the same images, and the same explanations. With respect to content, methods, and style, the only aspects of the two interventions not exactly matched were those that were inextricably tied to the choice of format and were therefore impossible to duplicate. For example, students in the tutorial group heard other students asking questions and having their questions answered. This exposure could properly be considered an "educational method," but it is one that could not be reproduced in the CD-ROM format. Therefore, one could attribute any effect from this unmatched educational method to the format itself, and it would not be considered a confounding effect [20].
Our most meaningful result was that students learned, no matter which format they used. This finding is consistent with multiple studies in the past, both uncontrolled [1,2] and controlled [6,4,5], which have shown CAI to be a viable educational tool. We conclude from this result that CAI is a reasonable substitute for interactive tutorial learning under most circumstances.
However, we also found upon direct comparison of post-test scores that students randomized to the tutorial group performed marginally but significantly better than students in the CAI group. This finding is consistent with recent comparative trials from other specialties that have shown a slight but significant advantage to human teaching, especially in settings emphasizing problem solving [22] and/or development of technical skills [23]. In the radiology literature, most trials have suggested that CAI and human teaching are essentially equivalent, but these studies mostly compared CAI with passive learning formats such as textbooks [4,5] or lectures [6,7,8]; it is possible that the tutorial's superiority in our study was due in part to its interactive nature. Also, it is possible that other studies have lacked sufficient power to detect small differences in outcome between tutor and computer.
The supposed advantages and disadvantages of CAI (Figure 2) have been discussed elsewhere at length [24,6,25,26,27,2]. Since our study design minimized differences in content, methods, and style between the tutorial and the CAI module, our proposed explanations for the discrepancy in outcomes relate to the formats themselves. Possible reasons for the tutorial’s superior performance include the following:
Students in the tutorial did not have the option to skip material, and were forced to learn the subject matter in its entirety.
Students in the tutorial received specific answers to all of their own questions, while the CD-ROM gave only answers to programmed questions.
Students in the tutorial benefited from hearing other students' questions and answers.
Mild pressure of the tutorial "hot seat" promoted concentration and peak performance.
The tutorial leader was able to include anecdotes arising from tutorial comments that reinforced learning.
Students learned better in the tutorial due to a vaguely-defined benefit from human interaction.
Again, we feel these factors represent true potential advantages to the tutorial format itself, rather than confounding effects that could have been controlled by duplication within the CAI module.
The tutorial leader in our study has been formally recognized, via teaching awards, as one of the better teachers at our medical school. We have not included this as a significant reason contributing to the tutorial's superiority, as this effect should have been minimized given that the same teacher designed and produced the CD-ROM.
Given a difference in educational effectiveness between CAI and tutorial-based learning, it would be helpful to know whether this effect was more pronounced for certain subgroups of students, as such knowledge might facilitate triage of students into appropriate learning environments. We therefore looked for covariance between students' post-test scores and a variety of independent variables, including year of training, objective assessment of students' preferred learning style, and students' subjective preference for CAI and/or tutorial-based learning. We found no statistically significant interaction effects attributable to students' year of training, score on the Preferred Learning Style Index, or stated preference for CAI learning. Therefore, our study does not provide any evidence to support the following hypotheses: (a) that students learn any better or worse from computers as they progress in their clinical education; (b) that "discovery" learners differ from "receptive" learners in how well they learn from computers versus tutorials; (c) that students who subjectively prefer computers learn any better from them than from tutorials; or (d) that students who specifically dislike computers learn any less from them than from tutorials.
We did find one significant interaction effect: students who stated that they did not like to learn in a tutorial format performed somewhat worse on the CD-ROM post-test (Figure 3). This result, at first somewhat counterintuitive, has two possible explanations. First, the students who expressed a dislike for tutorial-based learning may have represented the more confident students in the population: those resenting the tutorial for its slow, methodical pace and wanting to control the pace of their own learning. These students may have rushed through the CD-ROM, missing important material, which lowered their CD-ROM test score; conversely, they would have been forced to complete the entire tutorial, thus learning more and performing better on that part of the test. The second (and perhaps more likely) explanation is that this effect was simply due to chance, as only 5 out of the 58 students said they did not like learning in a tutorial format.
It is interesting to note that more students said that they liked tutorial learning than said that they liked CAI learning. This finding contradicts early predictions that CAI learning would be more fun and more enjoyable, and hence more motivating, than traditional learning formats. It is possible that computer-based learning has begun to lose some of its novelty value now that we are firmly entrenched in the Internet Age. Our findings may reflect students who, accustomed to daily computer use, prefer human attention and may even resent being handed off to impersonal computer terminals. Additionally, our students are comfortable with small-group interactive tutorial teaching. It is considered most effective by our medical school and has largely replaced lectures in our "New Pathways" curriculum.
Our study may very well underestimate the tutorial's advantage. First, our study took place at a highly selective medical school that employs a curriculum emphasizing self-motivation and independent learning. It is possible that, by virtue of their familiarity with similar educational formats, our students extracted more benefit from the CD-ROM than other students would have. This effect, if true, would imply that our results somewhat understate the tutorial's superiority.
Second, the CAI module we developed was highly sophisticated, without a uniform shell but with every portion developed to maximize learning. It took the author and the computer programmer from the Harvard Medical School Beth Israel Deaconess Medical Center Mt. Auburn Institute for Research and Education a year to develop. The Institute is largely education-driven rather than solely a commercial venture; thus this project was educationally optimal but not commercially feasible. Standard publishers will not replicate this. The publisher has an incentive to standardize the CAI program to limit production costs, despite the recognition that the student benefit decreases with loss of variety. We would expect that the tutorial's superiority would be even more pronounced in a head-to-head comparison between tutorials and most commercially available programs.
Although our results support the view that tutorial-based learning remains the gold standard, four important points must be made. First, while the difference between tutorial and CAI performance in our study was statistically significant, it was not necessarily pedagogically significant. That is, while the tutorial group performed better on a test, it is unclear whether the discrepancy in knowledge gained would translate into a meaningful difference in clinical skills. We can reasonably conclude, however, that tutorial-based learning was superior to CAI in this setting for transferring factual, testable knowledge to students.
Second, instructional formats are chosen not only to maximize educational benefit but also to fit the practical scheduling and availability of teaching faculty, with an eye to cost containment [28,27]. Our study did not include a detailed cost-effectiveness analysis, but important factors to consider would include the substantial costs of CAI development and implementation [6,29] and any potential efficiency savings from decreased faculty teaching time. Cost-effectiveness considerations have become more important in recent years as financial pressures on academic medical centers have forced the search for less staff-intensive teaching modalities; evidence of CAI's cost-effectiveness would be a boon to centers looking to free up valuable clinical resources without sacrificing the quality of student education.
Third, we found no statistically significant difference between tutorial-based learning and CAI on the test of visual diagnosis; in fact, CAI had a slight, non-significant advantage. This finding supports the intuitive theory that these two formats offer different educational advantages. It is possible that CAI is indeed as good as, or perhaps better than, tutorial-based learning at presenting images and reinforcing associative recall from visual stimuli. Our results support the obvious but important point that the instructional format should be chosen with the ultimate learning goal in mind. Further research will help delineate the specific subcategories of learning (e.g., factual knowledge acquisition versus visual diagnosis skills) where CAI's relative strengths are best utilized. Visual diagnosis relies on excellent image resolution on computers and adequate image projection during tutorials, both of which were optimal in our study.
Fourth, we did not formally measure the time students in the two groups spent learning, but verbal feedback indicated that they spent less time on the CAI. This is compatible with other studies suggesting that even if students learn no more with CAI than with other formats, they expend less study time with CAI [30,31]. This efficiency gain would ideally be factored into a comprehensive cost-effectiveness analysis.
Due to the short time horizon of our study, we are unable to make any conclusions regarding long-term retention rates. Other researchers have produced preliminary data on long-term knowledge retention [7,8], but regression-to-the-mean effects make these studies difficult to interpret [8].
In summary, our study demonstrated that both interactive tutorial and interactive CAI are effective teaching formats. Tutorial was marginally but significantly more effective at teaching factual knowledge, and this effect was unrelated to students' year of training, objective learning style, or stated enjoyment of CAI learning. Since the difference between instructional formats was of unclear pedagogical significance, optimally crafted CAI might still be a valid alternative for teaching core topics; a cost-effectiveness analysis considering the costs of CAI development, potential savings from liberated faculty time, and efficiency gains from decreased student learning time would be beneficial. A test of visual knowledge showed no difference between CAI and tutorial, illustrating that different formats have different advantages in different settings. This last point will be important to remember as CAI technology continues to develop, embracing the powerful capabilities of hand-held computing and the Internet [32]. We must take care to endorse CAI technology selectively, based on evidence of value in different contexts, rather than viewing CAI as an educational panacea.
We are grateful to Ms. Beverlee Turner for her valuable assistance in researching and preparing this manuscript.
Table 2a: Main effects, tutorial vs. CAI scores (ANCOVA)

Format     Mean(a)   Std. Error   95% CI Lower Bound   95% CI Upper Bound
CD-ROM     28.127    1.308        25.493               30.761
Tutorial   32.842    1.018        30.791               34.893

(a) Adjusted for between-subjects factors and model covariates.
Table 2b: Main effects tutorial vs. CAI scores (ANCOVA)
(I) FACTOR1   (J) FACTOR1   Mean Difference (I-J)*   Std. Error   Sig.(a)   95% CI for Difference(a): Lower Bound   Upper Bound
CDROM         Tutorial      -4.715                   1.358        .001      -7.450                                  -1.980
Tutorial      CDROM          4.715                   1.358        .001       1.980                                   7.450

Based on estimated marginal means
* The mean difference is significant at the .05 level.
a Adjustment for multiple comparisons: Bonferroni.
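The adjusted means in Table 2a come from the standard ANCOVA adjustment: each group's raw mean on the outcome is shifted along the pooled within-group regression slope according to how far the group's covariate mean sits from the grand covariate mean. A minimal sketch of that computation, using made-up pretest/post-test pairs rather than the study's data:

```python
# Minimal ANCOVA adjusted-means sketch (illustrative data, not study data).
# adjusted_mean_g = mean(y_g) - b_w * (mean(x_g) - grand_mean(x)),
# where b_w is the pooled within-group slope of y (post-test) on x (pretest).
from statistics import mean

def pooled_slope(groups):
    """Pooled within-group regression slope of y on x."""
    num = den = 0.0
    for xs, ys in groups:
        mx, my = mean(xs), mean(ys)
        num += sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        den += sum((x - mx) ** 2 for x in xs)
    return num / den

def adjusted_means(groups):
    """Covariate-adjusted group means for a one-covariate ANCOVA."""
    b = pooled_slope(groups)
    grand_x = mean(x for xs, _ in groups for x in xs)
    return [mean(ys) - b * (mean(xs) - grand_x) for xs, ys in groups]

# Hypothetical (pretest, post-test) data for the two formats
cai = ([18, 20, 22, 24], [26, 27, 29, 30])
tutorial = ([19, 21, 23, 25], [31, 32, 34, 35])
print(adjusted_means([cai, tutorial]))  # approximately [28.35, 32.65]
```

This is only the adjusted-mean arithmetic; the significance tests and Bonferroni correction in Table 2b would be computed on top of these estimated marginal means by a statistics package.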
Table 3: Interaction effects between teaching format and student characteristics

Source                             Type III Sum of Squares   df   Mean Square   F       Sig.   Eta Squared
FACTOR1                            104.703                   1    104.703       8.277   .006   .155
FACTOR1 * Like Tutorial Learning   112.046                   1    112.046       8.858   .005   .164
FACTOR1 * Like CD Learning         1.868                     1    1.868         .148    .703   .003
FACTOR1 * Learning Style           13.243                    1    13.243        1.047   .312   .023
FACTOR1 * Year in School           .488                      1    .488          .039    .845   .001
FACTOR1 * Pretest Score            32.559                    1    32.559        2.574   .116   .054
Error(FACTOR1)                     569.220                   45   12.649
Table 4a: Comparison of scores on test of visual knowledge
FACTOR1    Mean    Std. Error   95% CI Lower Bound   95% CI Upper Bound
CDROM      7.750   .439         6.865                8.635
Tutorial   7.614   .388         6.832                8.396
Table 4b: Comparison of scores on test of visual knowledge
(I) FACTOR1   (J) FACTOR1   Mean Difference (I-J)   Std. Error   Sig.   95% CI for Difference: Lower Bound   Upper Bound
CDROM         Tutorial       .136                   .515         .792   -.902                                1.174
Tutorial      CDROM         -.136                   .515         .792   -1.174                                .902
Figure 1: Preferred Learning Style Index questionnaire
Utilizing the following scale, write the number which best indicates your reaction to each of the following thirteen statements.

Response scale: 1 (Strongly) through 4 (Neutral) to 7 (Strongly Do Not)
A. Attend classes and activities which involve active
discussion by students.
B. Acquire information through the use of textbooks or other
printed materials.
C. Plan learning goals and objectives in cooperation with
the instructor.
D. Listen to lectures concerning the subject matter to be
learned.
E. Utilize a variety of audio-visual materials concerning
the material to be learned.
F. Work independently to accomplish the learning objectives
for a course.
G. Attend classes which are directed by an instructor to
accomplish the learning objectives for a course.
H. Use the library and other reference sources to gather
information for a course.
I. Work in a laboratory setting with a great deal of
personal activity and involvement.
J. Observe demonstrations by the instructor of the material
to be learned.
K. Attend classes where student progress is frequently
tested and graded by the instructor.
L. Work in classes that emphasize projects and problem-
solving approaches to learning.
M. Be responsible for my own achievement with infrequent
testing and grading by the instructor.
Figure 2: Advantages and disadvantages of CAI as compared with traditional teaching formats [Park 1981; Jacoby et al 1984; Piemme 1988; Jaffe and Lynch 1993; Glenn 1996; Kuszyk et al 1997]
Advantages

- CAI can simulate a range of cases outside the institution’s patient population
- CAI can teach long-term case management skills by simulating longitudinal follow-up
- Lesson content is tailored to educational level of student
- Student can control pace of lesson; promotes efficient use of time, as students can choose to skip or review material appropriately
- Absence of other students; learning environment more uninhibited, creative, no “performance anxiety”
- Immediate feedback; allows student to perform self-assessment
- Interactive nature promotes concentration and learning
- Novelty value may increase student motivation
- Students become adept at using medical informatics technologies
- May provide better image quality, better diagrams
- Students can choose a convenient time of day or night in which to learn
- Relieves instructor of necessity of presenting repetitious lectures
- Computers give consistent best effort every day, but human teacher can be variable
Disadvantages

- Option to skip material may allow students to bypass material they don’t know
- Computer gives only preprogrammed information; students may be left with unanswered questions
- Students do not learn from other students’ questions and answers
- Students will not experience pressure to perform in front of others, which may decrease learning
- Students need to learn to negotiate the particular computer program’s interface and options in order to learn optimally; students who are less facile on the computer may find this burdensome
- No reinforcement from a teacher relating true-life anecdotes and experiences
- Absence of “the human factor”; vaguely defined benefit from human interaction
- Development and implementation of CAI tool requires substantial time and money
- Commercially expedient CAI modules often offer uniformly structured, limited teaching
- Potential for poor image quality on low-resolution computer screens
- Limited availability of workstations may cause poor access to learning
Figure 3: [chart not reproduced; legend: “Liked tutorial learning” / “Did not like tutorial learning”]