University of Central Florida
STARS
Retrospective Theses and Dissertations
1988

The Feasibility of Computerized Cognitive Testing as a Surrogate Measure for Assessment Center Performance

Leilani M. de Saram, University of Central Florida

Part of the Industrial and Organizational Psychology Commons
Find similar works at: https://stars.library.ucf.edu/rtd
University of Central Florida Libraries http://library.ucf.edu

This Masters Thesis (Open Access) is brought to you for free and open access by STARS. It has been accepted for inclusion in Retrospective Theses and Dissertations by an authorized administrator of STARS. For more information, please contact [email protected].

STARS Citation
de Saram, Leilani M., "The Feasibility of Computerized Cognitive Testing as a Surrogate Measure for Assessment Center Performance" (1988). Retrospective Theses and Dissertations. 4272. https://stars.library.ucf.edu/rtd/4272
The absence of any identifiable relationships between
assessment center performance and the tests used in this
study both confirms aspects of earlier findings related to
assessment center evaluations and fails to support the
stated hypotheses.
In terms of the latter component, assessments of
cognitive processing abilities are not related to
assessment center performance ratings of dimensions or
overall ratings. In no case was an individual test score,
or a combination of test scores, predictive of performance
in an assessment center setting. Apparently the elements
determining law enforcement assessment center evaluations
are not of the cognitive processing domain as assessed by
the computerized battery.
This finding is noteworthy in that it supports earlier
research which reported evidence of common global factors
which ultimately determine assessment center performance
(Outcalt, 1988; Robertson et al., 1987; Sackett & Dreher,
1982; Turnage & Muchinsky, 1982). Unequivocally,
cognitive abilities are essential to successful job
performance. However, successful assessment center
performance appears to be most heavily dependent on ratings
of non-cognitive aspects of performance for this sample.
This speculation is corroborated by the Hirsh et al.
(1986) conclusion regarding performance prediction for law
enforcement occupations. Lower validity coefficients for
predictors of law enforcement job performance, in
comparison with other occupational types, may be due to the
existence of a non-cognitive, interpersonal performance
component inherent to police work alone. The assessment
center may be evaluating this non-cognitive element
associated with successful law enforcement job performance.
The lack of statistically significant overlap between
the computerized test battery and assessment center
evaluations should also be examined in a methodological
context. The reliability of the test battery was
confirmed, though the reliability of assessment center
ratings, in terms of internal consistency, has been
consistently low across several studies previously cited
(see Thornton & Byham, 1982). Assessor evaluations, in
turn, are subject to rating errors (e.g., halo, central
tendency, etc.). All contribute to criterion
unreliability.
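The cost of criterion unreliability can be illustrated with Spearman's classical attenuation formula, under which the correlation one can observe is the true correlation scaled down by the square root of the product of the two measures' reliabilities. The reliability and validity values below are hypothetical, chosen only to show the size of the effect, not taken from this study:

```python
import math

def attenuated_r(r_true, rel_predictor, rel_criterion):
    """Observed correlation expected under classical test theory
    when both measures contain error (Spearman's attenuation formula)."""
    return r_true * math.sqrt(rel_predictor * rel_criterion)

# Hypothetical values: a true validity of .40, a reliable
# test battery (.85), and an unreliable criterion (.50)
r_obs = attenuated_r(0.40, 0.85, 0.50)
print(round(r_obs, 3))  # the observable correlation shrinks toward zero
```

Even with a perfectly reliable predictor, an unreliable criterion places a ceiling on the correlation any study can detect.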
Range restriction compounds the problem of criterion
unreliability. Due to the "hurdle" system utilized to
select assessment center candidates (technical knowledge
test, tenure requirements, and a multiple choice in-basket
test), the final group of subjects was severely limited in
terms of variability. Range restriction of sample subjects
results in a demand for a greatly increased sample size in
order to detect a significant relationship among variables
(Schmidt, Hunter, & Urry, 1976). An additional source of
range restriction is evident in terms of the scaling of
assessment center ratings. Use of the full range in the 1
to 7 point scale in determining dimension .and subsequent
OAR scores was not evident. Ratings appear to exhibit the
error of central tendency. Mid-range ratings abound, thus
excluding the high and low extremes of the rating scale.
Only the OAR exhibited a standard deviation greater than
.89.
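The shrinkage produced by direct range restriction can be quantified with Thorndike's Case II formula, where u is the ratio of the restricted to the unrestricted standard deviation on the selection variable. The figures below are illustrative, not estimates from this sample:

```python
import math

def restricted_r(r, u):
    """Correlation expected in a range-restricted sample (Thorndike
    Case II), where u = SD(restricted) / SD(unrestricted) on the
    variable used for selection."""
    return r * u / math.sqrt(1 - r**2 + (r * u) ** 2)

# Hypothetical: a true correlation of .40, with selection hurdles
# cutting the predictor standard deviation in half (u = .5)
print(round(restricted_r(0.40, 0.5), 3))
```

When u = 1 (no restriction) the formula returns the original correlation; the tighter the selection, the smaller the correlation that survives in the selected group.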
It is suspected that the unreliability of the
criterion sharply reduced the power of the study to detect
relationships of statistical significance, given limited
sample size (N = 27). This speculation, combined with the
likelihood that the content of the two measures is
fundamentally different (cognitive versus non-cognitive),
may be the chief explanation for the absence of an
identifiable relationship between predictor and criterion.
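The power problem posed by N = 27 can be sketched with the standard Fisher z approximation for testing a correlation against zero. The true correlation of .30 assumed below is hypothetical, used only to illustrate how little power a sample of this size affords:

```python
import math
from statistics import NormalDist

def power_for_r(rho, n, alpha=0.05):
    """Approximate power of a two-sided test of H0: r = 0, using the
    Fisher z transformation (normal approximation, SE = 1/sqrt(n-3))."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)
    ncp = math.atanh(rho) * math.sqrt(n - 3)  # noncentrality under H1
    return nd.cdf(ncp - z_crit) + nd.cdf(-ncp - z_crit)

# With N = 27, even a medium true correlation of .30 is detected
# only about a third of the time at alpha = .05
print(round(power_for_r(0.30, 27), 2))
```

Under these assumptions, a nonsignificant result is more likely than a significant one even when a genuine medium-sized relationship exists.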
The findings of this research indicate that
computerized cognitive testing does not exhibit acceptable
correlations with assessment center performance in the law
enforcement domain, which precludes its use as a surrogate
measure in that context. The extent to which these results
would generalize to assessment centers for other
occupational types is unknown. It is recommended that the
relationship between computerized cognitive testing and
assessment center performance criteria be investigated
within occupational types demanding high level information
processing abilities.
The findings also suggest the need for an examination
of the relationship between computerized testing and actual
job performance estimates. Traditional measures, such as
education or general ability tests, do not appear to be
predictive of supervisory job performance (Turnage &
Muchinsky, 1984). Computerized cognitive testing could be
examined in various occupational settings as a
non-traditional predictor of job performance. Areas
considered for further research with this sample include:
diagnostic applications (e.g., correlation with
psychological or personality assessments); environmental
applications (e.g., performance decrements under varying
stressors encountered in law enforcement); and relatedness
to other law enforcement tests (e.g., correlation with the
screening test utilized in assessment center candidate
selection).
APPENDIX
ORANGE COUNTY SHERIFF'S OFFICE
DEFINITION OF SKILLS TO BE MEASURED

LEADERSHIP: THE ABILITY TO TAKE CHARGE; TO DIRECT COURSES OF ACTION; TO PROVIDE GUIDANCE TO SUBORDINATES IN MEETING GOALS AND OBJECTIVES; TO INITIATE ACTION; TO ENSURE COMPLIANCE WITH STANDARDS AND TO ENCOURAGE CONFIDENCE AND PRIDE IN WORK.

INTERPERSONAL: THE ABILITY TO ACT IN A SENSITIVE MANNER REGARDING THE NEEDS, FEELINGS AND CAPABILITIES OF OTHERS; TO ADVISE SUBORDINATES OF CHANGES; TO TACTFULLY DEAL WITH SENSITIVE ISSUES; TO CONSTRUCTIVELY CRITICIZE; TO ESTABLISH RAPPORT WITH OTHERS; TO LISTEN PRODUCTIVELY TO OTHERS.

ORGANIZING AND PLANNING: THE ABILITY TO ESTABLISH AND FOLLOW ORDERLY COURSES OF ACTION FOR SELF AND OTHERS; TO KEEP ORDERLY RECORDS; TO EFFECTIVELY PLAN WORK SCHEDULES; TO ESTABLISH OBJECTIVES AND PRIORITIES.

PERCEPTION: THE ABILITY TO IDENTIFY, UNDERSTAND AND INTEGRATE INFORMATION RELATED TO A SITUATION OR PROBLEMS; TO OBSERVE AND RECORD FACTS; TO EVALUATE INFORMATION OBJECTIVELY AND COMPLETELY; TO IDENTIFY PROBLEMS AND NEEDS.

JUDGMENT: THE ABILITY TO MAKE SOUND AND LOGICAL DECISIONS; TO APPLY PRINCIPLES TO SOLVE PRACTICAL PROBLEMS; TO DETERMINE WHEN TO CONTACT A SUPERIOR AND WHAT TO TELL HIM/HER; TO DRAW VALID CONCLUSIONS FROM AVAILABLE INFORMATION.

DECISIVENESS: THE ABILITY TO MAKE DECISIONS AND TAKE ACTION IN A TIMELY MANNER AND TO DEFEND DECISIONS WHEN CHALLENGED.

ORAL COMMUNICATION: THE ABILITY TO CLEARLY PRESENT AND EXPRESS INFORMATION ORALLY; TO UTILIZE EFFECTIVE ORAL SKILLS SUCH AS EYE CONTACT, GESTURES, VOICE INFLECTION, AND APPROPRIATE VOCABULARY IN COMMUNICATING WITH OTHERS.

WRITTEN COMMUNICATION: THE ABILITY TO CLEARLY PRESENT AND EXPRESS INFORMATION IN WRITING; TO UTILIZE EFFECTIVE WRITING SKILLS SUCH AS CORRECT GRAMMAR, PUNCTUATION, SPELLING, TRANSITION, SENTENCE AND PARAGRAPH STRUCTURE IN ORDER TO CLEARLY AND CONCISELY PRESENT WRITTEN INFORMATION.
REFERENCES
Baddeley, A. D. (1968). A three-minute reasoning test based on grammatical transformation. Psychonomic Science, 10, 341-342.
Benson, A. J., & Gedye, J. L. (1963). Logical processes in the resolution of orientation conflict (Report No. 259). Farnborough, UK: Royal Air Force Institute of Aviation Medicine.
Bittner, A. C., Smith, M. G., Kennedy, R. S., Staley, C. F., & Harbeson, M. M. (1985). Automated Performance Test System (APTS): Overview and prospects. Behavior Research Methods, Instruments, and Computers, 17, 217-221.
Bray, D. W., & Campbell, R. J. (1968). Selection of salesmen by means of an assessment center. Journal of Applied Psychology, 52, 36-41.
Bray, D. W., Campbell, R. J., & Grant, D. L. (1974). Formative years in business: A long-term AT&T study of managerial lives. New York: John Wiley & Sons.
Bray, D. W., & Grant, D. L. (1966). The assessment center in the measurement of potential for business management. Psychological Monographs: General and Applied, 80(7, Whole No. 625).
Byham, W. C., & Temlock, S. (1972, Sept.). Operational validity - A new concept in personnel testing. Personnel Journal, 51, 639-654.
Department of Defense. (1972). Armed Services Vocational Aptitude Battery: Test description. Washington, DC: Government Printing Office.
Educational Testing Service. (1975). Arithmetic Aptitude Test - RG-1: Test description. Princeton, NJ: Author.
Englund, C. E., Reeves, D. L., Shingledecker, C. A., Thorne, D.R., Wilson, K. P., & Hegge, F. W. (1987). Unified Tri-service Cognitive Performance Assessment Battery (UTC PAB): I. Design and specification of the battery (Rep. No. 87-10). San Diego, CA: Naval Health Research Center.
Gaugler, B. B., Rosenthal, D. B., Thornton, III, G. C., & Bentson, C. (1987). Meta-analysis of assessment center validity. Journal of Applied Psychology, 72(3), 493-511.
Guilford, J. P. (1954). Psychometric methods (2nd ed.). New York: McGraw-Hill, 400-402.
Hakel, M. D. (1986). Personnel selection and placement. Annual Review of Psychology, 37, 351-380.
Hirsh, H. R., Northrop, L. C., & Schmidt, F. L. (1986). Validity generalization results for law enforcement occupations. Personnel Psychology, 39(2), 399-420.
Joiner, D. A. (1984). Assessment centers in the public sector: A practical approach. Public Personnel Management, 13(4), 435-450.
Kennedy, R. S., Lane, N. E., & Kuntz, L.A. (1987, August). Surrogate measures: A proposed alternative in human factors assessment of operational measures of performance. Paper presented at the 1st Annual Workshop on Space Operations Automation & Robotics, Houston, TX: NASA Johnson Space Center.
Kennedy, R. S., Wilkes, R. L., Dunlap, W. P., & Kuntz, L. A. (1987, Oct.). Microbased repeated-measures performance testing and general intelligence. Paper presented at the 29th Annual Conference of the Military Testing Association, Ottawa, Ontario, Canada.
Kennedy, R. S., Wilkes, R. L., Lane, N. E., & Hornick, J. L. (1985). Preliminary evaluation of microbased repeated-measures testing system (Report No. EOTR-85-1). Orlando, FL: Essex Corporation.
Klein, R. S., & Armitage, R. (1979). Rhythms in human performance: 1 1/2-hour oscillations in cognitive styles. Science, 204, 1326-1328.
Kyllonen, P. C. (1986, January). Theory-based cognitive assessment (Tech. Rep. No. AFHRL-TP-85-30). Brooks Air Force Base, TX: Air Force Human Resources Laboratory, Manpower and Personnel Division.
Lane, N. E., & Kennedy, R. S. (1988, May). Users manual for the U.S. Army Aeromedical Research Laboratory portable performance assessment battery (Final Report, Contract No. DAMD17-85-C-5095). Ft. Rucker, AL: U.S. Army Aeromedical Research Laboratory.
Lane, N. E., Kennedy, R. S., & Jones, M. B. (1986, Aug.). Overcoming unreliability in operational measures: The use of surrogate measure systems. Paper presented at the Annual Meeting of the Human Factors Society, Dayton, OH.
Outcalt, D. (1988). A research program on General Motors' foremen selection assessment center: Assessor/assessee characteristics and moderator analysis. Paper presented at the 3rd. Annual Meeting of the Society for Industrial and Organizational Psychology, Dallas, TX.
Posner, M. I., & Mitchell, R. F. (1967). Chronometric analysis of classification. Psychological Review, 74, 392-409.
Robertson, I., Gratton, L., & Sharpley, D. (1987). The psychometric properties and design of managerial assessment centres: Dimensions into exercises won't go. Journal of Occupational Psychology, 60(3), 187-195.
Robertson, I., & Makin, P. J. (1986, Mar.). Management selection in Britain: A survey and critique. Journal of Occupational Psychology, 59(1), 45-57.
Sackett, P. R., & Dreher, G. F. (1982). Constructs and assessment center dimensions: Some troubling empirical findings. Journal of Applied Psychology, 67(4), 401-410.
Schmidt, F. L., & Hunter, J. E. (1981). Employment testing: Old theories and new research findings. American Psychologist, 36, 1128-1137.
Schmidt, F. L., Hunter, J. E., & Urry, V. W. (1976). Statistical power in criterion-related validation studies. Journal of Applied Psychology, 61(4), 473-485.
Schmitt, N., Gooding, R. Z., Noe, R. A., & Kirsch, M. (1984). Meta-analyses of validity studies published between 1964 and 1982 and the investigation of study characteristics. Personnel Psychology, 37(3), 407-422.
Shingledecker, C. A. (1984). A task battery for applied human performance assessment research (Tech. Rep. No. AFAMRL-TR-84). Dayton, OH: Air Force Aerospace Medical Research Laboratory.
Smith, R. L. (1988, April). A research program on General Motors' foremen selection assessment center: Improving the selection process. Paper presented at the 3rd. Annual Meeting of the Society for Industrial and Organizational Psychology, Dallas, TX.
Tabler, R. E., Turnage, J. J., & Kennedy, R. S. (1987). Repeated-measures analyses of selected psychomotor tests from PAB and APTS: Stability, reliability, and cross-task correlations: Study 2. Orlando, FL: Essex Corporation.
The Psychological Corporation. (1969). Bennett Mechanical Comprehension Test: Test description. New York: Author.
Thorndike, R. L. (1985). The central role of general ability in prediction. Multivariate Behavioral Research, 20, 241-254.
Thornton, III, G. C., & Byham, W. C. (1982). Assessment centers and managerial performance. Orlando, FL: Academic Press.
Turnage, J. J., Kennedy, R. S., & Osteen, M. K. (1987). Repeated-measures analyses of selected psychomotor tests from PAB and APTS: Stability, reliability, and cross-task correlations. Orlando, FL: Essex Corp.
Turnage, J. J., & Muchinsky, P. M. (1982). Transsituational variability in human performance within assessment centers. Organizational Behavior and Human Performance, 30, 174-200.
Turnage, J. J., & Muchinsky, P. M. (1984). A comparison of the predictive validity of assessment center evaluations versus traditional measures in forecasting supervisory job performance: Interpretive implications of criterion distortion for the assessment paradigm. Journal of Applied Psychology, 69(4), 595-602.
Venkatraman, N., & Grant, J. H. (1986, Jan.). Construct measurement in organizational strategy research: A critique and proposal. Academy of Management Review, 11(1), 71-87.
Wechsler, D. (1958). Measurement and appraisal of adult intelligence (4th ed.). Baltimore: Williams and Wilkins Co.
Wingrove, J., Jones, A., & Herriot, P. (1985). The predictive validity of pre- and post-discussion assessment centre ratings. Journal of Occupational Psychology, 58(3), 189-192.
Zenith Data Systems Corporation. (1987). Z-180 PC series Computers Owner's Manual. St. Joseph, MI: Author.