Guest Lectures at Peking University (Beijing, P.R. China)

Main lecture: Measurement Equivalence (ME) of Paper-and-Pencil and Online Organizational Surveys

… and after the break …

Additional lecture: Using Ad Hoc Measures For Response Styles. A Cautionary Note.

Prof. Dr. Alain De Beuckelaer ([email protected])
April 28th, 2010

INTRODUCING COMPARATIVE (CROSS-CULTURAL) SURVEY METHODOLOGY
Guiding principle throughout all research stages: establishment of (cross-cultural/group) equivalence, that is, minimizing (comparative) bias
Research topics:
• Sample design
• Design of questionnaire/survey instruments
• Translation and adaptation of survey instruments
• Pretesting (translated) surveys
• Interviewer recruitment, selection, and training
• Monitoring interviewer quality (e.g., using paradata)
• Harmonization of data collection
• Harmonization of survey and statistical data (after data collection)
• Statistical adjustment for sample design
• Psychometric quality assessment of (multi-item) scales used, e.g., checking measurement equivalence of scales across populations
• Quantifying and correcting for biasing effects (e.g., response styles)
• Analysis of survey and statistical data
Selection of My Publications

On measurement invariance of scales across nations:
De Beuckelaer, A., Lievens, F., & Swinnen, G. (2007). Measurement equivalence in the conduct of a global organizational survey across six cultural regions. Journal of Occupational and Organizational Psychology, 80, 575-600.

On measurement invariance of scales across modes of data collection:
De Beuckelaer, A., & Lievens, F. (2009). Measurement equivalence of paper-and-pencil and online organizational surveys: A large-scale examination in 16 countries. Applied Psychology: An International Review, 58, 336-361.

On the biasing effect of scale items not exhibiting measurement invariance across (cultural) groups:
De Beuckelaer, A., & Swinnen, G. (in press). Biased latent variable mean comparisons due to measurement non-invariance: A simulation study. In E. Davidov, P. Schmidt, & J. Billiet (Eds.), Methods and applications in cross-cultural analysis. Taylor & Francis.

On the quantification of, and correction for, response styles (one important source of cross-cultural bias):
De Beuckelaer, A., Weijters, B., & Rutten, A. (in press). Using ad hoc measures for response styles: A cautionary note. Quality and Quantity.
Main Lecture: Measurement Equivalence (ME) of Paper-and-Pencil and Online Organizational Surveys
Overview
1. Introduction
2. Method
3. Results
4. Reflection
5. Questions and discussion
ME of Paper-and-Pencil and Online Surveys (PAPER)
‘mixed-mode organizational surveys’
Online (OL) and paper-and-pencil (PP) surveying combined in organizational surveys
OL surveys
Advantages: less costly, faster responses, greater flexibility in survey design, wider geographical reach, do not suffer from [human] coding errors, less sensitive to question-order effects, and more complete in terms of information provided (various references to journal papers)

Disadvantages: higher non-response rates, higher probability of dishonest answers, potential technological problems, decreased item reliability (higher measurement error), possibility of multiple submissions, no full coverage of all occupational groups represented within the organization (various references to journal papers)
1. Introduction
Organizational surveys partly rely on OL surveying because of:
(1) Increased efficiency of the data collection process
(2) Elimination of human coding errors
(3) Cost-reductions
1. Introduction
Research question
Is mixing OL and PP surveys acceptable from a ‘methodological point of view’?
In other words, is measurement equivalence (ME) between both modes of
data collection ensured?
1. Introduction
Job level and mode of data collection (across all countries)

Job level      % Online   % Paper-and-pencil
Lowest         26.7%      73.3%
Intermediate   76.4%      23.6%
Highest        66.4%      33.6%

Danger of sample bias (e.g., higher-level managers prefer to answer online, whereas lower-level managers may not be able to use computers at work)

Direct implication for analyses: ME assessment before and after controlling for job-level differences
1. Introduction
Sample
N=52,461 managers; k=16 countries; in alphabetical order:
Australia, Brazil, P.R. China, Czech Republic, France, Germany, Netherlands, Nigeria, Pakistan, Puerto Rico, Russian Federation, Spain, Sweden, UK, US, Vietnam

Overall response rate: 86% (across countries)

Measurement Instrument

Five factors: F1: team commitment (3 items); F2: supervisor support (3 items); F3: goal clarity (3 items); F4: decision making (2 items); F5: environmental and societal responsibility (2 items)

Scale: 5-point Likert-type agreement/disagreement rating scale
2. Method
Analysis method
Part A: Ordinary covariance-structure (CS) model to check the plausibility of the hypothesized five-factor model (i.e., construct validity assessment)

Part B: Mean-and-Covariance Structure (MACS) analyses; three nested models:
2. Method
Form invariance model (least restrictive)
  Type of equivalence: identical pattern of salient and non-salient factor loadings
  Implication: the meaning of the factors is 'roughly the same'

Metric invariance model
  Type of equivalence: factor loadings identical across OL and PP surveys
  Implication: the extent to which indicators capture changes in the underlying construct(s) is identical!

Scalar invariance model (most restrictive)
  Type of equivalence: factor loadings and indicator intercepts identical across OL and PP surveys
  Implication: estimated (mode-specific) factor mean scores may be compared in a meaningful way!
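Nested invariance models of this kind are typically compared with a chi-square difference (likelihood-ratio) test: the more restrictive model is retained when its added equality constraints do not significantly worsen fit. A minimal sketch (the fit statistics below are illustrative placeholders, not taken from the study):

```python
# Chi-square difference (likelihood-ratio) test between two nested models.
# The numeric inputs below are illustrative, not the study's actual results.
from scipy.stats import chi2

def chi_square_difference(chisq_restricted, df_restricted, chisq_free, df_free):
    """Return the chi-square difference, its degrees of freedom, and the p-value."""
    d_chisq = chisq_restricted - chisq_free
    d_df = df_restricted - df_free
    return d_chisq, d_df, chi2.sf(d_chisq, d_df)

# Example: metric model (loadings constrained) vs. form model (loadings free)
d, ddf, p = chi_square_difference(112.4, 68, 101.9, 58)
print(f"Delta chi2 = {d:.1f}, Delta df = {ddf}, p = {p:.3f}")
```

A non-significant difference (e.g., p > .05) indicates that the equality constraints are tenable, so the more restrictive model (here: metric invariance) is retained.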
Part A: covariance-structure (CS) based model to test the five-factor structure

Test results: the five-factor structure fits reasonably well in the 16 countries under study
(RMSEA slightly too high in Sweden [.062] and the US [.064]; TLI slightly too low in some countries [but no other problems!])

Implication: all 16 countries will be further examined in Part B of the analysis
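RMSEA values such as those flagged for Sweden and the US can be derived from a model's chi-square statistic, its degrees of freedom, and the sample size. A minimal single-group sketch (the input values below are illustrative, not the study's statistics):

```python
import math

def rmsea(chisq, df, n):
    """Root Mean Square Error of Approximation (single-group formula):
    sqrt(max((chi2 - df) / (df * (n - 1)), 0))."""
    return math.sqrt(max((chisq - df) / (df * (n - 1)), 0.0))

# Illustrative values only (not taken from the study):
print(round(rmsea(chisq=250.0, df=55, n=1500), 3))  # → 0.049
```

Conventional cutoffs put acceptable approximate fit roughly at or below .05-.06, which is why values of .062 and .064 are described as only 'slightly too high'.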
3. Results
Part B: nested MACS models to test for ME across OL and PP surveys

Test results: scalar invariance of the survey instrument is established in all countries but P.R. China and France. In P.R. China and France, only the metric invariance model fit.

Implication: in most countries (14 out of 16), mixing OL and PP surveys does no harm! (i.e., scalar equivalence is established)
3. ResultsPAPER
After controlling for job level by means of a 'matched samples approach', the data of 13 countries* were re-analyzed. The analyses led to virtually the same overall conclusion (i.e., strong support for scalar equivalence). After controlling for job level, scalar invariance across OL and PP surveys was also established in France.

*The sample size (after matching) was too small (N<90) in 3 countries (i.e., P.R. China, Puerto Rico, and Vietnam)
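One way such a matched samples approach can be sketched: within each job level, randomly downsample the larger mode group so that OL and PP respondents are equally represented. The record layout, field names, and exact matching rule below are illustrative assumptions, not the study's actual procedure:

```python
# Sketch of a 'matched samples' approach: balance the OL and PP groups
# within each job level by random downsampling. Hypothetical data layout:
# each record has a 'mode' ('OL'/'PP') and a 'job_level' field.
import random
from collections import defaultdict

def match_by_job_level(records, seed=42):
    """For each job level, keep equally many OL and PP respondents
    by randomly downsampling the larger of the two mode groups."""
    rng = random.Random(seed)
    by_cell = defaultdict(list)
    for r in records:
        by_cell[(r["job_level"], r["mode"])].append(r)
    matched = []
    for lvl in {lvl for (lvl, _) in by_cell}:
        ol, pp = by_cell[(lvl, "OL")], by_cell[(lvl, "PP")]
        n = min(len(ol), len(pp))
        matched += rng.sample(ol, n) + rng.sample(pp, n)
    return matched

records = ([{"job_level": "low", "mode": "OL"}] * 3
           + [{"job_level": "low", "mode": "PP"}] * 5)
matched = match_by_job_level(records)
print(len(matched))  # → 6 (3 OL + 3 PP at the 'low' level)
```

After matching, the ME analyses are rerun on the balanced samples, so that mode differences can no longer be confounded with job-level composition.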
3. Results
Contribution made

First study to assess ME of an organizational survey across two modes of data collection (i.e., OL and PP) in a large number of countries (k=16).

Result: good news! (i.e., mixing these modes is not problematic from a methodological point of view)

Limitations

Results are instrument- and organization-specific! They are not generalizable to other organizational surveys!

Other modes of data collection (e.g., telephone interviewing) are not considered.

Within-country variations (e.g., ethnic groups) are not considered!
4. Reflection
Additional lecture: Using Ad Hoc Measures For Response Styles. A Cautionary Note.
Overview
1. Introduction
2. Method
3. Results / interpretation
4. Implications for cross-cultural research
5. Questions and discussion
6. References
Ad Hoc Measures For Response Styles
1. Introduction
Response styles (RS) in survey responses lead to substantial bias in cross-cultural comparisons, especially when Likert-type (agree/disagree) scales are used to rate survey items (e.g., Billiet & McClendon, 2000; Smith, 2004; Van Herk et al., 2004; Harzing, 2006).

Adequate quantification of, and correction for, RS is required.

In this study, the focus is on two types of RS which are known to bias cross-cultural comparisons severely (Cheung & Rensvold, 2000):

Acquiescence (ARS): respondents' tendency to agree (say yes) regardless of item content

Extreme responding (ERS): respondents' tendency to pick the extreme category points of the rating scale regardless of item content
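The two definitions above translate directly into simple 'ad hoc' indices: the proportion of agreement answers (ARS) and the proportion of endpoint answers (ERS) across a respondent's item set. A minimal sketch for a 5-point scale (1 = strongly disagree … 5 = strongly agree); the function names are illustrative:

```python
# Ad hoc response-style indices on a 5-point Likert scale
# (1 = strongly disagree … 5 = strongly agree).

def ars_index(responses):
    """Acquiescence (ARS): share of agreement answers (4 or 5), regardless of content."""
    return sum(r in (4, 5) for r in responses) / len(responses)

def ers_index(responses):
    """Extreme responding (ERS): share of endpoint answers (1 or 5), regardless of content."""
    return sum(r in (1, 5) for r in responses) / len(responses)

answers = [5, 4, 5, 3, 1, 5, 2, 4]  # one respondent's ratings across 8 items
print(ars_index(answers))  # → 0.625
print(ers_index(answers))  # → 0.5
```

Such indices are 'ad hoc' precisely because they are computed from the substantive items themselves rather than from a dedicated, heterogeneous RS measure, which is what the cautionary note is about.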
1. Introduction
Many well-cited papers have identified potential determinants of RS. They tap into individual [I]-level and societal [S]-level variables influencing ARS