MEM and SEM in the GME framework: Modelling Perception and Satisfaction
SYstemic Risk TOmography: Signals, Measurements, Transmission Channels, and Policy Interventions
Maurizio Carpita, University of Brescia – Enrico Ciavolino, University of Salento
IES2013, Milan – December 10, 2013
Innovation and Society – Statistical Methods for Evaluation (IES2013)
Milan – December 10, 2013
This research is supported by the project SYRTO (SYstemic Risk TOmography: Signals, Measurements, Transmission Channels and Policy Interventions; syrtoproject.eu), funded by the European Union under the 7th Framework Programme (FP7-SSH/2007-2013), Grant Agreement n. 320270
Objective and contents
• To review the Measurement Errors Model (MEM) and the Structural Equations Model (SEM), used to represent relations between subjective perceptions (such as job satisfaction), in the framework of the Generalized Maximum Entropy (GME) estimator
• The talk is in three parts:
1. Introducing the GME estimator
2. The MEM with one composite indicator
3. The SEM with many Rasch measures
1. Introducing the GME estimator
Introducing the GME estimator
• Consider the simple linear regression model:

y = β·x + ε

• Idea: re-parameterize it in the classical Shannon Maximum Entropy framework:

β = Σ_k z_k^β p_k^β  (expectation of the r.v. Z^β)
ε = Σ_h z_h^ε p_h^ε  (expectation of the r.v. Z^ε)

• Problem: estimate the probabilities p^β and p^ε given the data and model constraints
Introducing the GME estimator
• Solution: using a sample (y_i, x_i) of n data, maximize the Entropy Function

H(p^β, p^ε) = − Σ_k p_k^β log(p_k^β) − Σ_h Σ_i p_hi^ε log(p_hi^ε)

subject to the system of restrictions:
1. y_i = (Σ_k z_k^β p_k^β)·x_i + (Σ_h z_h^ε p_hi^ε)  ∀i
2. p_k^β ≥ 0 and p_hi^ε ≥ 0  ∀k, h, i
3. Σ_k p_k^β = 1 and Σ_h p_hi^ε = 1  ∀i
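The constrained entropy maximization above can be sketched numerically with a generic solver. This is a minimal sketch, not the authors' implementation: the simulated sample, the three-point error support, and the solver settings are illustrative assumptions (the β support anticipates the (−100, −50, 0, 50, 100) choice used later in the talk).

```python
# Sketch of the GME estimator for y = beta*x + eps: maximize the entropy
# of (p_beta, p_eps) subject to the data and adding-up constraints.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 15
x = rng.normal(0.0, 1.0, n)
y = 0.5 * x + rng.normal(0.0, 0.3, n)          # true beta = 0.5 (illustrative)

z_beta = np.array([-100.0, -50.0, 0.0, 50.0, 100.0])  # support for beta
s = 3.0 * y.std()                                     # 3-sigma rule for errors
z_eps = np.array([-s, 0.0, s])                        # support for each eps_i
K, H = len(z_beta), len(z_eps)

def unpack(p):
    return p[:K], p[K:].reshape(n, H)

def neg_entropy(p):                 # minimize -H(p_beta, p_eps)
    q = np.clip(p, 1e-12, None)
    return float(np.sum(q * np.log(q)))

def data_cons(p):                   # restriction 1: y_i = beta*x_i + eps_i
    p_beta, p_eps = unpack(p)
    return y - ((z_beta @ p_beta) * x + p_eps @ z_eps)

cons = [
    {"type": "eq", "fun": data_cons},
    {"type": "eq", "fun": lambda p: unpack(p)[0].sum() - 1.0},       # restr. 3
    {"type": "eq", "fun": lambda p: unpack(p)[1].sum(axis=1) - 1.0}, # restr. 3
]
p0 = np.concatenate([np.full(K, 1.0 / K), np.full(n * H, 1.0 / H)])  # uniform
res = minimize(neg_entropy, p0, method="SLSQP", constraints=cons,
               bounds=[(0.0, 1.0)] * (K + n * H), options={"maxiter": 500})

beta_gme = float(z_beta @ unpack(res.x)[0])
print(f"GME estimate of beta: {beta_gme:.3f}")
```

The non-negativity restrictions (restriction 2) enter through the bounds; the data constraints pin down β while entropy maximization keeps the probabilities as uniform as the data allow.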
Introducing the GME estimator
• Advantages:
- No distributional assumptions on the errors are required
- Robustness for a general class of error distributions
- Good with small samples and ill-posed design matrices
- Allows inequality constraints on the parameters
• Drawbacks:
- Cumbersome for models with many parameters/errors
- Not very suitable for "big data" problems
2. The MEM with one composite indicator
the MEM with one composite indicator
• Consider the MEM with multiple indicators:

y = η + ε = β·ξ + ε
x_j = ξ + δ_j,  j = 1, 2, …, J

with (η, ξ) latent variables and β the structural parameter
the MEM with one composite indicator
• Classical solution: use the (equal-weight) composite indicator

ξ̂ = Σ_j x_j / J

to compute β̂_OLS = Cov(y, ξ̂)/Var(ξ̂)
and obtain the OLS estimator adjusted for attenuation

β̂_OLSA = β̂_OLS / κ̂_ξ

with the estimate of the reliability index

κ̂_ξ = J·r̄_X / (1 + (J − 1)·r̄_X)

where r̄_X is the average correlation among the J indicators
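As a quick numerical illustration of this classical solution, the steps above can be sketched as follows (the simulated sample sizes and noise levels are illustrative assumptions, not values from the talk):

```python
# Attenuation-adjusted OLS for the MEM: equal-weight composite,
# Spearman-Brown reliability, and the OLSA correction.
import numpy as np

rng = np.random.default_rng(1)
n, J, beta = 60, 4, 0.5
xi = rng.normal(0.0, 1.0, n)                     # latent variable
X = xi[:, None] + rng.normal(0.0, 0.6, (n, J))   # J parallel indicators
y = beta * xi + rng.normal(0.0, 0.3, n)

xi_hat = X.mean(axis=1)                          # equal-weight composite
beta_ols = np.cov(y, xi_hat)[0, 1] / np.var(xi_hat, ddof=1)

R = np.corrcoef(X, rowvar=False)
r_bar = R[np.triu_indices(J, k=1)].mean()        # average inter-item correlation
kappa = J * r_bar / (1 + (J - 1) * r_bar)        # reliability of the composite

beta_olsa = beta_ols / kappa                     # adjusted for attenuation
print(beta_ols, kappa, beta_olsa)
```

Because the composite measures ξ with error, β̂_OLS is biased toward zero; dividing by κ̂_ξ undoes that attenuation.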
the MEM with one composite indicator
• GME solution: using a sample (y_i, x_ij) of n data, maximize the Entropy Function H(p^β, p^δ, p^ε) for the data-model

y_i = β·(ξ̂_i − δ_i) + ε_i =
    = (Σ_k z_k^β p_k^β)·(ξ̂_i − Σ_h z_h^δ p_hi^δ) + (Σ_h z_h^ε p_hi^ε)  ∀i

subject to the related system of restrictions
the MEM with one composite indicator
• Choice of the support points:
- As usual, for z_k^β we use (−100, −50, 0, 50, 100)
- For z_h^δ and z_h^ε we use the 3σ rule with

Var(δ) = Var(ξ̂)·(1 − κ̂_ξ)
Var(ε) = Var(y)·(1 − ρ̂_ξy)

and the estimated adjusted correlation

ρ̂_ξy = Cor(ξ̂, y)/(κ̂_ξ)^1/2
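The support-point choice above reduces to a small computation; in this sketch the variance, reliability, and correlation values plugged in are illustrative assumptions:

```python
# 3-sigma rule for the delta and epsilon error supports, as described above.
import numpy as np

def three_sigma_support(var, points=3):
    """Symmetric support of `points` values spanning +/- 3*sqrt(var)."""
    s = 3.0 * np.sqrt(var)
    return np.linspace(-s, s, points)

var_xi_hat, kappa_xi = 1.09, 0.90   # Var(composite) and reliability (illustrative)
var_y, rho_xi_y = 1.30, 0.45        # Var(y) and adjusted correlation (illustrative)

var_delta = var_xi_hat * (1.0 - kappa_xi)   # Var(delta) = Var(xi_hat)*(1 - kappa)
var_eps = var_y * (1.0 - rho_xi_y)          # Var(eps)  = Var(y)*(1 - rho)
z_delta = three_sigma_support(var_delta)
z_eps = three_sigma_support(var_eps)
print(z_delta, z_eps)
```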
the MEM with one composite indicator
• Advantages:
- Considers the a priori information on δ and ε
- Obtains an estimate of the error terms

δ̂_i^GME = Σ_h z_h^δ p̂_hi^δ,  i = 1, 2, …, n

and therefore an estimate of the latent variable

ξ̂_i^GME = ξ̂_i − δ̂_i^GME,  i = 1, 2, …, n
the MEM with one composite indicator
• Simulation scenario:
- Normal distributions for ξ, δ_j and ε
- Four continuous multiple indicators x_j
- One structural parameter β = 0.5
- Six reliability levels κ_ξ = 0.70 (0.05) 0.95
- Two sample sizes n = 30, 60
- Average results over 2,000 random replications
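One replication of this scenario can be generated as follows. Only the data-generating step is sketched here; the noise level for y is an illustrative assumption:

```python
# Data-generating process for one replication: continuous parallel
# indicators whose equal-weight composite has a target reliability kappa.
import numpy as np

rng = np.random.default_rng(3)
n, J, beta, kappa = 30, 4, 0.5, 0.85   # one of the six reliability levels

# Invert the reliability formula kappa = J*r / (1 + (J-1)*r) to get the
# inter-item correlation r, then the item error variance for Var(xi) = 1.
r = kappa / (J - kappa * (J - 1))
var_delta = 1.0 / r - 1.0

xi = rng.normal(0.0, 1.0, n)                                   # latent variable
X = xi[:, None] + rng.normal(0.0, np.sqrt(var_delta), (n, J))  # four indicators
y = beta * xi + rng.normal(0.0, 0.5, n)                        # illustrative noise
```

Repeating this 2,000 times and averaging the OLSA and GME estimates reproduces the structure of the experiment described above.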
the MEM with one composite indicator • Results for the case n = 30:
the MEM with one composite indicator • Innovation example: concerns the 27 Countries of the EU from the Global Innovation Index 2012 Report, to study their innovation level
the MEM with one composite indicator
• We have also studied the case of the MEM with discrete multiple indicators
• We consider the Likert-type scale in the case of parallel measures x_j, j = 1, 2, …, J
the MEM with one composite indicator
• Likert-type scale with parallel measures
[Figure: probability density of the Standard Normal Variable with the probability mass of the corresponding 5-point Discrete Variable, for the Optimal (O), Right-Skewed (R) and Left-Skewed (L) cases]
the MEM with one composite indicator • Simulation results 1:
the MEM with one composite indicator • Simulation results 2:
the MEM with one composite indicator • McDonald example: Y is the overall satisfaction measured on a 10-point scale; the composite indicator is obtained using a 5-point Likert-type scale (1: very bad, 2: bad, 3: equal, 4: good, 5: very good) with 4 aspects:
the MEM with one composite indicator • McDonald example (n = 100)
3. The SEM with many Rasch measures
the SEM with many Rasch measures
• Consider the standard linear SEM:

η = Bη + Γξ + τ
y = Λ_Y·η + ε
x = Λ_X·ξ + δ

• The GME estimator uses the re-parameterization in terms of expectations of the matrices B, Γ, Λ and of the errors τ, ε, δ for the data-model

y_i = Λ_Y·(I − B)^−1·[Γ·Λ_X^−1·(x_i − δ_i) + τ_i] + ε_i  ∀i
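The data-model above can be checked numerically: simulate one observation from the structural equations, then confirm that the reduced form reproduces y_i. This is a minimal sketch with square loading matrices (so Λ_X is exactly invertible) and illustrative parameter values:

```python
# Reduced-form check for the standard linear SEM:
# y = Lambda_Y (I - B)^{-1} [Gamma Lambda_X^{-1}(x - delta) + tau] + eps
import numpy as np

rng = np.random.default_rng(2)
m, g = 2, 2                                 # latent endogenous / exogenous
B = np.array([[0.0, 0.3], [0.0, 0.0]])      # recursive structural matrix
Gamma = rng.normal(size=(m, g))
Lam_Y = np.eye(m)                           # square loadings (illustrative)
Lam_X = np.eye(g)

xi = rng.normal(size=g)
tau = rng.normal(scale=0.1, size=m)
delta = rng.normal(scale=0.1, size=g)
eps = rng.normal(scale=0.1, size=m)

# Structural and measurement equations
eta = np.linalg.solve(np.eye(m) - B, Gamma @ xi + tau)   # eta = B*eta + Gamma*xi + tau
x = Lam_X @ xi + delta
y = Lam_Y @ eta + eps

# Reduced-form data-model, recovering xi from the x-measurement equation
y_model = Lam_Y @ np.linalg.solve(
    np.eye(m) - B,
    Gamma @ np.linalg.solve(Lam_X, x - delta) + tau) + eps
print(y, y_model)
```

With non-square loading matrices a generalized inverse of Λ_X would take the place of the plain inverse.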
the SEM with many Rasch measures • The ICSI-SEM example: a representation of the subjective quality of work in the Italian social cooperatives (ICSI2007 survey)
➸ 9 composite indicators and 5 latent variables
the SEM with many Rasch measures
• Two-step estimation approach:
• 1st Step - from the discrete multiple indicators (Likert-type data), construct the composite indicators with the Rasch Rating Scale Model
• 2nd Step - use the GME estimator of the parameters, considering for the errors the reliability levels of the composite indicators
the SEM with many Rasch measures
• GME measurement parameters and errors:
the SEM with many Rasch measures • GME structural parameters and errors:
• Correlation matrix of the GME estimated LVs:
Epilogue
• Simulations suggest that the GME estimator performs as well as the OLSA estimator with relatively small samples
• The two-step approach has some advantages (reliability versus substantive research)
• The GME allows the reconstruction of the LVs
• Some computational problems remain with big datasets
Thank you
This project has received funding from the European Union’s
Seventh Framework Programme for research, technological
development and demonstration under grant agreement n° 320270