‘PERCEIVED’ COMPETITION AND PERFORMANCE IN ITALIAN
SECONDARY SCHOOLS:
NEW EVIDENCE FROM OECD-PISA 2006
Tommaso Agasisti* and Samuele Murtinu
Department of Management, Economics and Industrial Engineering
Politecnico di Milano
Via Lambruschini 4b, 20156 Milano (Italy)
* Corresponding author. Email: [email protected]. Tel.: 0039-2-23993963. Fax: 0039-2-23992710.

Abstract
In this paper, we investigate the effects of competition on the performance of
Italian secondary schools as measured by Maths achievement scores (PISA
2006 dataset). Competition is measured by an indicator of ‘perceived’
competition (generated from an answer provided by the schools’ principals).
The methodology employed is a propensity score matching that is corrected to
take into account heteroskedasticity and finite sample bias. The results show a
positive effect of competition on school performance. Nevertheless, this effect
is quite low (between 3.62% and 4.05% computed at the average score level)
and is consistent with previous findings about educational systems in Italy and
worldwide. This is relevant for policy-making because competition appears to
impact school performance even in a country like Italy where specific pro-
competitive policies are quite absent.
Keywords
School performance, Competition, Propensity Score Matching
JEL Codes: I21
1. Introduction and objectives
International comparisons of students’ test scores, such as PISA and TIMSS,
have become widespread and raised many questions about the determinants of
a school’s performance. This issue is particularly relevant in Italy, given that
Italian students perform well below the OECD average (Montanaro, 2008).
Previous studies on school performance have shown that many factors
contribute substantially (see the meta-analysis recently proposed by
Kyriakides et al., 2010), such as the socio-economic status of the student, the
educational background of the family, the location of the school (schools
located in Northern Italy perform much better), and the ownership of the
school (as private schools have worse results than public; for a review of this
evidence about the Italian educational sector, see Bratti et al., 2007).
Little attention has been paid to the role of institutional factors, such as the
autonomy of schools or the competition among schools. In this work, we focus
on the latter aspect. Data from the PISA 2006 dataset show that many school
principals report the presence of competitive pressure.
To some extent, economic theory predicts that the presence of competitive
forces should have an effect on performance (Belfield and Levin, 2005, pp.
28-33). If there are many schools competing in the same area, then schools
face an incentive to improve their quality (performance) to attract students.
Clearly, for this mechanism to operate, many conditions must be met: (i)
families must decide on the basis of quality considerations, (ii) there must be
many schools in the same area, and (iii) school funding must be related to the
number of students so that schools will decide to maximise their student load.
Although these characteristics are met in some international educational
contexts (e.g., England – Bradley and Taylor, 2010), it is questionable whether
they are operating in Italy. Specifically, in Italy, school expenditures are
related to the number of teachers, the number of teachers is regulated on the
basis of the number of students, and the number of students is mainly
regulated by public authorities (e.g., Regional Offices for Education – Uffici
Scolastici Regionali); thus, the schools have very little autonomy to determine
their student numbers. However, non-financial mechanisms may also be
relevant. Thus, it is still important (and probably more interesting than in other
countries) to analyse whether competition is actually a force operating in the
Italian educational context. Although the regulatory framework is still
centralistic (i.e., the Ministry of Education makes decisions on several school-
level activities), this situation has been gradually evolving since national law
n. 59/1997 and Decree of the President of the Republic n. 275/1999 were enacted.
Indeed, these regulations transferred autonomy to schools in some fields; this
autonomy included (i) the definition of their educational programs, (ii) the
organisation of their activities, and (iii) the use of innovations in teaching
methodologies and initiatives. These margins of autonomy have provided an
incentive to schools to operate more strategically, and now (several years
after the above-mentioned measures took effect) schools can compete on the basis of their
different profiles. In the Italian educational system, competition of this kind
is likely to take the form of increasing differentiation. If a school’s decision makers
agree with this perspective, then the school’s activities (and results) should be
sensitive to the presence of ‘competitors’ in the same area.
To investigate the potential effect of competition on a school’s performance,
we adopt a perspective related to ‘perceived’ competition (that is, the school
head’s perception). We benefited from a particular characteristic of the PISA
2006 questionnaire, which is the presence of the following question to school
principals: “Which of the following statements best describes the schooling
available to students in your location? There are two or more other schools in
this area that compete for our students/ There is one other school in this area
that competes for our students/ There are no other schools in this area that
compete for our students”. We transformed the answer to obtain a dummy
variable: 0 indicates that there are no schools competing, 1 indicates
otherwise. Thus, we consider the presence of competition as the ‘treatment’
variable; the main hypothesis is that schools facing higher competitive
pressure (‘treated’ schools) should show higher results given their need to
respond to competition by increasing the quality of their activities.
Preliminary evidence for the existence of competitive pressure in the Italian
educational system can be found by looking at the question included in the
PISA questionnaire (see above). When compared with other OECD countries,
Italy emerged among those in which competition is highest: almost 70% of
school principals answered that their school is competing with two or more
schools (see Table 1).
<Table 1> around here
To summarise, in this paper we use PISA 2006 data to assess whether
competition among schools is an element affecting school performance. More
precisely, the main research question is whether ‘perceived’ competition has
affected the performance of schools, in terms of higher achievement in Maths
scores.
The analysis is conducted at the school level with a modified version of the
PISA 2006 dataset regarding Italian schools (details are provided in Section
3).
As is evident in the extant literature, selection bias lies at the heart of the
dispute over the effect of competition on school performance. The estimated
positive effect of competition might be a consequence of the non-random
selection of schools. Thus, the beneficial effect of competition may simply be
driven by selection bias. To isolate the specific effect of competition
on school performance, it is fundamental to control for the endogeneity of
competition. Our strategy addresses the risk of selection bias with a
matching technique based upon propensity scores wherein ‘perceived’
competition is considered as an endogenous treatment.
The remainder of the paper is organised as follows. Section 2 reviews the
relevant literature. Section 3 and Section 4 describe the data and the
methodology, respectively. Section 5 presents the results, and Section 6
discusses policy implications and concludes.
2. Previous literature
Many papers have attempted to estimate the effects of competition on school
performance. More specifically, several studies of this kind were conducted in
the US, where the debate about school choice and school competition is older
than in Europe and other areas of the world. Belfield and Levin (2002)
reviewed more than 40 studies and concluded that the
empirical evidence demonstrates a positive but small effect of competition on
academic results. The authors also presented more evidence on this topic in a
book (Belfield and Levin, 2005) concluding, “The above evidence shows
reasonably consistent evidence of a link between competition (choice) and
education quality. Increased competition and higher educational quality are
positively correlated. To an economist, this conclusion is highly plausible.
However, the simple summary fails to capture another important conclusion
from the evidence: the effects of competition on educational outcomes appear
substantively modest, between one-third and two-thirds of the estimates lack
statistical significance, and the methods applied are often multivariate
regressions” (p. 141). Among the studies conducted after the Belfield and
Levin (2002) review, it is important to recall the extensive work collected by
Hoxby (2003), who aimed at demonstrating a strong and statistically robust
effect of competition (school choice) in explaining higher school performance.
This book, which contains several papers conducting empirical analyses on
US data, not only revealed that it is possible to detect a (positive) effect of
competition on schools’ and students’ performances, but it also provided
important methodological and theoretical considerations on this topic.
Recently, Rouse and Barrow (2009), in their wider review about the effects of
vouchers on educational outcome, devoted a part of their contribution to the
issue of the effects of competition. Consistent with previous findings, they
suggested that (i) the existing evidence is still inconclusive but also that (ii)
the effects on school performance seem positive, although quite small in
magnitude.
Some evidence also exists about the educational systems in Europe. The main
portion of these studies relate to the UK, given the introduction of pro-
competitive policies (quasi-markets) there in the late 1980s (Glennerster,
1991). Woods and Levačić (2002) presented three case studies of
disadvantaged schools, focusing on their responsiveness to such policies. In
particular, they studied the barriers to responsiveness that impede school
improvement. A research group at Lancaster University investigated the
effects of competition on school performance over a long period (Bradley et
al., 2000; Bradley et al., 2001; Bradley and Taylor, 2002). All these studies
found strong evidence that the presence of a quasi-market has led to
substantial improvements in the performance of secondary schools. This
evidence has been recently tested using data from a longer period, and the
findings confirm the previous results (Bradley and Taylor, 2010). Among
recent studies, Allen and Vignoles (2009) estimated the impact of competition
induced by faith schools in the English educational sector, finding a positive
(and statistically significant) effect on the educational system.
Another paper about the UK that should be cited here is Levačić (2004)
because her approach is quite similar to ours in considering a measure of
‘perceived’ competition (the data were collected to measure “perceptions of
competitive conduct” and were derived from a survey of head teachers).
However, she also used a measure of competition calculated through
administrative data or concentration indices. Her results are intriguing: “The
structural competitive variables make only a modest contribution to explaining
perceptions of competition. (…) It was found that one of the indicators of
perceived competition (…) had a consistent and positive impact on both the
level of the GCSE results in 1997 and 1998, and its change over 1991-1998”
(p. 188).
Finally, some results about competition have also arisen out of the Swedish
experience with educational vouchers. Evidence from Sandström and
Bergström (2005) showed that the performance of public schools increased
because of competition.
In this paper we use Propensity Score Matching (PSM, hereafter) to estimate
the role of ‘perceived’ competition in explaining schools’ performance
differentials. Our approach shares methodological similarities with that
proposed by Lauen (2009). His dataset is limited to a program for enhancing
public school choice in Chicago, and his findings are “(…) a graduation
benefit of exercising school choice. (…) A propensity score analysis, using the
principal stratification approach, results in an estimate of 3.6 percentage
points” (p. 195). However, our study is different for two main reasons. First,
we use school-level data instead of student-level data. In fact, our results show
the benefits deriving from competition on the schools in that area and not the
effects on the individual students who are exercising a choice. Second, our
study involves a sample of schools selected from the whole country (Italy) and
it is not focused on a specific experiment in a single city.
3. Data
The main source of the data used here is the PISA 2006 dataset (OECD,
2007a, b; OECD 2005 for technical notes; http://pisa2006.acer.edu.au/ for
more details).
In this paper, we use average student achievement as the key (dependent)
variable to measure the effects of competition. We focused the analysis on
Maths scores.
Some technical issues must be discussed here. PISA does not provide point
estimates of student scores but instead provides five plausible values (PVs)
drawn upon test score distributions (OECD, 2005, pp. 71-80). Moreover, the
complex design of PISA (a two-stage stratified sample) requires using
Balanced Repeated Replications when estimating individual-level effects.
Because in this paper the analysis has been conducted at the school level, we
simply used an ‘average’ PV (that is, measured at the school level) to
overcome this problem. We used the easiest shortcut, which is to employ only
one PV per student (instead of five different regressions using five PVs); as
the OECD (2005) technical manual explains, this will provide unbiased
estimates1.
A further caution must be expressed here. The PISA standard rule is to not use
school-level data but to merge the school dataset into the student-level dataset
and use that for the analysis. Although we are aware of this, our focus is on
schools, not individuals. Thus, we felt that employing a school-level dataset
would be a better strategy. As a consequence, we lose a lot of useful
individual-level information, but it is information we are not interested in.
Further discussion may be helpful in this respect. The majority of the variance
in student achievement is situated at the student level; that is, there is much
more variation within schools (i.e., between students) than between schools.
Thus, when using school-level data, it is difficult to detect effects that are
statistically strong and of a relevant magnitude. Nevertheless, this is exactly
the challenge that we addressed in our paper in which we try to identify an
effect of a school-level characteristic – namely, the “competitive pressure”
under which each school operates.
1 The choice of using one PV instead of the five PVs is due to technical reasons. Indeed, the use of 5 PVs is very complicated; thus, a statistical procedure can be used to simplify the use of cognitive achievement scores. However, it is necessary to strictly follow the technical suggestions provided by OECD (2005). More specifically, “A common fatal error when analysing with plausible values involves computing the mean of the five plausible values, before further analysis” (p. 180). By contrast, “On average, analysing one PV instead of five PVs provides unbiased population estimates as well as unbiased sampling variances on these estimates” (p. 109).
In the matching approach used, we considered four categories of covariates to
control for: (i) macro-geographical area, (ii) general school characteristics, (iii)
school resources, and (iv) average socio-economic condition of the students
attending the school.
We first consider the macro-geographical area through dummies (Macroarea)
because the extensive work by Bratti et al. (2007) largely demonstrated that
there are huge differences in school performance in different areas of the
country – they argued that this variable accounts for about 25% of
achievement differentials.
We introduced several school characteristics as controls (Hanushek, 1986,
1999). First, there is a dummy for the size of the city (City) in which the
school is located (equal to one for schools located in a city, with village,
small town and town as the reference). Then, whether the school is a liceo
(academically oriented secondary school),
a technical school or a vocational school (reference category) is considered
(TypeSchool). As PISA tested students when they were 15 years old, some
lower secondary schools were included in the original dataset (because some
students repeat grades); following Bratti et al. (2007), we dropped these
observations from the analysis. The following characteristics were also
included: a dummy for private/public school ownership (Private); the sizes of
the school and the class (Size and ClassSize, respectively); and the percentage
of girls (GirlsPerc).
Some indicators of school resources were further added. Following a
consolidated approach (Hanushek, 1986, 1999), the student:teacher ratio (ST
ratio) is a proxy for human resource intensity. The proportion of the
computers connected to the web (CompWeb) is a proxy for ICT resources.
A measure of (school-average) socio-economic conditions was controlled for
with a proxy for the educational level of parents (HISCED). The literature
pointed out that the educational and socio-economic characteristics of students
attending a school are important for its performance (Bratti et al., 2007). The
school average of these indicators can therefore provide an indication of such
peer effects, albeit in an indirect way, because it measures parents’
characteristics rather than pupils’. All the indicators derived from the student-
level dataset have been weighted according to student weights.
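As an illustration of this aggregation, the sketch below (Python) averages one plausible value at the school level using student weights; PV1MATH, W_FSTUWT and SCHOOLID follow PISA naming conventions, but the records are toy values rather than the actual dataset:

```python
import pandas as pd

# Toy student-level records; PV1MATH and W_FSTUWT follow PISA naming
# conventions and SCHOOLID is the school identifier (values are illustrative).
students = pd.DataFrame({
    "SCHOOLID": [1, 1, 2, 2, 2],
    "PV1MATH":  [480.0, 455.0, 510.0, 495.0, 470.0],
    "W_FSTUWT": [1.2, 0.8, 1.0, 1.5, 0.5],
})

# Weight each student's first plausible value, then divide school-level sums
# to obtain the weighted school mean used as the dependent variable.
students["w_pv"] = students["PV1MATH"] * students["W_FSTUWT"]
grouped = students.groupby("SCHOOLID")[["w_pv", "W_FSTUWT"]].sum()
school_scores = grouped["w_pv"] / grouped["W_FSTUWT"]
print(school_scores)
```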
As described above, an indicator of ‘perceived’ competition was employed
here. This indicator was extracted from the PISA 2006 dataset and was used as
the ‘treatment’ variable. This indicator derives from the answer to the
following question in the school questionnaire: “Which of the following
statements best describes the schooling available to students in your location?
There are two or more other schools in this area that compete for our
students, there is one other school in this area that competes for our students,
there are no other schools in this area that compete for our students”. We
assigned 0 if there were no schools competing, 1 otherwise.
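As a concrete illustration, the recoding amounts to a single comparison; in this sketch (Python) the answer strings are paraphrases of the questionnaire options, not the actual codes stored in the PISA school file:

```python
import pandas as pd

# Paraphrased questionnaire answers (actual PISA variable codes differ).
answers = pd.Series([
    "two or more other schools compete",
    "one other school competes",
    "no other schools compete",
])

# 'Perceived' competition dummy: 0 only when no competing school is reported.
perceived_competition = (answers != "no other schools compete").astype(int)
print(perceived_competition.tolist())  # [1, 1, 0]
```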
Definitions of all the explanatory variables used in the econometric model are
reported in Table 2.
<Table 2> around here
Table 3 presents the descriptive statistics for the main variables used in this
study.
<Table 3> around here
4. Methodology
Our aim is to estimate the impact (i.e., the treatment effect) of ‘perceived’
competition on school performance. To obtain consistent estimates, the
selection bias problem must be addressed (as explained in Section 1).2 Any
systematic differences between ‘treated’ schools (schools that perceive
competition) and ‘untreated’ ones (schools that perceive no form of
competition, i.e., the control group) would cause the econometric results to be
unreliable. To some extent, the control group must consist of schools judged
to be comparable to the treated schools, except for not perceiving competition.
In this work, we resort to PSM. A propensity score (PS, hereafter) is a
conditional probability of participation in a treatment (in this work, the
treatment is ‘perceived’ competition). More specifically, we consider school
performance to be the realisation of two potential outcomes (Rubin, 1974): Yi1
(when school i perceives competition) and Yi0 (when school i does not
perceive competition). If we were able to observe both potential outputs for
each school, we would have no evaluation problems. In fact, we may take the
difference between Yi1 and Yi0 for each school and take the mean of that
difference for all schools. Given that this difference cannot be observed, the
evaluation problem is a typical problem of missing data (Heckman et al.,
1997). For each school, the researcher observes only one potential outcome; the
other represents the so-called unobservable ‘counterfactual result’. To deal
with this econometric problem, a matching methodology can be employed.
2 For more details, see the Appendix.
Under the conditional independence assumption (CIA) (Rosenbaum and
Rubin, 1983; Lechner, 2002), we have the conditional independence of school
performance from the treatment variable (perceived competition) given a set
of observable covariates. It is feasible to have a control group of schools that,
even if it is not drawn randomly from the population, is comparable with the
treated schools given the vector X. The vector X includes the observable
covariates used to estimate the propensity score (before the application of the
matching procedure). In this case, exact matching on X can be conducted to
provide a perfect identification of the parameter of interest, i.e., the treatment
effect of perceived competition on school performance.
However, a problem related to the implementation of exact matching is the
dimension of the vector X. If that vector is composed of many variables (as in
our case), the conditioning may be difficult. Therefore, we resort to PSM. The
utility of the PS derives from the fact that if school performance is
independent of the treatment conditional on X, it is also independent of the
treatment conditional on a non-linear function of X. Thus, propensity scores
simplify matching by reducing the dimensionality of the matching problem
(Wilde and Hollister, 2007). The idea behind this estimator is that if two
schools have the same PS but are in different treatment groups, the assignment
of the schools to the different treatment groups can be thought of as random.
PSM is a two-step methodology. First, we estimate the PS for each school
through a non-linear regression in X, namely a logit or a probit model. Second,
we match treated schools with one or more control schools with a tailored
distance metric. Then, we estimate the impact of perceived competition on
school performance.
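As a rough sketch of these two steps, the following Python fragment uses simulated school-level data (statsmodels for the logit step; the three covariates merely stand in for our four control categories, and none of this is the paper’s actual estimation code):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulated school-level data: covariates standing in for the control
# categories, plus a 'perceived competition' treatment dummy.
n = 200
X = rng.normal(size=(n, 3))
treated = rng.binomial(1, 0.7, size=n)

# Step 1: propensity scores from a logit (sm.Probit for the probit variant).
ps_model = sm.Logit(treated, sm.add_constant(X)).fit(disp=0)
pscore = ps_model.predict(sm.add_constant(X))

# Step 2: match each treated school to its nearest control on the score.
t_idx, c_idx = np.where(treated == 1)[0], np.where(treated == 0)[0]
nearest = c_idx[np.abs(pscore[t_idx][:, None]
                       - pscore[c_idx][None, :]).argmin(axis=1)]
```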
It is important to describe in detail the methodological procedure we used. As
shown in Table 3, among 732 schools (for which we have data on ‘perceived’
competition), 575 school principals report that there is at least one school
competing in the same area. First, starting from this sample of 732 schools, we
calculate the PS, namely the estimated probability that a school is in
competition with at least another school in the same area. We estimate this PS
through both a logit model and a probit model in which our dependent variable
is perceived competition. Among regressors, we include the four covariate
categories (geographical macro-area, general school characteristics, school
resources, and the average socio-economic condition of students attending the
school) explained above.3 At this step, we excluded 138 schools due to lack of
data on the covariates. This procedure left 466 schools with ‘perceived’
competition equal to one (“treated” schools), and 128 schools with ‘perceived’
competition equal to zero (“untreated” schools composing the control group),
either through the application of logit or probit.
3 Results from both logit and probit estimations are available upon request from the authors.
Second, we restrict our sample to common support. More specifically, for
each group of schools (“treated” and “untreated”), we look at the distribution
of the estimated propensity scores, looking for overlap in the PS distributions
of the treated and untreated schools. In fact, if either treated schools or control
schools with specific PS values have zero probability of occurring, we are not
able to identify and consistently estimate any effects in that region. To avoid
extrapolation, we restrict the sample by eliminating schools whose propensity
scores are not included in the common area of the two distributions. Thus, we
compare the two PS distributions and we restrict the sample. What ‘common
support’ means in practice is the following.
If for a specific PS value we have only one school, for this PS value we cannot
estimate a treatment effect, i.e., an effect of competition on school
performance. In other words, the only thing we can do is to match the only
school observed with this PS value (either a treated school or an untreated
one) to the closest school (in terms of its PS value) belonging to the other
group. The risk is that the difference in these two PS values might be very
large. In our case, if we did not restrict the sample to common support we
would match an untreated school with the nearest treated school whose PS is
more than 9% higher. With either the logit or the probit procedure, after the
restriction to common support we lost 6 schools. Thus, the total number of
schools in Tables 5a, 5b and 5c is 588.
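The following Python sketch illustrates the restriction on simulated scores; the min-max overlap rule used here is one common way to define the support and is an assumption of the example, not a transcription of our exact procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated propensity scores for treated and untreated schools.
treated = rng.binomial(1, 0.7, size=200)
pscore = np.clip(rng.beta(5, 2, size=200) - 0.15 * (1 - treated), 0.01, 0.99)

# Min-max overlap: keep schools whose score lies inside both distributions.
lo = max(pscore[treated == 1].min(), pscore[treated == 0].min())
hi = min(pscore[treated == 1].max(), pscore[treated == 0].max())
on_support = (pscore >= lo) & (pscore <= hi)
print(f"{(~on_support).sum()} schools dropped outside [{lo:.2f}, {hi:.2f}]")
```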
Third, we perform nearest neighbour matching with the Mahalanobis distance
(as suggested by Czarnitzki et al., 2007) by taking one control school for each
treated school. In so doing, we control for potential heteroskedasticity and
finite sample bias (as suggested by Abadie et al., 2004). Finally, we perform a
sensitivity analysis by taking two and three control schools for each treated
school to test the robustness of our results.4
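In spirit, the estimator behaves as in the sketch below (Python, simulated data). Note that the heteroskedasticity and finite-sample-bias corrections of Abadie et al. (2004) are omitted for brevity, so this reproduces only the ‘baseline’ logic of the matching step:

```python
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(2)

# Toy data: X matching covariates, d the treatment dummy, y the Maths score.
n = 300
X = rng.normal(size=(n, 4))
d = rng.binomial(1, 0.7, size=n)
y = 470 + X @ np.array([5.0, -3.0, 2.0, 1.0]) + 15 * d + rng.normal(0, 60, n)

def att_mahalanobis(y, d, X, m=1):
    """ATT from m-nearest-neighbour matching (with replacement) on the
    Mahalanobis distance; bias/heteroskedasticity corrections omitted."""
    VI = np.linalg.inv(np.cov(X, rowvar=False))   # inverse covariance matrix
    dist = cdist(X[d == 1], X[d == 0], metric="mahalanobis", VI=VI)
    nearest = np.argsort(dist, axis=1)[:, :m]     # m closest control schools
    y0_hat = y[d == 0][nearest].mean(axis=1)      # matched counterfactuals
    return (y[d == 1] - y0_hat).mean()

for m in (1, 2, 3):                               # sensitivity check on m
    print(m, round(att_mahalanobis(y, d, X, m=m), 2))
```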
Some advantages of PSM are suggested by Wilde and Hollister (2007):
“The recent work by Dehejia and Wahba (1999, 2002) gave rise to considerable interest in the potential for using PSM as a means of obtaining better nonexperimental impact estimates. [...] Dehejia and Wahba (1999, 2002) found that using propensity score methods, they could reasonably replicate experimental impact estimates. [...] Because of this, and the interest generated by the initial Dehejia and Wahba articles, policymakers and practitioners became excited about the potential use of propensity scores to produce comparison groups and thereby provide a credible nonexperimental evaluation as an alternative to a full experimental design evaluation study”.

However, PSM also has some drawbacks. The primary disadvantage of
matching methods is that the results are sensitive to both the functional form
(e.g., logistic or standard normal) chosen to estimate the PS and to the number
of control units to be matched with treated units, as demonstrated by Smith
and Todd’s (2001) re-analysis of the results obtained by Dehejia and Wahba
(1999). As previously mentioned, we perform two types of robustness checks:
i) we use two methods to estimate PS values (i.e., logit or probit), and ii) we
match different numbers (one, two or three) of control schools with treated
schools. As we show in Section 5, the results are very similar across the
two “directions” of robustness checks, thereby excluding the possibility that
our results are strongly data-driven.
4 In all three matching procedures, with one, two, or three untreated schools for each treated school, the matching procedure uses replacement. Thus, each untreated school may serve as a matched untreated school for more than one treated school (also due to the larger number of treated schools than untreated ones).
5. Results
Table 4 contains some preliminary descriptive statistics related to the output
values for the two groups of schools: the ‘treated’ (those for which principals
reported competition) and the ‘untreated’.
<Table 4> around here
It is noteworthy that the average scores are quite similar, and the t-tests
confirm this first impression. However, this apparent similarity in scores may
be driven by systematic differences between the two groups that mask the
‘real’ effect of competition on school performance. As explained in
Section 4, we employ a PSM approach to detect a potential influence of
competition on school output (as measured by Maths achievement scores) by
comparing schools that are as similar as possible in all observable
characteristics except for the value of the treatment variable (i.e., perceived
competition).
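For reference, a comparison of the kind reported in Table 4 can be sketched as follows (Python, with scores simulated to match the table’s group sizes and moments; this is not the actual PISA data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Simulated Maths scores matching the group sizes and moments in Table 4.
treated_scores = rng.normal(469.4, 69.2, 575)
untreated_scores = rng.normal(466.1, 70.22, 157)

t, p = stats.ttest_ind(treated_scores, untreated_scores)
print(f"t = {t:.2f}, p = {p:.2f}")  # small t: raw means are statistically similar
```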
Table 5a illustrates the main results obtained from the PSM analysis by taking
one control school for each treated school. The first two columns refer to PSs
estimated through a logit model; the third and fourth columns show PSM
results in which the PS is estimated through a probit model. In columns 2 and
4, we control for potential heteroskedasticity and finite sample bias.
<Table 5a> around here
The use of the PSM procedure allows a positive effect of competition on
school results to be detected. This effect is estimated to be between 17 and 19
points (it is important to remember that the average Maths scores for untreated
and treated schools are 466.1 and 469.4, respectively; see Table 4). Thus, the
impact of this effect due to competition when computed at the mean output
value is between 3.62% and 4.05%. This result is somewhat consistent with
findings reported by Agasisti (2011), who used a different measure of
competition (namely, the ‘density’ of schools at the regional level) and a
different empirical strategy – the specification of an educational production
function.
Tables 5b and 5c show the sensitivity analysis that was performed.
<Table 5b> around here
<Table 5c> around here
In each table, columns 1 and 2 show results from taking two control schools
for each treated school, and columns 3 and 4 show results from taking three
control schools for each treated school. In both tables, columns 2 and 4 show
results obtained through the use of the robust procedure (with corrections for
heteroskedasticity and finite sample bias). The results differ depending on the
procedure used to estimate the PS (logit versus probit in Tables 5b and 5c,
respectively). More specifically, the use of the robust procedure allows the
positive effects of competition on school results to be detected. These effects
are estimated to be between 14 and 16.62 points. Thus, the effect due to
competition, when computed at the mean output value, is between 2.98% and
3.54%.
6. Discussion
This study aims to contribute to the debate about the relationship between
competition and performance in the educational setting. In our opinion, this
study is noteworthy for three main reasons.
First, the PISA 2006 dataset was used to explore the effects of competition.
Indeed, a typical problem in this field is the limited availability of datasets. In
this case, the presence of a question specifically related to the role of
competition helped us to directly investigate this issue. We suggest that this
characteristic of the dataset could be used to analyse this topic in other OECD
countries and for comparisons between countries.
The second point is methodological, and it regards the use of PSM. This
methodological tool is not new in the educational context, and it has been used
for multiple purposes. For instance, PSM has been used to study the
effectiveness of private schooling (e.g., Vandenberghe and Robin, 2003),
Catholic schooling (Nguyen et al., 2006), and class size reduction (Wilde and
Hollister, 2007). However, the recent literature has adopted different strategies
(such as difference-in-difference estimates, educational production functions
with the inclusion of Herfindahl indexes, regression discontinuity design),
which often require panel data. In Italy, no such complete datasets exist. As a
consequence, the use of PSM strategies instead of simple cross-sectional
analyses can help to shed more light on the complex impact of competition on
school performance. An advantage of PSM, especially in comparison with the
instrumental variables (IV) approach or the Heckman selection estimator, is
that an exclusion restriction, i.e., a suitable instrumental variable for
identifying the effect of ‘perceived’ competition on school performance, is not
needed. In fact, it is assumed that the assignment of schools into the treatment
group (i.e., schools facing competition) and the control group (i.e., schools
that do not face competition) is based on observable differences between
schools in the two groups. However, it is worth stressing that in the presence
of relevant unobservable characteristics that impact the probability of
receiving treatment (i.e., of facing competition in this work) estimates
obtained through PSM might be biased (Abadie, 2005). Finally, it is worth
noting that we do not presume to estimate a causal link between perceived
competition and school performance. This is due to the cross-sectional nature
of our dataset. At the same time, the challenge of collecting data from other
sources about the same phenomenon is very important, as it allows the
reliability of the findings from the empirical analysis presented here to be
verified. The “generalisability theory” applied in the educational field focuses
on this task by providing a robustness check on the phenomenon (i.e., the
relationship between competition and school performances) and by estimating
possible sources of measurement error via multiple approaches (Shavelson, et
al., 1989; Brennan, 1992). Further research will be devoted to exploring the
role of alternative measures of competition. Moreover, alternative methods
will be applied to the same data, especially using student-level data together
with school-level data. In the present paper, we only considered school-level
data because our study focuses on school-level differences (schools influenced
by competition or not).
Finally, the third point concerns the results obtained through the empirical
analysis. Overall, the findings presented in this paper suggest that competition
exerts a positive influence on school performance. The limited extent of this
positive effect seems consistent with previous analyses conducted in the US
(Belfield and Levin, 2005). The relevance for policy-making here is that
competition forces appear to operate even in the absence of specific pro-
competitive policies. Previous studies on this topic have detected a positive
role of competition in countries where such policies (especially vouchers)
have been implemented: the UK (e.g., Bradley and Taylor, 2010), Sweden
(Sandström and Bergström, 2005) and Chile (McEwan and Carnoy, 2000) (a
preliminary overall evaluation of the Swedish and Chilean cases has been
provided by Carnoy, 1998). Our findings suggest that schools in which the governing body
“feels” the pressure of competition perform better on average than those that
are not subject to competition. These results shed light on one possible factor
that should be included in educational effectiveness analyses. Ample literature
looks for “school factors” affecting educational performance (Creemers, et al.,
1989; Creemers and Kyriakides, 2006), but little attention has traditionally
been paid to the potential role of competition. The results of this paper suggest
that indicators of competition should be included more frequently because
they could be very useful for understanding the “dynamic” of competitive
effects on schools’ activities and performance.
Thus, our results have some policy implications. To the extent that
competition affects performance positively, interventions that foster
competition should be promoted. Another point is notable here: our results
show the effects of ‘perceived’ competition. Thus, policies should foster the
perception of competition in the educational sector. In this respect, a typical
policy is the introduction of competitive allocation of public funds, e.g.,
through a funding formula based on the number of students and/or
performance indicators. In Italy, the present educational system lacks
competitive pressures of this type because the public funds follow the number
of teachers (supply-driven) instead of the number of students (demand-
driven). As a consequence, schools have an incentive to attract teachers
instead of students. Moreover, the strict regulatory framework imposes
standard allocations based on the student:teacher ratio, so the ‘market’ for
students is impoverished. In other words, schools do not compete for students
because increasing the number of students does not lead to more public
resources.
A further stimulus for future research regards the relationship between
measures of ‘perceived’ and ‘measurable’ competition. This issue was
previously explored in the context of the UK educational setting (Levačić,
2004); the author showed a poor correlation between competition as reported
by school principals and certain ‘objective’ measures of competition such as
the number of schools in the same area. Future studies in this field should
focus on this issue by addressing the determinants of ‘perceived’ competition.
For instance, is it clear that ‘perceived’ competition is determined by the
number of (potential) competitors in the area? Or are other factors more
relevant, for instance the quality of the competitors? If so, more research is
needed to explore the patterns of competition among schools.
Acknowledgements
We are grateful to two anonymous referees who provided valuable comments.
Any remaining errors are solely our responsibility.
We are also grateful to the Istituto nazionale per la valutazione del sistema
educativo di istruzione e di formazione (INVALSI), which provided us with the
Italian OECD-PISA2006 complete dataset.
References
Abadie, A. (2005) Semiparametric difference-in-differences estimators,
Review of Economic Studies, 72 (1), 1-19.
Abadie, A., Drukker, D., Herr, J., Imbens, G. (2004) Implementing
matching estimators for average treatment effects in Stata, Stata Journal, 4(3),
290-311.
Agasisti, T. (2011) Does competition affect schools’ performance? Some
evidence from Italy through OECD-PISA data, European Journal of
Education, forthcoming.
Allen, R., Vignoles, A. (2009) Can school competition improve standards?
The case of faith schools in England. Available online at
http://eprints.ncrm.ac.uk/1292/1/qsswp0904.pdf.
Belfield, C.R., Levin, H.M. (2002) The effects of competition between
schools on educational outcomes: a review for the United States, Review of
Educational Research, 72(2), 279-341.
Wilde, E.T., Hollister, R. (2007) How Close Is Close Enough? Evaluating
Propensity Score Matching Using Data from a Class Size Reduction
Experiment, Journal of Policy Analysis and Management, 26 (3), 455–477.
Woods, P.A., Levačić, R. (2002) Raising school performance in the League
Tables (Part 2): barriers to responsiveness in three disadvantaged schools,
British Educational Research Journal, 28(2), 227-247.
Tables
Table 1 – Competition among schools in the OECD area
(results from the PISA 2006 questionnaire)
How many schools in the same area compete for your students?

Country | Two or more other schools | One other school | No other schools
Indonesia | 90.0 | 4.8 | 5.2
Hong Kong-China | 89.6 | 9.2 | 1.2
Australia | 88.4 | 5.2 | 6.4
Slovak Republic | 85.0 | 6.4 | 8.6
United Kingdom | 83.7 | 8.7 | 7.6
New Zealand | 82.1 | 7.1 | 10.8
Japan | 82.0 | 7.6 | 10.4
Chinese Taipei | 80.9 | 12.7 | 6.4
Macao-China | 80.8 | 8.6 | 10.6
Latvia | 80.7 | 15.2 | 4.1
Korea | 75.7 | 8.7 | 15.6
Netherlands | 74.2 | 15.3 | 10.5
Czech Republic | 73.9 | 12.1 | 14.1
Ireland | 73.8 | 9.8 | 16.4
Montenegro | 73.6 | 24.9 | 1.5
Belgium | 71.9 | 18.6 | 9.4
Argentina | 71.3 | 9.4 | 19.3
Israel | 69.1 | 13.6 | 17.4
Italy | 68.8 | 12.0 | 19.2
Germany | 68.8 | 14.2 | 17.0
Mexico | 67.6 | 16.7 | 15.7
Bulgaria | 67.4 | 17.4 | 15.2
Thailand | 65.8 | 22.5 | 11.7
Croatia | 65.4 | 11.6 | 23.0
Chile | 63.9 | 17.3 | 18.8
United States | 63.6 | 10.5 | 25.9
Spain | 62.1 | 17.7 | 20.2
Hungary | 59.7 | 15.9 | 24.4
Denmark | 59.2 | 18.2 | 22.6
Canada | 58.8 | 18.5 | 22.7
Colombia | 57.6 | 18.2 | 24.2
Estonia | 56.4 | 22.2 | 21.4
Turkey | 52.7 | 15.6 | 31.7
Russian Federation | 51.1 | 16.9 | 32.0
Luxembourg | 51.0 | 15.7 | 33.3
Sweden | 49.6 | 13.5 | 36.8
Serbia | 49.6 | 23.4 | 27.1
Portugal | 48.2 | 24.7 | 27.1
Azerbaijan | 48.0 | 26.6 | 25.4
Kyrgyzstan | 47.6 | 17.3 | 35.1
Austria | 45.2 | 19.2 | 35.6
Greece | 44.8 | 14.9 | 40.3
Poland | 44.4 | 20.5 | 35.1
Lithuania | 42.1 | 30.4 | 27.5
Finland | 40.5 | 15.5 | 44.0
Slovenia | 40.2 | 12.2 | 47.6
Jordan | 36.4 | 19.4 | 44.1
Uruguay | 34.5 | 13.5 | 52.0
Romania | 31.1 | 23.7 | 45.2
Tunisia | 29.3 | 21.8 | 48.9
Brazil | 28.4 | 38.8 | 32.8
Qatar | 27.7 | 15.7 | 56.6
Switzerland | 27.5 | 14.1 | 58.4
Iceland | 22.8 | 5.0 | 72.2
Norway | 21.8 | 12.4 | 65.9
Source: PISA 2006 dataset. The columns represent percentages.
Table 2 – Definitions of explanatory variables
Variable | Description
Macroarea | Dummies for the three areas of Italy: Northern, Central and Southern Italy
City | Dummy to indicate whether the school is located in a city (or in a rural area)
TypeSchool | Dummies for separately considering vocational, technical or “academic” (licei) schools
Private | Dummy: 1 if the school is private, 0 otherwise
Size | Number of students in the school
ClassSize | Average size of the school’s classes (number of students)
GirlsPerc | Percentage of girls in the school
ST Ratio | Average number of students for each teacher in the school
CompWeb | Proportion of computers connected to the web in the school
HISCED | An indicator of family socio-economic status (school average)
‘Perceived’ Competition | Dummy: 0 if the school’s principal considers that there are no schools competing in the same area, 1 otherwise
Table 4 – Output values for ‘treated’ and ‘untreated’ schools

Maths score | N. obs. | Mean | Median | St. dev. | Min | Max | Skewness | Kurtosis
Treated schools | 575 | 469.4 | 472.0 | 69.2 | 277.4 | 726.9 | -0.081 | 2.594
Untreated schools | 157 | 466.1 | 461.5 | 70.22 | 275.8 | 625.2 | -0.095 | 2.60
Difference (t-test) | 3.2441 (6.2518)

The last row reports the result of a two-sample mean comparison t-test; the overall standard error is in parentheses.
Table 5a – Propensity score matching results

            | LOGIT m=1 baseline | LOGIT m=1 adjusted | PROBIT m=1 baseline | PROBIT m=1 adjusted
Maths score | 17.1083* (10.1840) | 17.0797** (7.7878) | 18.7283* (11.0613) | 18.7555** (9.0277)
Obs.        | 588 | 588 | 588 | 588

Notes: (1) Standard errors are in parentheses; (2) * and ** indicate statistical significance at 10% and 5%, respectively. In columns 1 and 2, the logit functional form was used to derive propensity scores; in columns 3 and 4, the probit functional form was used. The sample of schools was restricted to common support. Matching technique: nearest neighbours with Mahalanobis distance. m = number of schools used for matching. The adjusted results (columns 2 and 4) are corrected for both heteroskedasticity and finite sample bias.
Notes to Table 5b: (1) Standard errors are in parentheses; (2) * indicates statistical significance at 10%. The logit functional form was used to derive propensity scores. The sample of schools was restricted to common support. Matching technique: nearest neighbours with Mahalanobis distance. m = number of schools used for matching. The adjusted results (columns 2 and 4) are corrected for both heteroskedasticity and finite sample bias.
Notes to Table 5c: (1) Standard errors are in parentheses; (2) * and ** indicate statistical significance at 10% and 5%, respectively. The probit functional form was used to derive propensity scores. The sample of schools was restricted to common support. Matching technique: nearest neighbours with Mahalanobis distance. m = number of schools used for matching. The adjusted results (columns 2 and 4) are corrected for both heteroskedasticity and finite sample bias.
Appendix
Selection bias
To facilitate discussion, consider the following equation:
Yi = β Di + γ′ Xi + αi + εi     (1)
where Yi is the performance of school i; Di is a dummy variable that equals
one if school i faces competition; Xi is a vector of covariates; αi is a school-
specific shock in school performance that is unobservable by the
econometrician but may be observable by the school’s principal; and εi is the
usual error term, which is assumed to be uncorrelated with Di.
If Di is uncorrelated with αi, we can consistently estimate Eq. (1) using the
ordinary least squares (OLS) technique. If this assumption does not hold, OLS
estimates are likely to be biased. In fact, the treatment variable Di is correlated
with the composite error term (Grilli and Murtinu, 2011), which may lead to a
biased estimate of the coefficient β due to the non-random selection process.
For instance, if there is a variable unobservable to the econometrician
(therefore, it is not included in the model), there may be a correlation between
this variable and the likelihood of facing competition. This makes OLS biased
due to an incorrect assumption regarding the exogeneity of ‘perceived’
competition. Thus, we consider ‘perceived’ competition as an endogenous
variable.
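A small simulation (Python, with purely illustrative numbers) makes the bias concrete: when Di is positively correlated with the unobserved shock αi, OLS overstates β:

```python
import numpy as np

rng = np.random.default_rng(3)

n = 10_000
alpha = rng.normal(size=n)                        # unobserved school shock
d = (0.8 * alpha + rng.normal(size=n) > 0).astype(float)  # D correlated with alpha
y = 2.0 * d + alpha + rng.normal(size=n)          # true beta = 2

# OLS of y on a constant and d; the slope absorbs E[alpha | d].
Z = np.column_stack([np.ones(n), d])
beta_hat = np.linalg.lstsq(Z, y, rcond=None)[0][1]
print(round(beta_hat, 2))                         # clearly above the true 2
```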
The simplest approach to controlling for the endogeneity of the treatment
variable Di (resulting from its correlation with the error term) is the inclusion
in the regression of proxy variables to control for unobserved effects.
However, it is very difficult to know whether such controls represent school-
specific factors able to explain the likelihood of facing competition. In this
respect, it would be more technically feasible to assume that school
assignment into either the treated sample (i.e., schools facing competition) or
the control sample (i.e., schools that do not face any sort of competition)