Organizational Red Tape: The Conceptualization of a Common Measure

Mary K. Feeney
Assistant Professor
Department of Public Administration
University of Illinois at Chicago
Email: [email protected]

Prepared for the 11th Annual Public Management Research Conference, Syracuse, N.Y., June 2-4, 2011

Abstract

Multiple public administration survey research projects have asked respondents to assess the level of red tape in their organizations. Most of these surveys use the following questionnaire item: If red tape is defined as "burdensome rules and procedures that have negative effects on the organization's effectiveness," how would you assess the level of red tape in your organization? [Response categories are a scale from 0 (almost no red tape) to 10 (a great deal of red tape)]. Unfortunately, no research has tested the validity of this measure or the ways in which respondents may or may not be assessing red tape based on this definition or some other preconceived notion of "red tape." This research aims to test whether or not the Organizational Red Tape scale is capturing the multidimensional nature of red tape. Does the question wording bias individual perceptions of organizational red tape? Is the definition of red tape that is presented in the question necessary for guiding respondents and differentiating red tape from general rules? Does the definition exclude other important negative outcomes of organizational red tape, such as equity and fairness? This research tests the validity of the traditional Organizational Red Tape scale. In a 2010 national survey of 2,500 local government managers, we randomized the following four organizational red tape items: (1) the aforementioned original red tape scale, (2) a question focused on rules in general, instead of red tape, (3) a question focused on accountability, transparency, equity, and fairness instead of efficiency, and (4) an item that provided no red tape definition.
We use responses from these four items to investigate whether or not there is variation in perceived Organizational Red Tape based on the question wording. The findings from this research contribute to the red tape literature by providing the first empirical evidence that the Organizational Red Tape measure, commonly used in public administration research, is not capturing the multiple dimensions of red tape.
2002). Red tape researchers note that rules are relevant to other public administration values,
such as transparency, accountability, equity, representation, and fairness (Feeney, Moynihan, &
Walker 2010). Many of the 2010 Red Tape Workshop participants noted that developing a multi-
dimensional concept and definition of red tape would enable researchers to broaden the study of
red tape to consider these important public administration values. While there is some research in
public administration and policy that investigates how rules are related to values (Moynihan
study and others), this research is not described as red tape research. Moreover, red tape
researchers are not currently operationalizing measures or definitions that capture the multi-
dimensionality of red tape. While many researchers agree that red tape affects other values,
besides efficiency and effectiveness, there is no empirical red tape research or questionnaire
items that guide research subjects to conceptualize these multiple components of red tape.
Wright and colleagues (2004) argue that public administration researchers need to be much more
concerned with measurement issues and many red tape researchers are in agreement (Feeney,
Moynihan, & Walker 2010). Thus, this research takes a first step at testing the measure of
Organizational Red Tape and whether or not this common measure is missing important
dimensions.
Research Design and Data
This analysis uses data from a web survey on e-government technology and civic
engagement conducted by the Science, Technology and Environmental Policy Lab at the
University of Illinois at Chicago and supported by the Institute of Policy and Civic Engagement
(IPCE). The IPCE Local Government Survey was administered to government managers in 500
local governments with citizen populations ranging from 25,000 to 250,000. Because larger
cities often have greater financial and technical capacity for e-government, all 184 cities with a
population over 100,000 were selected while a proportionate random sample of 316 out of 1,002
communities was drawn from cities with populations under 100,000. The data are weighted to
reflect this sampling procedure.1 For each city, lead managers were identified in each of the
following five departments: general city management, community development, finance, the
police, and parks and recreation. A total of 2,500 city managers were invited to take part in the
survey. The survey began on August 2, 2010 and closed on October 11, 2010. A total of 902
responses were received for a final response rate of 37.9%.2
We designed the survey to randomly test a set of four questionnaire items for the
Organizational Red Tape measure. The four items had identical response categories, asking
respondents to rate the level of organizational red tape on a scale of 0 [Almost no red tape] to 10
[Great deal of red tape]. We label the four variations of the questionnaire item: Original Red
Tape, Rules Red Tape, Other Outcomes Red Tape, and No Definition Red Tape. The specific
questionnaire items are listed in table 1. The Original Red Tape item uses the definition that
first appeared in Rainey, Pandey, and Bozeman (1995) and was subsequently used in multiple
instruments (e.g. NASP I, NASP II, NASP III, Kansas Study, English Local Government study)
and publications. Since some researchers have argued that respondents might not know the
difference between red tape and rules in general, the second item, Rules Red Tape, does not
provide the respondent with a formal definition of red tape, but rather asks the respondent to
1 Weights for the data were calculated based on respondent city size (the sampling procedure). We used the
percentage of individuals per city grouping in the population and the percentage of individuals from those cities in
the sample to calculate weights that ranged from .42 (largest cities) to 1.34 (smallest cities).
2 The population size was reduced to 2,380 after removing bad addresses and individuals who were no longer
working in the position.
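The weighting described in footnote 1 can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the city counts reported earlier (184 large cities selected with certainty, 316 of 1,002 smaller communities) stand in for the respondent shares the footnote describes.

```python
# Post-stratification weights: weight = population share / sample share.
# City counts stand in for the respondent shares described in the footnote.
population = {"under_100k": 1002, "over_100k": 184}   # cities in each stratum
sample     = {"under_100k": 316,  "over_100k": 184}   # cities sampled

pop_total = sum(population.values())
samp_total = sum(sample.values())

weights = {
    group: (population[group] / pop_total) / (sample[group] / samp_total)
    for group in population
}
# Over-sampled large cities get a weight below 1 (about .42);
# under-sampled smaller cities get a weight above 1 (about 1.34).
```

With these counts the computed weights round to .42 and 1.34, matching the range reported in the footnote.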
think about burdensome rules and procedures that negatively affect the organization. Because
some researchers have noted that previous red tape research has focused on efficiency and
organizational performance, the item labeled Other Outcomes Red Tape includes a definition
of red tape as having negative effects on accountability, transparency, equity, and fairness.
Finally, the No Definition Red Tape measure provides no definition of red tape but simply asks
the respondent to assess the level of red tape in the organization.
[Insert table 1 about here]
Random Assignment: The four red tape items were randomly assigned to respondents
when they logged into the survey. Because some individuals reentered the survey, completing
portions of the survey during multiple sittings, they may have been reassigned a different red
tape item upon their second or third entry to the instrument (though they would only have
answered the item once). Of the 902 respondents to the survey, 863 completed the red tape
items.3 The Original Red Tape item had the fewest respondents (n=205) and the most
respondents completed the Rules Red Tape item (n=228). The mean response ranged from 4.40
for the Other Outcomes Red Tape measure to 5.36 for the No Definition Red Tape measure
(see table 2).
[Insert table 2 about here]
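The per-login random assignment described above can be sketched as follows. The survey platform's actual mechanism is not documented, so this is an illustration only.

```python
import random

# Sketch of the per-login random assignment described above (hypothetical
# implementation; the survey platform's actual mechanism is not documented).
ITEMS = ["Original", "Rules", "Other Outcomes", "No Definition"]

def assign_item(rng):
    """Draw one red tape item variant for a respondent at login."""
    return rng.choice(ITEMS)

rng = random.Random(2010)
counts = {item: 0 for item in ITEMS}
for _ in range(863):            # 863 respondents completed a red tape item
    counts[assign_item(rng)] += 1
# With simple random assignment, group sizes vary by chance around 863/4
# (about 216), much as the observed groups did (n = 205 to n = 228).
```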
Because this survey was administered to respondents from five departments in local
governments and from cities with populations ranging from 25,000 to 250,000, it is important to
confirm that each red tape item was randomly assigned to respondents from each category. Table 3
outlines the respondents for each red tape item by city department and city size. If we look at
respondents by city department, we see that between 21% and 30% of respondents from each
department responded to each red tape item.
Within each city size, we see a relatively stable distribution of responses per red tape
item. For example, 23% to 27% of individuals in the smallest cities completed each red tape
item. At least 19% of those in medium-sized cities completed each item. The lowest
proportion of responses was to the No Definition Red Tape item among those in cities with a
population ranging from 150,000 to 199,999 (12%).
3 Not all 902 respondents made it through the entire survey, since it was quite long. We retained all respondents who
completed more than 2/3 of the survey and who completed the e-government components of the survey, which was
the primary purpose of the study. Respondents who skipped the red tape items or did not complete the final pages of
the survey are still included in the overall study. The present analysis focuses on the 863 who completed the red tape
section.
[Insert table 3 about here]
Finally, to ensure that the four red tape items were administered randomly across the
sample, we compared each of the items by the following sample characteristics: gender,
education, race, age, and time working in the city. Table 3 indicates the mean values of each red
tape item by the characteristics of the sample. We see that 30% of the women in the sample
responded to the Rules Red Tape Measure and 21% responded to the Other Outcomes Red Tape
Measure. About one quarter of the men responded to each red tape item. In general, one quarter
of the women, men, college graduates, MPA holders, and white respondents answered each item.
The mean age of respondents for each item is noted in table 3 along with the mean number of
years that respondents have worked in the city. Comparison of means tests indicate that there are
no significant differences across the groups who responded to the four red tape items based on
city size, department type, gender, education, race, age, or time working in the city.
Methods
The empirical red tape literature indicates that the following individual and
organizational characteristics and factors are related to perceptions of red tape: alienation, job
satisfaction, public service motivation, organizational commitment, sector, role clarity, job
routineness, age, and a number of other factors. While an ideal study would have designed a
questionnaire that asked about each of these items in addition to randomly assigning the four red tape
items under study, the IPCE study was not designed primarily for the purposes of red tape
research. Thus, the present analysis will be restricted to investigating the ways in which the four
red tape items are related to the following organizational and individual characteristics: city size,
department type/function, organizational size, respondent gender, age, race, education level, and
job tenure. We then investigate the relationships between the red tape items and the following
managerial perceptions: Public Service Motivation, job satisfaction, centralization, personnel
flexibility, and bureaucratic personality. Specifically, we are interested in determining whether
these concepts and measures are differently related to the four red tape items under study. If the
red tape items are differently related to these measures, then it is possible that responses are
influenced by the ways in which we ask about red tape.
Before presenting the analysis, we describe the measures. City size is measured using
five dummy variables indicating city population: 25,000 - 49,999 (=1), 50,000-99,999 (=1),
100,000-149,999 (=1), 150,000-199,999 (=1), and 200,000-250,000 (=1). Department is captured
with five dummy variables: Mayor’s Office (=1), Community Development (=1), Finance
Department (=1), Parks & Recreation (=1), and Police Department (=1). Organizational
size is the natural log of a continuous variable indicating the number of full time employees in
the respondent’s organization. Female is coded one if the respondent is female, zero if male. Age
is a continuous variable. White is coded one if the respondent is white and zero if not. Education
is captured with three measures: College Graduate is coded one if the respondent graduated
from college, MPA is coded one if the respondent has a master’s degree in public administration,
public policy, or public service; and MBA is coded one if the respondent has a MBA. Job
Tenure is a continuous variable indicating the number of years that the respondent has worked
for the city.
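The variable coding described above can be sketched as follows. The field names and record format here are assumptions for illustration, not the study's actual codebook.

```python
import math

# Hypothetical sketch of the dummy-variable coding described above; field
# names and record format are assumptions, not the study's actual codebook.
def code_respondent(rec):
    """Map a raw survey record (a dict) to the analysis variables."""
    out = {}
    # City size: one indicator per population bracket.
    brackets = [(25_000, 49_999), (50_000, 99_999), (100_000, 149_999),
                (150_000, 199_999), (200_000, 250_000)]
    for lo, hi in brackets:
        out[f"city_{lo}_{hi}"] = 1 if lo <= rec["city_pop"] <= hi else 0
    # Organizational size: natural log of full-time employees.
    out["org_size"] = math.log(rec["fte"])
    out["female"] = 1 if rec["gender"] == "female" else 0
    out["white"] = 1 if rec["race"] == "white" else 0
    out["college_grad"] = 1 if rec["college"] else 0
    out["mpa"] = 1 if rec.get("degree") == "MPA" else 0
    out["mba"] = 1 if rec.get("degree") == "MBA" else 0
    return out
```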
Public Service Motivation is the sum of responses to seven items from Perry’s (1996)
original scale (see below). The survey had included 10 items from Perry’s (1996) original
measures of Civic Duty and Commitment to the Public Interest constructs, but a factor analysis
indicated that only seven of the items loaded together (Eigenvalue 3.534; %Variance explained
50.485). A scale reliability test indicated that these seven items have a Cronbach’s Alpha of .831.
1. I consider public service my civic duty.
2. I unselfishly contribute to my community.
3. I am willing to go to great lengths to fulfill my obligations to my country.
4. I believe everyone has a moral commitment to civic affairs no matter how busy they are.
5. It is my responsibility to help solve problems arising from interdependencies among people.
6. Meaningful public service is very important to me.
7. Public service is one of the highest forms of citizenship.
Job Satisfaction is measured on a five-point agreement scale (1=strongly disagree;
5=strongly agree) to the following item: "All in all, I am satisfied with my job." Centralization
is a summative scale comprised of the following three items: (1) There can be little action taken
here until a supervisor approves a decision; (2) In general, a person who wants to make his own
decisions would be quickly discouraged in this agency; and (3) Even small matters have to be
referred to someone higher up for a final answer. A low score on the Centralization scale
indicates low perceived centralization; a high score is high perceived centralization. The
Cronbach’s Alpha for the Centralization scale is .750. Personnel flexibility is captured by
summing the 5-point agreement scale responses to two survey items: (1) The formal pay
structures and rules make it hard to reward a good employee with higher pay here and (2) Even if
a manager is a poor performer, formal rules make it hard to remove him or her from the
organization. The summed scale ranges from 2 (low flexibility) to 10 (high flexibility). The Cronbach’s
Alpha for Personnel Flexibility is .652. Descriptive statistics for all variables are in Table 2.
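The reliability statistics reported for these scales can be computed with the standard Cronbach's alpha formula; the sketch below is illustrative, not the authors' code.

```python
from statistics import pvariance

# Standard Cronbach's alpha for a summative scale (illustrative; not the
# authors' code): alpha = k/(k-1) * (1 - sum(item variances) / var(total)).
def cronbach_alpha(items):
    """items: list of k lists, each holding one item's responses."""
    k = len(items)
    totals = [sum(resp) for resp in zip(*items)]   # summative scale scores
    item_var = sum(pvariance(it) for it in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))
```

Applied to the raw responses, this formula would reproduce the reported alphas for the Centralization (.750) and Personnel Flexibility (.652) scales.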
Analysis
The analysis is presented in three stages. First, using one-sample t-tests, we investigate
the ways in which responses to the four red tape items vary from one another and in comparison
to the Original Red Tape item. Second, through t-tests, analysis of variance, and OLS regression
we investigate the relationships between a number of organizational and individual
characteristics and the four red tape items. Third, we assess the linguistic difficulty of the red
tape items.
A one sample t-test enables us to test whether the sample mean significantly differs from
a hypothesized value. In this case, because we are interested in testing if responses to the Rules,
Other Outcomes, and No Definition Red Tape measures vary significantly from the Original Red
Tape scale, we use the mean response from Original Red Tape (4.84) as the test value. The one
sample t-test presented in Table 4 indicates that the mean responses for two of the items are
significantly different (p< .01) from the test value (the mean of responses to Original Red Tape).
Local government managers who responded to the Other Outcomes Red Tape reported a mean
value significantly lower than responses to the Original Red Tape item. Those who responded to
the No Definition Red Tape item reported organizational red tape levels that are significantly
higher than those who responded to the Original Red Tape item. In comparison, those who
responded to the Rules Red Tape reported perceived organizational red tape that did not
significantly differ from the mean response values in the Original Red Tape item.
[Insert table 4 about here]
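The one-sample t-test described above can be sketched as follows. The response list is fabricated for illustration; the test value 4.84 is the Original Red Tape mean reported in the text, and `scipy.stats.ttest_1samp` offers an equivalent that also supplies p-values.

```python
from math import sqrt
from statistics import mean, stdev

# One-sample t-test: does a group's mean differ from a fixed test value?
def one_sample_t(xs, mu0):
    n = len(xs)
    return (mean(xs) - mu0) / (stdev(xs) / sqrt(n))   # df = n - 1

# Hypothetical ratings; the test value 4.84 is the Original Red Tape mean.
responses = [6, 5, 7, 4, 6, 5, 7, 6, 5, 6]
t = one_sample_t(responses, 4.84)
# Compare |t| against the critical value for df = n - 1; with large samples,
# |t| > 2.58 corresponds to p < .01 (two-tailed).
```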
In summary, in comparison to the Original Red Tape item, respondents indicated
significantly different mean levels of organizational red tape when responding to the Other
Outcomes Red Tape and No Definition Red Tape items. The Original Red Tape item asks
respondents to rate red tape in their organizations as it relates to organizational effectiveness,
while the Other Outcomes item asks respondents to rate red tape as it relates to organizational
accountability, transparency, equity, and fairness. Respondents, when guided by the definitions,
are responding in significantly different ways. Moreover, we see that when provided with no
definition of red tape, respondents rate organizational red tape, on average, higher than when
asked about organizational effectiveness in particular. Thus, we see that the definition provided
in the questionnaire item accounts for some of the variation in organizational red tape
ratings.
Organizational and individual characteristics. Before investigating a causal model
predicting the red tape items, we ran t-tests and analysis of variance tests to investigate variation
across the four red tape items and organizational and individual characteristics. The results from
a two independent samples t-test for the Red Tape Items by gender, race, and education
indicate that there are no significant differences between the mean red tape scores for women
and men, whites and nonwhites, and those with an MPA or MBA. There is a statistically
significant difference between the mean score for Other Outcomes Red Tape for college
graduates and non-college graduates (t = 3.300, p < .05). There is also a statistically significant
difference in the mean score for No Definition Red Tape between whites and non-whites (-0.216, p <
.05). In other words, college graduates have a statistically significantly higher mean score on
Other Outcomes Red Tape than non-college graduates and whites have a statistically
significantly lower mean score on No Definition Red Tape than non-whites.
The ANOVA tested for within and between group variation. Table 5 notes the F-statistics
and significance level for the ANOVA tests. First, we see that the red tape items do not vary
significantly by MPA or MBA degree, organization size, or city size. We find significant
variance in responses to the No Definition Red Tape item by job tenure and gender. We also
find significant variance in responses to the Other Outcomes Red Tape item between those who
have a college degree and those who do not, and between respondents who are white and those
who are not white.
[Insert table 5 about here]
Previous research leads us to expect that job satisfaction is significantly related to red