ORGANIZATIONAL IMPACT OF MANAGEMENT THEORIES
Randall L. Schultz
University of Iowa
Management theories range from fads to those that become part of the
repertoire of decision making. This research classifies management
theories and proposes a "Beaufort-type" scale to measure organizational
impact.
March, 2006
MANAGEMENT THEORIES
One seemingly sure way to write a best-selling book is to come up with a new diet
or a new management theory. The routine for management theories is straightforward:
take an idea—however small—write a book, visit the talk shows, count the royalties. As
a preliminary step toward developing a new scale for measuring the impact of such
“theories” of management, we catalog most of the theories proposed over the past four decades.
USE OF MANAGEMENT THEORIES
Our primary question is: Are any of these theories actually used in corporations?
The nature of use is subtle. Take the theory of “core competency,” proposed by Hamel
and Prahalad (1990). Is the theory used if an organization talks about it, perhaps often,
and perhaps in meetings as a matter of course? Is the theory used if it becomes a part of
reports that state the firm’s core competence? Or should we demand something more than
that to consider a theory to be “used,” such as evidence that some or all decisions
are made with reference to core competence?
Even then, such “use” of the theory may make no difference to the actual
decisions. What, then, does this mean? This hierarchy of use parallels the hierarchy of
use observed in the implementation of models and systems in organizations (cf. Schultz,
Ginzberg and Lucas, 1984), where measures of use range from no use at all to change
without use—a situation where the very fact that the model or system was introduced to
the organization in some way accounts for a change in decision making, but not the
model or system itself.
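The hierarchy of use just described can be sketched as an ordered enumeration. Only the endpoints—no use at all and change without use—come from the text; the intermediate levels below are illustrative assumptions, not the published categories of Schultz, Ginzberg and Lucas (1984).

```python
from enum import IntEnum

class UseLevel(IntEnum):
    """Ordinal hierarchy of 'use' of a management theory.

    The endpoints follow the text; the intermediate levels are
    illustrative assumptions, not the published taxonomy.
    """
    NO_USE = 0              # theory never enters the organization's routines
    TALKED_ABOUT = 1        # discussed in meetings (assumed level)
    APPEARS_IN_REPORTS = 2  # cited in reports and plans (assumed level)
    GUIDES_DECISIONS = 3    # decisions made with reference to it (assumed level)
    CHANGE_WITHOUT_USE = 4  # decision making changed merely because the theory
                            # was introduced, not by the theory itself

# IntEnum members are ordered, so levels of use compare directly:
assert UseLevel.TALKED_ABOUT < UseLevel.GUIDES_DECISIONS
```

Because the levels form a single ordinal dimension, an observer need only locate an organization on it, not quantify anything.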
IMPACT OF MANAGEMENT THEORIES
Equally important is the question of impact: If a theory is used, did it
make any difference in performance? Like use, the nature of impact is varied.
There can be impact without change, change without impact, change with
impact, positive impact and negative impact. How can these types of impact
(and the types of use) be sorted out? We propose a simple scale that embodies
both use and impact so that the usefulness and consequences of all of these
popular—and not so popular—management theories can be judged for what
they are supposed to be: ways of improving organizational effectiveness.
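One way to see how these categories partition is to cross "change in decision making" (an indicator of use) with "change in performance" (an indicator of impact). The function below is a hypothetical sketch of that cross using the labels from the paragraph above; the boolean/threshold inputs are invented for illustration and are not the paper's proposed scale.

```python
def classify_impact(decisions_changed: bool, performance_delta: float) -> str:
    """Sort an observed outcome into the impact categories named in the text.

    decisions_changed: whether decision making visibly changed.
    performance_delta: change in an effectiveness measure (e.g., profit).
    Both inputs and the zero threshold are illustrative assumptions.
    """
    if not decisions_changed and performance_delta == 0:
        return "no impact"
    if not decisions_changed:
        return "impact without change"
    if performance_delta == 0:
        return "change without impact"
    if performance_delta > 0:
        return "change with positive impact"
    return "change with negative impact"

print(classify_impact(True, 0.0))   # change without impact
print(classify_impact(False, 2.5))  # impact without change
```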
LIMITATIONS OF EXISTING MEASURES OF USE
Most existing measures of use are of limited practicality, primarily because they
must be parameterized for each type of innovation and each organization. Consider the
most common way of looking at use in organizations: innovation and adoption.
ADOPTION
There is a vast literature on adoption that generally takes as its starting point the
influential book of Rogers (1995), the first edition of which was published in 1962. The
suggestion of Rogers was that innovation in organizations follows a five-stage process,
viz.
1. Agenda-Setting
2. Matching
3. Redefining/Restructuring
4. Clarifying
5. Routinizing
This process begins with agenda-setting, conceived of as continually in operation, with
action triggered by performance gaps between actual and desired states or goals.
Organizations scan the environment looking for new ideas—innovations—that could help
close the performance gaps. Matching is essentially a feasibility check and, according to
Rogers, this may result in termination of the idea if there seem to be too many problems
with fit. Although Rogers does not discuss this, it is implied that there would be some if
not considerable discussion about the innovation (i.e., talking about it in groups).
The first two stages constitute the initiation process and the last three the
implementation process. So, in this view, implementation begins only after
“talk” about fit. Redefining/restructuring implies that either the innovation or the
organization is changed so that the fit is improved. This concept is similar to the concept
of “organizational validity” used in the implementation research literature to show a pre-
condition for implementation (Schultz and Slevin, 1975a). The clarifying stage would
then occur (if the implementation proceeded), and here Rogers says that the innovation is
“put into more widespread use” (Rogers, 1995, p. 399). Since Rogers does not discuss
management innovations or theories per se, the nature of this use is not elaborated. But,
importantly, the innovation is linked with the question of who will be affected by the
implementation, especially the individual (“Will it affect me?”). This concept also finds
support in implementation research which has found that the single most important factor
is personal stake or what the innovation will do for the individual adopter (Schultz and
Slevin, 1975b).
Finally, in Rogers’ scheme, comes routinizing, where the innovation “has become
incorporated into the regular activities of the organization, and the innovation loses its
separate identity” (Rogers, 1995, p. 399). This may or may not imply “use by all,” and
for many innovations from information and decision support systems to management
theories (that involve certain reports and stylized calculations) there would be reason to
believe that they would not lose their separate identity. Indeed, that is one way to
measure their continued use—by looking for the reports, calculations or decisions that
give evidence of the innovation.
LIMITATIONS OF EXISTING MEASURES OF IMPACT
Like measures of use, most existing measures of impact are of limited usefulness,
primarily because they must be parameterized for each type of innovation and each
organization. In addition, there is a “pro-innovation” bias in most research such that the
negative consequences of use are often ignored.
ORGANIZATIONAL CONSEQUENCES
Rogers (1995) ends his book on diffusion of innovations with a chapter on
“organizational consequences,” by which he means “the changes that occur to an
individual or to a social system as a result of the adoption or rejection of an innovation”
(p. 405). This definition does not attempt to separate changes that may be indicators of
use from changes that may be indicators of performance gains or loss. Rogers points to
one problem with almost all studies of organizational consequences: a pro-innovation
bias that looks for positives from the innovation but not negatives. This problem clearly
needs to be dealt with in any measure of organizational impact.
From the perspective of organizational innovations (and it should be noted that
Rogers does not focus on organizational innovations when he discusses organizational
consequences), Rogers’ examples of consequences are all related to “performance,” e.g.,
increased production or greater expense (Rogers, 1995, p. 410). Such measures are
clearly part of any change in organizational effectiveness due to an innovation. The
Rogers approach, however, is limited by its inclusion of any change in an organization as
evidence of organizational consequences.
A better approach would be one that separates change as an indicator of use and
change that serves as an indicator of effectiveness. This is because the adoption process
can lead to changes in effectiveness without actual adoption (use), and to changes in
behavior that indicate use but are not necessarily followed by changes in
organizational effectiveness. In other words, merely “talking” about a management
theory may improve things, while actually using one may not.
ORGANIZATIONAL EFFECTIVENESS
The implementation literature has long focused on organizational effectiveness as
the appropriate measure of organizational impact (Schultz and Slevin, 1979). In addition,
one definition of implementation separates use from effectiveness by arguing that
implementation is changed decision making and successful implementation is improved
decision making (Schultz and Henry, 1981). This view allows “organizational
consequences” to fall into the two logical groups of indicators of use and indicators of
organizational effectiveness.
Performance
A more straightforward indicator of organizational effectiveness—and one that
applies particularly to management theories—is performance.
In the information systems literature, the main impact measure of model use has
been performance. Depending on the nature of the model, performance could refer to an
individual decision maker’s performance (Schultz, Ginzberg and Lucas, 1984; Lucas,
Ginzberg and Schultz, 1990) or an organization-wide measure of performance such as
profit.
“Good” Performance. To most managers and shareholders, good performance
means good financial performance, although other measures of success, such as rates of
technology development or new product success may also be meaningful. So any
management theory that improves performance would be considered as having led to
good performance.
“Bad” Performance. But the use of management theories doesn’t necessarily lead
to good performance, and businesses are all too aware of theories and plans leading
nowhere or, worse, to declines in performance. We must consider, then, bad performance
as a possible outcome of the use of a management theory that has had an impact, in this
case a poor one.
MEASURING THE IMPACT OF A THEORY
What is really interesting about a management theory is whether it has had an
impact on anything. Did it change the way decisions are made? Did it improve
performance? Did it simply lead to improvements without actually being “used”? These
questions suggest that a common scale of “force” of impact could be useful. Particularly
useful would be a scale that uses levels of force that are apparent to any observer. A
model of such a scale is the Beaufort scale for wind.
THE BEAUFORT SCALE
Although the scale bears his name, Admiral Francis Beaufort of the British Royal Navy
did not originate the “Beaufort Scale.” Attempts to measure wind force with a descriptive
scale were made many hundreds of years before Beaufort came up with his version in
1805. Not surprisingly, Beaufort scale points (13 at first, 12 later on) that ranged from
“calm” to “storm” were based on nautical observation of the wind by its effect on the
sails of a frigate. Thus, by 1838, the Royal Navy was using scale points that ranged from
Calm (0) to Hurricane (12) with descriptors such as Beaufort 1 (Light Air) “Just
sufficient to give steerage way” and Beaufort 11 (Storm) “With which she would be
reduced to storm staysails.”
More relevant to our current task is the version of the Beaufort scale for reckoning
wind force on land since that does not require sailing experience—especially in a frigate!
Any dictionary would have a table showing the Beaufort Scale. My old Webster’s New
Collegiate Dictionary (1960) has the definition shown in Table 1. The first thing to be
-------------------------------
Insert Table 1 about here.
-------------------------------
noticed is that the scale has 12 points, each with a name, although some of the names are
the same, e.g., two levels of “Strong” wind. Next, miles per hour have been estimated for
the various levels of force. While this is exceedingly useful with modern anemometers, it
is less useful to a casual observer who is simply out in the wind. The final column is what
is so special about the Beaufort scale and why it provides a prototype for a scale of
organizational impact. It can be seen that the descriptions are so universal that almost any
person would observe the same physical phenomena. The descriptions are so rich and
evocative—yet at the same time simply stated—that the picture they form is one that can
be easily recognized. More importantly, almost any observer would see the same thing
and thus arrive at the same level of wind force. This is what a good scale should do:
measure with accuracy independent of the observer.
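The observer-independence of the land version comes from pairing each level with a fixed observable effect and an estimated speed range. The sketch below uses the commonly published modern land descriptors and mph bands, which may differ in naming and exact ranges from the 1960 Webster's table cited above:

```python
# Commonly published land version of the Beaufort scale:
# (upper bound of estimated speed range in mph, name, observable effect on land).
# Names and speed bands follow widely used modern tables; they may differ
# slightly from the 1960 Webster's table discussed in the text.
BEAUFORT_LAND = [
    (1,  "Calm",            "smoke rises vertically"),
    (3,  "Light air",       "smoke drifts; wind vanes unmoved"),
    (7,  "Light breeze",    "wind felt on face; leaves rustle"),
    (12, "Gentle breeze",   "leaves and small twigs in constant motion"),
    (18, "Moderate breeze", "raises dust; small branches move"),
    (24, "Fresh breeze",    "small leafy trees begin to sway"),
    (31, "Strong breeze",   "large branches move; wires whistle"),
    (38, "Near gale",       "whole trees in motion; walking inconvenient"),
    (46, "Gale",            "twigs break off trees; progress impeded"),
    (54, "Strong gale",     "slight structural damage occurs"),
    (63, "Storm",           "trees uprooted; considerable damage"),
    (72, "Violent storm",   "widespread damage"),
]

def beaufort_from_mph(mph: float) -> int:
    """Return the Beaufort number (0-12) for an estimated wind speed in mph."""
    for number, (upper, _name, _effect) in enumerate(BEAUFORT_LAND):
        if mph <= upper:
            return number
    return 12  # Hurricane: roughly 73 mph and above

print(beaufort_from_mph(10))  # 3 (Gentle breeze)
```

An observer with only the third column—no anemometer—arrives at the same number as one using the speed bands, which is precisely the property an organizational impact scale should reproduce.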
THE ORGANIZATIONAL IMPACT SCALE
The Organizational Impact Scale is shown in Table 2.
-------------------------------
Insert Table 2 about here.
-------------------------------
This scale is currently being tested with data from corporations that have experience with
one or more—usually many—of the management theories and tools given in Table 3.
-------------------------------
Insert Table 3 about here.
-------------------------------
We have also expanded the scope of the study to include marketing theories and tools as
shown in Table 4.
-------------------------------
Insert Table 4 about here.
-------------------------------
We report the theories and tools under consideration here to invite comment on errors of
omission or commission.
CONCLUSION
This paper provides background and a preliminary scale for measuring the impact
of management theories on organizations modeled after a simple, but robust, weather
scale. We also provide the theories and tools under consideration. The empirical results
will be available in a revision to this paper.
REFERENCES
Hamel, Gary and C. K. Prahalad (1990), “The Core Competence of the Corporation,”
Harvard Business Review, 68 (3), 79-91.
Lucas, Henry C., Jr., Michael J. Ginzberg, and Randall L. Schultz (1990), Information
Systems Implementation: Testing a Structural Model. Norwood, NJ: Ablex Publishing
Corporation.
Rogers, Everett M. (1995), Diffusion of Innovations. New York: The Free Press.
Schultz, Randall L., Michael J. Ginzberg, and Henry C. Lucas, Jr. (1984), “A Structural
Model of Implementation,” in Management Science Implementation, Randall L.
Schultz and Michael J. Ginzberg, eds. Greenwich, CT: JAI Press, 55-87.
——— and Michael D. Henry (1981), “Implementing Decision Models,” in Marketing
Decision Models, Randall L. Schultz and Andris A. Zoltners, eds. New York: Elsevier
North-Holland, 275-96.
——— and Dennis P. Slevin (1975a), “A Program of Research on Implementation,” in
Implementing Operations Research/Management Science, Randall L. Schultz and
Dennis P. Slevin, eds. New York: American Elsevier Publishing Company, 31-51.
——— and ——— (1975b), “Implementation and Organizational Validity: An Empirical
Investigation,” in Implementing Operations Research/Management Science, Randall L.
Schultz and Dennis P. Slevin, eds. New York: American Elsevier Publishing
Company, 153-82.
——— and ——— (1979), “Introduction: The Implementation Problem,” in The
Implementation of Management Science, Robert Doktor, Randall L. Schultz and
Dennis P. Slevin, eds. North-Holland/TIMS Studies in the Management Sciences,