This article was downloaded by: 10.3.98.104
On: 13 Jan 2022
Access details: subscription number
Publisher: Routledge
Informa Ltd Registered in England and Wales Registered Number: 1072954 Registered office: 5 Howick Place, London SW1P 1WG, UK

The Routledge Companion to Production and Operations Management
Martin K. Starr, Sushil K. Gupta

Forecasting
Nada R. Sanders
Published online on: 22 Mar 2017
Publication details: https://www.routledgehandbooks.com/doi/10.4324/9781315687803.ch3

How to cite: Nada R. Sanders. 22 Mar 2017, Forecasting, from: The Routledge Companion to Production and Operations Management. Routledge. Accessed on: 13 Jan 2022. https://www.routledgehandbooks.com/doi/10.4324/9781315687803.ch3

Full terms and conditions of use: https://www.routledgehandbooks.com/legal-notices/terms

This PDF may be used for research, teaching, and private study purposes. Any substantial or systematic reproduction, re-distribution, re-selling, loan or sub-licensing, systematic supply or distribution in any form to anyone is expressly forbidden. The publisher does not give any warranty, express or implied, or make any representation that the contents will be complete, accurate, or up to date. The publisher shall not be liable for any loss, actions, claims, proceedings, demands, costs, or damages whatsoever or howsoever caused, arising directly or indirectly in connection with or arising out of the use of this material.



PART II

Core POM Functions



3 FORECASTING: State-of-the-Art in Research and Practice

Nada R. Sanders

1 Introduction to Forecasting in POM

Forecasts lie at the heart of Production and Operations Management (POM). They serve as a key input to all POM decisions such as inventory management, production planning, and scheduling, as well as operations strategy and product innovation. All these decisions are based on a forecast of the future, from long-term forecasts at the strategic planning level to short-term disaggregate forecasts at the SKU level used for tactical decisions. Improving forecasting performance has been shown to lead to significant benefits in both POM and across the supply chain (Oliva and Watson 2011; Moritz et al. 2014). However, implementation of forecasting processes and associated technologies is a challenge. Methodological advancements, available technology, and information access have elevated forecasting capability. In practice, however, forecasting processes still rely heavily on human judgment (Lawrence et al. 2006). Forecasts within the practice of POM are usually produced as a combination of statistical forecasts and judgment (Fildes and Goodwin 2007), where an initial statistical forecast is adjusted judgmentally. Therefore, understanding forecasting requires comprehending both statistical and judgmental methods, as well as the ways they can be combined in practice to improve performance.

Improving the practice of forecasting and extending relevant research requires understanding not only methodological capabilities and their use but also current challenges and shortcomings. In this chapter, we provide a state-of-the-art presentation of these critical issues. We begin with an overview of the practice of forecasting by looking at its far-reaching impact on POM decisions and organizational costs, as well as its role in non-typical POM environments. We then discuss the forecasting process and factors guiding method selection. Next, we present a deep discussion of forecasting methodologies, from judgment to statistical methods to combination methodologies that have been shown to be particularly successful in both research and practice. Finally, we consider the future of forecasting, including the role of big data analytics and information, and implications for both managers and researchers.

1.1 Forecasting Versus Planning

Forecasting is the process of predicting future events. This can range from forecasting product demand, such as demand for the next iPhone, to forecasting the passage of a healthcare bill in


Congress. Any prediction of future events is a forecast. Forecasting is one of the most important business activities because it drives all other business decisions. Decisions such as which markets to pursue, which products to produce, how much inventory to carry, and how many people to hire are all based on forecasts.

Consider the full range of POM decisions, from inventory ordering, production planning, and scheduling to project management. They are all based on a forecast of demand. In fact, the computations for order quantities and safety stocks contain a demand component obtained through a forecast. These decisions are all part of the planning process, which is often confused with forecasting (Armstrong 2001). Planning is the process of selecting actions in anticipation of a forecasted event. A forecast drives the plan that is made in response to it. As organizations attempt to decrease variability, they generate forecasts and plan their resources accordingly.

Planning involves the following decisions:

• Scheduling existing resources: One important aspect of planning is deciding how best to utilize existing resources. This includes decisions regarding product variety and volumes produced, transportation and deliveries to the right locations, labor allocation and utilization, facilities planning, and capital usage.

• Determining future resource needs: Organizations must also determine what resources will be needed in the future. These decisions depend on forecasts of emerging markets, future customer demands, technological advancements, and competition.

• Acquiring new resources: It takes time to acquire new facilities, new technologies, and new equipment, and to expand to new locations. Plans must be made well in advance, and procedures to acquire new resources and capabilities put in place ahead of time.

1.2 Demand Management

Forecasting and planning are intertwined and often confused in practice because companies have the ability to affect actual events, especially demand. This can be done through promotional campaigns and advertisements, by offering incentives to sales staff, and by cutting prices. This is called "demand management": the process of attempting to modify demand. It stands in contrast to supply management. Where supply management involves managing the supply base and sourcing decisions on the upstream side of the supply chain, demand management focuses on customers on the downstream side.

Demand management cannot occur without first having a forecast: a prediction of what future demand is going to be. Once a forecast and a resulting plan have been made, the organization may decide to "manage" demand in order to better utilize its resources, and the plan is reconfigured accordingly. This is shown in Figure 3.1. The forecast, an objective prediction, is the starting point of an iterative process. The plan follows the forecast as a course of action. However, demand management may modify the plan, such as with an advertising campaign, which then changes the forecast. The three elements each impact one another, as illustrated by the two-sided arrows.

The impact of forecasts also extends to the supply chain. Independent forecasting by members of the supply chain gives rise to the bullwhip effect (Forrester 1961), which is the increased volatility in orders as they propagate through the supply chain. The bullwhip effect occurs when each individual company in the supply chain forecasts its own demand, plans stock levels, and makes replenishment decisions independently of other companies in the chain (Chen et al. 2000; Lee et al. 1997; Lee et al. 2000). This creates volatility in orders, which makes forecasting more difficult, leads to increases in inventory throughout the supply chain, raises stock-out risk,


and results in inefficient use of working capital and production capacity. Thus, forecasting has an impact on the entire supply chain. One strategy for mitigating the bullwhip effect is the sharing of forecast information with supply chain partners, discussed later in Section 4.3.
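The amplification behind the bullwhip effect can be illustrated with a toy simulation. The sketch below is illustrative only, not a calibrated model: it assumes each echelon forecasts the orders it receives with a short moving average and uses a simple order-up-to rule with full inventory correction; the demand figures, policy, and parameters are invented for this example.

```python
import random
import statistics

random.seed(42)

def simulate_bullwhip(periods=200, stages=4, window=4):
    """Toy multi-echelon chain: each stage forecasts the orders it receives
    with a moving average and orders enough to cover the forecast plus a
    full correction of its inventory position (an amplifying policy)."""
    demand = [100 + random.gauss(0, 10) for _ in range(periods)]
    orders_seen = [demand]                       # orders arriving at the retailer
    for _ in range(stages):
        incoming = orders_seen[-1]
        inventory, placed = 100.0, []
        for t, d in enumerate(incoming):
            history = incoming[max(0, t - window):t] or [d]
            forecast = sum(history) / len(history)
            order = max(0.0, forecast + (forecast - inventory))
            inventory += order - d
            placed.append(order)
        orders_seen.append(placed)
    return [statistics.pstdev(o) for o in orders_seen]

variability = simulate_bullwhip()
print([round(v, 1) for v in variability])  # order variability grows upstream
```

Each entry in the printed list is the standard deviation of orders at one echelon; the upstream values exceed the retail demand variability, which is the bullwhip effect in miniature.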

1.3 Impact on Costs

Forecasts not only impact POM decisions; they directly impact organizational costs. A number of studies have looked at the relationship between forecast error and organizational costs in the POM environment. Their collective finding is that forecast errors can increase organizational costs by 10% to 30%. Furthermore, this impact depends upon the characteristics of the organizational environment.

Although the initial studies in this area made many simplifying assumptions, such as single-product production and a homogeneous workforce (Bowman 1963), they provided estimates of the strength of the relationship between forecast error and cost. Later studies further examined forecast errors in more realistic manufacturing settings. These studies confirmed that reducing forecast errors increases profitability and reduces cost, roughly by 10%–30%. The impact, however, depends on the structure of the POM environment, such as inventory ordering decisions and workforce scheduling policies (Metters 1997; Sanders and Graman 2015). Collectively, these studies demonstrate that the impact of forecast errors varies based upon characteristics of the POM environment, underscoring the relationship between forecasting, planning, and POM decisions.

2 The Forecasting Process

2.1 Steps in the Forecasting Process

Forecasts need to be credible in order to be justified to parties that include management, suppliers, shareholders, and other stakeholders (Makridakis et al. 1983). To ensure credibility, the forecasting process needs to follow an established set of steps regardless of what is being

Figure 3.1 The Iterative Cycle of Forecasting, Planning, and Demand Management. [The figure shows three linked elements: Forecasting (predicting future events), Planning (developing actions to respond to the forecast), and Demand Management (engaging in efforts to change demand).]


forecast or the model used (Armstrong 2001; Makridakis et al. 1982; Makridakis, Wheelwright, and Hyndman 2006). The steps are shown in Figure 3.2 and described in the following sequence:

a) Decide what to forecast: Although seemingly simple, this initial step requires thought. A forecast provides an answer to a question, and formulating the right question is critical. For example, there is an obvious difference between forecasting sales (dollars) versus demand (units). A little less obvious may be a scenario where a company forecasts demand to better manage inventory levels but finds that forecasting delivery lead times actually results in better performance. Forecasts are made in order to develop plans for the future, and it is important to first identify what forecasts are actually needed to develop them.

b) Analyze data: Before we can forecast, we must analyze the data in order to identify the patterns present and, as noted next, select the forecasting model most appropriate for the identified pattern. The most commonly observed data patterns are:

i) Level or horizontal: This is the simplest pattern and exists when data fluctuate around a constant mean. It is the easiest to predict and is common for commodity products in the mature stage of the life cycle, such as table salt or toothpaste.

ii) Trend: Trend is present when data exhibit an increasing or decreasing pattern over time. The simplest type of trend is a straight line, or linear trend, but a variety of trend patterns can exist such as exponential or logarithmic.

iii) Seasonality: Seasonality is any pattern that regularly repeats itself. Most intuitive are demand patterns that fluctuate with the seasons, such as sales of ice cream or snow shovels. However, any pattern that regularly repeats itself is a seasonal pattern, including holiday retail sales or an end-of-month production rush.

iv) Cycles: Cycles are patterns created by economic fluctuations. Examples include a recession, inflation, or even the life cycle of a product. The major distinction between seasonal patterns and cycles is that the latter do not have a repeating length or magnitude. As a result, they are the most difficult to predict.

In addition, data inherently contain a certain amount of random variation. The greater the random variation, the harder it is to forecast, as high degrees of randomness can obscure the pattern. Forecasting focuses on identifying the patterns in the data while attempting to smooth out the random variation, as shown below:

Data = Pattern + Random variation

Data = Level + Trend + Seasonality + Cycles + Random variation
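The decomposition above can be made concrete by building a synthetic series from its components. This is an illustrative sketch only: the level, trend slope, seasonal amplitude, and noise level are arbitrary choices, and cycles are omitted because they have no fixed period to parameterize.

```python
import math
import random

random.seed(1)

def make_series(periods=48, level=500.0, trend=2.0, season_amp=50.0, noise_sd=15.0):
    """Synthetic monthly series = level + trend + seasonality + random variation."""
    series = []
    for t in range(periods):
        seasonal = season_amp * math.sin(2 * math.pi * t / 12)   # 12-month season
        series.append(level + trend * t + seasonal + random.gauss(0, noise_sd))
    return series

data = make_series()
# Averaging over whole years cancels the seasonal component, so the
# difference between the two halves reflects mainly the trend.
print(round(sum(data[:24]) / 24, 1), round(sum(data[24:]) / 24, 1))
```

Because the components are known here, such a series is also useful for testing whether a chosen forecasting model actually recovers the pattern it was selected for.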

Figure 3.2 The Forecasting Process. [Five sequential steps: Decide What to Forecast → Analyze Data → Select Model → Generate Forecast → Measure Accuracy.]


c) Select the forecasting model: Once data patterns have been identified, we can select an appropriate forecasting model suitable for the identified data pattern. Not all models are suited to every pattern. Generally, two or three different forecasting models are selected and then tested on historical data to compare accuracy.

d) Generate the forecast: Once a model has been selected, the forecast is generated. This can be done through any number of software packages, spreadsheets, or even with paper and pencil.

e) Measure accuracy: After actual events have occurred, it is critical to measure forecast accuracy, a step often overlooked. Measuring accuracy provides information as to how well selected methods are performing, enabling process improvement. Data patterns change over time, and models that once provided good results may no longer be adequate.

At the simplest level, forecast error is the difference between an actual and a forecasted value; averaged across the measured time periods, it is called the average error (AE). Here, positive and negative errors offset one another, and the net effect reveals the direction of the forecast, or its bias. There are many other metrics that can provide insights into data characteristics and method selection (Hyndman and Athanasopoulos 2013). One such metric is the mean absolute deviation (MAD), computed as the average of the absolute errors rather than the algebraic errors. This provides a measure of average error regardless of sign or direction. Another metric is the mean squared error (MSE), computed as the average of the squared errors. Through the process of squaring, larger errors are penalized, making this an effective metric in situations where large errors are particularly costly. There are many other metrics, and each provides unique insights into the forecasting process. Therefore, it is practical for forecasters to use multiple metrics.
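These metrics can be sketched in a few lines. The four-period sample values below are invented for illustration; note how AE comes out to zero because the errors cancel (no bias), while MAD and MSE still register the error magnitude. MAPE, the percentage variant used later in this chapter, is included for completeness.

```python
def forecast_errors(actual, forecast):
    """Compute AE (bias), MAD, MSE, and MAPE for paired actuals/forecasts."""
    errors = [a - f for a, f in zip(actual, forecast)]
    n = len(errors)
    return {
        "AE": sum(errors) / n,                    # signed: reveals bias direction
        "MAD": sum(abs(e) for e in errors) / n,   # average magnitude of error
        "MSE": sum(e * e for e in errors) / n,    # squaring penalizes large errors
        "MAPE": 100 * sum(abs(e) / a for e, a in zip(errors, actual)) / n,
    }

# Invented four-period example: errors are -2, +2, -5, +5
metrics = forecast_errors(actual=[100, 110, 90, 105], forecast=[102, 108, 95, 100])
print(metrics)  # AE is 0.0 (errors cancel) while MAD is 3.5 and MSE is 14.5
```

This example shows why multiple metrics matter: judged by AE alone the forecast looks perfect, yet MAD and MSE expose substantial period-by-period error.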

2.2 Factors in Method Selection

Although there are many choices in forecasting methods, not all methods are appropriate for all forecasts. There are four factors to consider in method selection:

a) Amount and type of available data: Different forecasting methods require different types and quantities of data (Makridakis et al. 1983). Sophisticated quantitative methods, for example, may require large amounts of data, whereas simpler methods may not. In some cases—such as when forecasting new products—no historical data may be available. This factor plays a key role in the methods that can be selected. A note of caution, however, must be made here. Forecasts are only as good as the data they are based on, and the data must be credible. The old adage “garbage in, garbage out” is very applicable here. Therefore, all data must be checked for credibility and cleaned before use.

b) Degree of accuracy required: Sophisticated analytical methods may generate good forecasts, but they may be costly to develop. The cost of the forecasting method needs to be justified by the importance of the forecast. For example, forecasting demand for paper clips or rubber washers does not justify a costly method.

c) Length of forecast horizon: Some forecasting methods are better suited for short-term horizons, whereas others are better suited for the long term. It is important to select the method most appropriate for the horizon being forecast. For example, forecasting emergency room visits to a hospital during the month of December is going to require a very different forecasting method than forecasting demand for natural gas over the next ten years.


d) Patterns in the data: As already mentioned, critical to model selection is the model's appropriateness for the identified data patterns. In fact, a mismatch here virtually guarantees poor forecast performance. For example, it can be mathematically shown that forecasting a trend pattern with a model that does not address trend will result in the forecast lagging behind the actual data (Makridakis, Wheelwright, and Hyndman 2006).
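The lagging behavior is easy to verify numerically. The sketch below applies a three-period moving average (a model with no trend component) to a perfectly linear, invented series; the forecast trails the actual value by a constant amount.

```python
def moving_average_forecast(series, window=3):
    """One-step-ahead moving-average forecasts; forecasts[i] predicts
    series[window + i] using the preceding `window` observations."""
    return [sum(series[t - window:t]) / window
            for t in range(window, len(series) + 1)]

trend_data = [10 + 5 * t for t in range(10)]     # perfectly linear trend
forecasts = moving_average_forecast(trend_data)
# Compare each forecast with the value it predicts: the moving average
# always sits below the trending actual by a constant amount.
lag = [trend_data[t] - forecasts[t - 3] for t in range(3, 10)]
print(lag)  # → [10.0, 10.0, 10.0, 10.0, 10.0, 10.0, 10.0]
```

For a linear trend with slope b and window w, the lag works out to b(w + 1)/2, here 5 × 4/2 = 10, which is exactly what the code reports.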

3 Forecasting Methodologies

There is a broad range of forecasting methodologies, each with its own strengths and weaknesses. In this section, we look at each category in more detail. We then delve into the topic of combining methodologies, a subject that has received a great deal of attention in the forecasting literature (Armstrong 2001).

3.1 Categorization of Forecasting Methods

Forecasting methods can be classified into two broad categories: judgmental and statistical. Judgmental methods are based on subjective opinions and the judgment of individuals or groups of individuals, such as managers, sales staff, or customers. For example, asking customers whether they would buy a particular product is a type of judgmental forecasting method called an "intention survey." Another, the "sales force composite," has the sales staff make a group forecast of upcoming sales.

Because judgmental forecasts are made by people, these methods are subject to human biases. Statistical forecasting methods, on the other hand, are based on mathematical modeling. These methods are objective and consistent, and capable of handling large amounts of data and uncovering complex relationships.

Provided that good data is available, statistical methods are generally more accurate than judgmental methods (Lawrence et al. 2006). As shown in Table 3.1, however, both judgmental and statistical methods have their strengths and weaknesses. While statistical methods are objective and consistent, they require data in quantifiable form in order to generate a forecast. Often, such

Table 3.1 Comparing Qualitative and Quantitative Methods

I. Qualitative Forecasting Methods

Strengths:
• Responsive to change
• Incorporates "soft" information
• Compensates for unusual events
• Provides a sense of "ownership"

Weaknesses:
• Cannot process large data
• Inconsistent
• Inability to process complexity
• Biased (e.g., optimism, wishful thinking, political manipulation, lack of consistency)

II. Quantitative Forecasting Methods

Strengths:
• Can process complexity
• Objective
• Consistent
• Can process large data

Weaknesses:
• Accuracy dependent on data and model
• Slow to respond to change
• Costly to model "soft" information
• Requires technical understanding


data is not available, such as in new product forecasting or when making long-range strategic forecasts. Also, statistical methods are only as good as the data on which they are based. Judgmental methods, on the other hand, have the advantage of being able to incorporate last-minute "inside" or "soft" information. This may be last-minute notice of a competitor's advertising campaign, a snowstorm delaying a shipment, or a heatwave increasing ice cream sales. Judgmental methods also offer users a sense of "ownership," which cannot be discounted, as it is these very users who are responsible for implementing plans to respond to forecasts (Lawrence et al. 2006; Goodwin et al. 2007). Their trust in the forecast is important for implementation.

Therefore, each method has its role in the forecasting process, and a good forecaster learns to rely on both. In fact, there is ample evidence that merely relying on statistical forecasts generated via software can create problems. One notable case involved Nike some years ago (Koch 2004). The company relied on automatically generated statistical forecasts without a judgmental check. The result was widespread inventory shortages of fast-moving items and too much stock of slow-moving items. Next, we will look at each category and discuss ways that they can be combined.

3.1.1 Judgmental Forecasting Methods

Numerous judgmental forecasting methods exist, ranging in degree of formality and structure. These methods can be useful for incorporating information not available to a statistical model, such as information on special events or recent occurrences. However, they are fraught with human biases, including optimism, inconsistency, and political manipulation (Kahneman 2011; Hogarth 1987). As a result, they often exhibit large errors and exaggerations (Kahneman and Tversky 1979; Goodwin et al. 2007).

One such methodology is executive opinion, where managers, executives, or sales staff meet and collectively develop a forecast. This method is often used to forecast sales and market trends, make strategic forecasts, or forecast new products. It can also be used to modify existing forecasts to account for special events, such as slowed spending during a recession or a special promotional campaign. The advantage here is the ability to quickly include the latest information in the forecast. The disadvantage is the risk of contaminating the forecast with human biases.

Another method is market research, which uses surveys and interviews to determine customer preferences and identify new product ideas. Although market research can be a good determinant of customer preferences, it has a number of shortcomings. One is the potential inadequacy of the survey questionnaire design. For example, subjects may be asked to identify a favorite hobby from a set of choices, such as gardening, fishing, cooking, or sports. The problem here is that the list is not exhaustive, and findings may incorrectly portray customer preferences.

One of the most popular judgmental methods is the Delphi technique, designed to develop a consensus among a group of experts on a particular topic (Landeta and Barrutia 2011; Kauko and Palmroos 2014). Examples may include forecasting the propagation of a disease, climate change, or technological innovation. The process involves choosing anonymous experts in the field of inquiry. Questionnaires are sent to the experts, the findings summarized, and the process repeated with updated questionnaires that incorporate the initial findings. This continues until consensus among the experts is reached. The Delphi technique is based on the assumption that an outcome the experts agree on has a high likelihood of occurrence. The researcher's job is to extract this information, and this role requires significant expertise. Although time consuming, the Delphi technique has been shown to be an excellent method for forecasting long-range product demand, technological change, and scientific advances in medicine.


3.1.2 Statistical Forecasting Methods

Statistical forecasting methods can be divided into two broad categories: time series and causal models. Although both are mathematical, the two categories differ in their assumptions and forecast development.

Time series models generate a forecast from an analysis of a "time series" of data: a simple listing of data points of the variable being forecast, taken at regular intervals over time. For example, student enrollment per semester at a university over the past five years is a time series. Time series models assume that forecasts can be made by modeling the patterns present in the data. There are many types of time series models, from simple moving averages to models such as Holt-Winters' exponential smoothing that can address both trend and seasonal data patterns.
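As a minimal illustration of a time series model, the sketch below implements simple exponential smoothing; the demand numbers and smoothing constant are invented. Trend and seasonal variants (Holt, Holt-Winters) extend the same recursion with additional smoothed components.

```python
def exponential_smoothing(series, alpha=0.3):
    """Simple exponential smoothing: F(t+1) = alpha * A(t) + (1 - alpha) * F(t).
    forecasts[t] is the forecast for period t, seeded with the first actual."""
    forecast = series[0]
    forecasts = []
    for actual in series:
        forecasts.append(forecast)
        forecast = alpha * actual + (1 - alpha) * forecast
    return forecasts

# Invented demand history for a level (horizontal) series
demand = [102, 98, 104, 100, 97, 103]
ses = exponential_smoothing(demand)
print([round(f, 1) for f in ses])
```

The single parameter alpha controls how quickly the forecast responds to recent actuals, which is part of why such models are easy to automate across thousands of SKUs.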

By contrast, causal models assume that the variable being forecast is related to other variables in the environment. For example, university enrollment may be related to unemployment rates, recession levels, or salary levels. In this case, the forecasting process involves identifying these relationships, expressing them in mathematical form, and using such correlative information to generate a forecast. A typical example would be regression models.
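A causal model in its simplest form is a one-variable least-squares regression. The sketch below uses invented data relating a regional unemployment rate to university enrollment, in the spirit of the example above; the figures are hypothetical.

```python
def fit_line(x, y):
    """Ordinary least squares for y = a + b * x (single explanatory variable)."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    slope = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
             / sum((xi - mean_x) ** 2 for xi in x))
    intercept = mean_y - slope * mean_x
    return intercept, slope

# Hypothetical data: regional unemployment rate (%) vs. enrollment (thousands)
unemployment = [4.0, 5.0, 6.0, 7.0, 8.0]
enrollment = [20.1, 21.0, 22.2, 22.9, 24.0]
a, b = fit_line(unemployment, enrollment)
prediction = a + b * 6.5   # forecast enrollment if unemployment reaches 6.5%
print(round(a, 2), round(b, 2), round(prediction, 2))
```

Unlike a time series model, the forecast here requires a value of the explanatory variable, so the quality of the enrollment forecast depends on how well unemployment itself can be anticipated.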

Time series models are generally easier to use than causal models. They are more readily available in forecasting software and easy to automate. Causal models, on the other hand, can be more complex, especially if relationships between multiple variables are being considered. However, time series models can often be just as accurate and have the advantage of simplicity. Numerous forecasting competitions have been conducted over the years in which the performance of different forecasting models is compared. Consistently, time series models such as exponential smoothing come out as top performers (Makridakis et al. 1982; Hyndman and Koehler 2006). Furthermore, time series models are easy to use and can generate a forecast more quickly than causal models, which require model building. For this reason, time series models are especially useful in the POM environment, where demand for large numbers of SKUs must be forecast on a regular basis.

3.2 Combining Forecasting Methods

Much evidence exists to support the combining of two or more independent forecasts. In one of the earlier studies, Winkler and Makridakis (1983) used 1,001 series to show that mathematically combining quantitative forecasts reduces forecast errors. Numerous other studies have supported the finding that combining independent forecasts improves accuracy (Lobo and Nair 1990; Batchelor and Dua 1995; Webby, O'Connor, and Lawrence 2001). A literature review of 209 studies by Clemen (1989) documented the benefits of combining, which are well established today. The overall conclusion from these studies is that gains from combining are greatest when the methods combined differ substantially and are based on different sources of information. Also, the constituent forecasts should be unbiased or have biases that cancel each other out. Lastly, combining is most effective when the correlation between the forecast errors of the constituent forecasts is low, meaning that each forecast brings different information to the integration process (Armstrong 2001; Armstrong et al. 2015).

In an organizational setting, judgmental and statistical forecasts are typically based on different sources of information, making them ideal candidates for combining. The benefits of their mechanical integration have been supported by studies to date. For example, a study of forecasts of quarterly earnings used two judgmental and two quantitative methods for forecast generation (Lobo and Nair 1990). Combining the two judgmental methods reduced forecast error by 0.6%, as


measured by the mean absolute percentage error (MAPE). Combining the two quantitative methods reduced MAPE by 2.1%. However, combining one judgmental and one quantitative method reduced MAPE by 5.2% on average.
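The mechanics behind such gains can be demonstrated with a small Monte Carlo sketch: two unbiased forecasts with independent (low-correlation) errors are averaged, and the combined forecast shows a smaller MAD than either constituent. All numbers here are synthetic and purely illustrative.

```python
import random

random.seed(7)

def mad(actual, forecast):
    """Mean absolute deviation between paired actuals and forecasts."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

# Two unbiased "methods" with independent errors, mimicking, say, a
# judgmental and a statistical forecast of the same series.
actual = [100.0] * 2000
method_a = [a + random.gauss(0, 10) for a in actual]
method_b = [a + random.gauss(0, 10) for a in actual]
combined = [(fa + fb) / 2 for fa, fb in zip(method_a, method_b)]

print(round(mad(actual, method_a), 2), round(mad(actual, method_b), 2),
      round(mad(actual, combined), 2))  # the combined MAD is the smallest
```

With fully independent errors the combined error standard deviation shrinks by a factor of roughly the square root of two; correlated errors shrink it less, which is why low error correlation is listed as a condition for effective combining.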

3.2.1 Combining Judgmental and Statistical Methods

Historically, the forecasting literature has been divided on the relative value of judgmental versus statistical forecasting methods. A number of authors have suggested that judgment should not be given credibility due to its high subjectivity (Bazerman 1998). Many studies have pointed to the shortcomings inherent in judgmental forecasting due to the previously discussed limitations of human cognitive abilities (Hogarth 1987; Webby and O'Connor 1996). Other authors, however, have provided evidence to support the use of judgment in forecasting (Edmundson et al. 1988; Lawrence et al. 1985). In addition, studies of forecasting practices in business repeatedly show that practitioners rely heavily on judgmental forecasting methods (Dalrymple 1987; Mentzer and Cox 1984; Mentzer and Kahn 1995; Sanders and Manrodt 1994). As discussed earlier, the primary reason for this is that judgment is privy to the latest information on changes in the business environment. This may involve rumors of a competitor launching a promotion, a planned consolidation between competitors, or a sudden shift in consumer preferences due to changes in technology. Managerial judgment can be used to rapidly incorporate this information into the forecast. However, the biases inherent in judgmental forecasting can create large and volatile swings in forecast errors, which can have serious customer service and inventory implications.

Today, most researchers agree that judgmental and statistical forecasting methods each have their unique strengths and weaknesses. Statistical methods have the advantage of being objective, consistent, and capable of processing large amounts of data and considering relationships between numerous variables. However, they are only as good as the data upon which they are based. When changes occur in the data that are not incorporated in the model, the generated forecasts cannot be accurate. An ideal forecasting methodology is one that incorporates the advantages of both judgmental and quantitative forecasting approaches.

The literature review thus far provides support for the integration of judgmental and quantitative forecasts provided that certain criteria are met:

• Combined forecasts should be independently generated.

• Combined forecasts should have a low correlation of forecast errors, and should be unbiased or have biases that cancel each other out.

• Judgmental forecasts should be made in the presence of domain knowledge or based on specific information not available to the statistical model.

One integration methodology is to use judgment to adjust a statistically generated forecast based on contextual factors, a common practice in business (Turner 1990; Moritz et al. 2014). In a survey of ninety-six US corporations, 45% of the respondents stated that they always made judgmental adjustments to quantitative forecasts, 9% stated that they never made adjustments, and the remainder stated that they sometimes made adjustments (Sanders and Manrodt 1994). The primary reason given for judgmentally adjusting quantitative forecasts is to incorporate the latest knowledge of the environment, the product, or past experience into the forecast. The process is performed sequentially: the statistical forecast is generated first and then adjusted based on judgment. One advantage of this approach is that judgment can rapidly incorporate the latest information. Many studies have documented support for judgmental adjustment of statistically derived forecasts when contextual information is available (Sanders and Ritzman 2001; Fildes and Goodwin 1999). The recommendation is that judgmental adjustments should be made only when specific contextual information is available, including discontinuities in the data and pattern changes.
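The sequential adjust-after-the-model workflow described above can be sketched as follows; the exponential-smoothing baseline, the demand history, and the 10% promotion uplift are illustrative assumptions, not figures from the chapter:

```python
def exponential_smoothing(history, alpha=0.3):
    """One-step-ahead statistical forecast from a simple exponential smoother."""
    level = history[0]
    for y in history[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

history = [120, 132, 128, 140, 138, 145]   # past demand (invented)
statistical = exponential_smoothing(history)

# Step 2: a manager with contextual information (say, a known upcoming
# promotion) applies a judgmental adjustment to the statistical baseline.
promotion_uplift = 0.10                    # assumed +10% effect of the promotion
adjusted = statistical * (1 + promotion_uplift)

print(round(statistical, 1), "->", round(adjusted, 1))
```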

Another integration methodology is to mechanically combine two independently generated forecasts. The methods can be combined either objectively, such as with a simple average, or subjectively in light of specific contextual information. This type of integration has received much support from the literature and has been shown to improve accuracy (Clemen 1989; Armstrong 2001). There is general agreement in the forecasting literature that even a combination forecast generated as a simple average outperforms the individual forecasts being combined. Researchers generally prefer combining judgmental and statistical forecasts over judgmental adjustment, as the latter is more subject to judgment's negative effects (Armstrong 2001). Here, the constituent forecasts are made in parallel, with the final forecast generated as a mathematical combination of the two, minimizing the negative effect of bias. Still, the final forecast represents a merging of the information upon which the constituent forecasts are based.

The simplest approach to combining forecasts is using an equally weighted arithmetic average of the individual methods. Numerous empirical studies have found this method to be as effective as using weighted averages or regression-based weighting schemes (Makridakis and Winkler 1983; Clemen and Winkler 1986; Conroy and Harris 1987; Blattberg and Hoch 1990). Clemen's (1989) extensive literature review concludes that equal weighting provides improvements in accuracy for many forecasts. While more elaborate weighting schemes have been shown to be successful in other studies (Lobo 1991), their complexity and time requirements render them less desirable in an organizational context.
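A minimal sketch of the equal-weighted combination discussed above; the series are invented, with the judgmental and statistical errors deliberately constructed to be negatively correlated, the situation in which combining helps most:

```python
def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    return 100 * sum(abs(a - f) / abs(a) for a, f in zip(actual, forecast)) / len(actual)

actual      = [100, 110, 105, 120, 115]
judgmental  = [ 95, 118, 100, 128, 110]   # overreacts to contextual cues
statistical = [104, 104, 110, 114, 118]   # smooth but slow to adapt

# Equal-weighted (simple average) combination of the two forecasts.
combined = [(j + s) / 2 for j, s in zip(judgmental, statistical)]

for name, f in [("judgmental", judgmental), ("statistical", statistical), ("combined", combined)]:
    print(name, round(mape(actual, f), 2))
```

With these constructed series the combined forecast's MAPE falls below that of either constituent, mirroring the literature's finding that even a simple average tends to outperform the individual forecasts.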

Yet another way of combining forecasts is to use judgment in the selection and development of the statistical forecast. Judgment is used to select variables, specify model structure, and set parameters. Certainly, some judgmental inputs are required in the formulation of any statistical model. However, the integration methodology discussed here is one where judgment in the presence of domain knowledge can provide specific domain information that cannot be determined through statistical diagnostics. This methodology is often considered the least subject to the negative effects of judgmental biases while still capturing judgment's benefits.

Bunn and Wright (1991) identified four areas in statistical model building where judgmental inputs can be important: the selection of variables, model specification, parameter estimation, and data analysis. They suggest that while statistical diagnostics can assist with the identification of key causal variables, ultimately selection should be based on judgment. Specification of model structure and the estimation of parameters can easily be automated, especially when causal information can be quantified. Causal variables, such as the effect of an advertising campaign, can be handled automatically with techniques such as multiple regression. However, problems can arise when there are unaccounted "broken-leg" cues affecting the data (Kleinmuntz 1990). The "broken-leg cue," a term commonly used in the psychology and bootstrapping literature, describes cues that are rare but highly diagnostic (Meehl 1957). The idea comes from estimating the physical performance of an individual without knowing that the person has suddenly broken a leg: a rare event, but one that clearly disrupts mobility. When there are "broken-leg" cues, modeling procedures will not be effective, and judgment becomes critical.
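As a sketch of the automated causal modeling mentioned above, a least-squares regression with a 0/1 advertising-campaign indicator; the sales data, noise, and coefficients are fabricated for illustration:

```python
import numpy as np

# Hypothetical monthly sales driven by a trend plus a campaign effect.
trend    = np.arange(12, dtype=float)                     # time index 0..11
campaign = np.array([0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0], dtype=float)
sales    = 50 + 2.0 * trend + 15.0 * campaign + np.array(
    [0.5, -0.3, 0.2, -0.1, 0.4, -0.2, 0.1, 0.3, -0.4, 0.2, -0.1, 0.0])

# Design matrix: intercept, trend, campaign indicator.
X = np.column_stack([np.ones_like(trend), trend, campaign])
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)
intercept, slope, campaign_effect = coef

# Forecast month 12, assuming a planned campaign (a judgmental input).
forecast = intercept + slope * 12 + campaign_effect * 1
print(round(campaign_effect, 1), round(forecast, 1))
```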

One approach to using judgment as an input to model building is rule-based forecasting. Rule-based forecasting uses expert judgment to identify characteristics of the data and the causal forces affecting change as inputs into the statistical procedure (Collopy and Armstrong 1992). The premise of rule-based forecasting is that forecasting methods need to be tailored to a specific situation, and that expert judgment is best suited for this. Expert judgment can identify unusual patterns, the direction of causal forces, and the functional form of the time series. Rule-based forecasting provides rules, or instructions, on how to weight forecasts from a set of forecasting methods, based on experts and prior research. It is especially useful when domain knowledge indicates that recent trends may not persist and that there are changes in the direction of causal forces.
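A toy sketch of the rule-based weighting idea: rules keyed to expert-judged series features shift weight among candidate methods before combining. The features, rules, and forecast values are invented for illustration and are not Collopy and Armstrong's published rule base:

```python
# Candidate forecasts from different methods for the same horizon (invented).
forecasts = {"naive": 100.0, "trend": 112.0, "smoothing": 104.0}

def rule_based_weights(features):
    """Shift weight among methods based on expert-judged series features.
    The rules below are illustrative examples only."""
    weights = {"naive": 1.0, "trend": 1.0, "smoothing": 1.0}
    if features.get("causal_forces") == "opposing_trend":
        weights["trend"] *= 0.25     # distrust the extrapolated trend
        weights["naive"] *= 2.0      # lean on the conservative naive forecast
    if features.get("unstable_recent_pattern"):
        weights["smoothing"] *= 1.5  # favor the smoothed estimate
    total = sum(weights.values())
    return {k: w / total for k, w in weights.items()}

# An expert judges that causal forces oppose the recent trend.
features = {"causal_forces": "opposing_trend", "unstable_recent_pattern": True}
w = rule_based_weights(features)
combined = sum(w[m] * f for m, f in forecasts.items())
print({m: round(x, 2) for m, x in w.items()}, round(combined, 1))
```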

3.2.2 The Role of Domain Knowledge and Contextual Information

While there is substantial support for integrating the above methodologies, simply integrating arbitrary judgment with documented statistical procedures can harm accuracy. Important factors that give credibility to judgment are domain knowledge and contextual information.

Domain knowledge is knowledge that practitioners gain through experience as part of their jobs. As they become familiar with their environment, practitioners grow attuned to many cause-effect relationships and environmental cues, and come to understand which cues are significant and which may ultimately prove unimportant. Specific information available in the forecast environment, such as a price increase, an impending strike, or new policies that may affect forecasts, is called contextual information. Domain knowledge enables the practitioner to evaluate the importance of specific contextual information (Webby, O'Connor, and Lawrence 2001).

The value of domain knowledge has been demonstrated in a number of studies. Edmundson et al. (1988) conducted a study to evaluate the judgmental forecast performance of managers with three different levels of knowledge. The first level consisted of practitioners with considerable domain knowledge, having industry forecast experience and knowledge of the specific products being forecast. The second level was made up of practitioners with some domain knowledge, such as overall industry forecasting experience, but no knowledge of specific products. On the third level were students with no domain knowledge but with considerable technical knowledge, that is, knowledge of forecasting methodologies. The study found familiarity with the specific products being forecast to be the most significant factor in improving forecast accuracy. Other studies have also found support for domain knowledge (Sanders and Ritzman 1995).

Domain knowledge has also been shown to be important in the judgmental adjustment of quantitatively derived forecasts. Studying judgmental revisions of quantitative forecasts by individual experts with domain knowledge, Mathews and Diamantopoulos (1986; 1989) concluded that judgmental revisions led to improved accuracy. In another study, Mathews and Diamantopoulos (1990) compared judgmental adjustments made by product managers of a UK company to forecasts generated by a quantitative model, concluding that managers with a better understanding of the market conditions for their products tended to generate better revisions. Similarly, Huss (1986) found that judgmental adjustment by company experts of trends in electricity sales outperformed econometric methods. However, even in the absence of domain knowledge, judgmental adjustment can be beneficial provided that it brings information not available to the quantitative model, such as discontinuities or pattern changes in the data (Armstrong 2001; Armstrong et al. 2015).

4 The Future of Forecasting

Although the model categories and forecasting methods described in this chapter continue to be the state of the art in forecasting, a number of trends are changing the future. The first is information access, which has become more abundant than ever in both quantity and quality. Another is big data analytics, the computational ability to process and analyze the newly available data, providing unprecedented insights. Lastly, the sharing of information between entities and supply chain partners is improving accuracy and POM performance but is also creating risks such as security breaches. In this section, we discuss these emerging trends and their impact on forecasting.

4.1 Information Access

Today, organizations have access to an unprecedented amount of data. These large volumes of data provide the information needed for generating better forecasts. For example, sales data can be collected in real time through point-of-sale (POS) systems, shortening the forecasting cycle and enabling companies to respond quickly to the latest trends. Similarly, radio-frequency identification (RFID) tags gather data on inventories across supply chains, from quantities and location to ambient temperature and security breaches. These tags will number in the billions, with the 12 million RFID tags in use in 2011 projected to rise to 209 billion in 2021 (Manyika et al. 2011).

Increasingly, supply chains are combining data from different sources in order to coordinate activities across the POM environment. Marketing is generating huge volumes of POS data from retail stores that are automatically shared with suppliers for real-time stock level monitoring. RFID tags monitor inventory on shelves and in transit, coordinating with current stock levels for automatic order replenishment. Even greater benefits are achieved when companies are able to integrate data from other sources. This includes data from retailers beyond sales, such as promotion data including SKUs sold and prices. It also includes launch data, such as specific items associated with promotional ramp-up and ramp-down plans and their inventory consequences.

4.2 Big Data Analytics

The gathering of large amounts of data is part of the trend of big data analytics, which has had a huge impact on forecasting. Big data refers to datasets so large that they no longer fit into the memory that computers use for processing; such data must be captured, stored, communicated, aggregated, and analyzed in new ways. Analytics without big data is simply mathematical and statistical tools and applications, many of which, such as correlation and regression analysis, have been around for decades. It is the combination of big data with statistical algorithms, or analytics, fueled by today's computing power that creates the ability to extract meaningful insights and turn information into intelligence. The availability of big data and the advancements in machine intelligence have created significant new opportunities, given the size of memories and fast access to them.

Big data analytics has had a significant impact on forecasting, especially in the area of predictive analytics, a term coined to emphasize that all available resources are focused on making the desired prediction (Sanders 2014; Evans 2016). It creates the ability to foresee events before they happen by sensing small changes over time. For example, IBM's Watson computer uses an algorithm to predict the best medical treatments (Hempel 2013), and UPS uses analytics to predict vehicle breakdowns (Mayer-Schönberger and Cukier 2013). Sensors placed on machinery, motors, or infrastructure such as bridges can monitor the data patterns they give off, such as heat, vibration, stress, and sound, and can detect changes that may indicate looming problems, essentially forecasting an event.
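The monitor-a-proxy idea above can be sketched as a simple rolling-mean deviation check; the vibration trace, window, and threshold are invented for illustration, not any vendor's actual method:

```python
from collections import deque

def drift_alarm(readings, window=5, threshold=3.0):
    """Flag the first reading that deviates from the recent rolling mean
    by more than `threshold`, as a crude proxy-based early warning."""
    recent = deque(maxlen=window)
    for i, x in enumerate(readings):
        if len(recent) == window:
            baseline = sum(recent) / window
            if abs(x - baseline) > threshold:
                return i  # index of the suspicious reading
        recent.append(x)
    return None  # no anomaly detected

# Hypothetical vibration sensor trace: stable, then a sudden shift.
trace = [1.0, 1.1, 0.9, 1.0, 1.05, 1.0, 0.95, 5.2, 5.3, 5.1]
print(drift_alarm(trace))  # index 7: the jump to 5.2
```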

These developments may significantly change our ability to forecast. They may also mean fewer "black swans," a term used to describe high-impact, low-probability events (Taleb 2007). Historically, we have assumed that these could not be predicted. With big data analytics, however, that is rapidly changing. Big data analytics is improving forecasting so that the number of events we used to consider unpredictable and purely random is shrinking. We are now able to identify and spot changes in systems that indicate potential failure. Consider the accuracy of the prediction of Hurricane Sandy provided by the NOAA weather satellite in 2012; only a few years earlier, this type of event would have been considered a "black swan."

Spotting the abnormality early enables the system to send out a warning so that a new part can be installed, preparations can be made before an impending tsunami, or a malfunction can be fixed. The goal is to identify a good proxy for the event being forecast, monitor the proxy, and use the information to predict the future. External events, such as weather, traffic, or road construction, can be tracked, and the supply chain can respond. This ability is a game changer for forecasting, POM decision making, and risk management.

4.3 Information Sharing

Sharing information with supply chain partners has long been recognized as a key element in mitigating the impact of forecast errors (Aviv 2007; Cachon 2001; Cachon and Lariviere 2001). As a result, many companies are engaging in efforts to improve forecast performance through information sharing (Chen and Lee 2009; Fildes and Kingsman 2011; Lee et al. 1997). The merits of information sharing along the supply chain are well known (Lee, So, and Tang 2000) and have received considerable attention in the POM literature (Lee et al. 1997). For example, Fildes and Kingsman (2011) identify the management of inventories and the effects of sharing forecast information across the supply chain as particularly important findings in their review of forecasting and operations research. Chen and Lee (2009) report that it may be useful for the supplier to know the buyer's forecast or some other aspect of demand information.

Sharing information has been shown to significantly reduce forecast errors. For example, in an empirical study of a two-stage supply chain, Trapero et al. (2012) found that information sharing improved forecasting performance, lowering forecast error by 6–8 percentage points. Byrne and Heavy (2006) considered a complex supply chain structure and concluded that information sharing could reduce supply chain costs by up to 9.7%. Similarly, Ali and Boylan (2010) found forecast sharing to lead to inventory cost savings of up to 49.4%.

This role of information has led to the development of formal sharing processes across the supply chain. One is Collaborative Planning, Forecasting, and Replenishment (CPFR), which is a collaborative process of developing joint forecasts and plans with supply chain partners. The distinguishing feature of CPFR is that members of the supply chain collaborate on business plans and jointly execute the processes of forecasting and replenishment. Trading partners jointly set forecasts, plan production, replenish inventories, and evaluate their success in the marketplace (see www.vics.org). Given the benefits of formal information sharing and the advent of ever more information through big data, we can anticipate an increase in these types of collaborations.

5 Relevance for Managers

Methodological advancements, available technology, and access to information have often made forecasting more confusing for managers. There are, however, some lessons that emerge from our presentation of the state of the art of the field:

• Managers should not confuse forecasting and planning. Understanding and carefully separating these will provide greater objectivity and understanding of future events.


• Managers should follow the established forecasting process as described herein. Certainly, forecasts are rarely perfect. However, following this process will improve results and lend credibility to the forecasts.

• Managers should make sure that the data used are reliable and clean. It should not be assumed that more data is always better; managers should exercise caution with large data sets and question their reliability.

• Managers should always measure forecast performance using multiple performance measures. This includes not only measuring overall forecast accuracy but also measuring forecast bias and maintaining a history of errors. Following the process described in Figure 3.2, these metrics should be used to reassess the models used, better understand the data, and improve performance.
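The multiple-measure advice above can be sketched with two complementary metrics, overall accuracy (MAPE) and bias (mean error); the demand and forecast figures are invented:

```python
def mape(actual, forecast):
    """Mean absolute percentage error: overall accuracy, in percent."""
    return 100 * sum(abs(a - f) / abs(a) for a, f in zip(actual, forecast)) / len(actual)

def mean_error(actual, forecast):
    """Mean error: bias. Positive means persistent under-forecasting."""
    return sum(a - f for a, f in zip(actual, forecast)) / len(actual)

actual   = [200, 210, 190, 220, 205]
forecast = [190, 200, 185, 205, 195]   # consistently low: biased yet fairly accurate

print(round(mape(actual, forecast), 2))       # accuracy alone looks acceptable
print(round(mean_error(actual, forecast), 1)) # but the bias metric exposes it
```

A forecast can score well on accuracy while hiding a persistent bias, which is exactly why the chapter recommends tracking both, along with a running history of errors.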

6 Research Opportunities

Numerous research opportunities exist to improve the practice of forecasting and contribute to the current knowledge base. One opportunity lies in the role of information in reducing forecast errors. As discussed in Section 4, companies have access to an unprecedented amount of information. However, many questions remain about the role of this information. For example, a number of studies point to problems of sharing information across individual firms (Aviv 2007; Özer 2003; Toktay and Wein 2001). With no incentive for truthful information exchange, these authors conclude, information sharing may result in manipulation. Oh and Özer (2013) offer several examples of deliberately biased forecasts shared across the supply chain in several industries and caution against this practice. Collectively, these studies suggest that forecast sharing may not be a panacea, as its benefits may be hindered by a lack of truthfulness and incentives. This raises numerous research questions, such as how to share forecast information and ensure truthfulness. Related are security implications, as information sharing can also result in data leakage to undesirable parties.

There are numerous other research opportunities. One is studying the role of judgment and "soft information," especially in risk management. As forecasting systems become increasingly optimized and automated, what is the role of decision makers, and how can they best provide input? Another research opportunity comes from looking at ways to optimize both the forecast and the appropriate supply chain performance. Research has clearly shown that the POM environment moderates the impact of forecast errors on costs. However, it is not clear which POM structures, such as the number and types of suppliers, inventory and postponement strategies, and other POM decisions, create greater resiliency: the ability to bounce back from disruptions and disasters.

Yet another research area is new product forecasting when there is no historical data. The advent of big data, especially unstructured data such as that generated by crowdsourcing, may have uses in forecasting new product demands and features, but more research is required. Crowdsourcing, defined here as using social media to gather many opinions and ideas, has become widely popular, but it is not clear how to leverage it for forecasting.

Also, forecasting intermittent or "lumpy" demand continues to be a problem in POM (Prestwich et al. 2014), and better ways to address it would be important. Lastly, there are tremendous research opportunities in the area of metrics and forecast measurement. Measuring forecast performance is critical for greater understanding and continuous improvement. The development of better, more comprehensive, and easier-to-understand metrics, as well as frameworks to effectively use the metrics available, is yet another area that would help evolve the practice of forecasting.


7 Conclusion

It should be evident from the discussion in this chapter that forecasting is as much an art as it is a science. There are critical technical aspects to forecasting, such as data analysis and model selection. There are also behavioral aspects: as in most POM contexts, managers are the people using the generated forecasts for decision making, and they must have confidence and belief in the forecast. In addition, forecast decisions are closely intertwined with planning in the form of a myriad of POM decisions. We have outlined many well-established tenets that should guide practitioners in their quest for better forecasts. However, many issues remain unresolved, creating a broad spectrum of opportunities for scholars to improve the theory and practice of forecasting in POM.

References and Bibliography

Ali, M.M. & Boylan, J. (2010) "The value of forecasting information sharing in the supply chain." Foresight: The International Journal of Applied Forecasting, 18: 14–18.

Armstrong, J.S. (1985) Long-range forecasting: From crystal ball to computer. New York: John Wiley & Sons.

Armstrong, J.S. (2001) "Combining forecasts," in J.S. Armstrong (ed.), Principles of forecasting. Norwell, MA: Kluwer Academic Publishing.

Armstrong, J.S. & Collopy, F. (1989) "Integration of statistical methods and judgment for time series forecasting: Principles from empirical research," in G. Wright & P. Goodwin (eds.), Forecasting with judgment, pp. 1269–1293. New York: John Wiley.

Armstrong, J.S., Green, K.C., & Graefe, A. (2015) "Golden rule of forecasting: Be conservative." Journal of Business Research, 68(8): 1717–1735.

Arkes, H.R. (2001) "Overconfidence in judgmental forecasting," in J.S. Armstrong (ed.), Principles of forecasting. Norwell, MA: Kluwer Academic Publishing.

Aviv, Y. (2001) "The effect of collaborative forecasting on supply chain performance." Management Science, 47(10): 1326–1343.

Aviv, Y. (2007) "On the benefits of collaborative forecasting partnerships between retailers and manufacturers." Management Science, 54(5): 777–794.

Batchelor, R. & Dua, P. (1995) "Forecaster diversity and the benefits of combining forecasts." Management Science, 41(1): 68–75.

Bazerman, M.H. (1998) "The impact of personal control on performance: Is too much of a good thing bad?" Emerging Management Realities: Proceedings of the 35th Anniversary Meeting, EAM, Eastern Academy of Management.

Biggs, J.R. & Campion, W.M. (1982) "The effect and cost of forecast error bias for multiple-stage production-inventory systems." Decision Sciences, 13(4): 570–584.

Blattberg, R.C. & Hoch, S.J. (1990) "Database models and managerial intuition: 50% model + 50% manager." Management Science, 36(8): 887–899.

Bowman, E.J. (1963) "Consistency and optimality in managerial decision making." Management Science, 9(2): 310–321.

Bunn, D. & Wright, G. (1991) "Interaction of judgmental and statistical forecasting methods: Issues and analysis." Management Science, 37(1): 501–518.

Byrne, P.J. & Heavy, C. (2006) "The impact of information sharing and forecasting in capacitated industrial supply chains: A case study." International Journal of Production Economics, 103(1): 420–437.

Cachon, G.P. (2001) "Managing a retailer's shelf space, inventory, and transportation." Manufacturing & Service Operations Management, 3(3): 211–229.

Cachon, G.P. & Lariviere, M.A. (2001) "Contracting to assure supply: How to share demand forecasts in a supply chain." Management Science, 47(5): 629–646.

Chen, L. & Lee, H.L. (2009) "Information sharing and order variability control under a generalized demand model." Management Science, 55(5): 781–797.

Chen, Y.F., Drezner, Z., Ryan, J.K., & Simchi-Levi, D. (2000) "Quantifying the bullwhip effect in a simple supply chain: The impact of forecasting, lead times and information." Management Science, 46(3): 436–443.


Clemen, R.T. (1989) "Combining forecasts: A review and annotated bibliography." International Journal of Forecasting, 5: 559–583.

Clemen, R.T. & Winkler, R.L. (1986) "Combining economic forecasts." Journal of Business and Economic Statistics, 4: 39–46.

Collopy, F. & Armstrong, J.S. (1992) "Expert opinion about extrapolations and the mystery of the overlooked discontinuities." International Journal of Forecasting, 8: 575–582.

Conroy, R. & Harris, R. (1987) "Consensus forecasts of corporate earnings: Analysts' forecasts and time series methods." Management Science, 33(6): 687–705.

Croson, R., Schultz, K., Siemsen, E., & Yeo, M.L. (2013) "Behavioral operations: The state of the field." Journal of Operations Management, 31(1–2): 1–5.

Dalrymple, D.J. (1987) "Sales forecasting practices: Results from a U.S. survey." International Journal of Forecasting, 3(3): 379–391.

Danese, P. & Kalchschmidt, M. (2011a) "The role of the forecasting process in improving forecast accuracy and operational performance." International Journal of Production Economics, 131: 204–214.

Danese, P. & Kalchschmidt, M. (2011b) "The impact of forecasting on companies' performance: Analysis in a multivariate setting." International Journal of Production Economics, 133: 458–469.

Edmundson, R.J., Lawrence, M.J., & O'Connor, M.J. (1988) "The use of non-time series information in sales forecasting: A case study." Journal of Forecasting, 7: 201–211.

Evans, J. (2016) Business analytics: Methods, models, and decisions. 2nd edition. Upper Saddle River, NJ: Pearson.

Fildes, R. & Goodwin, P. (1999) "Judgmental forecasts of time series affected by special events: Does providing a statistical forecast improve accuracy?" Journal of Behavioral Decision Making, 12(1): 37–53.

Fildes, R. & Goodwin, P. (2007) "Against your better judgment? How organizations can improve their use of management judgment in forecasting." Interfaces, 37: 570–576.

Fildes, R., Goodwin, P., & Lawrence, M. (2006) "The design features of forecasting support systems and their effectiveness." Decision Support Systems, 42: 351–361.

Fildes, R., Goodwin, P., Lawrence, M., & Nikolopoulos, K. (2009) "Effective forecasting and judgmental adjustments: An empirical evaluation and strategies for improvement in supply-chain planning." International Journal of Forecasting, 25: 3–23.

Fildes, R. & Kingsman, B. (2011) "Incorporating demand uncertainty and forecast error in supply chain planning models." Journal of the Operational Research Society, 62(3): 483–500.

Fitzsimmons, J.A. & Fitzsimmons, M.J. (2005) Service management: Operations, strategy and information technology. 5th edition. Boston, MA: Irwin McGraw-Hill.

Forrester, J.W. (1961) Industrial dynamics. Cambridge, MA: MIT Press.

Franses, P.H. & Legerstee, R. (2010) "Do experts' adjustments on model-based SKU-level forecasts improve forecast quality?" Journal of Forecasting, 29: 331–340.

Franses, P.H. & Legerstee, R. (2011) "Experts' adjustment to model-based SKU-level forecasts: Does the forecast horizon matter?" Journal of the Operational Research Society, 62: 537–543.

Franses, P.H. & Legerstee, R. (2013) "Do statistical forecasting models for SKU-level data benefit from including past expert knowledge?" International Journal of Forecasting, 29: 80–87.

Ginsburg, J. (2009) "Detecting influenza epidemics using search engine query data." Nature, 457: 1012–1014.

Goodwin, P. (2005) "Providing support for decisions based on time series information under conditions of asymmetric loss." European Journal of Operational Research, 163: 388–402.

Goodwin, P., Lee, W.-Y., Fildes, R., Nikolopoulos, K., & Lawrence, M. (2007) Understanding the use of forecasting systems: An interpretive study in a supply-chain company. University of Bath, School of Management Working Paper Series, 2007.14, Bath, UK.

Hempel, J. (2013) "IBM's massive bet on Watson." Fortune, October 7, pp. 81–88.

Hogarth, R. (1987) Judgment and choice: The psychology of decision. 2nd edition. Chichester, UK: Wiley.

Huss, W.R. (1986) "Comparative analysis of company forecasts and advanced time-series techniques using annual electric utility energy sales data." International Journal of Forecasting, 1(3): 217–239.

Hyndman, R.J. & Athanasopoulos, G. (2013) Forecasting: Principles and practice. OTexts. Available from: www.otexts.org/fpp.

Hyndman, R.J. & Koehler, A.B. (2006) "Another look at measures of forecast accuracy." International Journal of Forecasting, 22(4): 679–688.

Kahneman, D. (2011) Thinking fast and slow. New York: Farrar, Straus & Giroux.

Kahneman, D. & Tversky, A. (1973) "On the psychology of prediction." Psychological Review, 80: 237–251.

Kahneman, D. & Tversky, A. (1979) "Prospect theory: An analysis of decision under risk." Econometrica, 47: 263–291.

Kauko, K. & Palmroos, P. (2014) "The Delphi Method in forecasting financial markets—An experimental study." International Journal of Forecasting, 30: 313–327.

Kleinmuntz, B. (1990) "Why we still use our heads instead of formulas: Toward an integrative approach." Psychological Bulletin, 107: 296–310.

Koch, C. (2004) "Nike rebounds." CIO Magazine, July 12, 2004.

Landeta, J. & Barrutia, J. (2011) "People consultation to construct the future: A Delphi application." International Journal of Forecasting, 27: 134–151.

Lawrence, M.P., Edmundson, J.R.H., & O'Connor, M.J. (1985) "An examination of the accuracy of judgmental extrapolation of time series." International Journal of Forecasting, 1(1): 25–35.

Lawrence, M.P., Edmundson, J.R.H., & O'Connor, M.J. (1986) "The accuracy of combining judgmental and statistical forecasts." Management Science, 32(12): 1521–1532.

Lawrence, M.P., Goodwin, P., O'Connor, M., & Onkal, D. (2006) "Judgmental forecasting: A review of progress over the last 25 years." International Journal of Forecasting, 22: 493–518.

Lawrence, M.P. & Makridakis, S. (1989) "Factors affecting judgmental forecasts and confidence intervals." Organizational Behavior and Human Decision Processes, 42(2): 172–187.

Lee, H.L., Padmanabhan, V., & Whang, S. (1997) "The bullwhip effect in supply chains." Sloan Management Review, 38(3): 93–102.

Lee, H.L., So, K.C., & Tang, C.S. (2000) "The value of information sharing in a two-level supply chain." Management Science, 46(5): 626–643.

Lee, T.S. & Adam Jr., E.E. (1986) "Forecasting error evaluation in material requirements planning (MRP) production systems." Management Science, 32(9): 1186–1205.

Lee, T.S., Adam Jr., E.E., & Ebert, R.J. (1987) "An evaluation of forecast error in master production scheduling for material requirements planning systems." Decision Sciences, 18(2): 292–307.

Lee, W.Y., Goodwin, P., Fildes, R., Nikolopoulos, K., & Lawrence, M. (2007) "Providing support for the use of analogies in demand forecasting tasks." International Journal of Forecasting, 23: 377–390.

Legerstee, R. & Franses, P.H. (2014) "Do experts' SKU forecasts improve after feedback?" Journal of Forecasting, 33: 69–79.

Leitner, J. & Leopold-Wildburger, U. (2011) "Experiments on forecasting behavior with several sources of information—A review of the literature." European Journal of Operational Research, 213: 459–469.

Lim, J.S. & O'Connor, M.J. (1995) "Judgmental adjustment of initial forecasts: Its effectiveness and biases." Journal of Behavioral Decision Making, 8: 149–168.

Lim, J.S. & O'Connor, M.J. (1996a) "Judgmental forecasting with time series and causal information." International Journal of Forecasting, 12: 139–153.

Lim, J.S. & O'Connor, M.J. (1996b) "Judgmental forecasting with interactive forecasting support systems." Decision Support Systems, 16: 339–357.

Lobo, G.J. (1991) "Alternative methods of combining security analysts' and statistical forecasts of annual corporate earnings." International Journal of Forecasting, 7: 57–63.

Lobo, G.J. & Nair, R.D. (1990) "Combining judgmental and statistical forecasts: An application to earnings forecasts." Decision Sciences, 16: 339–357.

Makridakis, S. & Hibon, M. (2000) "The M3 competition: Results, conclusions and implications." International Journal of Forecasting, 16(4): 451–476.

Makridakis, S. & Winkler, R.L. (1983) "Averages of forecasts: Some empirical results." Management Science, 29(9): 987–996.

Makridakis, S., Anderson, A., Carbone, R., Fildes, R., Hibon, M., Lewandowski, R., Newton, J., Parzen, E., & Winkler, R. (1982) "The accuracy of extrapolation (time series) methods: Results of a forecasting competition." Journal of Forecasting, 1: 111–153.

Makridakis, S.G., Wheelwright, S.C., & Hyndman, R.J. (2006) Forecasting: Methods and applications, 4th edition. New York: John Wiley & Sons.

Makridakis, S.G., Wheelwright, S.C., & McGee, V.E. (1983) Forecasting: Methods and applications. New York: John Wiley & Sons.

Manyika, J., Chui, M., Brown, B., Bughin, J., Dobbs, R., Roxburgh, C., & Byers, A.H. (2011) Big data: The next frontier for innovation, competition, and productivity . London: McKinsey Global Institute.

Mathews, B.P. & Diamantopoulos, A. (1986) "Managerial intervention in forecasting: An empirical investigation of forecast manipulation." International Journal of Research in Marketing, 3: 3–10.

Mathews, B.P. & Diamantopoulos, A. (1989) "Judgmental revision of sales forecasts: A longitudinal extension." Journal of Forecasting, 8: 129–140.

Mathews, B.P. & Diamantopoulos, A. (1990) "Judgmental revision of sales forecasts: Effectiveness of forecast selection." Journal of Forecasting, 9: 407–415.

Mayer-Schönberger, V. & Cukier, K. (2013) Big data: A revolution that will transform how we live, work, and think. Boston, MA: Houghton Mifflin Harcourt.

Meehl, P.E. (1957) "When shall we use our heads instead of the formula?" Journal of Counseling Psychology, 4(4): 268–273.

Mentzer, J.T. & Cox, J. (1984) "Familiarity, application and performance of sales forecasting techniques." Journal of Forecasting, 3: 27–36.

Mentzer, J.T. & Kahn, K.B. (1995) "Forecasting technique familiarity, satisfaction, usage and application." Journal of Forecasting, 14: 465–476.

Metters, R. (1997) "Quantifying the bullwhip effect in supply chains." Journal of Operations Management, 15(2): 89–100.

Moon, M.A., Mentzer, J.T., & Smith, C.D. (2003) "Conducting a sales forecasting audit." International Journal of Forecasting, 19: 5–25.

Moritz, B., Siemsen, E., & Kremer, M. (2014) "Judgmental forecasting: Cognitive reflection and decision speed." Production and Operations Management, 23: 1146–1160.

Nikolopoulos, K. & Fildes, R. (2013) "Adjusting supply chain forecasts for short-term temperature estimates: A case study in a brewing company." IMA Journal of Management Mathematics, 24: 79–88.

Oh, S. & Özer, Ö. (2013) "Mechanism design for capacity planning under dynamic evolution of asymmetric demand forecasts." Management Science, 59(4): 987–1007.

Oliva, R. & Watson, N. (2011) "Cross-functional alignment in supply chain planning: A case study of sales and operations planning." Journal of Operations Management, 29(5): 434–448.

Özer, Ö. (2003) "Replenishment strategies for distribution systems under advance demand information." Management Science, 49(3): 255–272.

Prestwich, S.D., Tarim, S.A., Rossi, R., & Hnich, B. (2014) "Forecasting intermittent demand by hyperbolic-exponential smoothing." International Journal of Forecasting, 30: 928–933.

Ritzman, L.P. & King, B.E. (1993) "The relative significance of forecast errors in multistage manufacturing." Journal of Operations Management, 11(1): 51–65.

Sanders, N.R. (2014) Big Data Driven Supply Chain Management. Pearson FT Press.

Sanders, N.R. & Graman, G.A. (2009) "Quantifying costs of forecast errors: A case study of the warehouse environment." OMEGA, The International Journal of Management Science, 37(1): 116–125.

Sanders, N.R. & Graman, G.A. (2015) "Bias magnification in supply chains: The mitigating role of forecast sharing." Decision Sciences Journal, forthcoming.

Sanders, N.R. & Manrodt, K.B. (1994) "Forecasting practices in U.S. corporations: Survey results." Interfaces, 24: 91–100.

Sanders, N.R. & Ritzman, L.P. (1995) "Bringing judgment into combination forecasts." Journal of Operations Management, 13(4): 311–321.

Sanders, N.R. & Ritzman, L.P. (2001) "Judgmental adjustment of statistical forecasts," in J.S. Armstrong (ed.), Principles of forecasting. Norwell, MA: Kluwer Academic Publishing.

Taleb, N.N. (2007) The Black Swan. New York: Random House.

Toktay, L.B. & Wein, L.M. (2001) "Analysis of a forecasting-production-inventory system with stationary demand." Management Science, 47(9): 1268–1281.

Trapero, J.R., Kourentzes, N., & Fildes, R. (2012) "Impact of information exchange on supplier forecasting performance." Omega, 40(6): 738–747.

Turner, D.S. (1990) "The role of judgment in macroeconomic forecasting." Journal of Forecasting, 9: 315–345.

Webby, R. & O'Connor, M. (1996) "Judgmental and statistical time series forecasting: A review of the literature." International Journal of Forecasting, 12(1): 91–118.

Webby, R., O'Connor, M., & Lawrence, M. (2001) "Judgmental time-series forecasting using domain knowledge," in J.S. Armstrong (ed.), Principles of forecasting. Norwell, MA: Kluwer Academic Publishing.

Winkler, R.L. & Makridakis, S. (1983) "The combination of forecasts." Journal of the Royal Statistical Society (A), 146(Part 2): 150–157.