Decision Making Techniques ppt @ MBA Operations Mgmt


EBS Decision Making Techniques

1. Decision Analysis

There are two common elements to every decision problem: a choice has to be made, and there must be a criterion by which to evaluate the outcome of the choice.

A pattern of decision / chance event / decision / chance event… is the single most important characteristic of problems that are potentially solvable by the technique of decision analysis.

Problem Characteristics

The decision maker seeks a technique that matches the problem’s special characteristics. There are two principal characteristics of a problem for which decision analysis will be an effective technique:

The problem comprises a series of sequential decisions spread over time. These decisions are interwoven with chance elements, with decisions taken in stages as new information emerges.

The decisions within the sequence are not quantitatively complex.

Decision Tree

The logical sequence of decisions and chance events is represented diagrammatically in a decision tree, with decision nodes represented by a square (□) and chance nodes by a circle (○), connected by branches representing the logical sequence between nodes.

At the end of each branch is a payoff. Although the payoff can be non-monetary, Expected Monetary Value (EMV) is the usual criterion for decision making in decision analysis.

Carrying Out Decision Analysis

Stage 1: Draw the Tree

Stage 2: Insert Payoffs

This might include a “gate”, or the cost of proceeding down a particular path; the symbol straddles the branch.

Stage 3: Insert Probabilities

Carrying Out Decision Analysis, 2

Stage 4: Roll-back Procedure

Starts with the payoffs and works backwards to the commencement of the tree. Abandon branches with inferior payoffs. The elimination of a branch is denoted by ‖. All but the most favorable branch are eliminated.

This is the Optimal Path.

Stage 5: Summarize the Optimal Path and Draw up the Risk Profile.
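As an illustration of Stages 4 and 5, here is a minimal Python sketch of the roll-back procedure. The tree structure, payoffs and probabilities are invented for illustration, not taken from the slides.

```python
# Minimal roll-back of a decision tree (illustrative numbers only).
# A decision node picks the branch with the highest value;
# a chance node takes the probability-weighted average (EMV).

def rollback(node):
    """Return the EMV of a node, choosing the best branch at decisions."""
    kind = node["kind"]
    if kind == "payoff":
        return node["value"]
    if kind == "chance":
        return sum(p * rollback(child) for p, child in node["branches"])
    if kind == "decision":
        return max(rollback(child) for _, child in node["branches"])
    raise ValueError(f"unknown node kind: {kind}")

# Hypothetical example: launch a product now, or abandon it.
tree = {
    "kind": "decision",
    "branches": [
        ("launch", {"kind": "chance", "branches": [
            (0.4, {"kind": "payoff", "value": 200}),   # strong demand
            (0.6, {"kind": "payoff", "value": -80}),   # weak demand
        ]}),
        ("abandon", {"kind": "payoff", "value": 0}),
    ],
}

print(rollback(tree))  # EMV of the optimal path: 0.4*200 + 0.6*(-80) = 32
```

Since 32 beats the certain 0 from abandoning, the roll-back keeps the “launch” branch as the optimal path; its outcomes (200 with probability 0.4, −80 with probability 0.6) are exactly what the risk profile would then tabulate.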

Risk Profile

…is a table summarizing the possible outcomes on the optimal path, along with their payoffs and probabilities. This is not the same thing as listing all outcomes, since the act of choosing the optimal path will have eliminated some possibilities.

A decision maker may favor a course with a lower EMV but a more attractive risk profile.

This will reveal whether the EMV incorporates a high-impact, low-probability (HILP) event.

Where differences are small, a report might contain multiple profiles.

Value of Additional Information

The Expected Value of Sample Information (EVSI) relates to some specific extra information. It is measured as the amount by which the EMV of the decision will increase if this information is available at zero cost.

This is a guide to the maximum amount that should be paid for a particular piece of information.

Value of Additional Information, 2

The Expected Value of Perfect Information (EVPI) relates to any piece of information. It is measured as the amount by which the EMV of the decision would increase if perfect information were available.

Perfect information means that the decision maker is told what will be the outcome at every chance event node.

Perfect information does not guarantee a particular outcome: it only tells what the outcome will be.
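A minimal sketch of the EVPI calculation, using invented payoffs and probabilities: with perfect information the decision maker can pick the best action for each chance outcome, so EVPI is the expected value under perfect information minus the best EMV without it.

```python
# EVPI = E[best payoff given each outcome] - best EMV (illustrative numbers).
probs = {"strong": 0.4, "weak": 0.6}
payoffs = {                       # payoffs[action][outcome]
    "launch":  {"strong": 200, "weak": -80},
    "abandon": {"strong": 0,   "weak": 0},
}

# Without extra information: choose the action with the best EMV.
emv = {a: sum(probs[o] * pay[o] for o in probs) for a, pay in payoffs.items()}
best_emv = max(emv.values())                      # launch: 32

# With perfect information: choose the best action for each outcome.
ev_perfect = sum(probs[o] * max(pay[o] for pay in payoffs.values())
                 for o in probs)                  # 0.4*200 + 0.6*0 = 80

print(ev_perfect - best_emv)                      # EVPI = 48
```

Here no piece of information, however good, would be worth more than 48 to this decision maker.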

Validity of the EMV Criterion

It frequently makes sense to do as well as possible ‘on average’: the foundation of EMV. However, when there are very large payoffs or losses (even with low probability), another criterion may be selected by the decision maker.

Maximin: at each decision node, select the option whose worst possible outcome is better than the worst possible outcome of any other option.

Expected Opportunity Loss (EOL) incorporates the value of the excluded alternative, calculated by multiplying lost-profit payoffs by the corresponding probabilities. (The three criteria are compared in the sketch below.)
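The sketch below applies all three criteria to one payoff table; the actions, payoffs and probabilities are hypothetical.

```python
# Comparing decision criteria on one payoff table (illustrative numbers).
probs = [0.4, 0.6]                          # P(strong), P(weak)
payoffs = {"launch": [200, -80], "abandon": [0, 0]}

# EMV: probability-weighted average payoff.
emv = {a: sum(p * v for p, v in zip(probs, pay)) for a, pay in payoffs.items()}

# Maximin: pick the action whose worst outcome is least bad.
worst = {a: min(pay) for a, pay in payoffs.items()}

# EOL: expected shortfall against the best payoff in each outcome column.
best_per_outcome = [max(pay[i] for pay in payoffs.values())
                    for i in range(len(probs))]
eol = {a: sum(p * (b - v) for p, b, v in zip(probs, best_per_outcome, pay))
       for a, pay in payoffs.items()}

print(max(emv, key=emv.get))     # 'launch'  (EMV 32 vs 0)
print(max(worst, key=worst.get)) # 'abandon' (worst outcome 0 vs -80)
print(min(eol, key=eol.get))     # 'launch'  (EOL 48 vs 80)
```

Note how the cautious maximin criterion reverses the EMV recommendation when the downside is large.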

Proceeding to Decision

Recommendations should be tested against other criteria:

Is the decision tree an accurate model of the decision faced?

How much faith can be put in the probability estimates?

Is EMV an appropriate criterion for the situation?

How robust (sensitive to small variations in assumptions) is the optimal decision?

2. Advanced Decision Analysis

The most difficult aspect of decision analysis is the derivation of the probabilities of chance events.

The fact that an optimal decision was so highly dependent upon one probability is valuable information:

It suggests that the decision problem has no clear-cut answer.

It also indicates the area to which attention should be given and further work done.

Continuous Probability Distributions

Rather than taking discrete values, a chance node outcome may follow a continuous distribution, depicted in a decision tree by a fan.

Probability in a distribution is notionally determined by measuring areas under the curve. Alternatively, one can produce a cumulative distribution: one that indicates on the vertical scale the probability of the variable taking a value more than, or less than, the value indicated on the horizontal scale.

EMV is calculated upon a selected number of representative values, often bracket medians.

Bracket Medians

1. Decide how many representative values are required.

• Rule of thumb: at least five but not more than ten.

• If calculations are computerized, a large number of values can be used for intricate or sensitive distributions.

2. Divide the whole range of values into equi-probable ranges (usual although not strictly necessary).

Bracket Medians, 2

3. Select one representative value from each of the ranges by taking the median of the range. This value is the bracket median.

4. Let the bracket median stand for all of the values in the range and assign to it the probability of all the values in the range.

• In a normal distribution, the mean is the bracket median for the entire range.
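A minimal sketch of steps 1 to 4, assuming the uncertain quantity follows a Normal(100, 20) distribution; both the parameters and the choice of five ranges are invented for illustration.

```python
# Five bracket medians of a Normal(100, 20) distribution (a sketch;
# the mean and standard deviation are invented for illustration).
from statistics import NormalDist

dist = NormalDist(mu=100, sigma=20)
n = 5                                   # number of equi-probable ranges

# Each range covers probability 1/n; its bracket median sits at the
# midpoint of the range's cumulative probability.
medians = [dist.inv_cdf((i + 0.5) / n) for i in range(n)]
print([round(m, 1) for m in medians])   # approx. [74.4, 89.5, 100.0, 110.5, 125.6]

# Each bracket median stands for its whole range with probability 1/n,
# so an expectation is approximated here as a simple average.
print(sum(medians) / n)                 # approx. 100 (the distribution mean)
```

These five values and their 0.2 probabilities would then hang from the chance node in place of the fan.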

Assessment of Subjective Probability

The inability to objectively measure the probability does not relegate it to guesswork.

An estimate might be produced by interviewing experienced personnel to:

Determine the endpoints, shape and relative probability of outcomes.

Subdivide ranges so far as it is useful to do so.

Test for consistency.

Revising Probability

Bayes’ theorem is used whenever probabilities are revised in light of some new information (e.g., a sample). It combines experience (prior probabilities) with sample information (numbers) to derive posterior probabilities.

P(A and B) = P(A) × P(B|A)

P(A and B) = P(B) × P(A|B)

Equating the two right-hand sides gives Bayes’ theorem: P(A|B) = P(A) × P(B|A) / P(B).

The theorem could apply to payoffs as well as to probabilities.

A Diagrammatic Approach

A Venn diagram – in this case, a square – can be used to represent prior and posterior probabilities:

Divide the square vertically to represent prior probabilities. This creates rectangles whose areas are proportional to the prior probabilities.

Divide the rectangles horizontally to represent the ‘sample accuracy’ probabilities. These smaller areas are proportional to the conditional probabilities.

Areas inside the square can be combined or divided to represent the posterior probabilities. A conditional probability is given not by the area representing the event, but by this area as a proportion of the area that represents the condition.

Bayesian Revision

Results are conventionally summarized in a table, showing how the calculations are built up.

The posterior probability is calculated by dividing the term P(V) x P(T|V) in the fifth column by P(T).

The columns of the table are: Test outcome (T) | Variable (V) | Prior probability of variable P(V) | Test accuracy P(T|V) | P(V) × P(T|V) | Posterior probability P(V|T)
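The same tabular calculation can be reproduced in a few lines of Python. The scenario (a test for a defect, with its prior and accuracy figures) is invented for illustration.

```python
# Bayesian revision laid out as in the table above (invented numbers):
# T is a 'positive' test reading; V is the true state of the item.
priors = {"defective": 0.1, "good": 0.9}            # P(V)
accuracy = {"defective": 0.95, "good": 0.20}        # P(T|V)

joint = {v: priors[v] * accuracy[v] for v in priors}   # P(V) x P(T|V)
p_t = sum(joint.values())                              # P(T)
posterior = {v: joint[v] / p_t for v in priors}        # P(V|T)

for v in priors:
    print(f"{v:9s}  P(V)={priors[v]:.2f}  P(T|V)={accuracy[v]:.2f}  "
          f"joint={joint[v]:.3f}  P(V|T)={posterior[v]:.3f}")
# P(T) = 0.095 + 0.180 = 0.275, so P(defective|T) = 0.095/0.275 = 0.345
```

Each posterior is the fifth-column joint term divided by P(T), exactly as the slide describes.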

Utility

The EMV criterion can mask large payoffs by averaging them out. A utility function represents monetary payoffs by utilities and allows the expected-value approach to be used universally.

A utility is a measurement that reflects the (subjective) value of the monetary outcome to the decision maker. Utilities are not necessarily scaled proportionately to money: if a $2 million loss meant bankruptcy where a $1 million loss meant only difficulty, the “disutility” of the former may be more than twice that of the latter.

The rollback then deals with utility rather than EMV.
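A minimal sketch of the idea, with an invented concave (risk-averse) utility curve and invented payoffs: the gamble wins on EMV, but the sure payoff wins on expected utility, so the roll-back recommendation flips.

```python
# How a utility function can reverse an EMV-based choice (a sketch;
# the payoffs and the utility curve are invented for illustration).
import math

def utility(x):
    """A risk-averse (concave) utility over money, in $millions."""
    return 1 - math.exp(-x / 2)

gamble = [(0.5, 3.0), (0.5, -1.5)]   # 50/50 shot at +$3m or -$1.5m
certain = [(1.0, 0.5)]               # a sure $0.5m

for name, lottery in [("gamble", gamble), ("certain", certain)]:
    emv = sum(p * x for p, x in lottery)
    eu = sum(p * utility(x) for p, x in lottery)
    print(f"{name}: EMV={emv:.2f}  expected utility={eu:.3f}")
# The gamble has the higher EMV (0.75 vs 0.50), but the sure payoff
# has the higher expected utility, so the roll-back now prefers it.
```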

3. Linear Programming

Linear programming (“LP”) is a technique for solving certain types of decision problems. It is applied to those that require the optimization of some criterion (e.g., maximizing profit or minimizing cost), but where the actions that can be taken are restricted (i.e., constrained).

Its formulation requires three factors: decision variables, a linear objective function and linear constraints. “Linear” means that there are no logarithmic or exponential functions.

Application

The Simplex method (developed by Dantzig) or variations of it are used to solve even problems with thousands of decision variables and thousands of constraints.

These are typically the product of computer applications, although simple problems might be resolved algebraically (simultaneous equations) or graphically.
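As a sketch, here is a two-variable product-mix problem solved with SciPy’s linprog (assuming SciPy is available; the profit and resource figures are invented). linprog minimizes, so the profits are negated to maximize.

```python
# A small product-mix LP solved with SciPy (illustrative numbers):
# maximize 30*x1 + 40*x2 subject to machine and labour limits.
from scipy.optimize import linprog

c = [-30, -40]                 # negated profits per unit (to maximize)
A = [[2, 3],                   # machine hours used per unit of x1, x2
     [4, 2]]                   # labour hours used per unit of x1, x2
b = [60, 80]                   # hours available of each resource

res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None), (0, None)])
print(res.x)                   # optimal quantities: [15. 10.]
print(-res.fun)                # maximized profit: 850.0
```

Solved graphically, the same answer appears at the corner of the feasible region where both constraints intersect, which is exactly what the next slide describes.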

The Solution

The optimal point (if there is one) will be one of the “corners” of the feasible region: the northeast (maximization) or southwest (minimization), unless:

1. The objective line is parallel to a constraint, in which case both corner points and all of the points on the edge between them are optimal.

2. The problem is infeasible (there are no solutions that satisfy all constraints).

3. The solution is unbounded (usually a sign of a formulation error) and the objective can be improved without limit.

Redundancy and Slack

One way to save time is to scan the formulation for redundancy in the constraints. Redundancy occurs when one constraint is always weaker than another; the redundant constraint can be omitted.

In the optimal solution:

A constraint is tight if all of the resource to which the constraint refers is used up. A tight constraint has zero slack.

A slack constraint will have an excess, usually indicated in the form ‘available minus used’ (which, in a minimization problem, may be a negative number).

The slack is reported for each constraint.

4. Extending Linear Programming

Linear programming can be adapted to solve problems for which the assumptions of LP, such as linearity, are not met.

The technique can also generate other information that can be used to perform sensitivity analysis.

It can solve problems in which some or all of the variables can take only whole-number values.

It can be applied to problems where there is uncertainty, as where decision variables have a statistical distribution.

Dual Values

Each constraint has a dual value that is a measure of the increase in the objective function if one extra unit of the constrained resource were available, everything else being unchanged. Dual values are sometimes called shadow prices. They refer only to constraints, not to variables.

These measure the true worth of the resources to the decision maker and there is no reason why they should be the same as the price paid for the resource or its market value.

Dual Values, 2

When the slack is non-zero, the dual value is zero. If this were not so, then incremental use of the resource could have been made, with an increment in the objective function.

The dual value also works in the other direction: it indicates the reduction in the objective function for a one-unit reduction in the constrained resource.
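The definition above can be checked numerically: re-solve the LP with one extra unit of each resource and difference the objective. This sketch reuses the invented product-mix example from earlier and assumes SciPy is available.

```python
# Dual (shadow) prices estimated directly from their definition:
# one extra unit of a resource, everything else unchanged.
from scipy.optimize import linprog

c = [-30, -40]
A = [[2, 3], [4, 2]]
b = [60, 80]

def max_profit(rhs):
    res = linprog(c, A_ub=A, b_ub=rhs, bounds=[(0, None), (0, None)])
    return -res.fun

base = max_profit(b)
for i, name in enumerate(["machine hours", "labour hours"]):
    bumped = b.copy()
    bumped[i] += 1
    print(name, "dual value ~", round(max_profit(bumped) - base, 2))
# machine hours dual value ~ 12.5; labour hours dual value ~ 1.25.
# Both constraints are tight here, so both duals are positive; a
# slack constraint would show a dual value of zero.
```

Note that neither 12.5 nor 1.25 need bear any relation to the market price of an hour of machine or labour time, as the slide says.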

Reduced Costs

Dual values can also be found for the non-negativity constraints, but they are then called “reduced costs”.

If the optimal value of a variable is not zero, then its non-negativity constraint is slack, and the dual value (reduced cost) is zero.

If the optimal value is zero, then the constraint is tight and will have a non-zero reduced cost (a loss).

Dual values are marginal values and may not hold over large ranges. Most computer packages give the range for which the dual value holds. This is known as the right-hand side range.

Coefficient Ranges

A coefficient range shows by how much an objective function coefficient can change before the optimal values of the decision variables change. Changing a coefficient is the same as varying the slope of the objective line: if it changes by a small amount, the slope will not be sufficiently different to move the solution away from the optimal point.

The same small changes will, however, change the value of the optimal objective function.

These ranges apply to the variation of one coefficient at a time, the others being kept at their original values.

Transportation Problem

Some LP problems have special characteristics that allow them to be solved with a simpler algorithm.

The classic transportation problem is an allocation of production to destinations with an objective of minimizing costs. The special structure is that every variable has a coefficient of ‘0’ or ‘1’ in every constraint: each shipment variable appears, with coefficient 1, only in the supply constraint of its source and the demand constraint of its destination.

An assignment problem is like the transportation problem, with the additional feature that the constraint constants are all ‘1’: every supply and every demand equals one, so each source is matched to exactly one destination.

Other Extensions

LP assumes that the decision variables are continuous, that is, they can take on any values subject to the constraints, including decimals and fractions. When the solution requires integer values (or blocks of integer values), a more complex algorithm is required.

Quadratic programming can contend with objective functions that have squared terms, as may arise with variable costs, economies of scale or quantity discounts.

Goal programming – an example of a method with multiple objectives – will minimize the distance between feasibility and an objective.

Handling Uncertainty

The coefficients and constants used in problem formulation are fixed, even if they have been derived from estimates with varying degrees of certainty (i.e., the model is deterministic).

Parametric programming is a systematic form of sensitivity analysis: a coefficient is varied continuously over a range and the effect on the optimal solution displayed.

Stochastic programming deals specifically with probability information.

Chance-constrained programming deals with constraints that do not have to be met all of the time, but only a certain percentage of the time.

5. Simulation

Simulation means imitating the operation of a system.

Although a model may take (scaled-down) physical form, it more often comprises a set of mathematical equations.

The purpose of a simulation is to test the effect of different decisions and different assumptions. These can be tested quickly and without the expense or danger of carrying out the decision in reality.

Benefits and Shortcomings

Benefits:

Simulation is capable of many applications where the requirements of optimization techniques cannot be met, e.g., linearity of variable inputs.

Management has an intuitive comfort with simulation that other methodologies may not enjoy.

Shortcomings:

Significant time and expense are required to develop and then use the model.

The model may determine the “best” of the formulations tested, but it cannot make decisions or test alternatives other than those specified by the operator.

Data may not be readily available.

One must establish the validity of the model.

Types of Simulations

Deterministic: the inputs and assumptions are fixed and known; none is subject to probability distributions. It is used because of the number and complexity of the relationships involved. Corporate planning models are an example of this type.

Stochastic: some of the inputs, assumptions and variables are subject to probability distributions. The Monte Carlo technique describes a simulation run many times, with the input of particular variables defined by their probabilities in a distribution. The output will often form a distribution, thus defining a range and probability of outcomes.
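A minimal Monte Carlo sketch: profit under uncertain demand and unit cost. The distributions, the profit model and the fixed cost are invented for illustration.

```python
# Monte Carlo simulation of profit (illustrative model and numbers).
import random
import statistics

random.seed(42)                       # fixed seed: reproducible runs

def one_run():
    demand = random.gauss(mu=1000, sigma=150)        # units sold
    unit_cost = random.uniform(4.0, 6.0)             # $ per unit
    price = 8.0
    return demand * (price - unit_cost) - 2500.0     # less a fixed cost

results = [one_run() for _ in range(10_000)]
print("mean profit ", round(statistics.mean(results)))   # roughly 500
print("std dev     ", round(statistics.stdev(results)))
print("P(loss)     ", sum(r < 0 for r in results) / len(results))
```

The 10,000 outputs form a distribution; the mean, spread and loss probability together describe the range and probability of outcomes for this policy.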

Flowchart

A flowchart is an intermediate stage that helps transform a verbal description of a problem into a simulation.

[Flowchart symbol legend: Action, Question, Start/End, Record, File; arrows distinguish the flow of events from the flow of information.]

Controlling the Simulation

Even if inputs are provided by random number generation, they could still (also by chance) be unrepresentative. To test (only) the effect of different policies, a set of simulations should use the same starting conditions and inputs for each.

A larger number of simulations would support an inference that an outcome was related to the differing policies rather than to the chance selection of inputs.

Interpreting the Output

The dispersion of results is a proxy for their risk: the probability that an outcome may be higher or lower than outcomes for other policies.

Risk might also be regarded as the frequency of a result outside certain parameters; e.g., a stock-out may also have negative implications for customer service.

Risk Analysis

When a stochastic simulation uses the Monte Carlo technique to model cash flows, it is called a risk analysis. This differs from other stochastic simulations only in terms of its application to financial problems.

A best practice is to plan a series of trial policies structured so that the range of options can be narrowed down until a nearly optimal one can be obtained.

6. Network Planning Techniques

The Critical Path Method (CPM) or Programme Evaluation and Review Technique (PERT) may benefit a project through:

Coordination: getting resources deployed at the right time

Time management: exploiting the flexibility within the plan

Issue identification: discovering tasks or constraints that may otherwise be hidden until too late

Cost control: idle resources can be costly

Technique Basics

1. A project is broken down into all of its individual activities.

2. The predecessors of each activity are identified.

• Some must be completed before others can start; some can take place at the same time as others.

• A common mistake is to confuse the customary sequence of activities with their logical sequence.

3. The sequence of activities is represented graphically, where the lines are activities and circles (called nodes) mark the start and completion of the activities.

Technique Basics, 2

3. (cont.)

• The node at the beginning of a task should be the end node of all activities that are immediate predecessors.

• The node at the end should be the beginning node of all activities that are immediate successors.

• A dummy activity, represented by a dotted line, overcomes the requirement that no two tasks can share the same beginning and end nodes, and avoids depicting a ‘false dependency’ of one task upon another.

4. The critical tasks are studied with a view to deploying resources to speed completion or minimize the impacts of over-runs.

Analyzing the Network

The network is analyzed to determine timings and schedules. There are five stages: estimate activity durations, forward pass, backward pass, calculate float, and determine the critical path.

Activity Duration

The expected duration of an activity may be based on experience or on subjective estimation.

Where there is some uncertainty, a sensitivity analysis can be performed. Alternatively, one can obtain optimistic, most likely and pessimistic durations, then (see the sketch below):

• Expected duration = (Optimistic + (4 × Most Likely) + Pessimistic) / 6

• Variance (for activity) = ((Pessimistic − Optimistic) / 6)²

• Variance (for project) = Σ (variances of activities on the critical path)

• Standard deviation = √variance (with 95% confidence, project duration = mean ± 2 standard deviations)
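A minimal sketch applying the three-point formulas; the three critical activities and their (optimistic, most likely, pessimistic) durations are invented for illustration.

```python
# PERT three-point estimates for the critical-path activities
# (durations in weeks; the numbers are invented for illustration).
import math

critical = [(2, 4, 8), (3, 5, 11), (1, 2, 3)]   # (O, M, P) per activity

expected = [(o + 4 * m + p) / 6 for o, m, p in critical]
variances = [((p - o) / 6) ** 2 for o, m, p in critical]

project_mean = sum(expected)                 # sum along the critical path
project_sd = math.sqrt(sum(variances))       # sqrt of summed variances

print("expected project duration:", round(project_mean, 2))
print("95% interval:",
      round(project_mean - 2 * project_sd, 2), "to",
      round(project_mean + 2 * project_sd, 2))
```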

Methodological Assumptions

1. Activity times follow a beta distribution – a unimodal and non-symmetrical distribution with a shape commonly found in activities in large projects.

2. Activities are independent. This allows the application of the central limit theorem, which predicts that, as the number of variables in the sum increases, the distribution of the sum becomes increasingly normal. In fact, the activities are probably not independent:

• The cause of delay in one activity may be the same cause of delay in other activities.

• The critical path can change: reductions in durations may make other activities critical.

Forward Pass

A forward pass means going through the network activity by activity to calculate for each an ‘earliest start’ and ‘earliest finish’, using the following formulae:

Earliest finish = Earliest start + Activity duration

Earliest start = Earliest finish (of previous activity)

If there are several previous activities, we use the latest of their earliest finish times.

The notation appears, for example, as (6, 10, 16), indicating that the activity has an earliest start at week 6, a duration of 10 weeks, and an earliest finish at week 16.

Backward Pass

A backward pass means going through the network activity by activity, starting with the last and finishing with the first, to calculate for each a ‘latest finish’ and ‘latest start’:

Latest finish = Latest start (of following activity)

Latest start = Latest finish − Activity duration

If there are several following activities, we use the earliest of their latest start times.

The notation for the backward pass is similar to that for the forward pass, indicating latest start, float (discussed next) and latest finish.

Calculating Float

An activity’s float is the amount of time by which it can be delayed without delaying completion of the entire project:

The difference between its earliest and latest start times

The difference between its earliest and latest finish times (the same thing)

Determining the Critical Path

Activities for which a delay in start or completion will delay the entire project are critical. Others are slack. The critical activities define a critical path through the network.

A critical activity has a float of zero. If the total completion time of a project must be shortened, then one or more activities on the critical path have to be shortened.
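Both passes, the float calculation and the critical path can be seen in one small sketch; the four-activity network and its durations are invented for illustration.

```python
# Forward and backward passes on a small activity network (a sketch;
# the activities, durations and precedences are invented).
durations = {"A": 3, "B": 2, "C": 4, "D": 3}
preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
order = ["A", "B", "C", "D"]          # a topological order

# Forward pass: earliest start = latest 'earliest finish' of predecessors.
es, ef = {}, {}
for a in order:
    es[a] = max((ef[p] for p in preds[a]), default=0)
    ef[a] = es[a] + durations[a]

# Backward pass: latest finish = earliest 'latest start' of successors.
succs = {a: [b for b in order if a in preds[b]] for a in order}
ls, lf = {}, {}
for a in reversed(order):
    lf[a] = min((ls[s] for s in succs[a]), default=max(ef.values()))
    ls[a] = lf[a] - durations[a]

for a in order:
    flt = ls[a] - es[a]   # float: delay possible without delaying the project
    print(a, f"ES={es[a]} EF={ef[a]} LS={ls[a]} LF={lf[a]} float={flt}",
          "CRITICAL" if flt == 0 else "")
```

Here A, C and D come out with zero float, so the critical path is A-C-D with a project duration of 10; B carries a float of 2.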

Time – Cost Trade-offs

Time and cost are usually, but not always, interchangeable and the planner can probably reduce project completion time by spending more money.

The analysis relating cost to time (cost of shortening by one week, by two weeks, etc.) is known as crashing the network. This is based on a time-cost “curve” for each activity: indicating the cost and time for normal operations, and the time and cost associated with the absolute minimum (crash) duration.

Time – Cost Trade-offs, 2

The ratio of (Cost increase/ Time saved) is, in effect, the crashing cost per unit of time.

Only critical activities are crashed, since they are the only ones that can reduce overall project completion time. An activity is crashed until the greatest possible reduction has been made or until another parallel path becomes critical.

If there are multiple critical paths, activities in each must be crashed simultaneously, with the sum of their costs calculated per unit time.

The final result can be graphed in a time – cost trade-off curve.
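A minimal sketch of the crashing-cost calculation, with invented normal and crash figures: the (cost increase / time saved) ratio picks out which critical activity to crash first.

```python
# Crash cost per week for each critical activity (invented figures);
# the cheapest crashable critical activity is crashed first.
activities = {
    # name: (normal_time, normal_cost, crash_time, crash_cost)
    # times in weeks, costs in $k
    "A": (6, 10, 4, 18),
    "C": (8, 14, 6, 20),
    "D": (5, 9, 5, 9),      # cannot be crashed (no time can be saved)
}

rates = {}
for name, (nt, nc, ct, cc) in activities.items():
    if nt > ct:                                  # crashable at all?
        rates[name] = (cc - nc) / (nt - ct)      # cost increase per week saved

cheapest = min(rates, key=rates.get)
print(rates)                                     # {'A': 4.0, 'C': 3.0}
print("crash first:", cheapest, "at", rates[cheapest], "$k per week")
```

After each week of crashing, the floats should be recomputed, since a previously slack parallel path may now have become critical.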

Stages in Network Analysis

1. List all activities in the project.

2. Identify the predecessors of each activity.

3. Draw the network diagram.

4. Estimate activity durations as average times or, if uncertainty is to be dealt with explicitly, as optimistic/most likely/pessimistic times.

5. Produce an activity schedule by making a forward and then a backward pass through the network.

6. Determine the critical path by calculating activity floats.

Stages in Network Analysis, 2

7. Measure uncertainty by calculating activity variances, the variance of the whole project and finally confidence limits for the duration of the project.

8. Draw cost/ time curves for each activity.

9. Crash the network and construct a cost/ time graph for the project.

Activities-Predecessors-Draw-Duration-Passes-Path-Variance-Costs-Crashes = APDDPPVCC = “Acting pretty draws dirty passes, pathing very costly crashes.”

7. Decisions & Info Technology

A decision support system (DSS) is a computer system that produces information specially prepared to support particular decisions.

Six elements of IT: computers, software, telecommunications, workstations, robotics and smart products.

History of IT in Business

1960s: successful mainframe applications for accounting, operations (inventory, customer accounts) and technical operations. MIS was little more than a data dump.

1970s: recognition of the MIS failure, with various attempts made to convert data to information that managers at different levels could use.

1980s: introduction of the microcomputer puts computing power in the hands of the end user.

1990s and beyond: downloading for local analysis, internal linkages (internal resource and information sharing) and external linkages (internet).

Challenges for Computerized Management

Controlling information: more is not necessarily better

Keeping information consistent

Developing analytical abilities

Avoiding behavioral conflicts (a supportive business culture?)

Aligning business and IT strategy

Benefits

Communications: speed and penetration of internal and external communications; interfaces for data transfer

Efficiency: direct time and cost savings, turnaround

Functionality: systems capabilities, usability and quality

Management: reporting provides a form of supervision; gives personnel greater control of their resources

Strategy: greater market contact; support for company growth; competitive advantage

The Right Attitudes

Commitment: adequate resources, especially management time, are put into the project

Realistic expectations

Motivation: the need for a champion or super-users

Persistence: detailing the plan, following through on the details and monitoring performance

Fear: balancing respect for IT against fear of it

The Right Approach

Top-down: how can IT help to meet corporate objectives?

Planning based: defining objectives and technologies

Substantial user involvement

Flexibility: adapting to a new modus operandi; contingency planning

Monitoring: costs and benefits

Commitment to continuing development: markets and technology are not static

Skills and Knowledge

Planning: leads to improved communication, resource management and control

Project planning techniques: critical path

IT project stages: feasibility study, requirements, specification, programming, testing, running in parallel, installation, review

Managing transition: setting objectives, defining tasks, recognizing the impacts of change (especially on personnel), communicating objectives and progress, dealing with unexpected developments

The Future

Technical progress

Expert systems, e.g., a medical diagnostic system

Artificial Intelligence (AI): can deal with less structured issues

Data mining: looking for patterns

Decision mapping: discovering the structure of a problem

Knowledge management, especially as a competitive advantage

Greater internal and external integration (especially within the value chain)

IT will be a greater part of corporate strategy
