Benchmarking Options: Frontier and Component Cost Analysis
26 July 2012
1 Introduction
1.1 Background
1.2 Benchmarking Approaches
1.3 Conclusions
2 Data Analysis
2.1 Available Data
2.2 Quality Adjustments
3 Benchmarking Data
3.1 Data Presentation
3.2 Component Cost Benchmarking
3.3 Frontier Analysis
4 Discussion and Recommendations
4.1 Limitations of the Analysis
4.2 Approach to Defining Lower Bound
4.3 Recommended Approach
Annex 1 Identifying Outliers and the Implications for Benchmarking
Annex 2 Calculating Percentiles
1 Introduction
1.1 Background
This note examines a number of implementation issues associated with public transport
tender price benchmarking. The purpose of benchmarking is to obtain an estimate of
what would be an achievable and competitive market price1 for a unit that is procured
via direct negotiation rather than competitive tender. It would also inform cost re‐sets
during the term of contracts. The benchmark price, or price range, is an estimate of the
price that would result if the unit was competitively tendered. Using this estimate to
inform price negotiation is consistent with NZTA’s statutory requirement to ensure it
obtains the best value for the money spent.
The concern expressed by service providers is that, if the benchmark method produces a
single estimate of a competitive market price, this might give little freedom of
movement to negotiations, particularly as the negotiated tender will not be the same in
all respects as the routes used for obtaining a benchmark price. In this context the PTOM
Working Group recommended initially that the benchmark approach produce a range
of market prices between upper and lower bounds, rather than a single price.
This provides the context for this paper: how to (1) provide information to inform
negotiations that pushes towards a price representing best value for the money spent
while (2) retaining some flexibility to take account of route‐specific circumstances
affecting costs.
The other issue raised has been that of the extent to which data outliers will affect the
benchmarking exercise, leading to an estimate of the competitive market price that is too
low or too high.
This note examines:
• what set of benchmarking information to provide to decision makers;
• whether data outliers are a problem and if so, how to deal with them.
1.2 Benchmarking Approaches
The PTOM Working Group proposed that two benchmarking methodologies be used
in combination. These would be used by an independent analyst working with data
from competitive bids to produce a benchmarking report that would inform NZTA and
councils. The methodologies are:
• Component Cost Benchmarking. This approach involves bidders for
competitive tenders providing a breakdown of their bid price into dollars per
kilometre, dollars per hour, dollars per bus (component costs) and total cost
(annual gross cost). Benchmarking could include analysis of averages, extremes
(eg lowest) and percentiles within the range of component costs.
1 We define a “market price” as any conforming tender offered as a bid in the tender market, but a
“competitive market price” as a winning bid price; other bids in the same tender round are, by
definition, not competitive.
• Lowest Practically Achievable Cost Benchmarking (also known as “Frontier”
Analysis or Data Envelopment Analysis). This approach uses bid cost data and
the defined “outputs” (kilometres, in‐service hours, bus numbers) to assess the
estimated maximum of each output that can be produced for a given level of
input. Bids can then be compared in terms of their relative efficiency against this
benchmark.
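As an illustration of the component cost breakdown described in the first bullet above, the sketch below expresses each bid as $/km, $/hour and $/bus and summarises the results. This is illustrative Python only; all bid figures are invented and are not drawn from any actual tender dataset.

```python
# Illustrative only: invented bid figures, not actual tender data.
bids = [
    # annual km, in-service hours, peak vehicle requirement, annual gross cost ($)
    {"km": 500_000, "hours": 20_000, "pvr": 10, "cost": 2_400_000},
    {"km": 480_000, "hours": 19_000, "pvr": 11, "cost": 2_500_000},
    {"km": 510_000, "hours": 21_000, "pvr": 10, "cost": 2_300_000},
]

# Component cost benchmarking: express each bid as $/km, $/hour and $/bus.
for bid in bids:
    bid["per_km"] = bid["cost"] / bid["km"]
    bid["per_hour"] = bid["cost"] / bid["hours"]
    bid["per_bus"] = bid["cost"] / bid["pvr"]

# Summary statistics of the kind the bullet mentions: extremes and averages.
lowest_per_km = min(bid["per_km"] for bid in bids)
average_per_km = sum(bid["per_km"] for bid in bids) / len(bids)
```

The same per-component summaries ($/hour, $/bus) would be produced in practice; only $/km is summarised here for brevity.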
The benchmarking options were analysed in the light of assumed constraints to the
availability of data, which would apply particularly immediately after initial
implementation when there will be few data with which to develop benchmarks.
Analysis included consideration of the use of larger sets of competitive bid data by
including all compliant bids in the analysis rather than winning bids only.
The Working Group initially proposed that the benchmarking report would construct a
“lower bound” and an “upper bound” that would be used to define acceptable contract
costs. In this report we examine approaches to setting bounds but find that, if using a
dataset of bid data with more than just winning bids, it is difficult to constrain upper
bounds in a way that is consistent with value for money for funders.
Data outliers will not generally be a problem and, in most circumstances, can be
identified in initial data screening. The issue is likely to arise largely if:
• Some data used for benchmarking are from sites with very different
characteristics from that being tendered; or
• Non‐winning bid data are included and are used to construct an upper bound of
acceptable bids. However, we discuss below how the focus on achieving value
for money is likely to render this issue irrelevant.
1.3 Conclusions
We recommend that frontier (data envelopment) analysis be the primary analytical tool
but that additional information be provided as context. We recommend that:
1. A dataset be compiled of bid data from competitive tenders for routes (and units)
that are deemed relevant to the route being negotiated. This should include all
compliant bids. It should include:
a. the route information—kilometres, in‐service hours and peak vehicle
requirement;
b. total bid price;
c. component costs for the bid—$/km, $/hour and $/bus.
2. The dataset of bid data should be screened initially to identify any outliers. This
would be done through a “visual” scanning of the data followed by an analysis
of specific data points identified as potential outliers, to better understand
whether they should be included or not.
3. Frontier analysis be used to create a lower bound of costs for a contract
negotiation. It should be based on a dataset of:
a. winning bids; and
b. complying bids that are lower cost than the winning bids;
Group tender bids should be included only in aggregate, ie as though they
represented a single bid for a very large route.2
4. Additional information be provided to funders that allows them to better
understand the range of winning bid data used in frontier analysis. This might
include information on the estimated costs at different percentiles in the results
of frontier analysis of winning bids. For example, this would show the price at
which, say, 25% of winning bids would be more efficient (the 75th percentile in
Figure 1 below).
Figure 1 Possible data from frontier analysis
[Bar chart comparing two datasets (winning bids only; winning bids plus lower‐priced
compliant bids), each showing the frontier price, the 75th percentile and the highest
price; plotted values range from $3,350,694 to $3,768,984 on a $0 to $4,000,000 axis.]
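The percentile calculation behind recommendation 4 (covered in detail in Annex 2) can be sketched as below. This is illustrative Python only; the prices are invented, not results from any actual frontier analysis.

```python
import statistics

# Illustrative only: invented frontier-analysis prices for winning bids.
prices = [3_200_000, 3_350_000, 3_400_000, 3_550_000, 3_700_000, 3_900_000]

# statistics.quantiles with n=4 returns the three quartile cut points.
# The last one is the 75th percentile: the price below which 75% of the
# winning bids (and above which 25% of them) would be expected to fall.
q1, median, q3 = statistics.quantiles(prices, n=4)
```

Note that percentile conventions differ (Python's default here is the "exclusive" method); whichever convention is adopted should be stated in the benchmarking report.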
5. Information on component costs for all compliant bids should also be supplied,
including averages for component costs and the average mix of costs for all
compliant bids, eg hour‐related costs as a percentage of the total; kilometre‐
related costs as a percentage of the total etc. An example of data presentation
from the component cost analysis is given in Figure 2. It shows the $/km data at
different percentiles and represents the percentage of bids that would have costs
below the amount on the y‐axis. For any new negotiation, it would provide
information on where the component costs might fit within the range of costs
analysed.
2 If a group tender bid won for 4 units (routes), the data (bid price, kilometres, hours and bus numbers)
for all four should be added together as though they were one large route.
Figure 2 Component Cost Analysis ‐ Percentiles
[Line chart of $/km (roughly $1.75 to $2.15) against percentile (0 to 100).]
2 Data Analysis
2.1 Available Data
The PTOM Working Group has recommended two benchmarking approaches be used
in combination: (1) Component Cost Benchmarking and (2) Lowest Practically
Achievable Cost Benchmarking, also known as frontier analysis. The recommendations
of the Working Group have been made in the absence of analysis of the implications of
the different approaches using actual tender bid data.
We have used the limited data sets that have been identified to examine the implications
of using winning bids only versus a wider set of data with all compliant bids, and the
associated issue of outliers. The analysis of data is included in Annex 1. It includes:
• A limited set of data for commercial bus tender rounds provided by councils.
These data have information on bid prices only and no accompanying data on
kilometre length, in‐service hours or numbers of buses; and
• A larger dataset on school bus bids from the Ministry of Education that includes
price and kilometres.
Neither set allows an analysis of the full methodologies as they are missing data on
in‐service hours or peak bus requirements. Nevertheless, some conclusions can be
drawn on the implications of outliers:
• Some tenders are won by bids that do not have the lowest costs because they
include quality premiums – we explore this issue further below (Section 2.2);
• Outliers in data can generally be identified relatively rapidly through visual
analysis and excluded through more specific analysis of the circumstances.
Those relating to non‐conforming bids can be identified easily, for example;
• Group bids, in which a single bidder has offered a uniform price (eg in
$/kilometre) across several separate units or routes, can result in misleading
benchmark information if the results for some routes only are included in the
benchmark analysis. They may imply that lower costs can be achieved for some
routes than is achievable in practice. Group bids should only be included to the
extent that all the routes included in the group bid are suitable for
benchmarking and they should be included as an aggregate;3
• Using percentiles as a way to construct an (upper) bound for possible prices can
be misleading. Analysis of the implications of a 25th percentile approach (25% of
bid prices would be expected to be lower cost) led to results, even using winning
bids only in the analysis, in which the suggested prices were 56‐97% above the
lowest‐priced bids.
3 If a group tender bid won for 4 units (routes), the data (bid price, kilometres, hours and bus numbers)
for all four should be added together as though they were one large route.
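The aggregation treatment for group bids described in the footnote can be sketched as below. This is illustrative Python only; the four routes and all their figures are invented.

```python
# Illustrative only: a group tender that won four units (routes);
# all figures are invented, not actual tender data.
group_bid_routes = [
    {"price": 900_000, "km": 200_000, "hours": 8_000, "buses": 4},
    {"price": 750_000, "km": 160_000, "hours": 6_500, "buses": 3},
    {"price": 1_100_000, "km": 240_000, "hours": 9_500, "buses": 5},
    {"price": 820_000, "km": 180_000, "hours": 7_000, "buses": 4},
]

# Per the recommended treatment, sum every field so that the group bid
# enters the benchmark dataset as though it were one large route.
aggregate = {
    key: sum(route[key] for route in group_bid_routes)
    for key in ("price", "km", "hours", "buses")
}
```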
Overall, the analysis suggests that outliers are not a significant problem and that it is not
possible to define a percentile that might be used to represent a suitable range for bid
prices.
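Initial screening of the kind recommended earlier can be sketched with a standard interquartile‐range rule as a mechanical stand‐in for "visual" scanning. This is illustrative Python only; the prices are invented, with one deliberately implausible value.

```python
import statistics

# Illustrative only: invented bid prices with one obvious outlier.
prices = [2_100_000, 2_150_000, 2_200_000, 2_250_000, 2_300_000, 4_800_000]

# A common screening rule: flag points lying more than 1.5 interquartile
# ranges beyond the quartiles. Flagged points are candidates for the more
# specific follow-up analysis the text recommends, not automatic exclusions.
q1, _, q3 = statistics.quantiles(prices, n=4)
iqr = q3 - q1
outliers = [p for p in prices if p < q1 - 1.5 * iqr or p > q3 + 1.5 * iqr]
```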
2.2 Quality Adjustments
Evaluation of bids takes account of quality as well as price. In achieving best value for
money, funders take account of the trade‐off between the two and will sometimes
choose higher cost bids that offer more quality than is required. It is important to take
this into account in any benchmark analysis. Ideally this is done through stripping out
the quality adjustments so that the benchmark data represent prices for bids that offer
no quality premium. Otherwise, the inclusion of data from winning bids that
incorporate a quality adjustment may suggest that the competitive market price is
higher than it would be to run a service that only just met the quality requirements.
There are two possible approaches to take account of quality adjustments:
• To adjust downwards the bid prices that include a quality premium; or
• To include data from compliant bids that are lower cost than the winning bids.
Councils can use a supplier quality premium (SQP) in their supplier selection process.
The SQP is the additional amount that the organisation is prepared to pay for a
higher‐quality supplier. But it is a willingness‐to‐pay estimate rather than an estimate of the
costs of supply of quality. If a supplier can provide a quality service for a low cost,
subtracting the SQP from the bid price may over‐estimate its cost of supply and result in
an adjusted bid price that is lower than achievable. For example, say a bidder with a
quality premium that the council values at $100,000 bids for a route with a price of $1
million. It may have only cost the bidder $50,000 to achieve this quality premium and
another bidder may have submitted a bid without a premium at $940,000. Using an
adjusted winning bid price ($1 million minus $100,000) results in a lower price estimate
than a competitive market price would be if no bidders had included a quality
premium.
A preferable approach is to use actual bid prices in the form of compliant bid data that
are lower than the winning bids.
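The worked example in the text can be set out directly; the figures below are those given above, and the sketch simply makes the comparison explicit.

```python
# Worked version of the example in the text (figures as given there).
winning_bid = 1_000_000   # winning bid price, includes a quality premium
sqp = 100_000             # supplier quality premium: willingness to pay, not cost
compliant_bid = 940_000   # lower-priced compliant bid with no quality premium

# Approach 1: subtract the SQP from the winning bid price.
adjusted_price = winning_bid - sqp  # 900,000: below any price actually offered

# Approach 2 (preferred in the text): take the lowest-priced compliant bid,
# a price the market actually produced, as the benchmark.
benchmark = compliant_bid           # 940,000
```

The adjustment in approach 1 overshoots because the bidder's actual cost of supplying the quality ($50,000 in the example) is less than the council's willingness to pay ($100,000), leaving a benchmark below anything achievable.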
The issue of quality adjustments is more problematic early in the process, when there
are few data to use. As a larger dataset develops there will be more data without quality
adjustments, and quality‐adjusted bid prices are unlikely to be setting the benchmark
price.
3 Benchmarking Data
3.1 Data Presentation
The data presentation issue is that funders of public transport are seeking to identify a
low benchmark price that is an estimate of the competitive market price for the route, or
group of routes. However, the benchmark methodologies may produce a single price
estimate. Funders are likely to require some room for movement around such a price if a
price is to be agreed with a service provider. This takes account particularly of the fact
that the benchmarking exercise will be using data pertaining to routes that, while
similar, will not be exactly the same as the one being negotiated. We discuss these issues
relating to the two benchmarking approaches below.
3.2 Component Cost Benchmarking
Component cost benchmarking uses data provided by competitive bidders on the costs
of individual components: $/km, $/bus and $/hour. The key problem with this approach
is that of how to combine these data, ie it is unlikely that any route could achieve the
lowest value in all cost categories.
Given this, information might be provided on the range of component costs and their
averages. Figure 3 is an example which uses the data in Table 5 (excluding outliers) to
show the $/km data at different percentiles. This represents the percentage of bids that
would have costs below the amount on the y‐axis. For any new negotiation, it would
provide information on where the component costs might fit within the range of costs
analysed. Simple averages might be useful also.
Figure 3 Component Cost Analysis ‐ Percentiles
[Line chart of $/km (roughly $1.75 to $2.15) against percentile (0 to 100).]
Developing relationships between the component costs, eg identifying the way in which
$/km changes with changes in $/bus, is likely to require a large dataset to yield any
statistically significant relationships. This may become possible as the benchmarking
exercise develops over time but there will not be sufficient data in the short term.
In addition to the question of how to present the data, there is the question of which
data to include in the analysis. Excluding outliers is an obvious first step; however,
the discussion in earlier sections also raised questions relating to group tender bids.
Group tender bids involve averaging over a number of different routes and the data are
only meaningful at this aggregate level, ie by using the multi‐route average component
costs.
3.3 Frontier Analysis
Frontier analysis has advantages over component cost analysis because it uses the data
in a way that takes account of trade‐offs between the cost components. The resulting
frontier is expected to be an achievable competitive market price for a route grouping.
However, the result is provided as a single expected price rather than a range.
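The frontier idea can be illustrated without solving the full data envelopment linear programmes by using a free‐disposal‐hull comparison, a simpler relative of DEA: a bid is scored against the cheapest bid that delivers at least as much of every output. This is an illustrative Python sketch with invented figures, not the full DEA method; a real analysis would solve a linear programme per bid.

```python
# Illustrative only: invented bids; outputs are km, in-service hours, buses.
bids = [
    {"km": 500_000, "hours": 20_000, "buses": 10, "cost": 2_400_000},
    {"km": 500_000, "hours": 20_000, "buses": 10, "cost": 2_600_000},
    {"km": 450_000, "hours": 18_000, "buses": 9,  "cost": 2_000_000},
]

def fdh_efficiency(k, bids):
    """Input-oriented free-disposal-hull score for bid k: the lowest cost
    among bids matching or exceeding all its outputs, divided by its own
    cost. A score of 1.0 means no other bid delivers as much for less."""
    dominating = [
        b for b in bids
        if b["km"] >= bids[k]["km"]
        and b["hours"] >= bids[k]["hours"]
        and b["buses"] >= bids[k]["buses"]
    ]
    return min(b["cost"] for b in dominating) / bids[k]["cost"]

scores = [fdh_efficiency(k, bids) for k in range(len(bids))]
```

In this invented dataset the second bid delivers the same outputs as the first at a higher cost, so it scores below 1.0, while the other two bids sit on the frontier.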
Ways in which a range might be presented include changing which data are
included in the analysis and presenting the results in terms of percentiles. The set of
information might include:
• a frontier price using winning bids only;
• a frontier price using winning bids plus lower‐priced compliant bids (where
winning bids included quality premiums);
• a price based on the winning bid that was furthest from the frontier;
• a price based on some percentile of all winning bids.
These are explained further below. We start with a set of hypothetical data as shown in
Table 1.
Table 1 Hypothetical bid data
Route Km Hr PVR Total Cost Efficiency (a) Efficiency (b)