Evaluation and Visualization of Multi-Level Contingencies in Power Systems
by
Anton Lodder
A thesis submitted in conformity with the requirements for the degree of Master of Applied Science
Graduate Department of Electrical and Computer Engineering
University of Toronto
© Copyright 2015 by Anton Lodder
Figure 2.1: Example of a power-voltage (P-V) curve obtained using continuation power flow. This curve describes how the voltage at one bus on the system changes as the loads across the system are scaled up; CPF produces a P-V curve at every bus on the system. This curve illustrates the use of an adaptive step size, and shows the three phases corresponding to different strategies for choosing the continuation parameter.
conditions where the equations would otherwise be ill-conditioned.
2.3.2 Continuation Power Flow Algorithm
The continuation power flow algorithm uses a predictor-corrector method to traverse the power-voltage
curve4 of the system, which describes how bus voltages droop as the load profile of the system increases.
The predictor step involves using a first-order approximation of the system to move along the P-V curve
a predetermined distance (defined by the step size σ). The corrector step then uses the approximate
solution obtained from the predictor step as an initial point and solves for an exact solution to the power
flow equations (2.6) using the Newton-Raphson method. The prediction step used to obtain x^{pr}_i, the ith point5, can be formulated as shown in (2.7) and (2.8):

\[ x^{pr}_i = x_{i-1} + \sigma \cdot dx \qquad (2.7) \]

4 See Figure 2.1 for an example of a P-V curve.
5 In the notation x_i, i refers to the iteration of the CPF algorithm; in contrast, x_k is used to denote the kth element in the vector x.
which, substituting for x, expands to:

\[
\begin{bmatrix} \delta_i \\ V_i \\ \lambda_i \end{bmatrix}
=
\begin{bmatrix} \delta_{i-1} \\ V_{i-1} \\ \lambda_{i-1} \end{bmatrix}
+ \sigma
\begin{bmatrix} d\delta \\ dV \\ d\lambda \end{bmatrix}
\qquad (2.8)
\]

where

\[
\begin{bmatrix} d\delta \\ dV \\ d\lambda \end{bmatrix}
=
\begin{bmatrix} J_F \;\; K \\ e_k \end{bmatrix}^{-1}
\begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix}
\qquad (2.9)
\]
Here J_F is the Jacobian of the power flow equations in (2.5) evaluated at x_{i-1}, e_k is a vector of zeros with the kth element equal to one, corresponding to the continuation parameter, and K is the vector of load participation factors described in Section 2.5.2.
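As a minimal sketch, the predictor step amounts to assembling the augmented Jacobian of (2.9) and solving for the tangent vector. The MATLAB function below is illustrative only; the function name and argument list are assumptions, and the thesis implementation appears in Appendix B.

    function x_pred = cpf_predict(x_prev, J_F, K, k, sigma)
    % CPF predictor: J_F is the power flow Jacobian evaluated at x_prev,
    % K the load participation column, k the continuation parameter index.
        n   = size(J_F, 1) + 1;          % dimension of the augmented system
        e_k = zeros(1, n); e_k(k) = 1;   % row selecting the continuation parameter
        A   = [J_F, K; e_k];             % augmented Jacobian, as in (2.9)
        b   = [zeros(n - 1, 1); 1];      % right-hand side of (2.9)
        dx  = A \ b;                     % tangent vector [d_delta; dV; d_lambda]
        x_pred = x_prev + sigma * dx;    % first-order prediction, as in (2.7)
    end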
The process of the CPF algorithm is as follows:
1. Solve (2.6) for a nominal scaling value, such as λ = 0, to obtain an initial correct voltage reference x^{corr}_0.
2. Choose a parameter in x to use as the continuation parameter, as well as an appropriate step size
σ.
3. Solve (2.7) to get a prediction x^{pr}_i for the next point.
4. Using the result of step 3 as an initial point, solve (2.6) to get an exact solution, x^{corr}_i, on the P-V curve.
5. Evaluate the new point to decide which parameter to use as the continuation parameter in the
next step.
6. While the curve has not been explored, return to step 3.
This iterative process is repeated until the P-V curve has been described completely. The P-V curve
usually curves down to the nose and then continues to droop as λ is scaled down, giving two possible
voltage points for each value of λ on the range of operation of the system; this is shown in Figure
2.1. Since the value of interest for contingency analysis is the value of λmax, the computation can be
terminated once the nose of the curve has been identified — this can be accomplished by terminating
the computation once the λ values identified by the corrector step stop increasing and begin to decrease.
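A skeletal version of this loop, incorporating the termination test just described, might take the following form; the helper functions and the variables x0, K, sigma0, and lambda_index are placeholders rather than the routines of Appendix B.

    x = solve_power_flow(x0);                  % step 1: correct point at lambda = 0
    k = lambda_index; sigma = sigma0;          % step 2: continuation parameter and step size
    lambda_max = 0;
    while true                                 % step 6: keep exploring the curve
        J = power_flow_jacobian(x);
        x_pred = cpf_predict(x, J, K, k, sigma);   % step 3: predictor, equation (2.7)
        x = solve_power_flow(x_pred);          % step 4: Newton-Raphson corrector, (2.6)
        if x(lambda_index) < lambda_max        % nose passed: lambda has begun to decrease
            break
        end
        lambda_max = x(lambda_index);
        k = choose_continuation_parameter(x);  % step 5: re-evaluate the parameter
    end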
2.3.3 Choice of Continuation Parameter
Choice of continuation parameter can have a profound effect on the performance of CPF. In general,
it is desirable to use whatever state variable has the largest rate of change with respect to the other
variables [16] near the point where the approximation is taken for the prediction step. This will ensure
that the values of all other state variables will be changing slower, improving the accuracy of prediction
and allowing the P-V curve to be explored more quickly. For the majority of the CPF computation,
the best choice for the continuation parameter is λ; it changes quickly during the early part of the P-V
curve, where the slope of the voltage curves with respect to λ is low. However, as the CPF algorithm
nears the nose of the P-V curve, the slope of the power-voltage curve increases, leading to potentially higher prediction error — resulting in a greater number of iterations of Newton-Raphson to find the exact solution x^{corr}_i at each step of CPF. In addition, λ starts to decrease
once the nose of the P-V curve has been reached; beyond the nose of the curve there will be no solution
for the desired value of λ, causing the algorithm to stall when λ is scaled past λmax. For these reasons it
is better to choose a bus voltage as the continuation parameter near the nose of the P-V curve, allowing
λi to be identified in the corrector step.
The mechanism for choosing an appropriate variable to use as the continuation parameter consists of
breaking the CPF computation into three phases. In the first phase, while the slope of the P-V curve is
small, λ is used as the continuation parameter and is scaled up according to the chosen step size. After
each corrector step, the slope dV^i/dλ is calculated for each PQ bus as

\[ \frac{dV^i}{d\lambda} = \frac{V^i - V^{i-1}}{\lambda^i - \lambda^{i-1}} \]

for the ith solution, such that dV^i/dλ is of length n_PQ6. Phase one ends when the maximum slope crosses a threshold dV_max, such that

\[ \max\left(\frac{dV^i}{d\lambda}\right) > dV_{max} \]
The threshold dV_max defines how far to travel along the P-V curve before entering phase two, and needs to be chosen in such a way as to switch phases at an optimal point. There are two criteria to be considered in choosing this parameter:
1. A larger threshold will allow more of the curve to be explored during the first phase of CPF, when
λ is the continuation parameter; this usually results in fewer iterations being required to traverse
the P-V curve, especially in applications where the precise shape of the curve is not of interest
away from the point of maximum loading.
2. A smaller threshold will reduce the risk of stepping past the P-V curve nose (along the λ axis) during
phase one, an event which could lead to problems during the corrector step: the corrector may not
converge, requiring the algorithm to abort; alternatively the corrector might converge on a point
further around the curve than the nose, causing the maximum value of λ to be underestimated.
Though it is possible to slightly reduce the total number of iterations of CPF by using a high slope
threshold dV max, the costs of staying in phase one for too long have the potential to be very severe,
causing the computation to be aborted or causing the nose of the P-V curve to be overshot; for this
reason it is better to choose a conservative slope threshold. In this research, the threshold was set to
dV_max = 0.5, a value identified through trial-and-error.
In the second phase, a bus voltage is chosen as the continuation parameter; rather than increasing
λ by the step size, the bus voltage Vk (where k indicates the chosen PQ bus) is scaled down and the
associated value of λ is found in step 4 along with the other bus voltages. After each corrector step in
phase two, the bus voltage slopes are re-evaluated and the bus with the highest slope is chosen as the
continuation parameter in the next iteration. Phase two ends when the P-V curve has been traversed
past the nose and the slope of the P-V curve falls back below the slope threshold. In phase three the
algorithm returns to using λ as the continuation parameter.
6 In the notation for vector V^i, i refers to the iteration of CPF; in contrast, the notation V_k refers to the kth element in the vector V.
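As a sketch with assumed variable names, the per-iteration choice reduces to a slope test against the threshold dV_max:

    % V_i, V_prev: PQ-bus voltage magnitudes at the current and previous
    % CPF solutions; dV_max = 0.5, the value chosen in this research.
    slopes = abs((V_i - V_prev) ./ (lambda_i - lambda_prev));  % dV/dlambda per PQ bus
    [slope_max, j] = max(slopes);
    if slope_max > dV_max
        k = pq_voltage_index(j);   % phase two: steepest bus voltage
    else
        k = lambda_index;          % phases one and three: lambda
    end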
A detailed implementation of continuation power flow is described in the source code in Appendix B.
2.4 Modifications to the Continuation Power Flow
The previous section described the CPF algorithm as it was originally defined. The CPF algorithm
produces an evaluation of a power system that is suitable for use in contingency analysis; however, the computational effort required to execute this algorithm in its original formulation makes it unwieldy
for larger systems. This limitation has significant impact for multi-level contingency analysis, in which
the number of potential fault scenarios grows exponentially with the number of simultaneous faults
involved. To perform contingency analysis with continuation power flow in a reasonable amount of time,
it is necessary to implement some modifications to the algorithm in order to improve performance and
flexibility of the computation for a large set of scenarios. This section will explore two modifications
made to continuation power flow to improve its computational performance.
2.4.1 Adaptive Step Size
The step size σ is a key parameter for controlling the trade-off between speed and precision in the
execution of continuation power flow. The step size dictates how far the next point will be along the
P-V curve, and since the predictor step uses a first-order approximation to obtain x^{pr}_i, increasing the distance from the point of approximation (x_{i-1}) causes the accuracy of the prediction to be reduced —
increasing the number of iterations to solve the corrector step. The step size also affects the number of
iterations of continuation power flow required to traverse the P-V curve, since a larger step size would
allow the curve to be described with fewer points. These two dynamics introduce a design trade-off for optimal computation speed:
1. The accuracy of the prediction step directly affects the number of iterations needed to find an
exact solution in the correction step of the CPF algorithm. Larger step sizes will lead to prediction
points that are further from the P-V curve; this means the starting point for Newton-Raphson
will be less optimal, requiring more iterations to solve. Smaller step sizes will induce the opposite
effect. In addition, a prediction error that is sufficiently large could cause the correction step to fail
to converge, which would cause the CPF computation to stall. From this perspective it is desirable
to make the step size smaller in order to limit the number of iterations and avoid non-convergence.
2. The overhead associated with each iteration of the predictor-corrector scheme — calculating a first-
order approximation and setting up the Newton-Raphson algorithm to converge in one iteration —
means that, holding constant the number of iterations to converge on x^{corr}_i, the total time it takes
to describe the P-V curve will increase with the number of points used. From this perspective it
is desirable to make the step size larger.
These two requirements can be balanced by choosing a moderate value for σ with the goal of limiting
the prediction error to within an acceptable range. However, in order to avoid non-convergence of the
corrector step in the vicinity of the voltage stability limit — when the P-V curve starts to droop quickly
and prediction becomes less accurate — much of the CPF computation will be run with a step size that
is smaller than is desirable, leading to more points than necessary in the first phase of the computation and thus to increased computation times.
Within the context of contingency analysis, there is a broader challenge in selecting an appropriate
step size which stems from the variation in the different scenarios which must be analyzed. Each scenario
might benefit from a different step size for maximum performance of the algorithm; it may be impossible
to choose a step size that is big enough to allow contingency analysis to be completed in a reasonable time
while ensuring that the power flow calculations converge for every iteration of CPF, in each scenario.
This is especially true given that the loadability conditions of each scenario are expected to vary widely.
In addition to this, a step size that might work for a large network would likely be too small for a smaller
network, requiring that a step size be identified for each new system to be examined and for different
load profiles as well.
In order to deal with this problem, a variable step size is proposed that would adjust σ to control the
prediction error (x^{corr} − x^{pr}) [18]. The step size adjustment fits into the CPF algorithm in the following
manner:
1. Solve (2.8) to get a prediction x^{pr}_i for the next point.
2. Using the resulting x^{pr}_i as an initial point, solve equation (2.6) to get an exact solution on the P-V curve, x^{corr}_i.
3. Measure the average prediction error |x^{corr}_i − x^{pr}_i| and compare it to an acceptable error range;
if the error is too large, the step size should be reduced to avoid non-convergence and reduce
the number of iterations required for correction; if the error is too small, the step size should be
increased to make the computation progress faster.
The desired prediction error range should be chosen with the goal of achieving convergence of each
Newton-Raphson power flow within two to four iterations on average. It is not necessary to precisely
control the number of iterations since, in addition to the distance from the starting point, other factors
such as network topology and nearness to the voltage stability limit also have a significant effect on the
number of iterations required to converge. Code Snippet 1 describes the process for updating the step size during an iteration of the CPF algorithm. The use of a logarithmic comparison of error to a desired error level (line 1) ensures that the adaptive algorithm is not too restrictive; it is enough to ensure that the error is within an order of magnitude of the desired error. The rate of adjustment (line 2) is also logarithmic; this serves to dampen the response of step size adaptation to large changes in error, ensuring that the step size is less likely to oscillate between too-small and too-large values.

Code Snippet 1: Pseudo-code describing how to update the step size after completing the prediction and correction steps of an iteration of continuation power flow.
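The snippet itself appears in Appendix B.1; a minimal sketch consistent with the line references in the text is given below, where err_target, err_tol, gamma, sigma_min, and sigma_max are assumed tuning constants.

    1: if abs(log10(err / err_target)) > err_tol        % line 1: log-scale comparison of error
    2:     sigma = sigma * (err_target / err)^gamma;    % line 2: adjustment linear in log of error
    3: end
    4: sigma = min(max(sigma, sigma_min), sigma_max);   % line 4: enforce step size bounds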
The adaptive step size modification adds several edge cases to the continuation power flow algorithm:
• It is necessary to implement a maximum and minimum step size (line 4 of Code Snippet 1). A
minimum step size ensures that if the CPF algorithm gets stuck for whatever reason, it can fail
out rather than attempting to continue with unreasonably small step sizes. A maximum step size
ensures that in trivial cases where the prediction error is zero or very small, the step size does not
become unreasonably large.
• With an adaptive step size there is the concern that as the continuation parameter changes, the
tolerance for prediction error may decrease quickly leading the correction step to non-convergence
because the step-size is too large. To overcome this problem, it is necessary to double-check the
step size when the correction step fails; if the step size is greater than the minimum step size, the
iteration should be attempted with a smaller step size rather than allowing the computation to
fail.
The implementation of adaptive step sizing is detailed in Appendix B.1.
Summary of Adaptive Step Size
The adaptive step size modification increases the complexity of CPF, but it also greatly improves the
flexibility and adaptability of the algorithm to varying input systems and reduces the computation time
necessary to reach the voltage stability limit of a system. For this reason it is integral to the use of CPF
for voltage stability analysis and contingency analysis of large sets of contingency scenarios.
2.4.2 Lagrange Polynomial for Prediction
The original formulation of the continuation power flow algorithm employed a first-order approximation
of the power flow equations — described in equation (2.8) — with respect to the continuation parameter
in order to make a prediction about what the next value on the curve would be. Li and Chiang [18]
proposed a higher-order non-linear fitting scheme which makes use of polynomial interpolation to obtain
a more accurate approximation of the P-V curve. Polynomial approximation takes advantage of the fact that the power-voltage curve is generally concave and well-behaved7. This
causes the results of the predictor to be closer to the exact solution, necessitating fewer iterations of the
Newton-Raphson power flow to converge during the correction step (for the same step size).
For a set of known points (x_0, y_0), (x_1, y_1), \ldots, (x_k, y_k) the Lagrange polynomial is defined as

\[ L(x) = \sum_{j=0}^{k} y_j\, \ell_j(x) \qquad (2.10) \]

where

\[ \ell_j(x) = \prod_{\substack{m=0 \\ m \neq j}}^{k} \frac{x - x_m}{x_j - x_m} \qquad (2.11) \]

[19]. By choosing {x_0, x_1, \ldots, x_k} and {y_0, y_1, \ldots, y_k} to be previous correct samples of the continuation parameter, (2.10) can be used to predict the value of any other parameter at x = x_k + σ. This formulation
fits a polynomial of degree k to the given points. In practice it is necessary to generate the initial points
on the P-V curve using the first order approximation technique, since polynomial interpolation depends
on known points. In addition, the interpolation should only use points that are close by (e.g. the last 6
points found on the P-V curve) to limit the order of the resulting polynomial and the computation time.
7 That is, the slope of the curve changes slowly.
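Evaluated directly, (2.10) and (2.11) translate into a short routine; the sketch below, with an assumed function name, predicts one variable at x = x_k + σ from the last few correct samples.

    function y = lagrange_predict(xs, ys, x)
    % xs: recent correct samples of the continuation parameter;
    % ys: the matching samples of the variable being predicted.
        k = numel(xs);
        y = 0;
        for j = 1:k
            lj = 1;                                   % basis polynomial l_j(x), (2.11)
            for m = [1:j-1, j+1:k]
                lj = lj * (x - xs(m)) / (xs(j) - xs(m));
            end
            y = y + ys(j) * lj;                       % accumulate the sum in (2.10)
        end
    end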
Figure 2.2: Comparison of the linear tangential predictor scheme versus the Lagrange polynomial interpolation predictor for continuation power flow.
              30-bus system   118-bus system
linear        0.468 ms        1.051 ms
Lagrange      0.115 ms        0.157 ms
speedup       75.4%           85.1%

Table 2.1: Performance benchmark comparing execution times for linear approximation versus Lagrange polynomial approximation in the prediction step of continuation power flow, averaged from 4500 executions of each algorithm.
The advantage of the polynomial formulation of the predictor over the first-order approximation is
a much closer approximation of the P-V curve during the predictor step of the CPF algorithm, allowing
the step size to be increased without increasing the convergence time of the corrector step — resulting
in fewer points to describe the P-V curve. In addition, the evaluation of the Lagrange polynomial
is computationally faster than first order approximation; whereas a first order approximation has at
best sub-quadratic time complexity8, the Lagrange polynomial has linear time complexity9 with the
system size10. Table 2.1 shows a benchmark comparing the average computation time for linear versus
Lagrange approximation for two systems, demonstrating a marked performance improvement of Lagrange
approximation over linear.
Complications arising from Lagrange Interpolation
There are two challenges that arise in implementing the Lagrange polynomial for finding approximate
solutions to the P-V curve:
• The Lagrange polynomial scheme requires two or more known points in order to produce an
interpolant.
• With large changes in sample spacing over the input data, the Lagrange polynomial can quickly di-
verge from the actual P-V curve, giving inaccurate predictions and even leading to non-convergence
in the correction step. This issue arises when adaptive step sizing is also being used.
The requirement to have known points for polynomial approximation can be resolved by priming
the continuation power flow, using a linear approximation in the first iteration of the algorithm. After
solving for an initial correct point of zero loading, the first iteration can use a linear approximation to
identify the next approximate solution point. After two known points have been obtained, subsequent
iterations of CPF can utilize the Lagrange polynomial for the approximation step.
The challenge of using polynomial approximation with a changing step size is more complex to
address, since it involves an interaction with the adaptive step size. Figure 2.3 shows a comparison
of Lagrange extrapolation when the step size changes abruptly (Figure 2.3a) as compared to when the
step size changes gradually over several iterations (Figure 2.3b). In the latter graph, the approximate
function generated by the Lagrange polynomial is close to the actual function well outside of the region
of interest (around λ = 100), and produces a good initial point for the correction step; however, in
8 Computing (2.9) requires solving a linear system of equations, which has an upper bound of O(n^{2.376}) [20], where n is the number of buses in the system.
9 The prediction step computes (2.10) once for each variable in x, of length 2 × n_PQ + n_PV; since this value is related to the number of buses on the system, the computation is O(n).
10 The evaluation of (2.10) and (2.11) is quadratic with k, the number of samples (i.e., it is O(k^2)); however, k is generally small and fixed, meaning that it does not affect how the algorithm scales to larger systems.
[Figure 2.3: two panels, (a) and (b), each plotting voltage (p.u.) against power; legend: actual PV curve, known points, Lagrange curve, predicted value.]

Figure 2.3: Comparison of the performance of the Lagrange polynomial interpolation scheme for continuation power flow a) with a sudden change in step size and b) with a gradual change in step size. This demonstrates that with a rapid change in step size, the Lagrange polynomial can quickly diverge from the real function. The known points have been generated from a cubic polynomial and random perturbations.
computation time for:           four elements   30-bus system   118-bus system
baseline                        26.158 s        1:27:08 hrs     43:53:04 hrs
with polynomial approximation   20.340 s        1:10:39 hrs     30:47:11 hrs
with adaptive step-size         2.952 s         0:14:17 hrs     5:28:10 hrs
with both                       1.791 s         0:09:04 hrs     3:26:21 hrs

Table 2.2: Performance benchmark comparing execution times for continuation power flow with various modifications. The column four elements contains results for all possible contingencies (up to n − 4) involving four select branches in the IEEE 30-bus test system, totaling 15 scenarios; the column 30-bus system displays times to compute all possible n − 1 and n − 2 contingencies on the IEEE 30-bus test system, totaling 3240 scenarios; and the column 118-bus system displays times to compute all n − 1 and n − 2 contingencies on the IEEE 118-bus system, totaling 64261 scenarios.
the former graph the approximate function quickly diverges from the actual function; the voltage value produced at λ = 100 is −2.729 p.u., which would likely result in non-convergence of the correction step.
The challenge of utilizing Lagrange interpolation with changing step-size can be overcome by tuning
the behaviour of the adaptive step size algorithm. The rate at which the step size adapts to the error
should be limited, and the threshold for changing the step size (line 1 of Code Snippet 1) should be small
so that the changes in step size are gradual and so that large changes in step size are spread out over
several iterations of CPF. In addition, before executing the Lagrange prediction, the step sizes between
the input points should be evaluated; if there is a large change in step size between the input points
to prediction, the algorithm should fall back to linear approximation. The latter condition is relevant
especially at the boundary between phases, where the step size often changes significantly in both the
fixed step-size and adaptive step-size approaches.
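One plausible form of this test, with assumed names, compares the spread of the recent sample spacings against a tolerated ratio:

    h = abs(diff(xs));                 % spacings between the recent known points
    if max(h) > ratio_max * min(h)     % step size changed too abruptly
        use_lagrange = false;          % fall back to linear approximation this iteration
    end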
Summary of Lagrange Approximation
The Lagrange approximation offers increased accuracy in the prediction step of continuation power flow,
allowing for larger step sizes and fewer iterations to solve for the voltage stability limit, while also being
faster to compute than the alternative linear approximation. The cost of using the Lagrange method is
increased complexity, requiring a fall-back mechanism in the event that conditions are not suitable for
using it. This trade-off greatly benefits the performance of CPF. The integration of Lagrange polyno-
mial prediction into CPF and the implementation of the Lagrange polynomial interpolation scheme are
detailed in Appendix B.3.
2.4.3 Quantification of Performance Gains in Continuation Power Flow
The goal of developing algorithmic enhancements to continuation power flow is to bring massive compu-
tations of multi-level contingency analysis to a performance level that enables quick turnaround times.
Table 2.2 contains a benchmark comparison of continuation power flow calculations with and without
the techniques described previously; these benchmarks show that adaptive step sizing leads to a bigger improvement in computation time than polynomial approximation. These performance benchmarks demonstrate significant gains achieved in
computation times for continuation power flow through the use of adaptive step sizing and Lagrange
polynomial approximation. Figure 2.5 gives a summary of how these modifications enhance the perfor-
mance of continuation power flow.
2.5 Performing Multi-level Contingency Analysis
Beyond the selection of a tool for quantifying the effect of a contingency, multi-level contingency analysis
requires the development of techniques for consistent application of the tool to a wide range of contin-
gency scenarios. This research made use of the Matpower toolbox [21], which had an implementation of
continuation power flow without the modifications described in previous sections of this thesis. In addi-
tion, it did not include functionality to scale more than one load during CPF, or to define participation
factors for loads.
2.5.1 Dealing with Islanding
The issue of islanding is a challenging problem for contingency security, one that becomes even more
difficult when multi-element contingencies are considered. For single-element contingencies to result in
islanding, there must be a single bus or a sub-system connected by only one branch, such that the
outage of that branch or either of its terminal buses is enough to disconnect the bus or sub-system from the rest of the network. This places a restriction on the number of contingency scenarios
that can lead to islanding and on the size of islanded loads or sub-systems that result from a single
element outage. In contrast, when multiple elements are faulted, there are many more contingencies
that can result in islanding; in addition, larger sub-systems involving multiple branches, buses, loads
and generators can be stranded. This makes islanding an even bigger concern for contingency security
and necessitates accurate tools to identify and measure the severity of instances of islanding.
Regardless of whether or not multi-element contingencies are considered, evaluating power systems
with islanding presents a difficult challenge. Traditional performance indexes (e.g. voltage deviations
and line loading parameters) are ill-suited in scenarios where the topology of the system is significantly
changed, since the incremental effect of a fault on these indexes may vary in accordance with gross
changes in system topology; in addition, these measures may not reflect the significant effect of islanding
on the ability of the system to supply power to specific loads which may be isolated during an islanding
scenario11. Conversely, continuation power flow provides a robust evaluation of a power system even when
islanding occurs, since it explicitly measures the reduction in loadability caused by the contingency —
such a value can be compared irrespective of system conditions under which it was obtained. In addition,
continuation power flow is also able to evaluate an isolated sub-system for its capacity to operate, giving a
quantification of the incremental benefit that can be achieved by implementing equipment and protocols
to maintain operation of stranded systems.
The flow diagram in Figure 2.4 describes the algorithm for traversing a network in order to identify
islanding. The approach used in this algorithm is to explore the network by traveling from bus to bus
around the structure defined by the branches and buses in a network; each step involves either traveling
along an untraveled branch to another bus — which may or may not have been previously visited —
or jumping to a previously unvisited bus, one that is not connected to the current bus by a branch. A
jump would be performed as the initialization step for the exploration, or any time a bus has no more
untraveled branches attached to it. Each time a jump is performed, the newly visited bus exists on a
11 Paradoxically, in cases where significant loads are isolated, the isolation of buses or sub-systems could actually result in improvement in line loading levels and a reduction in voltage deviation on the part of the system which remains active.
[Figure 2.4 flow chart nodes: Start → Jump to an unvisited bus; create a new network listing → Has this bus been visited? → if no: 1. mark as visited, 2. create a new network listing; if yes: merge the network of this bus with the network of the bus you just jumped from.]

Figure 2.4: Flow chart describing the algorithm for island detection. This algorithm travels from bus to bus along the branches of the system, avoiding branches that have already been traveled, in order to identify buses that are connected to the network. By tracking sets of buses that have been visited via branch traversal, it is possible to identify regions of the network that are isolated by a branch that is disconnected.
potentially islanded sub-network; this new sub-network can be explored by traveling from bus to bus
along untraveled branches. When a bus is encountered that has previously been visited but is listed on a different sub-network, it can be deduced that the two sub-networks are connected, and their network listings can be merged. In this way the system can be explored and all islanded sub-networks identified by traveling
each branch at most once and by performing at most (but in general far fewer than) one jump per
bus. The algorithm is finished when every bus in the system has been visited, after which any branches
which have not yet been traveled must be checked to see if they connect separated sub-networks. A full
implementation of islanding detection and resolution is contained in Appendix A.4.
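A compact variant of this traversal, written breadth-first rather than in the jump-and-merge form above but identifying the same islands, is sketched below with assumed names; the thesis version is in Appendix A.4.

    function island = find_islands(nbus, branches)
    % branches: rows of [from_bus, to_bus] for every in-service branch.
        island = zeros(nbus, 1);           % island id per bus; 0 means unvisited
        n_islands = 0;
        for start = 1:nbus
            if island(start) == 0          % "jump" to an unvisited bus
                n_islands = n_islands + 1; % create a new network listing
                island(start) = n_islands;
                queue = start;
                while ~isempty(queue)      % travel untraveled branches
                    b = queue(1); queue(1) = [];
                    nb = [branches(branches(:, 1) == b, 2);
                          branches(branches(:, 2) == b, 1)];
                    nb = unique(nb(island(nb) == 0));    % unvisited neighbours only
                    island(nb) = n_islands;              % mark as visited
                    queue = [queue; nb];
                end
            end
        end
    end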
The inclusion of islanding in contingency analysis does not itself address the significant challenges
associated with coordinating the operation of islanded sub-networks within a power system. Islanded
networks may require special considerations to maintain adequate voltage regulation and stable oper-
ation, and they may still be susceptible to further outages. However, contingency analysis provides a
method to evaluate what gains to system security may be achieved by developing protocols and installing
equipment that allow islanded systems to continue operating. The use of continuation power flow to
evaluate loadability of systems with islanding could provide insight into whether the implementation of
islanded operation capabilities is feasible or economically viable.
2.5.2 Implementing Participation Factors
The implementation of participation factors for loads in continuation power flow is encoded in the formulation described in equation (2.6). The vector K contains an array of participation factors such that P_o = λ_o K, where P_o defines the load profile — containing power injections at each bus to match equation (2.5) — and λ_o is the scaling factor corresponding to the load profile.
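Assuming a Matpower case struct mpc, one simple convention is to derive K from the base-case demand so that scaling λ preserves the shape of the load profile; the snippet below is a sketch of that convention.

    define_constants;          % Matpower script providing the PD column index
    P0 = mpc.bus(:, PD);       % base-case real power demand at every bus
    lambda0 = 1;               % scaling value corresponding to the base profile
    K = P0 / lambda0;          % participation factors, so that P_o = lambda_o * K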
This parametrization of continuation power flow in terms of a particular load profile is a key qualifier
to how the resulting maximum loading reflects the severity of a particular contingency scenario. The
results of contingency analysis at one load profile only give insight into the severity of faults at that loading profile; this means that in the context of planning, it is necessary to choose an appropriate load profile in order to provide a realistic picture of contingency risk. In the context of operations, this requires
that contingency analysis be re-computed to match changes in loading and generation scheduling over
time.
For this research, it was necessary to extend the existing capability of the Matpower [21] package for
evaluating continuation power flow. The Matpower implementation formulated CPF to include λ as a
scaling factor on the real and reactive power injections at only one bus. This formulation is inadequate
for contingency analysis as it leads to gross distortion of the load profile for well-conditioned systems,
where the power injection at the bus of choice would be increased far beyond any load profile that would
be used in operation — and far beyond a reasonable power injection at that site. The formulation in
equation (2.6) distributes the scaling of power injections among all buses, preserving the load profile of
the system.
2.5.3 Application of Parallel Computing
The application of continuation power flow to multiple contingency analysis presents a significant chal-
lenge because of the sheer number of contingencies that must be considered. The number of contingency
cases grows exponentially with the level of contingencies considered and the number of elements in a
[Figure 2.5: bar chart of the speed multiplication factor for the four elements, 30-bus system, and 118-bus system contingency sets; one bar per set for each of polynomial fitting, adaptive step size, the combination of the two, and the combination including parallel computing.]

Figure 2.5: Graph summarizing performance gains resulting from modifications to continuation power flow, including polynomial fitting, adaptive step size, the combination of the two algorithms, and the combination of the two algorithms in addition to the use of parallel processing on a four-core CPU. Performance gain is calculated as a multiplicative increase in the rate of computation of contingency analysis; each rate is calculated as a/b, where a is the time to compute before adding the modification and b is the time to compute afterwards. The contingency sets used for these benchmarks are the same as those described in Table 2.3.
system, becoming unmanageable for larger networks at higher levels of contingency analysis. One key
improvement that can be made to the speed of contingency analysis is to utilize multiple processes to
speed up the rate at which a set of contingencies can be analyzed. This technique can leverage hardware
capabilities of multi-core processors and the fact that continuation power flow is primarily a CPU-bound
operation to compute individual continuation power flow solutions simultaneously.
The implementation of parallel processing for contingency analysis is made simple by the multi-
threading capabilities of modern computing languages. The continuation power flow algorithm was
implemented using MATLAB, which provides a simple mechanism to distribute the iterations of a loop
to individual workers that each act as a separate process in the operating system. This mechanism
allows a performance speedup in proportion to the number of CPU cores available. Table 2.3 gives
an example of the speedup made possible by parallel computing techniques. The benchmark describes
the performance gains achieved by enabling parallel acceleration on a desktop-class, quad-core CPU for
three different sets of contingency analysis ranging in size from 15 to 64261 scenarios. The use of parallel
processing allows each CPU core to analyze a contingency, providing a multiplication of computation
speed that is proportional to the number of CPU cores available — Table 2.3 shows that for larger
contingency sets, the total computation time for the contingency analysis is divided approximately by
the number of workers available, with one worker for each core. Figure 2.5 summarizes the performance
gains achieved by using parallel processing in addition to the modifications to CPF described earlier.
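In MATLAB this amounts to replacing the serial loop over scenarios with a parfor loop; the helper functions apply_contingency and run_cpf below are stand-ins for the thesis routines.

    lambda_max = zeros(numel(scenarios), 1);
    parfor s = 1:numel(scenarios)                           % one scenario per worker at a time
        mpc_s = apply_contingency(base_mpc, scenarios{s});  % remove the faulted elements
        lambda_max(s) = run_cpf(mpc_s);                     % severity of this scenario
    end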
Parallel computing techniques provide a simple mechanism for introducing hardware scaling to fur-
ther reduce the computation time for contingency analysis. Since contingency analysis can be discretized to a single unit for each contingency, each individual fault scenario can be run independently of the others, without considering the number of faults involved or the ordering; the computation of large sets of contingencies can therefore be distributed on an arbitrarily large number of cores.
computation time for:   four elements   30-bus system   118-bus system
one worker              1.791 s         09:04 min       3:26:21 hrs
two workers             1.4467 s        04:32 min       1:47:50 hrs
three workers           1.3333 s        03:09 min       1:12:14 hrs
four workers            1.3415 s        02:45 min       0:55:28 hrs

Table 2.3: Performance benchmark comparing execution times for continuation power flow with and without the use of multi-processing. The one worker case is the equivalent of not using parallel computing, since with one worker all jobs are still completed sequentially. The column four elements contains results for all possible contingencies (up to n − 4) involving four select branches in the IEEE 30-bus test system, totaling 15 scenarios; the column 30-bus system displays times to compute all possible n − 1 and n − 2 contingencies on the IEEE 30-bus test system, totaling 3240 scenarios; and the column 118-bus system displays times to compute all n − 1 and n − 2 contingencies on the IEEE 118-bus system, totaling 64261 scenarios.
This makes the application of computing
clusters to contingency analysis trivial. MATLAB’s Distributed Computing Toolbox makes this adapta-
tion seamless, running individual loop iterations in parallel on whatever processes are available — local
or on a computing cluster — and many other programming languages also provide tools to integrate
computing clusters.
2.6 Chapter Summary
This chapter explored several aspects of how contingencies can be evaluated to produce a robust, re-
peatable and reliable measure of contingency severity that is applicable to multi-element contingencies,
in addition to single-element or n − 1 contingencies. Special attention was given to the development
of algorithms for improving the performance of continuation power flow. While these developments do
not eliminate all concern with respect to the computational burden of performing contingency analysis
that includes multi-element contingency scenarios, they bring it into the realm of feasibility for appro-
priately sized systems at a reasonable depth; that this performance level can be achieved using scalable
commodity hardware is promising for the implementation of multi-level contingency analysis with quick
turn-around.
Chapter 3

Visualizing Multi-Level Contingency Data
Figure 3.1: Connection between tree diagrams and treemaps. a) A simple tree diagram showing four elements. b) Treemaps of elements at a single level. c) A treemap combining two levels.
3.1 Introduction
A major component of this research is the exploration of techniques for creating meaningful visualizations
of contingency data. There have been few attempts at visualizing contingency analysis in the past, and
none that have considered multi-level contingency data. This research presents a ground-level exploration towards effective visualization of such data.
The inclusion of multi-element contingencies complicates summarization and visual display of the
results of contingency analysis. Even the traditional approach of displaying contingencies in an ordered
list [4, 2] is ineffective since n − 2 contingencies cannot be directly compared to n − 1 contingencies;
though they are less likely to occur, they tend to cause bigger reductions in loading capacity of the
system. Beyond this, the sheer number of fault scenarios necessitates special considerations for how
to quickly and intuitively communicate the data contained in the analysis without omitting important
details.
Visualization techniques applied to contingency analyses have the potential to concisely summarize
the information contained in the results, allowing for quicker exploration of the data and gleaning of
new insights from it. Differing approaches to visualizing and interacting with contingency results will
produce different perspectives on that data and identify contrasting aspects of how the system behaves
under load. In addition to accommodating the increased complexity and scale of multi-level contingency
data, it is desirable to identify techniques that enable sophisticated, higher-order observations from
contingency data, in order to maximize the value of the computational investment required to obtain contingency analysis data for multiple simultaneous elements.
In this chapter, two visualization methods are presented that can be used to summarize and explore
contingency analysis results for multiple levels of faults: tree diagrams and tree-maps. The tree diagram
describes how contingency scenarios can be mapped to a tree structure to describe the relationships
between contingencies of different levels that share elements in common; the treemap diagram builds
on this structural framework by presenting an alternative visual approach that highlights the value of
tree nodes over the structure of the data set as a whole. The mapping of contingency data to each
diagram will be described, accompanied by discussions about how the visualizations are constructed,
how they can be modified to communicate more information about the data set, and how they can be
augmented with other visualization companions and interactive tools to enhance exploration. Figure 3.1
gives examples of the two diagrams and displays an overview showing how the diagrams are related.
3.2 Tree Diagram
3.2.1 Introduction
Tree diagrams have been used in the past to display many different forms of hierarchical organizations
and topologies as well as binary decision processes. Examples of such visualizations include:
• descriptions of file directories [22]
• phylogenetic trees for differentiation of sub-species [23]
• DNA differentiation in population genetics [24]
• decision tree and decision tree classifier [25, 26]
• probability tree [27]
• family tree
Tree diagrams excel at exhaustive description of systems; each entity and its associated relationships
have relatively similar visual size and weight when compared to other entities in the diagram, and the
layout of a tree diagram emphasizes the structural relation between elements over their quantitative
relationships. There is no scale necessary to interpret the features of the graph, allowing the viewer to
explore the structure without interpreting the scale of any visual element — the relative sizes of tree
elements are not central to its structure. In addition, the omission of data from a tree diagram is discrete;
unlike diagrams that give intrinsic structural proportion to the value of each data point, elements in a
tree diagram have comparable visual presence regardless of any associated value1. The absence of an
element suggests not that it is less important but that it does not exist.
The following sections will detail the methodology for mapping contingency analysis results to a tree
structure and will discuss how they can be enhanced by overlaying quantitative information — followed
by an exploration of the strengths and limitations of the tree diagram in this application.
3.2.2 Representing Contingency Data as a Tree Diagram
The key elements of the tree data structure are nodes — representing individual entities — and edges,
which represent the relationship between different entities. Edges describe a parent–child relation

1 Conversely, in a graph containing continuous values (e.g. a pie graph), insignificant data points would barely appear on the graph and could be lumped together in an "other" category.
Figure 3.2: Tree representation of a subset of fault cases for the IEEE 30-bus test system.
between the nodes they connect, and the aggregation of these relationships produces a representation of the
hierarchy of the data. To visualize contingency data in a tree diagram it is necessary to map elements of
the data to these conceptual structures. This mapping can be achieved by representing each contingency
scenario — defined as the combination of a list of elements involved and an associated value of reduced
loadability — as a node, and defining the relationship between different contingency scenarios to exist
where they share common grid elements — represented by an edge. Each relationship held by a node
is with a contingency scenario that has either more or fewer elements involved, causing the diagram to
be separated into groupings by n − k contingency level. These groupings are organized horizontally as
shown in Figure 3.2.
A key distinction of this mapping as compared to a typical application of tree structures is that
nodes can have multiple parents and numerous children, whereas in many traditional tree diagrams
nodes have only one parent and one or two children. There are a multiplicity of relationships between
different fault scenarios, since every element can participate in significant contingencies with a vast array
of combinations of other elements. The tree layout helps to simplify these relationships by containing
them within a per-level structure. In visual terms, this adaptation weakens the structural association
between nodes as compared to other applications of tree diagrams; the significance of each individual
edge is tempered by the fact that there are many relationships between nodes, since each one could have
numerous parent and child relations.
The diagram in Figure 3.2 shows a tree layout of contingency cases. Nodes represent fault scenarios
and have an associated value of reduced load capacity (not shown). Elements are colour coded by type,
allowing the viewer to quickly see what types of elements participate in a particular fault. Each level of
the tree corresponds to how many elements are involved in faults in that layer (i.e. level one contains
single faults, level two contains double faults, etc.). Edges indicate instances where the elements of a
particular fault are a subset of the elements of another fault (e.g. a fault involving branch 1 and branch
2 would be a parent to any triple faults involving branch 1, branch 2 and one other grid element).
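In concrete terms, each node can be stored as a small record; the field names below are illustrative and the values are arbitrary examples.

    node.elements = [1 2];                 % grid element ids involved in the fault
    node.level    = numel(node.elements);  % n-k contingency level (here, a double fault)
    node.dlambda  = 0.35;                  % reduction in loadability obtained from CPF
    node.parents  = [3 7];                 % example: scenarios whose elements are subsets of this one
    node.children = [12 15];               % example: scenarios adding one more element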
3.2.3 Techniques for Overlaying Quantitative Data on Tree Diagrams
Although the layout of a tree diagram is easy to understand and has a straightforward mapping to the
hierarchical structure of multi-level contingency analysis, some modifications are necessary to display
the data contained in contingency analysis. The structure of the tree diagram organizes the relationships
between different contingencies, grouping them automatically by level; however, these relationships are
already known before contingency analysis is performed, so the edges do not reveal any results of the
analysis. Furthermore, the structural layout of the tree diagram does not communicate to the viewer the
quantitative impact of each fault represented. These values are paired with the nodes of the diagram but
have no visual representation and must instead be somehow discovered or superimposed on the diagram.
The structure of the diagram alone provides no mechanism for understanding the relative differences in
severity between each fault.
In order for the tree diagram to be useful for interpreting contingency data, it is necessary that the
quantitative data obtained from continuation power flow be overlaid on the diagram, utilizing visual
cues to indicate the measured severity of each contingency scenario in relation to its neighbors. These
visual cues must be designed to draw the eye, allowing the viewer to quickly identify which contingencies
should be examined further; they should also make it easy for the viewer to see which contingencies
are related by common elements, illuminating patterns in the data and increasing comprehension of the
data set.
Since there are two structures in the tree diagram, there are two main visual techniques that can
be used to display quantities in a tree: edge emphasis and node emphasis. Figure 3.3 shows several
techniques that could be used to show quantitative differences between nodes in a tree diagram.
Edge Emphasis involves weighting the lines that connect nodes in accordance with the values of related
contingencies. In Figure 3.3b the severity of each fault is visualized by the weight of the lines
connected to it. The line weights are proportional to the sum of the severity of the nodes at each end, so the thickest lines represent instances where a fault and its child fault are both severe. This weighting draws the viewer's eye to nodes in the tree that have larger value, and highlights instances where a particular fault as well as its child fault each cause significant reductions in loadability.
Node Emphasis involves weighting each node by the change in loadability caused by its associated
contingency. In Figure 3.3c fault severity is indicated by the diameter of each node. The diameters
are scaled relative to nodes on the same level, but have no direct relationship to the diameters of
nodes on different levels. This allows viewers to directly compare contingencies that share the same
number of elements using a common scale, and creates loose visual associations between nodes on different levels of the diagram.
Edge and Node Emphasis Figure 3.3d shows the application of both node and edge emphasis to
show fault severity. The combination of these two techniques effectively draws the eye to faults with more severe reductions in loadability; however, the diagram is visually crowded.
Figure 3.3: Screenshots of a tree diagram illustrating various methods of overlaying quantitative data. a) A tree with no data overlaid. b) Edge Emphasis: a tree with line weightings added, corresponding to the value of attached nodes. c) Node Emphasis: a tree with node diameter corresponding to node value. d) Edge and Node Emphasis: a tree with both node and line weight corresponding to node value. Note that in b) and d) the line weights are derived from a combination of the values of both nodes (parent and child).
3.2.4 Normalization of Data in Tree Diagrams
A key aspect to drawing overlays of contingency analysis results on a tree diagram is defining a method
of normalizing the data and scaling the corresponding visual elements to produce visually pleasing and
intelligible diagrams. For any contingency visualization overlay technique it is necessary to specify a
consistent method for scaling the results across different levels so that they can effectively communicate
differences between nodes while not obscuring the overarching structure of the visualization. If all values
are taken on the same scale, faults involving the most elements will dominate the diagram; this is visually
misleading because while multi-element faults tend to have a larger effect on the network, they are less likely to occur than faults involving fewer elements. It is desirable that faults can be compared on all levels
easily and, more importantly, that elements participating in multiple severe scenarios across different
levels of contingency can be easily identified.
Tree diagrams can take advantage of an intrinsic grouping of elements by level to normalize each
group separately. Fault values are normalized on each level so that they can be shown on the same visual
scale (e.g. n − 3 faults will have comparable visual weight to n − 1 faults). This can be accomplished
by normalizing each contingency within its level, and scaling each group of contingencies with the same
n− k level so that they fit within a maximum and minimum graphical size.
Code Snippet 2: Pseudo-code describing how to calculate edge weights, taking into consideration the associated faults. The weighting of each value is chosen to optimize the visual effect of this edge weighting on the readability of the diagram.
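A sketch consistent with the description that follows is shown here; the weights w_self and w_other and the bounds w_min and w_max are illustrative tuning constants.

    v_self  = self.getLevelContext();      % value normalized within self's n-k level
    v_other = other.getLevelContext();     % value normalized within other's n-k level
    value   = w_self * v_self + w_other * v_other;  % weighted sum over the two faults
    weight  = w_min + value * (w_max - w_min);      % scale to the line-weight bounds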
In the case of edge emphasis, this entails defining a minimum and maximum line weight and scal-
ing each line based on the values of associated contingencies in their individual contexts. Code snip-
pet 2 describes the calculation of line weight for each edge in a tree such as in Figure 3.3b, d. The
value is a weighted sum of the loadability reductions associated with contingencies other and self,
each normalized with respect to other contingencies involving the same number of elements by calling
getLevelContext() on the contingency. Since other is a sub-fault of self, having one more ele-
ment involved, the value returned by each of these function calls is normalized to a different sub-group
of contingencies.
In the case of node emphasis via node diameter, the normalized values obtained from getLevelContext()
must be evaluated across an entire level in order to determine what diameter should be used for each
node. The constraints for determining the radius of each node in a layer include preserving an appropri-
ate upper and lower bound on the size of any one node and ensuring a consistent spacing between nodes,
while adequately filling the available space. The details of this algorithm are described in Appendix C.4.
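A simplified sketch of this per-level scaling, with assumed field and variable names (nodes, max_level, d_min, d_max), is shown below; the full set of constraints is handled in Appendix C.4.

    for lvl = 1:max_level
        idx = find([nodes.level] == lvl);
        v = [nodes(idx).dlambda];                    % loadability reductions at this level
        span = max(v) - min(v); if span == 0, span = 1; end
        for j = 1:numel(idx)                         % map each value to a diameter
            t = (nodes(idx(j)).dlambda - min(v)) / span;
            nodes(idx(j)).diameter = d_min + t * (d_max - d_min);
        end
    end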
3.2.5 Strengths and Limitations of the Tree Diagram
The primary strength of the tree diagram is the straightforward nature in which the structure of the
diagram relates to the contingency data it represents. Fault scenarios are clearly represented by nodes,
and edges show which faults have elements in common; there is very little ambiguity as to what the
structure of the diagram represents. This is a valuable quality, since it reduces the effort required of
observers to comprehend and make deductions about the data represented by the visualization, and
minimizes the training needed to properly understand the diagram. These strengths validate the use of
a tree data structure to organize the contingency results by level and by element.
The major limitation of the tree diagram is that it is unable to represent a large selection of faults.
The examples in Figures 3.2 and 3.3 show tree diagrams representing the combined faults of only five or
six different grid elements, yet the resulting diagrams are very crowded and structurally busy. Because
such a large proportion of the visual information contained in the diagram is devoted to explaining the
structure of the data, the tree diagram is difficult to scale to larger data sets without running out of
space. Even if there existed a method of reducing the data set by eliminating low-impact contingency
scenarios, it may be impossible to pare down the selection of faults for large systems without omitting
vital information.
The issue of limited scaling is compounded by the fact that the structure of the tree does not itself
contain the contingency data; to display the quantities of interest requires adding additional features,
such as the data overlays described — further crowding the diagram and making it more difficult to
read. Although the tree diagram is conceptually easy to comprehend, this benefit diminishes as a viewer
becomes proficient at reading the visualization; what does not diminish is how visually crowded the
diagram is, which makes it harder for the viewer to acquire information and leads to visual fatigue. The
tree format adds to the complexity of the data by directly drawing and focusing on the relationships
between different data points, information which is of secondary importance compared to the value
of each data point. Furthermore, it does not provide any mechanism to selectively omit structural
information or to minimize the presence of elements that are not significant to the user in terms of their
impact on contingency security. This weakness could make the tree diagram more frustrating for viewers
who use it frequently.
3.2.6 Summary
This section explored the visualization of contingency data using tree diagrams, giving attention to how
the tree structure can be used to visualize this data and what measures are necessary to make it effective
at doing so. The tree diagram visualizes the structural framework of a tree as applied to contingency
data. The strengths and weaknesses of this diagram were identified, including its conceptual simplicity
and its tendency to become visually crowded for expanded data sets. The structural simplicity of this
diagram validates the use of the tree data structure to organize the contingency set; however, the problem
of scaling underlines a need for a better visualization of the tree structure that can effectively highlight
and summarize the severity of each data point, one that scales visually to large data sets.
3.3 Treemap
3.3.1 Introduction
The second visualization tool presented is the treemap. The treemap diagram is an alternative visualiza-
tion technique for tree data structures that uses a space-filling technique to fit the tree structure within
a specified region, and is one of the few tree visualizations that can naturally communicate quantitative
data about different nodes in the tree. This visualization technique was first developed as a method of
summarizing hard drive usage [22, 28]. Some examples of treemaps in use include:
• Hard drive allocation [22, 28, 29]
• Breakdown of financial activity by region [30]
• Gene ontology [31, 28]
• Software metrics [32].

Figure 3.4: Treemap of n − 1 contingencies of a set of four branches in the IEEE 30-bus test system, arranged in a hierarchy. Each block represents the outage of a particular branch (indicated by the number), with its area representing the net reduction in loadability of the system caused by contingencies of that element. The layout strategy applies partial ordering to the contingencies by size, from top-left (biggest) to bottom-right (smallest).
The treemap structure excels at breaking down complex data, quickly highlighting the largest data
points while illustrating their context within the broader data set. In contrast to the tree diagram, it
provides a transparent mechanism for masking those results that are less significant, since these elements
become extremely small. The diagrams are also visually intuitive, allowing the viewer to understand the
information displayed almost instantly if they understand how the structural elements of the diagram
are mapped to the contingency data.
The following sections will introduce the treemap diagram and how contingency analysis results can
be mapped to a treemap, followed by discussion of techniques for drawing treemaps and a comparison
of the advantages and pitfalls of using treemaps to display contingency data.
3.3.2 Representing Contingency Data as a Treemap
Despite the intuitive layout of the treemap diagram, the way in which contingency data sets can be
mapped to a treemap format is less straight-forward than the case of a tree diagram. Figure 3.4 shows
a treemap summarizing single-element contingencies involving four branches in the IEEE 30-bus test
system, with each contingency numbered according to the branch that is faulted in that scenario. Each
block represents a particular element or set of elements, with its area representing the net reduction
in loadability of the system caused by the fault of those elements. The layout strategy used to build
the treemap in Figure 3.4 applies partial ordering to the contingencies by size; the faults are laid out
from largest to smallest starting in the upper left-hand corner and moving to the lower right. This
diagram demonstrates a very natural comparison between the four different contingency scenarios; it is
immediately apparent which one is the worst and how they are distributed.
To show multiple levels of contingency data in a treemap, a nesting technique is employed. Figure
3.5 visualizes contingencies for the same four elements that are shown in Figure 3.4, but utilizes nesting
to also show n − 2 contingencies; the n − 2 contingencies for each single element are nested inside
the corresponding boxes that were drawn in Figure 3.4. The original layout is preserved with four
major groups corresponding to the blocks in the single-level diagram (outlined with bold lines). Within
each group the area is divided into blocks representing the different contingencies involved; red blocks
represent double-element contingencies and blue blocks represent single-element contingencies.

Figure 3.5: Treemap of n − 1 and n − 2 contingencies for four elements in the IEEE 30-bus test system, arranged in a hierarchy. Each red block represents an n − 2 contingency while each blue block represents an n − 1 contingency. The blocks are further divided by thick lines into four groups, each containing three n − 2 contingencies and one n − 1 contingency.
One of the challenges of treemaps is their hierarchical nature; since n − 2 contingencies are nested
inside n− 1 contingencies, the area they occupy on the diagram is weighted by the severity of the n− 1
contingency block in which they are drawn. In order to avoid having severe contingencies hidden because
their parent contingencies are not severe, it is necessary to use cumulative values of contingency severity
to give each block its relative area. For example, the first level (blue) blocks in Figure 3.4 correspond to
single elements, but their relative area should be based on the sum of all contingencies in the analysis
that involve that element. Using this scheme, each block in a treemap summarizes faults involving the
element(s) it represents. This is the opposite of the normalization for tree diagrams, in which higher
level contingencies were scaled down in order to give them a size comparable to lower level contingencies.
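The cumulative sizing rule can be sketched in a few lines of Python; the frozenset-keyed results mapping and the numbers below are illustrative assumptions rather than the thesis data structures.

def cumulative_severity(element, results):
    """Sum the loadability reductions of every contingency involving `element`.

    `results` maps the set of faulted elements in a scenario to the
    loadability reduction (MW) measured for it; the returned total sizes
    the element's first-level block in the treemap.
    """
    return sum(mw for elems, mw in results.items() if element in elems)

# Hypothetical results for four branches:
results = {frozenset({1}): 90.0, frozenset({3}): 120.0,
           frozenset({1, 3}): 150.0, frozenset({2, 4}): 30.0}
print(cumulative_severity(3, results))  # 270.0, i.e. 120 + 150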
The use of treemaps is helpful in providing a quick-to-read summary of the contingency data. A brief
glance at the diagram is enough for an observer to make several deductions about the contingency data,
provided that they are familiar with the visualization technique.
By looking at Figure 3.4, the following observations can be made about the contingency analysis
data:
• Faults involving branch 3 combine to have the largest effect on system loadability.
• The order of severity for each single element (summarizing all the faults it is involved in) is 3, 1, 2, 4.
• Faults involving branch 3 or branch 1 make up roughly 70% of the reduction in loadability for any
faults in the contingency analysis.
By expanding the treemap diagram to show multiple levels of contingency data, even more observa-
tions can be made without obscuring the information communicated by the single level diagram. Looking
at Figure 3.5 further observations can be made that:
• The combined fault of elements 3,1 is the most severe double contingency under the sub-grouping for both
element 3 and element 1, indicating that this is the most severe n− 2 contingency.
• The contingencies (4, 1) and (1, 2) are both less severe than just the contingency of (1). This is because both double-element contingencies cause the isolation of a load, leaving the rest of the system better conditioned at the expense of not supplying power to that load at all.
• The previous observation and the comparable size of faults 3 and 1 in Figure 3.4 suggest that
the fault of element 1 alone is more severe than faults of other elements on their own.
These detailed observations about the relationships between different scenarios in the contingency
analysis data illustrate how treemaps can enable deeper exploration of contingency data and synthesis
of that data into more nuanced understanding of the different contingencies. To a viewer with basic
understanding of the structural arrangement of a treemap and how it maps contingency data, these
characterizations can be made quickly and with minimal mental effort or visual confusion. These types
of higher-order observations are a marked improvement compared to list-based summaries used for single
element contingencies, which provide no mechanism for pattern visualization.
3.3.3 Normalization of Data in Treemaps
In the case of the tree diagram, it was necessary for the viewer to understand how elements were normalized in order to know which elements could be compared with respect to their size
and what was implied by the scaling of elements. With the treemap, this normalization is naturally
imposed by the structure of the diagram. For a single level of data, the total space of the treemap
diagram is analogous to the sum of all elements in the data set, and the area of each individual block in
the diagram represents that data point’s size relative to the sum of all elements2.

2This is why treemaps are often described as space-filling diagrams.
The space-filling nature of the treemap makes the normalization of data in a treemap relatively
intuitive; however, with multi-level treemaps (e.g. Figure 3.5), the comparison of n − 2 contingencies
is less straight-forward. In a two-level treemap, an n − 2 contingency will appear twice, once for each
element involved. This was illustrated in the observations for the two-level treemap in Figure 3.5, where
it was noted that the fault of elements 3,1 appears in the group of faults for element 3 and in the group of
faults for element 1. Within each of these sub-blocks, the normalization concept of a treemap holds, yet
because each sub-block is sized corresponding to a different — but overlapping — set of contingencies,
the two blocks representing the contingency of elements 3,1 are not the same size. This difference in
normalization shows that it is not appropriate to compare two different elements that are nested inside
different sub-blocks within the treemap. Each element is normalized in a different context, so their
relative areas cannot be compared directly. Only within a grouping, such as the one containing faults
of element 3 and one other branch, can the area of different blocks be directly compared.
3.3.4 Different Approaches to Treemap Styling
There are several different geometric approaches to drawing treemap diagrams and allocating space
within them, including
• Square or Rectangular [22]
• Space-filling curves [33]
• Gosper curves [34]
• Voronoi curves [32]
• Circular partitions [35]
• Circular layouts
All of these algorithms are straight-forward space-filling techniques that populate a constrained geometric space (a rectangle or circle, with the exception of the Gosper curve [34] and the Voronoi curve [32]) with polygons whose areas correspond to the individual data points in a data set, in such a way that together they occupy the entire space. In the example of a square treemap
describing hard drive allocation, the area of the square space would correspond to the capacity of the storage
device, while individual blocks represent the space occupied by files or the unused space on the disk; the
summed area of all the elements is equal to the capacity of the drive. The different geometric approaches
listed above use varying polygons and layout algorithms to fill the space, achieving the same space-filling
effect with a different look. In this research the rectangular approach was used because it has seen broad
application, because the layout algorithms for this approach are simple and intuitive, and because of the
increased rendering precision afforded by having elements aligned with the display pixel matrix.
3.3.5 Tiling Algorithm for Treemap
Square or rectangular treemap geometries are the most common implementations of treemap diagrams,
and there are several tiling algorithms that have been proposed to lay out the rectangles within a
treemap. These include BinaryTree, Ordered, SliceAndDice, Squarified and Strip [36, 37].
The layout strategy employed to draw a treemap can have a profound effect on the way it is perceived
and how viewers interpret the data it displays. It is desirable that layout algorithms preserve the
ordering of data points from small to large, since this allows layouts to be more predictable — aiding
fast comprehension by the viewer. It is also desirable to maintain the squareness of each element (i.e.
that the aspect ratio of the element be at or near 1:1), since this makes the diagram more attractive and
facilitates direct comparison of elements by area3. These two competing criteria lead to a design trade-
off: a more strict ordering of contingencies can be achieved by relaxing the requirement for squareness;
alternatively, by changing the order of blocks it is often possible to achieve a better average aspect ratio.

3Research has shown that elongation of shapes can introduce a bias in visual interpretation of their area [38, 39].
This research utilizes the squarified tiling algorithm — described in Figure 3.6 — which enforces a
loose diagonal ordering of the elements from top-left to bottom-right. The squarified tiling algorithm
uses a recursive, edge-at-a-time approach to fill in a given rectangular space, allocating a single column
of elements along one edge with the goal of optimizing the aspect ratio of that column.

Figure 3.6: Flow chart describing the recursive tiling algorithm for squarified treemap layout. Starting from a given rectangle, elements (represented by rectangles having area in proportion to their value as a fraction of the sum of all values) are added one by one starting from the largest, such that the elements form a column spanning the shortest side of the space. As each element is added, the column gets wider and the aspect ratios get larger. This process stops after adding an element brings the average aspect ratio above 1:1, and then the better of the last two column layout iterations is chosen for the final column layout. After this, the column layout is repeated with the remaining elements in the remaining space, and this happens recursively until all elements are laid out — at which point the given space is filled.
The approach for laying out a single column is to choose the shorter edge of the given rectangular
space and to iteratively add elements from the data set along that edge until the optimal number of
elements is found. Each rectangle is constrained so that its area relative to the given space is proportional
to its value as a fraction of the sum of all values in the data set. The column of elements spans the
chosen edge, and the width of the column is adjusted so that the area of each element in it is preserved.
One element is added to the column at a time, and the average aspect ratio of the elements in the
column is calculated once each element is added to the column and the dimensions of the column have
been adjusted to preserve the total area. During the first few iterations, each element in the column will
be tall and narrow, with a low average aspect ratio4; as elements are added the column will become
wider, each individual element will become shorter, and the average aspect ratio will increase. Once the
average aspect ratio is greater than one, the loop is broken and the last two iterations of the column are
compared to see which one is closer to 1:1.

4Aspect ratio is calculated as width/height.
After a subset of elements have been allocated as a column, the remaining area of the given space
will be a rectangle. The same column-laying technique can be used recursively on that space with the
remaining elements until they are all laid out. It is important that on each recursive iteration the shorter
edge of the given space is chosen as the dimension to be spanned, as this will ensure that the left-over
space of each recursive iteration tends towards being square. The squarified tiling algorithm achieves
a loose sorting effect by sorting the data points before laying them out, and by starting each recursive
column in the same corner of the remaining space; in the examples shown, this is the top-left corner.
The details of this tiling algorithm are described in the source code in Appendix C.3.1.
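For orientation, a condensed Python sketch of a squarified layout in this spirit is given below. It uses the common variant of the heuristic that grows a column while the worst aspect ratio improves, which approximates the average-ratio stopping rule described above; Appendix C.3.1, not this sketch, is the authoritative implementation.

def worst_aspect(column, side):
    """Worst aspect ratio in a column of areas laid along an edge of length `side`."""
    thickness = sum(column) / side
    worst = 1.0
    for area in column:
        length = area / thickness
        worst = max(worst, thickness / length, length / thickness)
    return worst

def squarify(areas, x, y, w, h, out):
    """Recursively tile `areas` (sorted descending, summing to w*h) into (x, y, w, h)."""
    if not areas:
        return
    side = min(w, h)
    column, rest = [areas[0]], list(areas[1:])
    # grow the column while doing so improves the worst aspect ratio
    while rest and worst_aspect(column + [rest[0]], side) <= worst_aspect(column, side):
        column.append(rest.pop(0))
    thickness = sum(column) / side
    offset = 0.0
    for area in column:
        length = area / thickness
        if w <= h:  # column spans the top edge of the space
            out.append((x + offset, y, length, thickness))
        else:       # column spans the left edge of the space
            out.append((x, y + offset, thickness, length))
        offset += length
    if w <= h:      # recurse on the leftover rectangle
        squarify(rest, x, y + thickness, w, h - thickness, out)
    else:
        squarify(rest, x + thickness, y, w - thickness, h, out)

rects = []
values = [8000.0, 6000.0, 4000.0, 2000.0]  # already scaled to sum to 200 * 100
squarify(values, 0.0, 0.0, 200.0, 100.0, rects)
for r in rects:
    print("x=%.1f y=%.1f w=%.1f h=%.1f" % r)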
3.3.6 Dealing With Quantization Error
One of the challenges associated with drawing precise graphical shapes on a computer screen is dealing
with precision in the definition of the dimensions of a rectangle. Although it is necessary to define a
treemap diagram so that the dimensions of each rectangle have floating point accuracy, they have to be
reduced to discrete integer sizes for rendering on a screen — a process that can lead to distortion of the
dimensions of various elements of the diagram.
Many graphics packages provide support for anti-aliasing filters, which allow high-frequency visual
content such as hard edges to be filtered so that they can be displayed even when their details are smaller
than the resolution of the screen. However, anti-aliasing filters tend to perform poorly on rectangular
diagrams like treemaps, where the exact positions of borders between different rectangles are vital to the
structure of the diagram. Since the borders in the treemap are aligned parallel with the pixels on a screen
and have dimensions in the range of 1-3 pixels, they tend to respond poorly to anti-aliasing; borders
that fall nicely on the pixel grid will appear dark, while borders that fall between pixels will appear
blurred. This effect can be mitigated by increasing the border sizes, but this increases the minimum
size of smallest elements that can be drawn and is also an inefficient use of screen space; fewer data
points can be displayed and more of the diagram is occupied by visual elements that don’t encode the
severity of a contingency. An alternative approach would be to use high-resolution displays; however,
this may greatly increase the equipment cost associated with displaying and using treemap visualizations
because high resolution displays and accompanying graphics hardware are more rare and expensive in
comparison to standard displays.
Another approach to handling precision errors is to round each shape to integer pixel dimensions, so
that the elements of the treemap can be drawn exactly by the pixels on the screen. This technique has
the advantage of producing better-looking images, but care is required in making sure that rounding
errors are not propagated, as this can lead to a mismatch between the area of the actual diagram and
the cumulative area of the rectangular elements contained within it. In addition, it is necessary to
understand how this rounding strategy might distort the treemap in comparison to the underlying data.
4Aspect ratio is calculated as width/height.
                            Case 1 —                 Case 2 —              Case 3 —
                            very large elements      large elements        small elements

block heights (px)          227.45, 202.46,          35.46, 31.48,         9.48, 6.44, 4.38,
                            170.38, 150.41,          26.49, 23.39,         2.39, 2.31
                            149.30                   23.18

block heights               227, 202, 170,           35, 31, 26,           9, 6, 4, 2, 2
after rounding (px)         150, 149                 23, 23

column height (px)          900                      140                   25

column height               898                      138                   23
after rounding (px)

column width (px)           160                      27                    5

avg. aspect ratio           1.05:1                   1.03:1                1.00:1

percent error by area       0.22%                    1.43%                 8.00%

Table 3.1: Comparison of rounding error for elements with large area versus small area in laying out a treemap diagram. The net error in a column as a ratio of the column area can be significant for columns filled with small rectangular elements (such as those in Case 3); the error adds up to 8% of the total column area. For columns that span an entire 1080p screen vertically, the error is insignificant (such as in Case 1).
One cause of rounding error is the rounding of each block height within a column layout. For
example, suppose there are five blocks in one column, all of the same width w and having heights
h = [35.46, 31.48, 26.49, 23.39, 23.18] pixels. The height of the column sums to 140 pixels; however, when rounded with a pivot of 0.5, the heights become h = [35, 31, 26, 23, 23], summing to 138 pixels. This means that at
the end of the column there will be a two-pixel strip across the column that is not filled by any element
— a discrepancy which makes up 1.4% of the total area of the column. This error is small when the area
of the column is large, but as the columns get smaller this error becomes a bigger factor in the column
layout. Table 3.1 compares this scenario with one where the elements are small, for which the rounding
error is much more significant in proportion to the column area.
The effect of block dimension rounding error on the treemap diagram is that the accumulated round-
ing error of all blocks in the column will appear at the end of each column, leaving either unoccupied
space or overflowing the column. If the column overflows and is truncated, the last box may appear
significantly smaller than is dictated by its value and may even be smaller than the first block in the next
column — a disruption in the layout pattern which will quickly be noticed by the viewer, undermining
their confidence in the information conveyed by the diagram. In the case where there is leftover space in
the column, this space draws the eye and distracts the viewer from the rest of the diagram, contributing
to visual fatigue.
There is no way to eliminate rounding error in treemaps since the input values from contingency
analysis are continuous decimal values and must be mapped to a discrete integer range for display.
Fortunately, the effect of this error on the accuracy of the diagram is small enough that it is unnecessary
to use advanced graphical techniques such as anti-aliasing to compensate for it. For large elements that
are meant to be surfaced by the treemap visualization, the error in area is insignificant (for example,
see Case 1 of Table 3.1). In the case of smaller elements, their size alone suggests to the viewer that
in order to make direct comparisons to similarly sized items, they need to be examined more closely
by pulling them out to a different context. The treemap naturally encourages the user to see them
as visually similar to their neighbors, as opposed to highlighting slight differences between them; even
errors that are significant compared to the box size will be considered insignificant in the context of
the entire diagram — provided that the layout strategy appears consistent. However, it is necessary to
eliminate the visual artifacts created by missing or extra space in the diagram, since these gaps draw
the eye of the viewer.
The most straightforward approach to hiding rounding error in a column is to distribute the rounding
error evenly among the elements in the column by padding or trimming them. In the example given
earlier, a column made up of blocks of heights h = [35.46, 31.48, 26.49, 23.39, 23.18] rounds to h =
[35, 31, 26, 23, 23], producing a rounding error of 2 pixels. This error can be rolled into the individual
elements by spreading it among the elements such that h = [36, 32, 26, 23, 23]. This brings the column
height to 140 pixels, with a net element sizing error of 1.51%. It is important that the padding be done
with the goal of preserving the ordering of elements; if the error was distributed as h = [36, 31, 26, 23, 24],
the size order of elements in the column would be changed, which compromises the tiling algorithm rules
and would be noticed by the viewer as an anti-pattern. Instead, the rounding error shows up as a slight
discrepancy in the area of certain blocks as compared to the contingency analysis. The human eye has
very little ability to identify this discrepancy, and so the integrity of the diagram is preserved.
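A sketch of this padding scheme, assuming the column's blocks arrive sorted in descending order: the residual error is absorbed one pixel at a time, padding the largest blocks first or trimming the smallest first, so that the size ordering survives. Python's round() stands in for the 0.5-pivot rounding described in the text; the thesis implementation is in Appendix C.3.2.

def round_column(heights):
    """Round block heights to integers, preserving the column total and order."""
    rounded = [round(h) for h in heights]
    error = round(sum(heights)) - sum(rounded)  # pixels to pad (+) or trim (-)
    step = 1 if error > 0 else -1
    # pad from the largest block down, or trim from the smallest block up
    indices = range(len(heights)) if error > 0 else range(len(heights) - 1, -1, -1)
    for i in indices:
        if error == 0:
            break
        rounded[i] += step
        error -= step
    return rounded

print(round_column([35.46, 31.48, 26.49, 23.39, 23.18]))  # [36, 32, 26, 23, 23]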
Rounding error can also be propagated between the allocation of columns, since after each column
is laid out, the column width also has to be rounded. For example, suppose a column spans one side
of a 900 pixel square, and after identifying the combination of elements that should be assigned to the
column to produce the best aspect ratio, the column width is 30.45 pixels wide. After each block in the
column is rounded and the resulting error is absorbed within it, the width of the column still has to be
rounded to 30 pixels in order to be drawn; the resulting error in area is:

0.45 · 900 = 405 square pixels
This rounding creates an error of 1.48% in the column area; however, it can create a substantial visual
artifact — the extra space will show up at the end of the treemap diagram, where it could be of
comparable size to the final columns. Furthermore, this rounding happens for each column, and so the
error can accumulate over successive column allocations if, for example, each column is rounded down.
In the case of accommodating error within a column, it was possible to distribute rounding error
evenly between the different blocks in the column. With column width rounding there is no way to go
back and pad columns, since consecutive columns may be perpendicular to each other; rounding error
must be accommodated during the layout of each column. It is possible to avoid propagation of rounding
errors by tracking the accumulated rounding error and factoring it in to the next column. For example, if
the first column is laid out with a width w1 = 30.45 pixels, the rounding error will be err1 = 0.45 pixels.
Suppose that the second column is allocated a width w2 = 28.37 pixels; a straightforward rounding
scheme would round this to w2 = 28 pixels, leading to an accumulated error err2 = 0.82 pixels in
column width. In order to avoid this accumulation, err1 can be added to w2 before rounding, giving
w2 = 29 pixels and an accumulated rounding error of:

err2 = 0.63 − 0.45 = 0.18 pixels

The rounding error of the second column is larger (0.63 pixels as opposed to 0.37) but the accumulated
error is much smaller. This scheme causes each column layout rounding step to consider which way the
previous step was rounded, allowing erri to be kept below 0.5 pixels for all recursive iterations of the
tiling algorithm. The implementation of error accommodation is detailed in Appendix C.3.2.
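A sketch of the carry scheme, using the convention that a positive carry means the columns drawn so far are narrower than their exact widths; Appendix C.3.2 contains the actual implementation.

def round_width(exact_width, carry):
    """Round one column width, folding in the error carried from earlier columns.

    Returns the integer width to draw and the updated carry; the carry
    stays below 0.5 px in magnitude, so errors never accumulate.
    """
    adjusted = exact_width + carry
    drawn = round(adjusted)
    return drawn, adjusted - drawn

carry = 0.0
for w in [30.45, 28.37, 25.90]:
    px, carry = round_width(w, carry)
    print(px, round(carry, 2))
# prints: 30 0.45, then 29 -0.18, then 26 -0.28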
3.3.7 Use of Colour Coding to Overlay Information on Treemaps
Unlike the tree diagram, treemaps naturally highlight the central quantitative data in contingency anal-
ysis. This is advantageous because it allows the treemap to much more easily scale to larger data sets
without running out of space on the diagram or having the main structural elements confused by over-
crowding. However, there is still opportunity to overlay additional data on the treemap by modulating
the colour of elements in the treemap.
Figure 3.7 demonstrates the use of colour coding to communicate the distance between elements
involved in a fault. The diagram visualizes n − 1 and n − 2 contingencies, with the distance value for
each n− 2 contingency calculated as the minimum distance between the two elements on a geographical
map of the system. The distance value obtained for each contingency, normalized to the geographical
size of the system, is encoded in the colour brightness of each rectangle, causing faults to appear dark
if the elements involved are close together. If a fault involves two branches that meet at a bus, or that
share a right-of-way, these faults are darker and stand out to the eye. This same mechanism could also
be used to encode another per-contingency metric such as fault probability data, giving the viewer a
better understanding of the risk of each contingency happening, in addition to the data that is already
shown concerning the consequence of each contingency.
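One plausible coding of this brightness modulation is sketched below; the base colour, the brightness floor, and the linear mapping are illustrative choices rather than values taken from the thesis implementation.

def fault_colour(min_distance, system_extent, base_rgb=(220, 60, 60)):
    """Darken a block's colour when its faulted elements are close together.

    `min_distance` is the minimum geographic distance between the elements
    in the fault, and `system_extent` is the normalizing size of the system.
    """
    x = min(1.0, min_distance / system_extent)  # normalized distance in [0, 1]
    brightness = 0.35 + 0.65 * x                # floor keeps near-zero faults visible
    return tuple(int(round(c * brightness)) for c in base_rgb)

print(fault_colour(2.0, 50.0))    # elements share a bus/right-of-way -> dark
print(fault_colour(45.0, 50.0))   # elements far apart -> near full brightness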
Another way that colour coding has been used in the visualizations discussed here is as a method
to differentiate between n− 1 and n− 2 contingencies — for example in Figure 3.5 n− 2 contingencies
are drawn in red and n− 1 contingencies are drawn in blue. This technique allows quick reading of the
structure of the diagram and allows the viewer to identify how much worse double contingencies involving
an element are compared to the outage of just that element, as well as instances where a double element
fault leads to a better result than a single element fault.
Colour can also be used in treemaps simply to improve contrast between different groupings, em-
phasizing the nesting structure. Figure 3.9 demonstrates the use of colour purely to help differentiate
between groupings of contingencies by element. As the first level of contingencies is laid out, each is given a colour, and it and all of its sub-faults share that hue. This technique increases the contrast between
major blocks on the treemap, without introducing extra information.

Figure 3.9: Treemap demonstrating use of alternating colours to increase visual differentiation.
3.3.8 Strengths and Limitations of Treemaps
One of the key strengths of treemaps is that they are able to summarize the extent to which a certain
element contributes to loadability reduction in various fault cases in a contingency analysis data set.
This strength is built in to the mapping of contingency data to the treemap, wherein sub-faults are
summed and displayed in groups with their parent fault; this gives a clear picture of each element’s net
effect on security, along with a breakdown of how the effect is distributed among different contingencies.
A second strength of the treemap is that it allows for quick visual acquisition of the central obser-
vations that can be made from the data set. When an observer looks at the image, they internalize it
by identifying the most prominent features — single blocks that are much bigger than their neighbors,
and elements that appear darkest. These features are also the most valuable summary observations that
can be made about the data set, since they direct the observer to contingencies that are of primary con-
cern. This pattern can be seen on multiple scales within a multi-level treemap, providing a repeatable
and natural pattern framework for reading the diagram and requiring little effort on the part of the
viewer to make observations about the data. This reduces visual fatigue, a valuable quality for operators
who must continually explore updated visualizations to identify changing conditions of operation on the
power grid.
In addition, treemaps tend to scale well to larger data sets. Tree diagrams are visually dense, needing
larger drawing space to scale to bigger data sets; they devote much of the visual space in the diagram
to describing the structure of the data, meaning that each additional element requires a relatively large
amount of space to be displayed. In contrast, treemaps scale by reducing the size of less significant
elements — they are by definition space-filling visualizations. Each additional element requires little
visual over-head to fit into the diagram, allowing the treemap to fit a much larger data set in the same
space without becoming unreadable. Figure 3.7 shows a treemap of n − 1 and n − 2 contingencies for
the entire IEEE 30 bus system, comprised of 80 elements and 3129 contingency scenarios. Although
this diagram represents a much larger set of contingencies than Figure 3.4, it is not significantly more
crowded or difficult to read.

Figure 3.7: Treemap of n − 1 and n − 2 contingencies for the IEEE 30 bus test system, summarizing 3240 scenarios. Colour coding is used to indicate distance between elements in multiple-fault scenarios; darker blocks represent faults of elements that are closer to each other geographically. Cross-hatched areas represent groups of faults that are too small to draw. This diagram shows that the treemap scales well to larger data sets; however, in scaling the diagram up it was necessary to remove the numerical labeling of rectangles, which makes the diagram unreadable without interactive features to identify the individual contingencies.
These strengths relate back to the concept of visual efficiency, which can be understood as the ratio
of visual information communicating data points compared to the total amount of visual information,
including that which simply gives context to the data points. The tree diagram has limited visual
efficiency, since for every contingency the contextual information of nodes and edges constitutes the
majority of each element; the actual data point is overlaid. There is a high overhead to each data point
in the visualization. The treemap has much better visual efficiency since each element in the visualization
is primarily constituted of its area, which directly represents the value of the data point. Only a small
fraction of the diagram is taken up by extra contextual information in the form of borders.
One negative effect of having a larger set of contingencies to draw is that it becomes more difficult
to compare elements of similar size that are big enough to show up, particularly those that are nested
at the second level. The eye has much more trouble discerning the relative area of these blocks, and
the rounding error in the layout becomes significant for smaller blocks. This reduces the efficacy of the
diagram in facilitating detailed comparisons between n − 2 contingencies in a comprehensive manner.
Despite this, there are still patterns that can be observed from the relative sizes of blocks.
Figure 3.8 shows a diagram of a treemap based on contingency analysis of the IEEE 118 bus test
system, comprised of 55817 n− 1 and n− 2 faults. This diagram demonstrates that even for extremely
large sets of contingencies over multiple levels, it is possible to produce a treemap that concisely
summarizes the data set and highlights the basic structural themes of the data. One characteristic of this
image is that, compared to Figure 3.7, there is a greater amount of space occupied by cross-hatched
regions, representing a multitude of elements that are too small to be drawn effectively. As the number
of contingencies in the data set increases, the absolute screen real-estate that any one element occupies will tend to decrease, and more and more elements take up too little area to be drawn. The
result is that more of the diagram is taken up by space that contains little information, decreasing the
visual efficiency of the diagram.

Figure 3.8: Treemap of n − 1 and n − 2 contingencies for the IEEE 118 bus test system, containing 55817 scenarios. Colour coding is used to indicate distance between elements in multiple-fault scenarios. Darker blocks represent faults of elements that are closer to each other geographically. Cross-hatched areas represent groups of faults that are too small to draw.
The primary weakness of treemaps is that due to the fixed space and resolution constraints of common
display technology, there is a limit to how many levels of contingencies can be shown at once. For larger
systems such as in Figure 3.7 it would be impossible to show more than two levels of faults simultaneously
without overcrowding the diagram, because each successive layer of faults has to be nested inside the
previous layer. This limitation is offset by the fact that boxes in the treemap can be scaled by sub-faults
as well, summarizing faults in higher levels; however it tends to make the diagrams less intuitive for
observers, since the top-level breakdown of faults is an aggregation of several values as opposed to a
one-to-one relationship with the contingency data. Although there are techniques that could enable
exploration of further depth in contingency analysis, they may not fully overcome this weakness.
In addition to this, there is some redundancy in the treemap since multi-level faults appear more
than once on a diagram; for example, in Figure 3.5 the fault of branches 1 and 3 is shown twice, once
in the block containing faults involving branch 3 and then again in the block containing faults involving
branch 1. This complication is brought about by the fact that unlike many tree based data structures,
nodes in a tree of contingency scenarios can have multiple parents; there is no way to eliminate this
from a treemap diagram. As a result, there is an extra dimension of space inefficiency for multi-leveled treemaps of contingency data caused by the limitations of the display resolution. This duplication can be a pitfall for viewers; it also reduces the space efficiency of the diagram, an effect that gets worse for higher-order contingencies such as n − 3 and n − 4.
3.3.9 Summary
This section presented the concept of a treemap and how it can be used to visualize contingency data
about power systems. The mapping of contingency data to a treemap was detailed, along with practical
considerations of methodology for creating treemaps for use on a computer screen. Demonstrations were
given of the scaling capability of treemaps along with opportunities for enhancing treemaps with the use
of colour coding. The treemap diagram builds on the concept of a tree structure of contingency scenarios
by focusing on the values associated with each node, prioritizing this information over the tree structure
itself. In doing so, it is able to provide strong summarization features and scales well to larger trees,
naturally downplaying insignificant elements and always giving a top-level summary that highlights the
most significant data points in the contingency analysis. This makes the treemap diagram well suited
to multi-element contingency visualization.
3.4 Expanding Content of Visualizations
3.4.1 Introduction
Tree diagrams and treemaps are two tools that have the potential to provide a foundational structure for
visualizing and understanding the results of contingency analysis. Both tools offer some capacity to high-
light patterns in the data that occur across multiple levels of contingencies; however, these techniques
have limitations in their abilities to scale to larger data sets and in their effectiveness at communicating
more precise details of the contingency analysis. These limitations, though they have different man-
ifestations in each diagram, stem from a fundamental design conflict between giving concrete details
about individual contingencies at a microscopic level versus enabling macroscopic summarization of the
broader data set. In order to overcome this limitation, it is necessary to introduce supplementary visual-
ization techniques to increase the flexibility of contingency visualizations and give a more comprehensive
representation of the data.
In view of the large amount of complex, structured data involved in multilevel contingency analysis, it
is apparent that static visualization techniques may not be enough to get a full sense of what contingency
analysis reveals about the system. In the case of both the tree and treemap visualization there are aspects
of the results that are not clearly shown. The tree diagram struggles to effectively communicate data
about the effects of each fault case on system operation, and even what is visualized can only be shown
for small systems or small subsets of the full analysis. With the treemap, there is redundant information
and a limitation as to how many levels of contingency analysis can be shown. Beyond these challenges,
it is difficult to extract details about fault scenarios solely from the visual layout of the diagram. Both
diagrams struggle to provide full representation of the measurements obtained from contingency analysis,
making it difficult to substantiate observations gleaned from the visualization with concrete outcomes
of the contingency analysis.
With these factors in mind, it is necessary to consider other dimensions of visualization that may
be helpful in conveying detailed and more complex information about contingency analysis in a way
that is comprehensive and intuitive. There are several techniques, visual and otherwise, that may be
used to enhance the readability and clarity of contingency visualizations. This section will discuss the
inclusion of interactive features in contingency visualizations and how they can be used to bring out
details that are not explicitly displayed by the core diagram. These techniques will take advantage
of mouse or keyboard interaction to visually augment the diagrams, exposing additional details which
cannot effectively be visualized for all elements simultaneously. Companion diagrams will be introduced
which seek to add context and detail to the visualizations, incorporating micro-level details into the
macroscopic visualizations afforded by treemaps and tree diagrams. In addition, the use of a threshold
to limit the size of the data set will be discussed.
3.4.2 Interactive Discovery of Contingency Details
The major weakness of both tree diagrams and treemaps is that they only show an abstraction of the
contingency analysis results. The absolute measurements of severity for each contingency are not directly
represented in the diagram; instead, a stylized visualization or summary is portrayed. For these diagrams
to be effective they need to be more closely connected with the underlying data. This can be achieved
by implementing interaction.
A key enhancement that can be made to static visualization techniques is the implementation of
mouse interaction for discovery of deeper details. Figures 3.10-3.13 show mouse interactions with a
treemap diagram to reveal details about individual contingencies contained in the visualization.
The implementation of this scheme is simple and natural for the user: each rectangle in the treemap
represents a specific contingency scenario and has associated with it a set of elements that are faulted
out for that contingency; an associated value of reduced maximum loadability, in megawatts (MW);
and a list of other elements that are directly affected by that contingency (e.g. if a bus is faulted out,
branches that connect to that bus would be directly impacted). When the viewer clicks on a particular
contingency, these details are displayed below the treemap in text format. The three categories shown
give a summary of what the measured effect of each contingency is and which elements are involved.
This summary also details the grid elements that might be directly affected by a contingency, drawing a
connection from these elements to the consequent reduction in loadability. Other measurements such as
voltage deviations or current loadings could also be reported here5 in order to give a broader perspective
on the effects of the contingency.

5These measurements could be derived during CPF, but only if the system is solvable at the defined loading level.
This interactive approach allows the viewer to take advantage of the visualization to make broader
observations about relationships between contingencies, and then break out fuller details of each contin-
gency as needed. The concise format of the visualization is preserved while giving the viewer access to
details that otherwise could not be displayed. The visualization acts as a vehicle to guide viewers from
general observations about the data set to insights about specific contingencies.

Figure 3.10: Treemap diagram demonstrating the use of mouse input to identify details about a specific fault. When a particular contingency is selected on the treemap, details corresponding to that contingency are displayed below the diagram. This diagram shows the fault of generator number 4 and transformer number 2 in the IEEE 30-bus test system, resulting in a reduction to the maximum loadability of 614 MW.

Figure 3.11: Treemap diagram demonstrating the use of mouse input to identify details about a specific fault. This diagram shows the fault of bus number 2 and transformer number 2 in the IEEE 30-bus test system, resulting in a reduction to the maximum loadability of 537 MW.

Figure 3.12: Treemap diagram demonstrating the use of mouse input to identify details about a specific fault. This diagram shows the fault of transformer number 2 in the IEEE 30-bus test system, resulting in a reduction to the maximum loadability of 487 MW.

Figure 3.13: Treemap diagram demonstrating the use of mouse input to identify details about a specific fault. This diagram shows the fault of generator number 4 and bus number 6 in the IEEE 30-bus test system, resulting in a reduction to the maximum loadability of 612 MW.
3.4.3 Responsive Highlighting of Diagram Structures
Responsive interaction can also enable highlighting of patterns in the visualizations that would otherwise
be difficult to identify and may require training to see, increasing the level of explorability afforded by
contingency visualizations. This section will describe two methods of responsive highlighting — one for
the tree diagram, and one for the treemap.
Highlighting of sub-trees in the tree diagram.
One thing that responsive techniques can do is allow the user to focus on sub-structures within the
diagram. By highlighting secondary relationships that are not part of the diagram structure or by
emphasizing relationships which would otherwise not stand out, the visualizations can increase the
breadth of information communicated. Figure 3.14 shows the use of responsive mouse interaction to
highlight sub-tree structures in a tree diagram. When the mouse is moved over a contingency (represented by a node), that fault and all of its sub-faults, as well as the edges connecting them, are highlighted in bold.
This action displays a sub-tree of the selected node, enabling the user to more easily identify connections
between specific faults that have significant effect on the system loadability.

Figure 3.14: Screen-shots of a tree diagram demonstrating the use of mouse hover interaction. Hovering over a certain node highlights sub-faults associated with that node, helping to cut through visual clutter. a) Base diagram. b) Hovering over a node. c) Hovering over another node.
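A sketch of the traversal such a hover handler might perform, assuming each tree node exposes a list of children (the actual data structures in the thesis code may differ):

class Node:
    """Minimal stand-in for a tree-diagram node."""
    def __init__(self, label, children=()):
        self.label = label
        self.children = list(children)

def subtree_to_highlight(node):
    """Collect the hovered node, all of its sub-faults, and the connecting edges."""
    nodes, edges, stack = [], [], [node]
    while stack:
        current = stack.pop()
        nodes.append(current)
        for child in current.children:
            edges.append((current, child))  # edge to render in bold
            stack.append(child)
    return nodes, edges

# Hovering over fault {1} highlights {1}, {1,3}, {1,4} and both edges
hovered = Node("{1}", [Node("{1,3}"), Node("{1,4}")])
nodes, edges = subtree_to_highlight(hovered)
print([n.label for n in nodes])  # ['{1}', '{1,4}', '{1,3}']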
The effect of sub-tree highlighting is shown in Figure 3.14. In Figure 3.14b, the sub-tree of branch
number 1 highlights several large nodes involving that branch which are strung together. In contrast,
Figure 3.14c shows the highlighting of faults involving branch number 18 and bus number 12, involving
fewer serious contingencies. This reinforces the observation that faulting branch 1 has a bigger impact
on secure operation of the system.
Highlighting of redundant visual elements in the treemap.
In the case of the treemap, it was noted that there is redundancy because double contingencies appear
twice — once for each single element contingency. This can be visualized using responsive interaction.
Figure 3.15 gives examples of two different n− 2 contingencies under mouse-over. Figure 3.15a shows a
mouse-over of the fault of transformer number 2 and bus number 2 in the top-left corner of the screen;
this contingency is in a group of rectangles that all represent contingencies involving transformer 2 and
one other element. When the mouse is placed on the contingency, a second block is highlighted in black,
half-way down the left side of the treemap. This rectangle represents the same contingency but in a group
representing contingencies of bus 2 and one other element. Figure 3.15b shows a similar highlighting for
the fault of transformer number 2 and bus number 18.
The visualization of these redundancies in treemaps is of limited value in understanding contingency
analysis results; however, it can help viewers to better understand how treemaps are drawn and how
to properly interpret the diagram. In addition it provides a quick method for understanding how two
elements in a contingency have contrasting effects on the system by relating the various other faults
in which they are involved. For example, in Figure 3.15b the secondary block (corresponding to bus
18) is in the group of contingencies involving bus 18, a group that is much smaller than the group of
contingencies involving transformer 2. This underscores the imbalance between the elements in terms
of their contribution to the combined fault. Highlighting the two redundant instances of a contingency
provides a quick — though limited — view of the contrast between them, giving some indication of
which element is a bigger security liability.

Figure 3.15: Use of mouse interaction to highlight duplicate contingencies in a treemap. a) Contingencies of bus 2 and transformer 2, represented by two rectangles. b) Contingencies of bus 18 and transformer 2, represented by two rectangles.
3.4.4 Cross-referencing to One-line
Perhaps the most powerful method that can be used to give context to a contingency visualization is
to cross-reference the visualization with a one-line diagram of the system, enabling exploration of the
visualization and system diagrams to identify important features. Mouse input can be used to select a
particular fault, and the elements involved in that fault can be highlighted; alternatively, elements on
the one-line can be selected and faults involving that element can be highlighted on the treemap, giving
the observer a view of all the instances where that fault shows up.
Figures 3.16 and 3.17 give examples of how these interactions could be used to glean useful information
about the system. In Figure 3.16a, a treemap and corresponding one-line are shown, with the darkness
of each block encoding the distance between elements involved in the contingency that block represents.
In Figure 3.16b a particularly dark block, indicating a fault involving elements that are close together,
is selected with the mouse, causing the two elements involved in that fault to be singled out on the
one-line diagram — bus 6 and branch 41. This connection between the two diagrams allows the user to
get details about the elements in an interactive and visually efficient manner.
The connection between the two diagrams can be utilized in reverse to look at each element involved
in a fault that has been singled out. Clicking on an element within the one-line diagram will highlight
on the treemap the faults in which this element participates. For example, in Figure 3.17a branch 41
is selected and numerous rectangles are highlighted in dark gray on the treemap. These rectangles
correspond to specific faults of branch 41 and another element, and their size and position within each
grouping indicates that these faults are all significant. In addition, one whole group of n−2 contingencies
is highlighted, representing the proportional effect of all contingencies involving branch 41 and another
element. This analysis suggests that the selected branch participates in a few serious fault scenarios. The grouping of contingencies for branch 41 is smaller than the groupings for many other elements (it is 37th in size out of 80 elements); likewise, in each grouping for other elements, the n − 2 contingencies involving branch 41 are middle-of-the-pack.
Looking at a different element can bring out vastly different observations. In Figure 3.17b, bus 6
is brought into focus with a click. When bus 6 is clicked, the largest grouping of n − 2 contingencies
(top-left) is highlighted in dark gray, indicating that the combined effect of all faults involving this bus
make it the most significant element in the system in terms of reduced loadability. In other groupings
of n− 2 contingencies, the faults involving bus 6 are at or near the top-left of every grouping, indicating
that no matter which other element is involved, an n− 2 contingency including bus 6 is one of the most
serious in terms of reduced loadability of the system. Furthermore, it can be observed that all the n− 2
contingencies in the highlighted major block are of similar size, suggesting that bus 6 is the primary
cause of reduced loadability in these contingencies. Changing the other element has little impact on how
severe a contingency of bus 6 is because it is the primary cause of the reduction in loadability.
The above discussion shows how interaction, combined with a system diagram, can increase the
functionality of a visualization. The treemap excels at summarizing contingency data in a way that
makes it easily explorable with the eye, at the expense of displaying details about individual cases.
These diagrams guide the user by surfacing important information, and with interaction the user can
discover patterns which can aid in prioritizing responses to contingency scenarios. When combined with auxiliary visualizations, these techniques allow deeper patterns to be identified while also exposing the underlying contingency analysis results.

Figure 3.16: Screen-shots of a treemap diagram where elements involved in a fault are highlighted on a one-line diagram when that fault gets a mouse-over. These visualizations contain n − 1 and n − 2 contingencies of branches and buses on the IEEE 30-bus test system. a) Treemap and corresponding one-line. b) One fault singled out with the mouse (highlighted in gray) and the corresponding elements are displayed on the one-line (highlighted in red).

Figure 3.17: Screen-shots of a treemap diagram where an element on the one-line diagram is clicked, highlighting the blocks on the treemap that represent contingencies that element is involved in. These visualizations contain n − 1 and n − 2 contingencies of branches and buses on the IEEE 30-bus test system. a-b) Single elements singled out with the mouse, with corresponding faults highlighted in gray.
3.4.5 Drill-down
One of the main limitations of the treemap is that it doesn’t scale easily to greater depths of contingency
analysis. Beyond two levels of faults it becomes impractical to draw nested treemaps because the screen
real-estate available to each nested treemap is small; features of the diagram become too small to
compare and rounding error becomes significant. This limitation does not preclude treemaps from
visualizing deeper contingencies; however, a systematic approach to extending the diagram’s depth is needed in order to overcome this obstacle. All of the examples given so far have been of treemaps with
n − 1 contingencies as the top level; however, it is possible to draw treemaps with n − 2 contingencies
as the top layer, with n − 3 contingencies nested inside them. Alternatively, the treemap can display
sub-faults related to a particular contingency (e.g., draw a treemap of n − 3 contingencies involving
branch A and bus B). This allows for secondary views to be opened up to explore more than two levels
of the contingency analysis.
One realization of depth in treemap visualizations of contingencies is to build drill-down capability
for the diagram, where clicking on a square would isolate the corresponding fault and create a treemap
of just that fault and its sub-faults. The effect could be implemented in one of two ways: bringing up
a windowed view of the secondary treemap on click, or using a zoom-in effect to blow up the area of
the fault in question. This technique is demonstrated in Figure 3.18, which shows several levels of faults
involving a single branch. Such a sub-tree functionality would provide a treemap visualization technique
analogous to the mechanism for highlighting subtrees in tree diagrams as described in Figure 3.14.

Figure 3.18: Treemap diagram showing n − 1 (purple), n − 2 (orange) and n − 3 (green) contingencies involving branch 2 and four other elements in the IEEE 30-bus test system. This diagram demonstrates the concept of building a treemap focused on only one element.
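The selection step of such a drill-down could look like the following sketch, which reuses the hypothetical frozenset-keyed results mapping from the earlier sketches; the clicked fault and all of its sub-faults become the data set for the secondary treemap.

def drill_down(results, selected_elements):
    """Return the clicked fault and its sub-faults for a secondary treemap.

    `results` maps frozensets of faulted elements to loadability reduction
    (MW); a sub-fault of the selection is any scenario whose element set
    contains every selected element.
    """
    selected = frozenset(selected_elements)
    return {elems: mw for elems, mw in results.items() if selected <= elems}

results = {frozenset({2}): 80.0, frozenset({2, 5}): 95.0,
           frozenset({2, 5, 7}): 110.0, frozenset({3}): 60.0}
print(drill_down(results, {2}))  # keeps {2}, {2,5}, and {2,5,7}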
Another approach is to use scrolling to change the contingency level used for the top-level treemap
layout. In the examples given previously of treemaps such as in Figure 3.5, the first level of treemap
layout is by individual elements, with the areas corresponding to contingencies involving that element;
each area is then broken up as a nested treemap of sub-faults. The depth of the top level of the
treemap could be changed to map to n− 2 contingencies, with each block representing a combination of
two elements and within it a nested treemap of n− 3 contingencies involving the two elements and one
other. This approach would allow the viewer to isolate faults by level, giving a more focused view on a
sub-set of the entire contingency analysis. An example of this approach is shown in Figure 3.19; n − 2
contingencies are shown in orange and n− 3 contingencies are shown in green, with the top-most layout
pattern being a division by n− 2 contingencies.
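Changing the base level amounts to changing the key on which the top-level layout partitions the fault set: instead of grouping by single elements, group by element pairs. A sketch, under the same hypothetical fault representation as above:

    from collections import defaultdict
    from itertools import combinations

    def group_by_pairs(faults):
        # Partition faults into top-level blocks keyed by element pairs; each
        # block holds the n-2 fault for that pair plus any deeper faults that
        # contain the pair, matching the layout of Figure 3.19. Note that an
        # n-3 fault contains three distinct pairs and so lands in three blocks,
        # the same duplication of visual elements noted later as a limitation.
        blocks = defaultdict(list)
        for f in faults:
            for pair in combinations(sorted(f.elements), 2):
                blocks[pair].append(f)
        return blocks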
3.4.6 Thresholding
Visualization of higher order contingency data is hampered by the large number of cases that need to
be compared. Diagrams become cluttered and unreadable for larger systems (particularly in the case
of the tree diagram, but also more generally), and it becomes harder to discern details at a scale that
shows the whole data set. For even moderately sized systems it may be impossible to draw a diagram
that shows all relevant elements of contingency analysis simultaneously and with adequate detail.
To mitigate this problem, a threshold may be applied to the contingency results to filter out contin-
gency scenarios that have limited impact on the network. Removing low-impact contingencies has the
effect of reducing the amount of information that needs to be displayed and increasing the granularity
with which the more important contingencies may be visualized. There are two approaches to applying
thresholding to contingency results:
1. Filter by rank or fraction of the results (e.g. show only the worst 50 cases or the worst 15%).
2. Filter by severity as compared to the nominal case (e.g. show only cases with more than 15%
reduction in loadability).
Figure 3.18: Treemap diagram showing n − 1 (purple), n − 2 (orange) and n − 3 (green) contingencies involving branch 2 and four other elements in the IEEE 30-bus test system. This diagram demonstrates the concept of building a treemap focused on only one element.
Figure 3.19: Treemap diagram showing n − 2 (orange) and n − 3 (green) contingencies for five elements in the IEEE 30-bus test system. This diagram demonstrates changing the base level of the treemap diagram, such that the first-level layout summarizes n − 2 faults, with child n − 3 faults nested within.
Of these, the first approach is convenient in that it allows more explicit control of the size of the resulting diagram, since the threshold is chosen to achieve the desired number of faults to be shown. The second approach is more meaningful in the sense that the threshold directly addresses the severity of the fault scenario; faults are included based on how bad they are, so any contingency present in the visualization is significant by an absolute standard. This is valuable since both treemaps and tree diagrams summarize and normalize contingencies with
respect to the entire data set but omit a description of how bad the worst contingencies are in absolute
terms6. The second definition of a threshold more directly addresses the goal of the diagram but requires
a definition of what level of load reduction is considered to be significant.
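Both forms of thresholding reduce to a short filter over the scored fault list. The following is a minimal sketch; the loadability field and function names are hypothetical, and the nominal (no-fault) loadability is assumed to be known.

    def threshold_by_rank(faults, keep=50):
        # Approach 1: keep a fixed number (or fraction) of the worst cases,
        # giving explicit control over the size of the resulting diagram.
        return sorted(faults, key=lambda f: f.loadability)[:keep]

    def threshold_by_severity(faults, nominal_loadability, min_reduction=0.15):
        # Approach 2: keep only cases whose loadability falls more than
        # `min_reduction` (e.g. 15%) below the nominal case, so that
        # everything drawn is severe in absolute terms.
        cutoff = (1.0 - min_reduction) * nominal_loadability
        return [f for f in faults if f.loadability < cutoff]

The severity-based filter also mitigates the normalization caveat of the footnote above, since a diagram drawn after this filter contains only cases that are significant in absolute terms.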
The problem of scaling multiple contingency analysis to larger systems is compounded for applications
where contingencies should be visualized in real time to accommodate changing loading conditions and
generator availability and to provide situational awareness; in these applications, the time to run continuation
power flow on all contingencies may become prohibitive. One technique for alleviating this problem is
to use off-line analysis to identify a subset of contingencies which are of interest for on-line tracking. In
this scenario a full set of contingencies from a power system would be analyzed and visualizations of the
results could be used to earmark select contingency scenarios for on-line analysis.
3.4.7 Summary
Interaction and expansion of visualizations is integral to providing balanced insights into the results of
contingency analysis with multiple elements, for which the outcomes are diverse and the resulting data
set has many facets. Interaction is also key to increasing the breadth of information that can be com-
municated about contingency analysis. In this section, several techniques were detailed that could make visualizations of multiple contingencies more usable, flexible, and comprehensive.
3.5 Chapter Summary
This chapter introduced two contingency visualization techniques, explaining how they communicate
the structural relationships between elements and the severity of each contingency. In addition, several
interactive mechanisms were introduced that enhance the usability and explorability of these visualiza-
tions. These diagrams provide visual summarization — allowing the viewer to identify patterns and
extract key insights from the data set — and enable closer inspection of the underlying data through
the use of interaction. Details about the technology and methods used to implement these visualizations
are contained in Appendix C.
6A set of contingencies where the worst fault caused a 5% reduction in loadability would look very similar to a set causing a 25% reduction in loadability, since both sets are normalized to the space allotted to the diagram.
Chapter 4
Conclusions and Future Work
4.1 Conclusion
The development of visualization tools is a prerequisite for making practical use of multiple contingency analysis,
given the scale and complexity of contingency results. Multiple contingencies present a unique challenge,
because of the vast amount of complex data that is produced, and require more advanced and deliberate
techniques for extracting valuable observations. This research identified the tree structural model as
a viable method for organizing contingency results and used that model to map contingency data to
the treemap diagram. The treemap visualization provides a good proof-of-concept for an interactive
visualization model that allows exploration of contingency data and identification of patterns and trends.
Treemap diagrams attempt to visualize multiple contingencies in power systems by leveraging the
hierarchical structure of the data — organizing contingencies to highlight common elements in order to
surface both the most severe contingencies and the elements that participate in those fault scenarios.
The diagrams provide
• powerful summarization capabilities,
• deep exploration of the data set,
• the ability to highlight patterns in contingency data, and
• opportunities for deep integration with companion visualizations.
A key aspect of understanding and utilizing these visualizations is a rigorous understanding of what
questions should be asked of them and what questions they are best suited to answering. These details
inform how contingencies are summarized and grouped; for example, if a user is only interested in
examining n− 2 contingencies, they might achieve a better result by eliminating the visual grouping of
contingencies by single element and instead looking at a flat treemap of n − 2 faults, similar to what
is described in Figure 3.19. Conversely, it is also important that users understand the gains achieved
through incorporating summarization; identifying and fixing one n − 2 contingency security issue may
be of some value for improving security, but identifying and supporting an element that participates in
many severe contingencies can often provide much more value in improving the overall security of the
system with respect to multiple contingencies.
This research identifies treemap visualizations as a viable tool for visualization of contingencies, but
it also identifies several limiting flaws in the visualization technique. The presence of multiple visual elements referring to the same fault can be confusing and frustrating for viewers, as it leads to redundant
elements being displayed. The mapping of treemap elements to contingency data is not transparent
enough that users can view the diagram without some instruction or training, which complicates the
deployment of these systems and could lead to misinterpretation by inexperienced users. The treemap
by nature scales to show the whole data set, which provides strong summarization; however, this comes at the expense of minimizing small details about individual contingencies, and this minimization becomes more pronounced as systems get larger. Although these limitations do not preclude the use of such
visualizations, they raise questions about how best to deploy and utilize treemaps in power system
operations.
Although the mapping of contingencies to a visual framework is a necessary and important step in
developing contingency visualization techniques, the importance of real-time visual interaction and in-
tegration with a one-line diagram cannot be overstated. These techniques are necessary to put treemap
visualizations in proper context and to facilitate the explanation and proper use of the diagrams. With-
out interaction, the treemap remains an abstraction without practical use for evaluating contingencies.
These visualization techniques are basic tools which can be leveraged to build sophisticated sense-making
applications for system operators [4], and can also be used as tools for gaining insights into planning
and evaluation of new projects to improve the resilience of the power grid to fault scenarios such as
component faults and extreme environmental events.
In addition to visualization techniques, this thesis explored the use of continuation power flow for
running contingency analysis with multiple elements, including measures to improve its performance to
achieve quicker measurement of contingency severity. Continuation power flow has been established as
a robust computational method for identifying the voltage stability margin of a power system, and this
research demonstrates its application for evaluating the significance or severity of contingency scenarios
— particularly multiple contingencies, where there is increased risk that the system may not be stable
under given loading conditions. This research extended an implementation of continuation power flow
from the MATPOWER toolbox, including: expanding the application of bus power scaling from one bus
to all buses in the system; implementing an adaptive step size to improve the flexibility of the algorithm
and allow it to realize performance gains on-the-fly; utilizing polynomial curve fitting to improve the
prediction step; and using CPU-based parallelization. The combination of these techniques provided
a substantial performance gain and allowed for contingency analysis of n − 1 and n − 2 faults to be
brought within a reasonable time frame for planning and real-time applications. The use of scalable
parallel computing techniques means that it would be possible, with dedicated computing resources, to
perform on-the-fly contingency analysis for moderately sized systems — a necessary ingredient to allow
real-world application of multi-element contingency analysis. Nevertheless, there remains a need for
further improvements in performance of CPF in order to enable this type of contingency analysis to be
performed on larger systems; recommendations for further improvements are given in Section 4.2.
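Of these modifications, the adaptive step size admits a particularly compact statement. One common rule, sketched here for illustration rather than as the exact rule used in this implementation, grows the continuation step when the corrector converges quickly and shrinks it when the corrector struggles or fails:

    def adapt_step(sigma, corrector_iters, converged,
                   fast=3, slow=8, grow=2.0, shrink=0.5,
                   sigma_min=1e-4, sigma_max=0.5):
        # Few Newton iterations mean the predictor was accurate, so a larger
        # step is safe; many iterations (or divergence) mean the step overshot
        # and should be reduced. All thresholds here are illustrative.
        if not converged or corrector_iters >= slow:
            return max(shrink * sigma, sigma_min)
        if corrector_iters <= fast:
            return min(grow * sigma, sigma_max)
        return sigma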
The use of continuation power flow marks a shift in focus — with respect to how contingencies are
evaluated — from considering their effects on individual operating limits to an evaluation of voltage
stability. This shift in focus is appropriate since voltage stability becomes a significant concern for
multiple contingencies, not least because it undermines the reliability of traditional severity measures. However, a
truly comprehensive evaluation of a contingency should also take into account, where possible, the effect
of that contingency on operating levels across the grid, since ensuring compliance with operation limits
is one of the primary concerns of system operators. With this goal in mind, identifying effective ways
to incorporate operational limit evaluation into these contingency visualizations would be a valuable
improvement on the work presented in this thesis.
4.1.1 List of Contributions
• Implemented a new technique for comparison of contingencies across levels — e.g., comparing n−1
and n− 2 contingencies, as well as evaluating systems with islanding.
• Achieved reliable evaluation of severe contingencies for which a solution may not have been attained
using other metrics of severity.
• Improved performance to make multi-element contingency analysis feasible for larger systems.
• Developed a visualization technique that can display a summary of large, multi-level contingency
analysis sets.
• Developed interactive techniques for exploring these visualizations and surfacing patterns.
4.2 Future Work
There are several areas in which the research presented in this thesis could be expanded upon. These
extensions fall into two categories: improving the performance of contingency analysis, and expanding on visualization techniques for contingencies.
4.2.1 Future Work in Improving Contingency Analysis Techniques
This research explores the use of continuation power flow, coupled with high-performance algorithms,
for evaluating multiple contingencies. There are several areas of further research that could be explored
for this application.
High-Performance Computing
One area of future work would be the development of more advanced high-performance techniques for
computing continuation power flow. The modifications to CPF that were discussed in this research
provided some improvement in computation times for contingency analysis; however, substantial im-
provements are necessary to make it suitable for real-time contingency analysis and contingency analysis
of extremely large systems. Several options exist for improving the performance of continuation power
flow:
• Integration of faster power flow techniques, such as fast decoupled power flow or the use of a constant Jacobian.
• Development of alternative formulations of continuation power flow that are suitable to increased
parallelization and implementation of heterogeneous parallel computing techniques.
• Scaling of CPU-bound parallel computing techniques (MATLAB Parallel Computing Toolbox or similar technologies) for continuation power flow to utilize larger distributed computing clusters; a sketch of this case-level parallelism follows this list.
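Contingency screening is embarrassingly parallel at the case level, since each CPF run is an independent computation, and the same pattern scales from local cores to a distributed pool. The sketch below uses Python's standard library; the run_cpf worker is a hypothetical stand-in for a MATPOWER-style CPF call.

    from concurrent.futures import ProcessPoolExecutor

    def run_cpf(contingency):
        # Hypothetical worker: apply the outage described by `contingency`,
        # trace the P-V curve with continuation power flow, and return the
        # loadability margin (the value of lambda at the nose point).
        raise NotImplementedError  # stands in for the actual CPF solver

    def screen_contingencies(contingencies, workers=8):
        # Cases share no state, so they can be farmed out across processes;
        # the bottleneck becomes solver time per case, not coordination.
        with ProcessPoolExecutor(max_workers=workers) as pool:
            margins = list(pool.map(run_cpf, contingencies))
        return dict(zip(contingencies, margins))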
It remains to be seen what types of high-performance computing techniques could provide the most
effective means of improving continuation power flow techniques. One important constraint to this
process is a desire to avoid any need for purpose-built computing clusters or custom hardware to carry
out contingency analysis; it would be better to take advantage of heterogeneous parallel processing on
commodity hardware, which would allow this technology to be implemented at lower cost and with less
expertise required to build and maintain. One particular area of interest may be the development of a
formulation of the continuation power flow to take advantage of massively parallel computing resources
such as a graphics processing unit (GPU). Modern GPUs are powerful commodity hardware that can
be used in a heterogeneous parallel processing scheme to greatly increase the computation speed of
algorithms at an affordable equipment cost — provided the algorithms can be efficiently decomposed
into parallelized computational units. It remains to be seen whether continuation power flow or equivalent
techniques for voltage stability analysis can be adapted to a massively parallel computing architecture in
a way that provides a meaningful improvement in performance, especially considering that using a GPU
resource might eliminate gains achieved by CPU-based multiprocessing; in such a scheme, availability of
GPU resources becomes an additional bottleneck to CPU-based multi-threading.
Augmentation of Contingency Measures
Another area where further analysis is needed is in the exploration of augmentations to the continuation
power flow — or alternative measures — that are more comprehensive in their evaluation of system
stability or that may provide comparable characterizations of stability with less computational effort.
Some potential areas of further work include:
• Integrating reactive power limits on generators [40, 41, 42].
• Identifying alternative techniques for measuring the voltage stability limit that are less computa-
tionally demanding [43, 44].
• Considering the effects of variation in generator and load participation factors.
• Finding alternative measures of contingency severity [45, 46, 47, 48].
• Including secondary metrics such as fault probabilities in the visualization.
• Identifying a method to extract information about contingency severity from the shape of the PV curve, beyond the loading level at the PV nose [49].
These areas of study could provide improvements in the speed of computation and may be able to
identify methods of measuring the severity of a contingency which are even better than continuation
power flow in terms of quantifying real risk to the physical components of the system as well as risks
to operational stability and reliability. An important aspect of the latter is identifying how continuation power flow itself may be improved as an accurate measure of contingency severity.
One of the key parameters in continuation power flow is the definition of participation factors for
the various loads on the system; these describe how each load scales up as the algorithm traverses the
power-voltage curve. The outcome of continuation power flow depends on how the participation factors
are defined, since they affect the loading profile of the system. In this research the participation factors
are held constant over the entire PV curve; one area of further research would be to explore mechanisms
for changing the load profile over the course of the PV curve, identifying whether such an approach could give a more appropriate measure of contingency severity.
In addition to the constant definition of participation factors, the scaling of reactive power outputs
of generators and the reactive demands of loads was assumed to stay constant as λ was scaled up; that
is, reactive power levels were scaled 1 : 1 with real power levels as the CPF computation progressed. In
reality, this is not always the case, since limitations on generators and loads may impact how reactive
power scales as the real power of a load or generator is scaled up. Introducing a mechanism to change the
reactive profile of generators or loads may increase the accuracy of outcomes of the loadability analyses.
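Stated explicitly in the usual CPF load model (the notation here is illustrative and follows the standard formulation rather than a specific equation in this thesis), a load at bus i with participation factor k_i scales as

    P_i(λ) = P_i^0 (1 + λ k_i),    Q_i(λ) = Q_i^0 (1 + λ k_i)

so that the power factor of every load is held constant along the entire P-V curve. Relaxing this by introducing a separate, possibly λ-dependent reactive factor k_{Q,i}(λ) would allow the reactive profile to evolve with loading, as suggested above.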
Finally, many operators keep a record of fault probability data by tracking fault rates for each element
on the system. This historical data could be used to estimate the likelihood of a given contingency.
Considering the probability of a contingency in combination with the severity of that contingency would
render an approximation of the relative risk to secure operations of that contingency. In the past, higher-
order contingencies (such as n− 2 and beyond) have been considered so improbable as to be beyond the
scope of what is necessary for contingency security; this consideration is in part an acknowledgment of
the challenges posed by higher-order contingencies, but it is also in part a judgment that the likelihood of
these contingencies occurring is small enough in comparison to the consequences of these contingencies
that it would not be worth the effort to analyze them. This represents an implicit judgment about
the overall risk that these contingencies pose to operation. A multi-element contingency evaluation
that incorporates likelihood of occurrence would be able to more fairly judge this overall risk, helping
operators and planners make better decisions about which contingencies need further analysis and which
contingencies pose the greatest threat to secure operation.
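The combination described here is essentially an expected-consequence ranking. A minimal sketch follows, with hypothetical names, where per-element outage probabilities taken from historical fault records are combined under an independence assumption:

    import math

    def contingency_risk(elements, severity, outage_prob):
        # Risk = likelihood x consequence for one contingency. `outage_prob`
        # maps each element to its historical outage probability over the
        # study horizon; treating outages as independent is a simplification
        # that understates the risk of common-mode events such as storms.
        likelihood = math.prod(outage_prob[e] for e in elements)
        return likelihood * severity

Ranking contingencies by this product rather than by severity alone would let a visualization surface the higher-order cases that are both plausible and damaging.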
4.2.2 Future Work in Improving Visualizations
This research performs an initial exploration of visual techniques that could be applied to multiple
contingency analysis data. There are several areas where more work is needed to expand and improve
these visualizations, such as:
• Expanded integration with other visualizations and dashboard functions [4].
• Testing functionality of visualization techniques with alternative contingency measures.
• Identifying additional mechanisms for exploration of diagrams and expansion of the system.
• Developing a visual method to integrate the effects of operating limits into the visualization.
• Integration of fault probability data, weighting the consequences of a contingency by its likelihood
of happening and creating a more comprehensive definition of the real risk of each contingency.
• Validation of usability through user testing.
• Validation on larger systems.
The key factor that makes treemap visualizations of contingencies relatable to a real world system is
the use of interaction and tie-in with a one-line diagram. This real-time, explorable association with a
system map allows the user to contextualize the contingency visualization and plays a primary role in
enabling the visualization as a tool for understanding the power system. Because of this, identifying novel
methods of further linking these visualizations with other common tools for monitoring and evaluating
system performance will be key to identifying ways of improving them and increasing their applicability
in power system analysis. By creating data-based or visual linkages with other common dashboards
including system maps, alarm lists and other metrics that are already being used to evaluate different aspects of a system's performance and reliability, it may be possible to extract new observations from the
different tools. Contingency analysis is an important first step in developing robust protocols for handling
faults in a way that minimizes disruption to the grid; this is a prime example of where integration of
contingency visualization with existing protocols could help streamline an operator's workflow and help them more effectively make control inputs to optimize the stability of the system. These techniques require
extensive input and user validation, since they have a direct impact on the workflow of operators. It is
important that these efforts be evaluated for both their functionality and their visual design to ensure
that they are ergonomically suitable and improve operational capability.
Bibliography
[1] Z. Huang, Y. Chen, F. L. Greitzer, and R. Eubank, “Contingency visualization for real-time decision
support in grid operation,” in Power and Energy Society General Meeting. IEEE, July 2011, pp.
1–7.
[2] Y. Sun and T. J. Overbye, “Visualizations for power system contingency analysis data,” IEEE
Transactions on Power Systems, vol. 19, no. 4, pp. 1859–1866, 2004.
[3] U.S.-Canada Power System Outage Task Force, “Final report on the August 14, 2003 blackout in the United States and Canada: Causes and recommendations,” www.nerc.com, April 2004.
[4] C. Mikkelsen, J. Johansson, and M. Rissanen, “Interactive information visualization for sense-
making in power grid supervisory control systems,” in International Conference on Information
Visualisation. IEEE, 2011, pp. 119–126.
[5] B. Shneiderman, “The eyes have it: a task by data type taxonomy for information visualizations,”
in Symposium on Visual Languages. IEEE, September 1996, pp. 336–343.
[6] Z. Jinli, W. Tengfei, J. Hongjie, W. Wei, C. Jing, W. Gang, and H. Wenmao, “Implementation
of power system security check and its visualization system,” in Electric Utility Deregulation and
Restructuring and Power Technologies. IEEE, July 2011, pp. 185–188.
[7] S. Grijalva, “Visualization and study mode architectures for real-time power system control,” in
Industrial Electronics and Control Applications. IEEE, 2005, pp. 1–8.
[8] T. Overbye, E. Rantanen, and S. Judd, “Electric power control center visualization using geographic
data views,” in Bulk Power System Dynamics and Control - VII. Revitalizing Operational Reliability,
2007 iREP Symposium, Aug 2007, pp. 1–8.
[9] B. Xu, C. Yuksel, A. Abur, and E. Akleman, “3D visualization of power system state estimation,” in Electrotechnical Conference, 2006. MELECON 2006. IEEE Mediterranean, May 2006, pp. 943–947.
[10] P. Chopade, K. Flurchick, M. Bikdash, and I. Kateeb, “Modeling and visualization of smart power
grid: Real time contingency and security aspects,” in Southeastcon, 2012 Proceedings of IEEE,
March 2012, pp. 1–6.
[11] G. K. Stefopoulos, F. Yang, G. J. Cokkinides, and A. P. S. Meliopoulos, “Advanced contingency
selection methodology,” in Power Symposium, 2005, pp. 67–73.
[12] A. J. Wood and B. F. Wollenberg, Power Generation, Operation and Control, 2nd ed. Wiley, 1996.
[13] G. Ejebe and B. F. Wollenberg, “Automatic contingency selection,” IEEE Transactions on Power Apparatus and Systems, vol. PAS-98, no. 1, pp. 97–109, Jan 1979.
[14] S. Sundhararajan, A. Pahwa, S. Starret, and P. Krishnaswami, “Convergence measures for con-
tingency screening in continuation power flow,” in Transmission and Distribution Conference and
Exposition, vol. 1, 2003, pp. 168–174.
[15] Z. Jia and B. Jeyasurya, “Contingency ranking for on-line voltage stability assessment,” IEEE
Transactions on Power Systems, vol. 15, no. 3, pp. 1093–1097, 2000.
[16] V. Ajjarapu and C. Christy, “The continuation power flow: A tool for steady state voltage stability
analysis,” IEEE Transactions on Power Systems, vol. 7, no. 1, pp. 416–423, 1992.
[17] M. Crow, Computational methods for electric power systems. CRC press, 2003.
[18] S. H. Li and H.-D. Chiang, “Nonlinear predictors and hybrid corrector for fast continuation power
flow,” IET Generation, Transmission and Distribution, vol. 2, no. 3, pp. 341–354, 2008.
[19] L. D. Kudryavtsev and M. K. Samarin, “Lagrange interpolation formula.”
[20] D. Coppersmith and S. Winograd, “Matrix multiplication via arithmetic progressions,” Journal of Symbolic Computation, vol. 9, no. 3, pp. 251–280, 1990.
[21] R. Zimmerman, C. Murillo-Sanchez, and R. Thomas, “MATPOWER: Steady-state operations, planning and analysis tools for power systems research and education,” IEEE Transactions on Power Systems, vol. 26, no. 1, pp. 12–19, February 2011.
[22] B. Shneiderman, “Tree visualization with tree-maps: 2-d space-filling approach,” ACM Trans.
Graph., vol. 11, no. 1, pp. 92–99, Jan. 1992.
[23] R. Rasteiro and J. Pereira-Leal, “Multiple domain insertions and losses in the evolution of the rab