Advanced military avionic software is regularly updated in response to changing theatre requirements. A key challenge this raises for developers is how new features can be added without degrading performance to the point where expensive hardware upgrades are required. The answer is a three-step process: measure timing, identify optimizations and evaluate the results.

White Paper

Three steps to avoid software obsolescence in your avionic systems

On-target software verification solutions
Three steps to avoid software obsolescence in your avionic systems | page 2
Contents

Introduction – the obsolescence problem in legacy systems
1. Collecting timing information
2. Deciding what to optimize
3. Evaluating the results
Optimization studies by Rapita Systems
Conclusions – breathing new life into your legacy system
Rapita Verification Suite (RVS)
Free trial version
Introduction – the obsolescence problem in legacy systems

Unlike most critical real-time embedded systems, military avionics systems are regularly upgraded through software updates over an operational life stretching to decades. New and emerging capability requirements compel developers to keep pace with military demands, which can lead to frequent, expensive hardware upgrades. This is where the “obsolescence problem” begins. What is this problem, and what can you do about it?
Do you really need more software?

Through-life software updates gradually increase the demands placed on the underlying computing platform, e.g. CPU utilization. This can lead to decreased performance capability and intermittent failures due to timing overruns.

These timing failures almost always go unnoticed until development is nearly complete and final tests are underway. Worse still, timing failures may not be detected during system testing at all, and go on to cause operational problems post-deployment.

When timing failures are spotted, considerable effort is usually spent identifying and then resolving the cause of the problem, often to no avail. System developers then face the difficult choice of either abandoning planned new features, leading to capability decay, or replacing hardware at considerable cost because of early obsolescence. A third option is to increase the effectiveness of optimization.
Software optimization

When avionics software is incrementally upgraded it suffers from two problems:

1. The original architecture degrades – the long timescales involved in avionics projects mean that changes are inevitably made by engineers who were not involved in the original design and may not fully understand the rationale behind the original design decisions.

2. As additional capabilities are introduced, the amount of code the application needs to execute increases.

The consequence of 1 and 2 is that the execution time of tasks or partitions increases. Eventually the execution time exceeds the allocated time budget.

When a set of enhancements is planned, a practical first step is to create room for those enhancements by optimizing the performance of the existing software. This makes it possible to add new functionality without hardware upgrades.
In this paper, we show that a viable solution to the “obsolescence problem” is to optimize the performance of the software as updates are made.
Improving the effectiveness of optimization

Software optimization efforts are often far less productive than they could be. Sometimes the most obvious optimizations make no difference to overall performance: they have no effect on the longest execution time and don’t create room for new functionality, yet they take up valuable engineering time, impact timescales and can have a detrimental effect on software clarity and maintainability.

The key to improving the effectiveness of optimization lies in targeting the places where optimization can have the biggest impact. This can be achieved by taking detailed timing measurements when the software is run on the target hardware and analyzing those measurements.
The steps to improving optimization are:
1. Collecting timing information
2. Deciding what to optimize
3. Evaluating the results
1. Collecting timing information

Measuring execution times

The time an embedded real-time system takes to respond to some event comes from several sources, including physical components (actuators, for example) and electronic sub-systems. However, the biggest variable is often the execution time of the software itself.

Safety-critical systems and timing confidence

Not all real-time systems treat timing requirements in the same way. On some systems deadlines must always be met; on others, average CPU utilization may be more important.
For example, the developers of safety-critical systems such as flight controls must be able to show, to a very high degree of confidence, that the software will always execute within its time constraints. In other systems, occasionally missing a deadline may not be disastrous, but it may still impact performance, robustness, and the customer’s perception of product quality.
Engineers typically measure the execution time of major software components such as partitions and tasks during system tests. Ideally, these measurements should reveal how often, if at all, execution time budgets are exceeded, giving some degree of confidence that the timing behavior of the system will be as expected once deployed.

To obtain these measurements, the standard approach is to place instrumentation points at the start and end of the sections of code being measured. These instrumentation points record the elapsed time, either by toggling an output port monitored via an oscilloscope or logic analyzer, or by reading an on-chip timer and recording the resulting timestamps in memory.
Recording end-to-end execution time measurements in this way allows engineers to see the distribution of execution times for each major software component, and also the high-water mark: the longest execution time observed. The distribution of execution times and the high-water mark provide evidence of, and some confidence in, the timing behavior of the system.
Going beyond the high-water mark
Why more detailed timing information is required
Figure 1: Execution paths – a control-flow graph with blocks costing 10 (entry), 20 (f1) or 50 (f2), 10 (link), 60 (f3) or 5 (f4), and 10 (exit) time units; the annotated paths through it take 110, 140 and 85 units.
Adding more features to software may produce high-water mark execution times which show software taking too long to execute. Unfortunately, these measurements cannot identify the parts of the code which contribute most to the overall execution time, and therefore offer no guidance on which code to optimize.

This is a significant problem, because optimizing code that is not on the worst-case path does not address timing issues, and may even encourage costly attempts to re-write large volumes of code, which risks introducing bugs into the software.

Many systems will benefit from detailed timing information which offers confidence in timing correctness and accurately targets software optimization at code which is always on the worst-case path.

Unfortunately, there are some fundamental problems inherent in the simple end-to-end measurements used to obtain execution time high-water marks. The lack of detail from an end-to-end measurement conceals heavy consumers of execution time.

The high-water marks may not reflect the longest time that the code could take to execute. This happens when the longest path through the code has not been exercised by the tests. While code coverage tools can be used to check whether the available tests cover all of the code – statements, conditions, decisions and MC/DC, for example – it is possible, and in fact often the case, that the longest overall path has still not been executed, as illustrated by the example in Figure 1.
Let us assume that two tests are run, represented by the green path and the blue path. This is sufficient to give 100% statement coverage, 100% decision coverage, and, depending on the complexity of the conditions within the two decisions, 100% MC/DC coverage. The observed execution times from the two tests are 110 and 85.

Of course, there is another valid path through the code, the red path, which has an execution time of 140. Because adequate coverage is achieved with the green and blue paths, it isn’t necessary to execute the red path. This illustrates why code coverage analysis alone won’t show whether the longest path has been exercised.
Adding instrumentation points

An effective way of obtaining detailed timing information is to add instrumentation points at each decision point in the code. Each of these points needs an identifier (ID) that allows it to be distinguished from the others. Whenever an instrumentation point is executed, its ID and a timestamp are captured; running a series of tests on the system therefore produces a timing trace containing instrumentation point ID – timestamp pairs.

Taking advantage of automation developments

Instrumenting commercial-scale code by hand is technically feasible, albeit extremely laborious. Attempting to manually correlate trace data with program structural information increases the effort required by orders of magnitude beyond that. Fortunately, the tasks of program instrumentation, trace processing, combining trace data with program structural information, and data mining/presentation are all amenable to automation.

Commercial tools such as RapiTime facilitate:

» Automatic instrumentation of the source code at various levels of abstraction.
» Automatic combination of structural information (obtained by static analysis of the source code at instrumentation time) with trace data (obtained by running instrumented code on the target).
» Reductions in the bandwidth and/or storage of traces by automatically limiting the range of values required to distinguish IDs uniquely.

What information can you gather?

TEST COVERAGE INFORMATION

» Placing an instrumentation point at the start of each function reveals which functions were executed during testing.
» Instrumentation can provide even richer measurements of coverage, right up to MC/DC.
» Learn how many times each sub-path was executed during testing and whether you need to address any gaps in testing.
» Determine whether the expected maximum number of iterations of each loop is seen, or even exceeded, in practice.

EXECUTION TIME INFORMATION

» Determine the worst-case execution time of the software, even if the worst-case path is not actually executed during testing, provided all the individual sub-paths are.
» Find out which lines of code are on the worst-case path and which are not, identifying or ruling out candidates for optimization.
» See how the end-to-end execution time of each function varies over the different tests, revealing the correlation between specific test cases and long execution times.
» Obtain average, minimum and maximum execution times, as well as other useful information including the number of times each sub-program was executed by the tests and the number of lines of code in each sub-program.
2. Deciding what to optimize

Don’t guess – measure!

The temptation to guess where the biggest contributors to WCET (Worst-Case Execution Time) are, optimize the code and then assess the results may be strong, but you should resist the urge. Extensive experience of software optimization across many different avionics, telecommunications and automotive electronics projects tells us that even highly experienced software engineers, when unable to access detailed timing information, struggle to identify significant contributors to the worst-case execution time. Without this information they cannot pinpoint the best candidates for optimization.

Detailed execution time measurements taken during extensive testing can highlight problems where certain software components overrun, or have the potential to overrun, their budgets. Such problems can occur during initial system development and/or as a result of adding new functionality as part of an upgrade. Only a systematic and scientific approach, which reveals accurate timing information about all of the software components in the system, can resolve the problem. The answer is simple: don’t guess, measure!
Find the significant contributors to the WCET

Figure 2 illustrates how different sub-programs contribute significantly different amounts to the worst-case execution time. This illustration is based on timing data provided by RapiTime. For each sub-program, RapiTime determines the contribution of the code in that sub-program to the worst-case execution time.

By inspecting contribution data (see Figures 3 and 4), engineers can easily identify a relatively small number of sub-programs where optimization could potentially have a large impact on the overall worst-case execution time of the software component.
We note that the largest contributors to the worst-case execution time often come from code that is seldom executed, and so is not highlighted by conventional profiling tools that assess average-case performance.

Figure 2: Illustration of the cumulative contribution of sub-programs to the overall WCET (x-axis: number of sub-programs, from 0 to all; y-axis: cumulative contribution to the longest execution time, up to 100%). (1) Most sub-programs contribute nothing to the longest execution time (they are not on the worst-case path). (2) Many sub-programs contribute a small amount to the longest execution time. (3) A small number of sub-programs contribute a large fraction of the longest execution time.
WCET optimizations

Figure 3: This shows how RapiTime presents the contributions of specific sub-programs to worst-case execution times (W-SelfCT%) – the longest predicted execution time – and to the high-water mark (H-SelfCT%) – the longest observed execution time.

Figure 4: Going further, it is possible to identify specific blocks of code within these sub-programs, showing (as in this example) that 4 lines of code (Ipoint 11: lines 154–156 and Ipoint 12: line 158) are responsible for 30.4% of the WCET (W-OverCT%) and 31.4% of the HWM (H-OverCT%).
The key to any optimization strategy is to prioritize those optimizations where the minimum effort, and the minimum compromise in other factors, is required to gain the maximum benefit in terms of execution time reduction. The best way to meet this objective is to identify the level at which to perform optimization. Optimizations can be classified at three levels: design-level, sub-program-level and low-level.

Design-level optimizations: Optimizations at the design level refer to changes in the overall design of the system, for example changes to the API of a subsystem or other structural changes.

Sub-program-level optimizations: At this level, the focus is on changes within a single sub-program, or a set of tightly coupled sub-programs, without changing the specification of those components. Examples include substituting one algorithm for another.

Low-level optimizations: Low-level optimizations focus on the generated machine code. These optimizations aim to use the most efficient machine instructions available for performing particular operations.
3. Evaluating the results

Once optimizations have been made to the code, good engineering practice suggests that we should repeat the measurements to see whether the desired results have been achieved.

Figure 5: Process for optimization

Even more than in the first step, automation of measurement is a key benefit here. Rerunning the measurements is largely a case of repeating the tests. Once a process for collecting and analyzing measurements has been set up, the incremental cost of performing the measurements again is not much higher than the cost of repeating the tests.

This low cost of repeating measurements means that it is possible to consider several candidate optimizations, and to iterate through a process of evaluating the effectiveness of each.
Identify optimization candidate(s) → assess potential gains (sensitivity analysis) → implement potential optimizations → evaluate → repeat.
Optimization studies by Rapita Systems

In this section we show the level of improvement in worst-case execution times that can be obtained through a simple process of software optimization. These results were achieved using RapiTime to provide detailed timing information.

The results are from avionics projects located in the UK and Europe. The results are given on a per-software-partition basis, and the partitions are referred to by the characters X, Y, Z, etc. The partition, project and company names are omitted for confidentiality reasons.
Software Optimization results valuable 10% reductions in the overall