
Enforcing Safety and Security Through Non-Intrusive Runtime Verification

Inês Gouveia and José Rufino
LaSIGE, Faculdade de Ciências, Universidade de Lisboa, Portugal

[email protected], [email protected]

Abstract—The recent extensive development of Cyber-Physical Systems (CPSs) has led to the emergence of new concerns regarding timeliness, safety and security properties. For decades, numerous vulnerabilities have put systems and applications at risk, and CPSs are no exception. A noteworthy recurring issue is, for example, the Buffer Overflow (BO). We intend to deal with some types of BOs, other accidental faults and intentional attacks by means of Non-Intrusive Runtime Verification (NIRV), to be accomplished through the design of a black-box observer and monitoring entity. Security hazards can be tackled at different levels or granularities, depending on how detailed our knowledge of the inner workings of the system and of the applications running on it is. We introduce solutions to detect and handle explicit attacks and accidental faults, focusing on scenarios with no understanding of the analyzed environment's specificities, but also discussing scenarios where program mechanics and engineering are completely known.

I. INTRODUCTION

The field of cyber-physical systems has been identified as a key area of research and is part of contemporary technologies that are themselves receiving attention for their innovative nature, such as smart grids, autonomous mobile systems, medical monitoring and robotics.

Due to their criticality, these systems are the target of multiple concerns, namely in the timeliness, safety and security domains. Vulnerabilities are relevant security defects and, as such, an open door for intentional attacks and accidental faults. The large majority of Operating Systems (OSs), libraries and applications possess plenty of vulnerabilities, and their exploitation, or even mere presence, may place a system in a risky or erroneous state and jeopardize its operation at multiple levels, for example by forcing it to miss deadlines, write outside the allocated address space, corrupt data, etc.

At the same time, some of these vulnerabilities have persisted for years without being fully dealt with, namely Buffer Overflows, which have not ceased to be one of security's major problems, affecting all kinds of systems. Since cyber-physical systems are no exception to the rule, we wish to bring forth ways to enforce their safety and security.

Our approach to the problem is plainly naive. We simply assume most (if not all) applications are made up of function calls and, from there, we elaborate a scheme to verify that each

This work was partially supported by FCT, through the LaSIGE Strategic Project 2015-2017, UID/CEC/00408/2013. This work integrates the activities of COST Action IC1402 - Runtime Verification beyond Monitoring (ARVI).

function is executing correctly and that no violation of the specification has occurred, namely with respect to the assigned addressing spaces, at function-level granularity. The solution relies on non-intrusive runtime verification, comprising: observation, for capturing events; and monitoring, to perform their analysis and evaluate correctness. We focus on situations where practically nothing is known about the system and its applications (zero-knowledge environments), but also provide a subsection on the broader types of faults that could be detected in more detailed scenarios.

Guaranteeing safety and security properties non-intrusively, at runtime and with no knowledge of the system, libraries and/or applications being monitored presents some challenges, namely in the types of faults and attacks the observer is able to detect and at which granularity.

Existing non-intrusive solutions either require changes to the system architecture, require storage of observed events for offline analysis (which is not ideal, especially for storage-limited systems, a characteristic of most CPSs), or are incapable of performing monitoring in, for example, zero-knowledge environments, where access to source code is absent and binary files are stripped of symbolic information.

The contribution of this paper is the provision of the initial ideas to enforce safety and security in zero-knowledge environments, enabling some common attacks and unintentional faults to be prevented, which will be implemented in the near future.

II. SYSTEM MODEL

Our solution to enforce safety and security in cyber-physical systems falls in the realm of reconfigurable logic. By taking advantage of System-on-a-Chip (SoC) architectures, we are able to perform observation directly in hardware, at low level, rejecting techniques such as code instrumentation and thus eliminating their associated overhead and effort. Function-grained code instrumentation is difficult using only binary files, as disassembly operations would be required offline. On the other hand, system calls and well-defined execution points (such as reads/writes of input/output ports) could be intercepted online. Figure 1 depicts a generic SoC architecture.

A. Assumptions

The system model was conceived taking into consideration a set of assumptions.


Published in: M. Völp, P. Esteves-Verissimo, A. Casimiro and R. Pellizzoni (eds.), Proceedings of the 1st Workshop on Security and Dependability of Critical Embedded Real-Time Systems (CERTS 2016), Porto, Portugal, December 2016, IEEE, pp. 19-24; co-located with the IEEE Real-Time Systems Symposium 2016. https://certs2016.uni.lu

Fig. 1: Generic System-on-a-Chip Architecture (Processing Element, Interrupt Controller, Memory Controller, Timer Unit, Memory and Input/Output with their I/O Interfaces, all attached to the SoC Bus)

First, as previously mentioned, we assume all applications consist of function calls. Each function call passes its parameters to the called function, saves its return address and allocates memory space for the local variables.

Secondly, we assume the observer can always be insertedin the SoC platform and connected to the SoC bus.

Then, as already mentioned, we assume, for the scope of this paper, that the observer is inserted within a zero-knowledge environment, where no access to source code or binaries (especially non-stripped ones) is given. However, we briefly describe in subsection V-D what other faults could be detected in case more information is available in the system.

Moreover, the cache needs to be write-through, meaning data is written into the cache and the corresponding main memory at the same time. If information is only written into the cache, then it is of no interest to the observer, since the observer is not capable of accessing the Processing Element and, thus, would not be able to detect any errors.

Finally, in case the observer is faced with binary-based symbolic information that allows finer-granularity observation, we consider that the ELF (Executable and Linkable Format) is used.

B. The Observer Entity

In order to perform the observation and monitoring of the system and the applications running on it, an Observer Entity (OE) component will be inserted onto the SoC platform. It will consist of a black box, connected to the SoC Bus in the same fashion as the other components present in Figure 1, such as the Interrupt Controller. Throughout the paper, observer and observer entity will be used interchangeably.

The observer entity will be specified in VHDL (VHSIC Hardware Description Language), a language used to describe digital and mixed-signal systems and to target devices such as Field-Programmable Gate Arrays (FPGAs), which contain an array of re-programmable logic blocks.

The reason why the observer needs to be a black box, that is, be viewed in terms of its inputs but not its internal functioning, relates to the necessity of keeping the observer entity's inner workings hidden, to prevent malicious entities from hijacking its operation. Furthermore, it has to

be coupled directly to the SoC bus, since inserting the observer inside the processor would require modifying the processor, which is not always possible, feasible or desirable. Also, changes in the processor would possibly mean changing the observer as well. As it is, we are not concerned with changes in the individual SoC components, only with specificities of the Instruction Set Architecture (ISA), given that the observer operation will be ISA-dependent. This is because different ISAs treat function calls, data storage and endianness differently, among other aspects, and also differ in the set of available instructions. Therefore, the observer needs to be adaptable to the different ISAs.

The Application Binary Interface (ABI) defines the low-level binary interface between two or more pieces of software on a particular architecture, that is, how an application interacts with itself, with the kernel and with libraries. It also normalizes how function calls, argument passing, result return and stack space allocation are performed, including the layout and alignment of data types. So, because some ABI decisions are made based on the ISA, the observer will possibly need to be ABI-dependent as well.

Figure 2 represents, once more, the SoC architecture, but now with the observer entity connected to the bus. This OE includes observing capabilities as well as verification against a specification, for the purpose of discovering whether any violation occurred.

Fig. 2: System-on-a-Chip Architecture including the Observer Entity (the components of Figure 1, with the Observer Entity also attached to the SoC Bus)

C. Monitoring and Fault Detection

Faults can be tackled at a coarse- or fine-grained degree depending on the knowledge level, that is, on how much is known about the system, libraries and applications we want to monitor. If one were to build one's own system, running one's own applications, monitoring could be done at a really fine-grained degree due to the huge level of detail available, such as access to source code and to the binaries' symbolic information, and even familiarity with the types of input and output each function is supposed to receive and return, respectively.

However, such scenarios are unrealistic; usually that level of detail is not present or is inaccessible. Binaries are commonly


stripped to reduce storage requirements, to prevent reverse engineering or to resist analysis. Moreover, normally we would not even be able to access these files in order to perform disassembly operations on them, aiming to extract variables' memory locations, check performed system calls, etc. Therefore, a final assumption dictates that the observer architecture is based on the premise that nothing is known, that is, that we operate in a zero-knowledge environment.

Considering such a scenario, the types of errors we can detect are somewhat limited to generic vulnerabilities and the set of attacks that can exploit them. The observer will be able to detect anomalies such as writing to read-only memory and some types of Buffer Overflows, such as writing outside the space reserved for a specific application, i.e., outside the data section or, more specifically, outside the bss (Block Storage Space) section and, most notably, function-grained stack frames. The data segment is a read-write chunk of a binary file, or of the corresponding virtual address space of a program, that contains initialized global variables and initialized static local variables. The bss section, also known as uninitialized data (or the content-less section), is usually adjacent to the data section and contains all global variables and static local variables that are initialized to zero or have no explicit initialization in the source code.

It is possible to detect this sort of fault since we are dealing with well-delimited memory zones. In sum, the observer will detect errors at function-level granularity in these described zero-knowledge environments. If more is known about the applications being monitored, then detection can be done at a much finer granularity (and, for example, the great majority of BO types could be detected).

It is also viable to detect when a malicious entity is trying to alter data in the registers. Normally, the function call should be responsible for writing the necessary data, such as input parameters, in conformity with the ABI specification, which is well known for each system. Some architectures avoid using the stack and store as much data as possible in the registers, until eventually they run out of registers and the Operating System proceeds to reclaim them. As such, through SoC bus observation, we can detect when any other instruction is trying to write to them. Reads are less important, since they will not alter the data; they might, however, be relevant for confidentiality reasons. With this, we can detect attacks such as return pointer substitutions.

1) Buffer Overflows: As verified by [1], Buffer Overflows are still a major concern nowadays. A BO is an anomaly where a program, while writing data to a buffer, overruns the buffer's boundary and overwrites adjacent memory locations, such as in the situations described above. Due to the limited-knowledge scenario and the importance of such vulnerabilities, we would like to initially focus our efforts on BOs, even if restricted to the more generic variants of this vulnerability, like, as stated before, writing outside well-known memory zones such as the data and bss sections and, most notably, function-grained stack frames. In a zero-knowledge environment it is not possible to detect variable-level BOs, since we do not

possess detailed information regarding the specific memory placement of each of those variables. We note, once more, that, given more details about the system and applications, the observer is still capable of detecting application-specific faults.

Finally, we also intend to protect against Denial of Service (DoS) attacks. Subsection V-B yields greater detail on how to detect them. [2] presents an architecture plus a compiler-driven solution to tackle both Buffer Overflows and DoS, which we analyze later.

For now, we solely intend to address applications running natively on the OS. We leave as future work applications that require the use of virtual machines to run, such as the Java Virtual Machine (JVM).

III. OBSERVER ENTITY ARCHITECTURE

Basically, the observer can be divided into two major components, the System Observer and the System Monitor. The first is responsible for checking points of interest, like a read, write or simple access operation to memory. The System Monitor, in turn, will analyze those events so as to verify whether any deviation from the expected behavior occurred, such as the ones previously described, for example detecting write operations outside the limits of the data section. Figure 3 shows a simplified representation of the observer entity, detailing the relation between these two components.

Fig. 3: Simplified Observer Entity Architecture (System Observer and System Monitor, attached to the SoC Bus)

The red arrow represents a retroaction, that is, the System Monitor's response in case it concludes that some erroneous behavior has taken place. The idea is to prevent the malicious action from doing any damage. Our first idea was to raise exceptions, catch those exceptions and have handlers take the appropriate measures. However, handlers represent a convenient attack point: they can be replaced or modified. Some techniques can be used to tackle this issue, such as defining more than one handler, to act in case another gets attacked.

Thus, we subsequently thought of instantly killing the process, to keep the malicious actions from occurring or propagating, in situations where there would be no serious repercussions in doing so.


The zero-knowledge environment prevents the observer from taking a more specific action to correct the incident; its reaction has to be generic. Nevertheless, killing a process directly in hardware is hard, which leads to the conclusion that using exception handlers is most likely the best option after all.

A. Extended Observer Architecture

This subsection provides greater detail on the observer entity architecture and clarifies its inner workings.

In previous models [3], the observer was fed by the system clock, provided to all SoC components. However, in order to reduce the number of entry points and consequently reinforce its black-box nature, an internal clock will provide time to the observer. Synchronization with the system clock may be required for precise timing of system behavior.

At first, the observer was thought of as being generic, that is, operating identically independently of the architecture. Yet, each processor or processor family has its own ISA, and each ISA works differently. Thus, the observer needs to recognize specific instructions, such as function calls and returns, and process information accordingly. As a result, it needs to be ISA-dependent. With that said, the addition of another module to the observer architecture is in order: an ISA-dependent Call/Return Detection component. Some ABI decisions are made based on ISA specificities, which leads us to the conclusion that the observer may possibly need to be dependent on the ABI as well.

Fig. 4: Complete Representation of the Observer Entity Architecture (System Observer, ISA-dependent Call/Return Detection, System Monitor and History Manager, attached to the SoC Bus through the Bus Interfaces)

In addition, a History Manager needs to be employed in order to save output from the Call/Return Detection module. Possibly, the Call/Return Detection-History Manager relationship may bring scalability issues, namely regarding the number of processes and chained function calls.

Finally, the System Monitor presented in Figure 3 should have self-learning capabilities incorporated within, so that it needs no further configuration. The History Manager will provide these capabilities. The use of configuration files would be a risk since, again, they may be replaced by malicious versions aiming to deviate the monitor's behavior. Albeit providing more flexibility, observation point extraction from configuration files

is also relatively time consuming if done online. Hence, the self-learning capability will be advantageous not only from a security point of view but also time-wise.

Figure 4 defines the complete observer architecture. The System Observer depicted in Figure 3 is an abstraction of the set comprising the System Observer and the ISA-dependent Call/Return Detection shown in Figure 4. In a similar way, the System Monitor in Figure 3 contains the System Monitor and the History Manager of Figure 4. Schematically, the observer entity is comprised of:

• System Observer
  – System Observer
  – ISA-dependent Call/Return Detection
• System Monitor
  – System Monitor
  – History Manager
• Bus Interfaces
• Observer Clock

IV. ARCHITECTURAL DIFFERENCES

We considered three distinct architectures for the observer implementation: Sparc, ARM and Intel x86. A brief analysis of the current state of these architectures was one of the motivations for the insertion of an ISA-dependent component. For example, Sparc and ARM are RISC architectures, while x86 is CISC, meaning they differ, say, in the way arguments are passed to functions. Therefore, the observer needs to be able to adapt to each of them. Also, we came to the conclusion that not all of these architectures are technologically ready for the observer concept. As of today, Sparc is the most suitable platform for implementing the observer, due to its architectural design and the ease of connecting extra components to the system bus [4].

In principle, it should be possible to use the observer entity on ARM as well [5]. Although the approach cannot cover all available ARM-based SoC devices, those which include CoreSight [6], a configurable validation, debug and trace component included on ARM SoCs, are suitable for our purpose. There is the possibility of placing an FPGA connected to a CoreSight output so that it would filter CoreSight's results and act accordingly.

As for x86, a built-in Altera FPGA integration with an Intel Xeon processor has already been announced [7]. The concept was created with the intent of accelerating algorithms and taking workloads off the processor(s). The current Intel Xeon architecture does not seem to be the most appropriate for an integration of the observer that allows it to perform monitoring according to our system model. In this Intel and Altera solution, the processors and the co-processor (the FPGA) are connected via a dedicated bus, whereas our approach depends on the observer being connected to the system bus or, in the case of the SoC architectures we want to address, the SoC bus. Probably, we will have to wait for technology to evolve towards the observer entity's needs. Other proposed approaches rely on a Front-Side Bus (FSB) architecture but require the substitution of the processor with a very specialized module.


V. FAULT DETECTION

In section II we briefly described what kinds of vulnerabilities we are capable of addressing. Due to the environment's characteristics, that is, the lack of understanding of the system's, libraries' and applications' inner workings, the observer is solely able to perform runtime observation and monitoring at a coarse granularity. Here, we provide greater detail on some of the faults the observer will be detecting and how, as well as a small discussion of other solutions. We also provide a small subsection on how to perform monitoring at a finer granularity, provided additional information is accessible.

A. Return Pointer Access Protection

A fault that is easy to detect in a zero-knowledge environment is the overwriting of return pointers saved on the stack, for instance to hijack the path of execution. [8] presents StackGhost, a hardware-facilitated stack protection mechanism implemented through a kernel modification to OpenBSD 2.8, under the Sparc architecture. StackGhost intends to transparently and automatically guard function return pointers. It protects against Buffer Overflows and primitive format string attacks. In order to prevent corrupt pointers from exploiting the code, the authors suggest applying a reversible transform to the legitimate return address and writing the result to the process stack. The idea is based on the supposition that, if an attacker has no knowledge of the transform or its key, then execution cannot be affected intentionally. It requires bit inversion (of the two least significant bits) or keeping a return-address stack. Additionally, the authors consider encrypting the stack frame.

This solution has some drawbacks. First, even though the performance impact is negligible (less than one percent), it exists. Second, the transform can be discovered.

Sparc usually stores the return address in a specific register (%i7) belonging to the register window in use. As such, we propose detecting return address substitution attacks by verifying whether writes to this register are being performed by anything other than the function call instruction. Our approach has the advantage of being simpler, less error-prone and more secure.

B. Denial of Service

On the other hand, [2] presents a solution that isolates code and data at function level (the granularity required in our solution), aiming to provide protection against possibly untrusted code such as plugins and open-source software. A slight downside is that it requires identification of untrusted functions or groups of functions at compile time (through the project's makefile, for example) or at load time. In addition, this solution depends on compiler support and requires processing core modification, since the authors devise architectural alterations that are to sit between the processor and the cache. These modifications also work as a basis for their Denial of Service defense. While that is a minimally invasive approach, we are seeking a completely non-invasive methodology.

As such, for Denial of Service prevention, given that the observer has access to the instructions being fetched, including function calls, it has the information required to establish time limits for function execution. This can be done without the changes or compiler support required in [2], since only temporal protection is guaranteed, independently of the function call hierarchy.

C. Buffer Overflows

Due to the lack of knowledge of the applications' contents, the observer cannot prevent Buffer Overflows such as writing outside the space allocated for a single variable, array or structure as a result of, for example, misusing functions like string copy (strcpy) in C without checking boundaries. Without symbolic information present in binary files, or disassembly data extracted from stripped binaries [9], it is not possible to know the address space boundaries of a certain variable and, consequently, whether that space was overflown. Thus, the observer is only capable of monitoring overflows of well-delimited memory zones, like the data or bss sections and function-grained stack frames. This is the reason why we keep track of stack frames in the History Manager component.

With that, the whole stack frame of one function can be protected from being overwritten, for example, by another function.

Additionally, it is possible to protect memory zones reserved through the malloc library call. This primitive takes as an argument the size of the memory buffer to be allocated and returns a pointer to the allocated memory space. With this information, the observer entity is able to verify whether memory accesses are within the allowed boundaries. The free call returns the memory chunk to the system and forbids further access from the program, which should also be verified by the observer entity.

D. Full-knowledge Vulnerabilities

This subsection briefly describes what sorts of other vulnerabilities the observer could provide protection against in case there was more information available about the system and its applications, that is, if we were in a full-knowledge environment. Here the term full-knowledge denotes (significantly) more information; it does not necessarily mean we know everything about the subject under observation.

If the locations and sizes of global and local variables were known, more specific types of Buffer Overflows could be detected, and not only the section/function-grained violations described in the subsection above. Global variables' addresses and sizes could be directly extracted from the binaries' symbol tables. Local variables, however, require binaries to be compiled with the right debug options so that their description can be accessed.

Also, if greater detail were known about the running applications, the observer would even be able to detect incorrect output and input, format string and possibly path traversal attacks, etc. For the observer to be able to perform this sort of monitoring, specific knowledge of the system is required,


which would happen if, for instance, we were the system and/or application designers or if the source code were available. Nevertheless, the bottom line is that these mechanisms can be used by the observer entity if the situation allows it. Figure 5 shows the common placement of fine-grained objects of interest, like variables.

Fig. 5: Generic Location of Variables (the stack frame, delimited by the Stack Frame Pointer and the Stack Pointer, holding input parameters and local variables; the binary file with its text, data and bss sections, the latter two holding global and static variables)

VI. RELATED WORK

Both offline and online NIRV approaches have been previously introduced for embedded systems [10] and cyber-physical systems [11], generally having an external infrastructure for processing the observed information. Offline NIRV methodologies still have serious limitations, such as it being impracticable to store all the observed data over an arbitrary observation time, due to the high observation data bandwidth and the discrepancy between observation data output and processing bandwidth. From these issues, online NIRV was born, bringing forth ways to process data on the fly, allowing debugging and RV without any interference, as presented in [12]. Our approach falls within this category and works as a monitoring and verification infrastructure. The approach in [13] also addresses both concepts. NIRV has already been used in safety-critical environments as well [14].

NIRV approaches have recently been applied to Time- and Space-Partitioned (TSP) systems to improve safety and decrease the computational cost of timeliness adaptability [15], [16].

Overall, to the best of our knowledge, OE architectures do not tend to be black boxes and generally do not comprise a self-learning module for automation and adaptation, relying instead on configuration files. Additionally, several solutions require changes to the architecture; [14] is an example of one that does not need modifications to the underlying system.

VII. FUTURE WORK

The observer will be designed in VHDL, using a Xilinx XUPV5-LX110T development board as a proof-of-concept prototype. Additionally, monitoring methods will be enhanced and new ones introduced, in order to detect as many deviations as possible in the described zero-knowledge environment.

Enforcing safety and security in applications running onvirtual machines will also be left for future work.

Finally, we intend to devote more attention to the observer entity's implementation on ARM and, in case the technology eventually becomes ready, on Intel x86.

VIII. CONCLUSION

Guaranteeing safety and security properties non-intrusively, at runtime and with zero knowledge of the system, libraries and/or applications we wish to monitor presents some challenges, namely in the types of faults and attacks the observer is able to detect and at which granularity. Since we assume there is no specific knowledge available, like binaries' symbolic tables or even stripped binaries on which to perform disassembly operations, we are limited to some basic but nonetheless important vulnerabilities, such as some sorts of Buffer Overflows on well-delimited areas, denial of service and unwanted register accesses. For finer-grained monitoring, extra information would need to be provided.

Thus, in sum, the observer reacts to information captured from the SoC bus if it deems it erroneous, and functions as a non-intrusive black box, providing an extra layer of security for this reason. We are well aware that security by obscurity is not advised. Nevertheless, by omitting design details we are reducing the probability of some attacks.

REFERENCES

[1] Dell Software, "2015 Dell Security Annual Threat Report," 2015.
[2] E. Leontie, G. Bloom, B. Narahari, R. Simha, and J. Zambreno, "Hardware-enforced fine-grained isolation of untrusted code," in 1st ACM Workshop on Secure Execution of Untrusted Code, Nov. 2009, pp. 11-18.
[3] R. C. Pinto and J. Rufino, "Towards non-invasive run-time verification of real-time systems," in 26th Euromicro Conf. on Real-Time Systems - WiP Session, Madrid, Spain, Jul. 2014, pp. 25-28.
[4] The SPARC Architecture Manual, SPARC International Inc., 1992.
[5] ARM Architecture Reference Manual, ARM, 2005.
[6] ARM CoreSight Architecture Specification v2.0, ARM, 2013.
[7] Intel, "Xeon+FPGA Platform for the Data Center," 2015.
[8] M. Frantzen and M. Shuey, "StackGhost: Hardware facilitated stack protection," in USENIX Security Symposium, vol. 112, 2001.
[9] L. C. Harris and B. P. Miller, "Practical analysis of stripped binary code," ACM SIGARCH Computer Architecture News, vol. 33, no. 5, 2005, pp. 63-68.
[10] C. Watterson and D. Heffernan, "Runtime verification and monitoring of embedded systems," Software, IET, vol. 1, no. 5, Oct. 2007.
[11] X. Zheng, C. Julien, R. Podorozhny, and F. Cassez, "BraceAssertion: Runtime verification of cyber-physical systems," in 15th IEEE Real-Time and Embedded Tech. and Applications Symposium, Oct. 2015, pp. 298-306.
[12] R. Backasch, C. Hochberger, A. Weiss, M. Leucker, and R. Lasslop, "Runtime verification for multicore SoC with high-quality trace data," ACM Transactions on Design Automation of Electronic Systems (TODAES), 2013.
[13] T. Reinbacher, M. Fugger, and J. Brauer, "Runtime verification of embedded real-time systems," Formal Methods in System Design, vol. 24, no. 3, pp. 203-239, 2014.
[14] A. Kane, "Runtime monitoring for safety-critical embedded systems," Ph.D. dissertation, Carnegie Mellon University, USA, Feb. 2015.
[15] J. Rufino, "Towards integration of adaptability and non-intrusive runtime verification in avionic systems," SIGBED Review, vol. 13, no. 1, Jan. 2016 (Special Issue on 5th Embedded Operating Systems Workshop).
[16] J. Rufino and I. Gouveia, "Timeliness runtime verification and adaptation in avionic systems," in Proc. 12th Workshop on Operating Systems Platforms for Embedded Real-Time Applications (OSPERT), Toulouse, France, Jul. 2016.
