IA 1 – OPERATING SYSTEM SECURITY
Randy Rose
CSEC630, Prevention and Protection Strategies in Cybersecurity
Section 9044, Dr. David Bourgeois
October 2014
Contents
Introduction
MILS
    Implementation
    Management
TPM
    Implementation
    Management
OS Virtualization
    Implementation
    Management
Summary and Conclusion
References
Introduction
The principal component of any computing device that a user directly interacts with is the client
operating system. Since their inception in the mid-20th century, operating systems have connected the
end user to the underlying resources of the system, namely the processor(s) and memory, while increasing
ease of use over time. As operating systems moved from command line-based to graphical user interfaces,
or GUIs, the average user has become more divorced from understanding the workings of the system's
primary components. The growth of the Internet along with the rapid expansion of mobile devices has
dramatically increased the number of computer users while further obfuscating the logic occurring at the
hardware level. For example, many current operating systems are tile-based, wherein users issue
commands, such as launching particular apps or inputting characters, via touchscreen by poking graphic
icons. Typical users may not understand the processes that follow such an action and are not particularly
likely to notice abnormal or malicious activity on their device. Security of operating systems and the data
they hold is at an increased risk on modern devices that connect to vast networks of unknown and
untrusted devices by way of the Internet. Web browsers and other applications that access the Internet,
along with a wide array of protocols, interface directly with the underlying operating system. When a user
launches an application or requests a function of the operating system, the user assumes that the system is
acting in a trustworthy fashion; however, the user may have no guarantee of that assurance.
Operating system security has been a focus of the industry since the 1970s, and major
improvements in security have been implemented since (UMUC, n.d., p. 3). However, not all systems
function in the most secure fashion, and not all security controls are created equal. Many factors can
play into the reliability and worth of a control. Due to the abundance of computing devices for personal
and business use and the untrustworthy nature of the Internet, it is important for operating systems to have
built-in security controls that help protect the processor, memory, and other essential system resources.
Three modern security mechanisms, MILS, TPM, and OS virtualization, are examined and discussed
below. They are judged on the protections offered, ease of implementation, and manageability.
MILS
Multiple independent levels of security, or MILS, is an operating system architecture specifically designed to
run with the degree of assurance required of the highest Evaluation Assurance Levels (EALs) as
determined by the Common Criteria (CC) (Harrison, Hanebutte, Oman, & Alves-Foss, 2005, p. 20). The
primary philosophy behind MILS is security through separation by way of partitioning and controlling the
flow of data between partitions. By its very nature, MILS relies on “proactive security measures, rather
than the traditional reactive patching” measures (UMUC, n.d., p. 7), such as installing system updates,
disabling services, or relying on log monitoring to determine how an incident occurred. MILS solves a
major operating system security problem—namely that of unauthorized access to the system kernel,
which “has access rights to all domains and components of a system” (p. 3)—through the use of a
separation kernel. The separation kernel is a “microkernel that implements a limited set of critical
functional security policies” which can include controlling the flow of information to, from, and between
system partitions, and isolating data held within the system from unauthorized users and applications
(Kleidermacher & Kleidermacher, 2012, p. 2.2). The separation kernel acts as a supervisor of the system,
controlling the segregation of processes and associated resources, thereby managing data flow into and
out of the various partitions. This prevents direct interaction between the applications, data, and user with
the system kernel and significantly reduces the number of processes that run in kernel mode. Outside of a
small subset of required kernel mode operations, all other system components, including drivers for third-
party apps and devices, protocols and system services, and even the file system itself, run in “user mode”
(Harris, 2013). Additionally, “processes running in different partitions can neither communicate nor infer
each other’s presence” without the explicit authorization of the separation kernel (Harrison, Hanebutte,
Oman, & Alves-Foss, 2005, p. 21).
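The separation kernel's role as a mediator of inter-partition data flow can be sketched conceptually in a few lines of Python. This is an illustration only, not real MILS code; the partition names and the one-way policy below are invented for the example.

```python
# Conceptual sketch of a separation kernel: every message between
# partitions is checked against an explicit information-flow policy,
# and any flow not listed is denied by default.

class SeparationKernel:
    def __init__(self, allowed_flows):
        # allowed_flows: set of (source_partition, dest_partition) pairs
        self.allowed_flows = set(allowed_flows)
        self.partitions = {}          # partition name -> delivered messages

    def create_partition(self, name):
        self.partitions[name] = []

    def send(self, src, dst, message):
        """Deliver message only if the policy explicitly allows src -> dst."""
        if (src, dst) not in self.allowed_flows:
            return False              # flow denied: partitions stay isolated
        self.partitions[dst].append((src, message))
        return True

# Two partitions; only the SECRET -> UNCLASS direction is permitted here.
kernel = SeparationKernel(allowed_flows={("SECRET", "UNCLASS")})
kernel.create_partition("SECRET")
kernel.create_partition("UNCLASS")

assert kernel.send("SECRET", "UNCLASS", "sanitized report")   # allowed
assert not kernel.send("UNCLASS", "SECRET", "probe")          # denied
```

The default-deny stance of the policy set mirrors the "non-bypassable" and "always invoked" properties discussed below: no code path delivers a message without consulting the policy.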
The controls employed by MILS safeguard previously insecure environments to guarantee that
security mechanisms are non-bypassable, evaluable, always invoked, and tamper-proof (Harrison,
Hanebutte, Oman, & Alves-Foss, 2005, p. 20), commonly referred to as NEAT. MILS uses a guard
mechanism as a “middleware security component” which functions completely independently of the other
components, and prevents cross-partition communication unless specific criteria are met (pp. 20, 22).
Multiple guards exist within a given system based upon the security requirements of that system. For
example, guards can be protocol specific, device specific, or application specific. The guards (and other
controls) are not able to be manipulated by “malicious adversaries or insider threats” (UMUC, n.d., p. 7),
making the controls non-bypassable. In addition, all controls are engaged by the microkernel “each and
every time the system is operational” (p. 7) ensuring they are always invoked.
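A guard's criteria-based release decision can likewise be sketched. The criteria used here (a dirty-word filter and a message-size cap) are invented stand-ins for whatever protocol-, device-, or application-specific rules a real deployment would define.

```python
# Conceptual sketch of a MILS guard: middleware that releases a message
# across partitions only when specific, configurable criteria are met.

def make_guard(banned_words, max_len):
    """Build a guard closure for one cross-partition channel."""
    def guard(message):
        if len(message) > max_len:
            return False                      # oversized: block release
        return not any(w in message.lower() for w in banned_words)
    return guard

# An application-specific guard with invented example criteria.
release_guard = make_guard(banned_words={"secret", "password"}, max_len=256)

assert release_guard("weather report for monday")      # criteria met
assert not release_guard("the password is hunter2")    # blocked
```

Because each guard is built independently per channel, multiple guards can coexist in one system, matching the per-protocol and per-device guards described above.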
MILS solves problems with evaluation by ensuring that instructions processed by the microkernel
are small and simple enough “to be evaluated at the highest assurance level” (Kleidermacher &
Kleidermacher, 2012, p. 2.2.7). Simplicity directly impacts trustworthiness as even the most “well-
engineered code has of the order of several defects per thousand lines of code” (NICTA, 2014). In other
words, a smaller, simpler system will tend to have inherently fewer vulnerabilities than a larger, more
complicated one. Additionally, MILS uses recursive policies, which can be used repeatedly across
partitions and systems. Recursion also provides the luxury of scalability in both directions, allowing for
growth through the application of recursive policy to additional systems or partitions, or the removal of
such policies based on a lack of security requirements for a particular partition. Through this simplicity
and the implicit separation of the applications from the system kernel, the underlying infrastructure
remains tamper-proof.
Implementation
The implementation of any high-assurance system can be both intimidating and overwhelming.
The security mechanisms of a MILS system require careful engineering up front, but are easily
implemented across a wide variety of organizations once developed. As it happens, the MILS model is
specified under the Common Criteria (CC), resulting in little engineering effort for interested
organizations. Additionally, the separation model and the recursive nature of policies allows for the easy
design of a hierarchical scheme wherein higher levels in the hierarchy can use the security controls of
lower levels and build more restrictive policies as needed. Furthermore, “each level is responsible for its
own security domain and nothing else” (Alves-Foss, Harrison, Oman, & Taylor, 2006, p. 240).
As mentioned in the previous section, partitioning allows for scalability in either direction, which
also increases ease of implementation by allowing systems, applications, data, and whole partitions to be
added in or removed as needed. This allows administrators to test systems and resources without
negatively impacting existing partitions and without risking significant impact to the overall
infrastructure. Furthermore, the simplicity of the separation kernel increases trustworthiness and
performance while decreasing the risk of failure (NICTA, 2014).
Finally, the principal requirements for hardware support are built into the majority of
commercial components already. These requirements include sufficient processing power, atomic
operations, timing controls, input/output access restrictions, instruction traps, as well as the ability to run
instructions in privileged mode. Additionally, a Memory Management Unit, or MMU, which “provides
separation of address spaces between the partitions,” must be present (Alves-Foss, Harrison, Oman, &
Taylor, 2006, p. 242). Depending on the organization’s requirements, much of the system hardware can
be purchased as commercial-off-the-shelf hardware. In some instances, like highly-classified defense
networks, hardware may need to meet additional government compliance requirements; however, this is
not specified by the MILS criteria.
Management
For many of the same reasons that MILS can be easily implemented, it can be easily managed.
MILS divides information domains into completely isolated spaces, making security over data, the
applications used to access and maintain the data, and the users who require access to the data easy to
manage. As new components are required or become obsolete, partitions can be added or removed.
Additionally, the recursive nature of policies and the use of a hierarchical design can allow a single
change to be replicated across the partitions. This reduces administrative overhead while increasing
oversight.
MILS increases privacy through separation and can further increase confidentiality through the
use of encryption. Privacy, integrity, and confidentiality controls can be built in at lower or higher
levels of the hierarchy, providing flexibility over their management and oversight. Partitioning also
prevents widespread malware infection, which reduces incident management. Additionally, MILS
“reduces physical hardware” through partitioning, similarly to virtualized environments (discussed in
detail below) (UMUC, n.d., p. 8). Reduction of physical assets reduces logistic overhead and
administrative management of component vulnerabilities, failures, and upgrades. Finally, partitioning
allows management based around the type of information and the communities that may require access to
that information. In this sense, MILS requires less low-level management over the actual data, as
information within a specific community will be kept within that community’s partition. As new members
join that community of interest, whether those members are users, systems, or applications, their
privileges are governed by the pre-established rules for their community’s partition.
TPM
The Trusted Platform Module, or TPM, is a hardware-based encryption solution that allows for an
entire operating system, including the boot portions, to be protected from unauthorized access. A TPM
chip is “designed to be mounted on the motherboard” and functions as a “secure cryptoprocessor that can
securely generate and store cryptographic keys” (Goodrich & Tamassia, 2011, p. 482). Because TPM
chips are built into the motherboard and typically integrated into the computer’s basic input/output system
(BIOS), a high assurance “root of trust” is formed between the components that boot the device and the
operating system (UMUC, n.d., p. 10). The root of trust significantly decreases risks to data
confidentiality and data integrity through the dedication of hardware to encryption and hashing. In
essence, TPM is a “system-on-a-chip” or a “computer system running embedded code performing a
specific task to provide security-based functionality” (Kinney, 2006, p. 3.6). As such, TPM is an
autonomous security-focused operating system that allows a less secure operating system to run
securely.
The TPM circuit contains nonvolatile memory which holds its keys and configuration information
between power cycles. This memory is divided into persistent, or static, memory and versatile, or
dynamic, memory. Persistent memory contains the Endorsement Key (EK) pair and the Storage Root Key
(SRK) (Harris, 2013). The EK is “the root key in the TPM Key Management scheme” and is actually a
public/private key pair that is “nonmigratable” (Kinney, 2006, p. 4.2, 4.4). It provides authenticity and
identification more than it provides security. However, the EK is the basis for the trustworthiness of a
TPM. The SRK performs the actual secure storage functions for keys stored in the TPM.
The TPM’s versatile memory contains the Attestation Identity Key (AIK), Platform
Configuration Register (PCR) hashes, and various storage keys which are “used to encrypt the storage
media of the computer system” (Harris, 2013). The AIK provides integrity for the EK pairs, while the
PCRs perform multiple functions, including those required for system sealing (described below)
(Goodrich & Tamassia, 2011, pp. 482-483). The keys work together in various hierarchies, depending on
the functional use in a given system, with the EK and SRK at the highest levels, providing overarching
integrity and confidentiality.
There are two principal uses for TPM: binding and sealing. Binding is the more commonly used
and discussed function. Binding involves full disk encryption of a particular hard drive and storage of the
decryption key in the TPM cryptoprocessor (Harris, 2013). The decryption key is not stored in the open,
but is further encrypted and stored in “isolated components of the system” to increase confidentiality
(UMUC, n.d., p. 12). All of the data on the drive becomes ciphertext and is rendered useless without the
decryption key. For this reason, it is worthwhile to maintain backups of TPM decryption keys associated
with critical data.
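The binding workflow can be illustrated with a toy model. This is emphatically not real cryptography: the hash-derived XOR keystream stands in for a real cipher such as AES or RSA, and the key names and owner secret are invented. The point is only that the disk key never leaves the chip and plaintext is released solely to an authorized request.

```python
import hashlib
from itertools import cycle

def xor_stream(data, key):
    # Stand-in cipher: XOR against a SHA-256-derived keystream.
    stream = hashlib.sha256(key).digest()
    return bytes(b ^ k for b, k in zip(data, cycle(stream)))

class ToyTPM:
    """Toy model of TPM binding: the disk key is chip-resident only."""

    def __init__(self, owner_secret):
        self._disk_key = b"chip-resident-key"   # never exported
        self._owner = hashlib.sha256(owner_secret).digest()

    def bind(self, plaintext):
        return xor_stream(plaintext, self._disk_key)

    def unbind(self, ciphertext, owner_secret):
        # Release plaintext only to the rightful owner.
        if hashlib.sha256(owner_secret).digest() != self._owner:
            raise PermissionError("not authorized")
        return xor_stream(ciphertext, self._disk_key)

tpm = ToyTPM(owner_secret=b"correct horse")
encrypted_disk = tpm.bind(b"patient records")

assert encrypted_disk != b"patient records"              # ciphertext on disk
assert tpm.unbind(encrypted_disk, b"correct horse") == b"patient records"
```

Without the chip (or with the wrong owner secret), the stored ciphertext is useless, which is exactly why key backups matter when the data is critical.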
Sealing is used to protect the integrity of the system and is essentially the same practice as the
hashing used to verify application integrity. Sealing occurs when the TPM “generates hash values based on
the system’s configuration files and stores them in its memory” (Harris, 2013). The TPM then verifies the
integrity of a hashed system, such as a client operating system, against the sealed hash value. If and only
if the hashes match, the TPM will allow the system to load. The TPM is invoked on boot up prior to other
components, which allows it to capture the state of these components on boot, collect and compare its
initial hash values prior to the various system components being invoked, and continue to compare those
hashes periodically to verify system integrity (UMUC, n.d., p. 10). This prevents unauthorized
modification of system boot components, as might be typical of malicious software.
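The measure-and-compare logic of sealing can be sketched as follows. The component names and contents are invented stand-ins for real boot files; a real TPM accumulates these measurements in its PCRs rather than in a Python dictionary.

```python
import hashlib

def measure(components):
    """Hash a set of boot components into one measurement value."""
    h = hashlib.sha256()
    for name, data in sorted(components.items()):
        h.update(name.encode())
        h.update(data)
    return h.hexdigest()

# At provisioning time, the known good configuration is measured and
# the result is sealed inside the TPM's memory.
good_boot = {"bootloader": b"v1.2", "kernel": b"build-4711"}
sealed_hash = measure(good_boot)

# On every boot, the measurement is recomputed and compared.
assert measure(good_boot) == sealed_hash          # untampered: OS may load

tampered = dict(good_boot, bootloader=b"evil")
assert measure(tampered) != sealed_hash           # boot would be blocked
```

Any change to a measured component, however small, produces a different hash, which is what lets the TPM refuse to load a modified boot chain.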
Implementation
Early on in the development of trusted computing solutions, costs were high and performance
metrics were low (UMUC, n.d., p. 9). Root of trust between hardware and software was not easily
achieved, and where it did exist, it often came at the expense of system usability or availability. However,
the development of public key infrastructure, particularly the RSA cryptosystem, has pushed root of trust
systems into a highly achievable arena. The declining cost of hardware design and integration over the
years has also driven implementation costs down. With the increased risk of unauthorized access to
modern mobile devices, a highly reliable hardware assurance solution was needed. Now many, if not
most, motherboards are equipped with TPM modules or the capabilities to support plug-in TPM modules.
Organizations can easily implement TPM into their existing environment by including TPM-
ready devices into their acquisition process. While RSA is the standard cryptographic system used with
TPM, some vendors offer other cryptographic solutions, including DES, 3DES, and the U.S. Government
standard, AES (Kinney, 2006, p. 2.1). Depending on the goals of the organization and their existing or
planned encryption infrastructure, TPM solutions can be vetted and easily integrated during asset
acquisition, implementation, and deployment. Each system will need to be configured manually, which is
relatively standard in mobile operating environments.
Management
Management over TPM can be more cumbersome than other solutions due to the need to
configure each system individually. However, if properly planned, devices can be bound and sealed as
they are deployed. Once configured, TPM requires little management, as it functions as an autonomous
system (Kinney, 2006, p. 3.6). The principal management function for TPM post-deployment is the
escrow of cryptographic keys, which can be stored in a separate encrypted database. Key escrow is
essential for systems that contain mission critical information that would cause extreme harm to the
organization if lost or unrecoverable. As mentioned above, if the TPM keys are lost or somehow
corrupted, the encrypted data is virtually useless. TPM is not subject to common password attacks, such
as dictionary attacks, and, depending on the complexity of the algorithm used, may take an extremely
long time for a brute-force password recovery effort (UMUC, n.d., p. 10).
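The escrow workflow described above amounts to keeping recovery copies of each device's key material in a separate, protected store. A minimal sketch, with invented device IDs and the at-rest encryption of the vault deliberately elided:

```python
class KeyEscrow:
    """Sketch of a key-escrow store for TPM recovery keys."""

    def __init__(self):
        # device_id -> recovery key; a real vault would encrypt at rest.
        self._vault = {}

    def deposit(self, device_id, recovery_key):
        self._vault[device_id] = recovery_key

    def recover(self, device_id):
        # Recovery fails loudly if no key was ever escrowed.
        if device_id not in self._vault:
            raise KeyError(f"no escrowed key for {device_id}")
        return self._vault[device_id]

escrow = KeyEscrow()
escrow.deposit("laptop-042", b"recovery-key-bytes")
assert escrow.recover("laptop-042") == b"recovery-key-bytes"
```

The failure path is the important part: a device whose key was never deposited has no recovery option, which is why escrow must be part of the deployment process rather than an afterthought.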
Perhaps the biggest benefit to management of TPM devices is the significant reduction in risks to
unauthorized access of data, particularly on mobile devices. One of the more commonly referenced
mobile device controls is ensuring that devices can be wiped or erased remotely. If a device is merely lost,
and not actually stolen, a remote wipe can make the data difficult to recover should the device be found.
With TPM, there is little risk of unauthorized access; therefore, even if a device is stolen or lost, there
may be little required for incident response, tracking, and management. The same can be said for malware
infection with regard to TPM sealed systems. The integrity checking functions of TPM make malware
infection more easily manageable through the denial of launching any software, including the operating
system, that fails to meet the stored states. Lastly, TPM can be used to administer software licensing and
digital-rights management through the use of PCR hash seals (Goodrich & Tamassia, 2011, p. 483).
OS Virtualization
Operating system virtualization involves adding a layer of abstraction between an operating
system and its underlying hardware. In essence, it is accomplished by running an operating system, or,
more often, multiple operating systems, within another operating system. Along with increased security
through isolation, virtualization has the added benefits of increasing efficiency through resource
balancing, portability, scalability, and centralized administration (Goodrich & Tamassia, 2011, p. 129). In
other words, virtualization is the ability for a single set of hardware to run multiple operating system
environments at the same time. In most instances of virtualization today, hardware resources are
“emulated” through a hypervisor, the abstraction layer program that “controls the execution of the various
guest operating systems” (Harris, 2013).
Many of the benefits of virtualization are discussed outside of the realm of security, however, it
can add significantly to the security of an organization as a byproduct of these other benefits. For
example, an organization that keeps their systems up-to-date may have a requirement to run an
unsupported, legacy application. This software may require an older operating system or older hardware
to run properly, which can be emulated in a virtual environment. If implemented properly and backed up,
attacks against the system will be less effective as the hardware is virtual only, the system can be easily
isolated, and, if it is successfully attacked, it can be quickly and easily restored to a known good state.
Restoration of machines to a known good state is also a major benefit of virtualization offered in
environments where devices may operate in a kiosk mode, such as in a hospital or library. Limiting
hardware also reduces the risk of successful physical access-based attacks, such as running malicious
software from removable media. Virtualization can be paired with thin-client devices that do not have ports
for devices other than a mouse, keyboard, and monitor.
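The snapshot-and-restore property described above can be sketched as follows. The guest "state" here is an invented dictionary rather than a real VM image, and the manager class is illustrative only, not any vendor's hypervisor API.

```python
import copy

class VMManager:
    """Sketch of hypervisor-style snapshot/restore to a known good state."""

    def __init__(self, guest_state):
        self.guest_state = guest_state
        self._snapshots = []

    def snapshot(self):
        # Deep-copy so later changes to the guest cannot alter the snapshot.
        self._snapshots.append(copy.deepcopy(self.guest_state))

    def restore(self):
        """Roll back to the most recent known good snapshot."""
        self.guest_state = copy.deepcopy(self._snapshots[-1])

vm = VMManager({"files": ["app.exe"], "services": ["web"]})
vm.snapshot()                                   # capture known good state
vm.guest_state["files"].append("malware.exe")   # simulated compromise
vm.restore()                                    # recovery in one step

assert vm.guest_state == {"files": ["app.exe"], "services": ["web"]}
```

The deep copies are the design point: a snapshot that shared mutable state with the running guest would be corrupted by the very compromise it is meant to undo.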
Additionally, virtualized devices can act as sandboxed environments, wherein the other guest
operating systems are protected in the event that one is compromised (Goodrich & Tamassia, 2011, p.
129). However, if all systems are configured equally, a vulnerability in one is a vulnerability in all.
Additionally, a vulnerability in the underlying hypervisor or one that allows for access to the hypervisor,
such as a virtual machine escape attack, can negatively impact all guest operating systems (Falcon, 2014).
Sandboxed environments are widely used by security professionals, software developers, and attackers as
test environments for exploitation, software fuzzing and debugging, and more (Noble, 2013, p. 1048).
Forensics professionals and incident responders also use virtual machines to open and test malware, and
find forensic evidence without risking damage to the integrity of their own system or a system that is
needed for evidence in court.
Lastly, OS virtualization can be essential to organizations that require anonymity for activities
such as open-source intelligence (OSINT) gathering. Since virtual device hardware is emulated by the
hypervisor, actual hardware details, including MAC address, cannot be easily traced. Virtual device
addressing schemes do not have to be representative of the organization’s actual addressing scheme, and
virtual machines can run nearly any operating system or application, making them ideal for honeypot networks and
other OSINT operations. If the systems become compromised or are suspected of compromise, they can
be easily restored to a known good state or reconfigured as needed.
Implementation
As with TPM, virtualization was discussed but not widely deployed for many years due to
associated costs and difficulty in implementation. While virtualization of systems is much more common
today, particularly for the cost benefits associated with it in the long term, many organizations continue to
operate in the traditional style due to the upfront investment in equipment costs. Virtualization requires
hardware that can handle multithreading across a high number of systems, as well as increased drives,
backups, and more. In most cases, organizations have to completely restructure their existing
environment. However, once completed, the implementation and management of virtualized devices can
greatly reduce the amount of administrative overhead required to keep an organization functioning.
Once the underlying infrastructure is configured, virtual devices can be easily created, deleted,
moved, or modified from the hypervisor or the administrative interface. The ease of deployment of
operating systems even allows administrators to experiment with a wide variety of operating systems that
they may not have had the equipment to test prior. Furthermore, clustering server services or operating
systems can be far easier to implement in a virtual environment than in more traditional, hardware-based
scenarios. For virtualization wherein a host computer runs a virtual application that in turn runs another
operating system, the implementation is as simple as installing and running any other application.
The principal problem with organization-wide virtualization is that the virtualization software
itself can be very complex and administrators may not be experienced in its deployment or function. This
can also be detrimental to security, as complexity and security have an inverse relationship. However, as
virtualization technology continues to grow, more and more vendors are lending support for the
deployment, configuration, and maintenance of virtualized environments (Harris, 2013).
Management
Virtual environments provide many benefits to the world of IT management. The principal ones
include a reduction in IT costs, including cooling, reduced hardware administration, and
administrative centralization. Operating systems, particularly server operating systems, and the services
they provide can be easily managed and administered from a central console that provides event logs,
alerts, performance graphs, and a remote console option into the device. Backups and redundancy are
easily managed and recovery options are relatively simple and speedy. Virtualization can reduce concerns
for product testing and deployment, as well as make security patching simpler.
The principal concern for management over virtualization is that few organizations are wholly
virtualized. Most run a mixed environment, so the needs of both modern and traditional environments
must be met. Furthermore, malware continues to advance, with some malicious software able to detect
systems running in a virtual environment. Some malware even deletes itself upon such detection, making
hardware-based systems more attractive targets. In mixed environments, management may wish to consider
masking hardware-based systems with markers for virtual systems to make them appear as though they
are virtual machines (Noble, 2013, p. 1048).
Summary and Conclusion
MILS, TPM, and OS virtualization are only three of many operating system security mechanisms.
Each one offers unique controls of varying degrees of security as well as unique challenges to
implementation and management. As such, it is difficult to rank them from best to worst. Instead, each
one must be assessed carefully for the technical, functional, and organizational environment for which
they are being considered.
MILS is best suited for environments that require the highest levels of assurance, such as U.S.
Department of Defense networks or organizations that have extremely valuable proprietary information to
protect. MILS is also the least complex, in terms of underlying architecture, making it the most attractive
for those environments looking to avoid complex solutions. It can be centrally managed and provides for
scalability, making it attractive also for organizations that are expecting significant changes to their size
or infrastructure. Additionally, MILS can be combined with the other mechanisms noted, such as TPM
for full disk encryption or OS virtualization in an embedded hypervisor environment (SYSGO, n.d.).
TPM is the hardest to manage because of a lack of centralized control. However, it is the best
solution for an organization with a high number of mobile devices in their environment, especially those
concerned with data confidentiality. These organizations have a greater risk of unauthorized users
accessing their data, which can be protected best by TPM. Fortunately, TPM hardware has become
readily available for a wide variety of devices, making it much more affordable and easily accessed for
these organizations. MILS and OS virtualization are not likely to work well for an organization that relies
heavily on mobile communications.
OS virtualization is the best option for organizations that are interested in reducing costs and
recovery times, decreasing hardware footprint, and those that have requirements for legacy applications
and systems. As stated above, many of the security benefits of virtualization are byproducts of other
benefits, so an organization that has high assurance requirements will need to ensure that virtualization is
combined with other security mechanisms, such as partitioning, intrusion detection and prevention,
application and service whitelisting, and code-signing, to achieve the baselines they require.
Ultimately, a business risk analysis, wherein the goals of the organization and the information
technology department are defined and assessed along with the potential security mechanisms, is required
to determine the best solution for the organization. Once a risk analysis is completed, management can
decide which option provides the best features to meet the organization’s confidentiality, integrity,
availability, privacy, and other assurance needs. Only after careful evaluation of each proposed solution
against their unique needs can an organization truly rank a solution against others. Like nearly everything
else in business, there are a lot of gray areas. The successful organizations are those that can navigate
through the gray the best.
References
Alves-Foss, J., Oman, P. W., Taylor, C., & Harrison, W. S. (2006). The MILS architecture for high-assurance
embedded systems. International Journal of Embedded Systems, 2(3/4), 239-247. Retrieved from
http://www.deepdyve.com/lp/inderscience-publishers/the-mils-architecture-for-high-assurance-embedded-
systems-HBRy0fIOit
Falcon, F. (2014). Breaking out of VirtualBox through 3D acceleration [PowerPoint presentation]. Retrieved from
http://corelabs.coresecurity.com/index.php?module=Wiki&action=view&type=publication&name=oracle_
virtualbox_3d_acceleration
Harris, S. (2013). CISSP all-in-one exam guide (6th ed.) [Books24x7 version]. Retrieved from
http://common.books24x7.com.ezproxy.umuc.edu/toc.aspx?bookid=50527
Harrison, W. S., Hanebutte, N., Oman, P. W., & Alves-Foss, J. (2005). The MILS architecture for a secure global
information grid. CrossTalk: The Journal of Defense Software Engineering, 18(10), 20-24. Retrieved from
http://www.crosstalkonline.org/storage/issue-archives/2005/200510/200510-Harrison.pdf
Kinney, S. (2006). Trusted platform module basics: Using TPM in embedded systems [Books24x7 version].
Retrieved from http://library.books24x7.com.ezproxy.umuc.edu/assetviewer.aspx?bookid=47151
Kleidermacher, D., & Kleidermacher, M. (2012). Embedded systems security: Practical methods for safe and secure
software and systems development [Books24x7 version]. Retrieved from
http://common.books24x7.com.ezproxy.umuc.edu/toc.aspx?bookid=49259
NICTA. (2014). Secure microkernel project (seL4). Retrieved from http://ssrg.nicta.com.au/projects/seL4/
Noble, K. (2013). Security through diversity. In Vacca, J. R. (Ed.), Computer and information security handbook
(pp. 1041-1051). Boston, MA: Morgan Kaufmann Publishers.
SYSGO. (n.d.). The EURO-MILS project. Retrieved from http://www.sysgo.com/company/about-sysgo/rd-
projects/the-euro-mils-project/
UMUC. (n.d.). Operating system protection [computer-based training module]. Retrieved from
https://leoprdws.umuc.edu