Basics of Computer Security
Gang Tan, Penn State University
Spring 2019, CMPSC 447: Software Security
Readings
Reflections on Trusting Trust, Ken Thompson (Turing Award lecture)
The Protection of Information in Computer Systems, Saltzer and Schroeder
Goal of Computer Security
Goal: prevent information “mishaps”, but don’t stop good things from happening
Good things include functionality and legitimate information access
The tradeoff between functionality and security is the key
E.g., e‐voting
Good things: convenience of voting; fast tallying; voting access for the disabled; …
The convenience comes with risks
• Buggy voting software
• E‐voting software changed by insiders
• Corrupt ballot definitions
• …
The Sad Reality …
People are obsessed with providing more functionality
Security is secondary; an afterthought
• “We’ll write the software with the required functionality, then our security team will make it secure.”
Security perspective: integrate security design into the system design process
Manage the trade‐off between functionality and security from the beginning
Security as Risk Management
Risk analysis
Should we protect something? How much should we protect it?
Cost‐benefit analysis
Weigh the cost of protecting data and resources against the cost of losing them
Weigh alternatives
• Is it cheaper to prevent or to recover?
Questions to ask
What to trust? What is the threat? What are the security goals? How should we achieve the goals?
Concepts to Discuss
Trust
Threat models
Policy
Enforcement
Trust
Trust refers to the degree to which a principal is expected to behave
What is the principal expected to do?
• E.g., refresh its password every 3 months
What is the principal expected not to do?
• E.g., not expose its password
Thompson: Reflections on Trusting Trust
Ken Thompson’s Turing Award lecture describes the importance of making clear what should be trusted
Do you trust your compiler?
He describes an approach whereby he can generate a compiler that inserts a backdoor
• E.g., insert a backdoor when it recognizes that it is compiling the login program
But you can examine the compiler’s source code
But what program compiles the compiler? He puts the malicious code in that program
Thompson: Reflections on Trusting Trust
Methodology: generate a malicious compiler binary that is used to compile compilers (see the sketch below)
Since the compiler source code looks OK and the malice lives only in the compiler binary, it is difficult to detect
Such a program is an example of a Trojan horse
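To make the methodology concrete, here is a minimal C sketch. The function names and the naive string-matching triggers are illustrative inventions; Thompson's actual attack was hidden in the C compiler's code generator, not visible in any source file:

```c
/* Illustrative sketch of the "trusting trust" attack.  All names and
 * the string-matching triggers are hypothetical simplifications. */
#include <stdio.h>
#include <string.h>

static void emit_normal_code(const char *src)          { (void)src; puts("normal object code"); }
static void emit_backdoored_login(const char *src)     { (void)src; puts("login + hard-wired master password"); }
static void emit_replicating_compiler(const char *src) { (void)src; puts("compiler + both of these triggers"); }

void compile(const char *src) {
    if (strstr(src, "login"))            /* Trigger 1: the login program */
        emit_backdoored_login(src);
    else if (strstr(src, "compile("))    /* Trigger 2: the compiler itself;
                                            re-inserting both triggers lets
                                            the backdoor survive recompilation
                                            from perfectly clean source */
        emit_replicating_compiler(src);
    else
        emit_normal_code(src);
}

int main(void) {
    compile("int login(void) { /* ... */ }");           /* gets the backdoor */
    compile("void compile(const char *src) { /* ... */ }"); /* gets the replicator */
    return 0;
}
```

Once the triggers are compiled into the binary and deleted from the source, inspecting the compiler's source proves nothing: the backdoor re-emerges every time the clean compiler source is recompiled by the infected binary.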
Turtles all the way down …
Takeaway: Thompson states the “obvious” moral that “you cannot trust code that you did not totally create yourself”
Creating a basis for trust is very hard, even today
A well‐known scientist (some say it was Bertrand Russell) once gave a public lecture on astronomy. He described how the earth orbits around the sun and how the sun, in turn, orbits around the center of a vast collection of stars called our galaxy. At the end of the lecture, a little old lady at the back of the room got up and said: “What you have told us is rubbish. The world is really a flat plate supported on the back of a giant tortoise.” The scientist gave a superior smile before replying, “What is the tortoise standing on?” “You’re very clever, young man, very clever,” said the old lady. “But it’s turtles all the way down!”
‐‐‐ Stephen Hawking (1988), A Brief History of Time
Trusted Computing Base (TCB)
A set of hardware, firmware, and software that is critical to the security of a computer system
Bugs in the TCB may jeopardize the system’s security
E.g., for a conventional e‐voting machine: the voting software + hardware
Components outside of the TCB can misbehave without affecting security
In general, a system with a smaller TCB is more trustworthy
A lot of security research is about how to move components outside of the TCB (i.e., making the TCB smaller)
E.g., Proof‐Carrying Code moves the compiler outside of the TCB
E.g., voter‐verified paper ballots in e‐voting
Threat Models
Threat models: understanding of the adversary
What is and is not trusted
Motivations; resources; capabilities
Examples
Casual thief
Technical whiz
Insider attacks
Major nation state
• Willing to use force
Threat Modeling: Malicious Software
Example: a piece of code arriving as an attachment in an email
What is trusted: hardware + system software (e.g., the OS)
What is untrusted: the attached code
• Not sure what harm it will do
Security objective: protect the computer system against possibly malicious code
Defenses
• Virus scanning; checking for digital signatures; or just running it at your own risk
Threat Modeling: Systems Software
Example: a web server that accepts requests from clients
What is trusted: the web server is benign
What is untrusted: clients
• Clients may send malicious input
• Denial‐of‐service attacks
• Taking over the web server: stealing sensitive info; installing malware
I.e., trusted software, but untrusted input
Threat Modeling: Software Piracy
Software vendors
Produce software in source code (e.g., C); sell software in the form of object code
Insert copyright‐protection code
Legitimate users
Buy software; install and use it
Software pirates: malicious tampering
Buy software
Tamper with it: remove the copyright‐protection code
Sell the pirated version
Threat Modeling: Software Piracy
Software pirates
Motivation: large financial gains
Capability: almost complete control over the software
• Observe the software
• Modify the software
• Modify system software such as the OS
• Monitor/observe the hardware to aid the hacking
But we can safely assume hackers have no source code
• Unless some insider leaks it
• E.g., Windows source code is a trade secret of Microsoft
Two Sides of Software Security
Protect a computer system from malicious code or malicious input
The system is trusted
Protect software against malicious tampering
The code is trusted, but it is running in an untrusted system
Bottom line: we need to carefully figure out the threat model: what to trust? …
Policy and Enforcement
Policy vs. Enforcement
Policy
What is (and what is not) allowed
Who is allowed to do what
Enforcement: what we do to cause the policy to be followed
Means of enforcement
• Persuasion
• Monitoring & deterrence
• Technical prevention (what we are mostly interested in)
• Incentive management
Security Policies (CIA Model)
Confidentiality
Info becomes known only to authorized people
Example: Bob buys 1,000 shares of Microsoft stock, and the info should be confidential
Integrity
Info stored in a system is correct (not modified)
Example: Bill Gates sells 1M shares of MS stock; the info is public
• But nobody should be able to change the number from 1M to 10M
Availability
Info, or a service, is available when you need it
Example: Bob wants to buy 1,000 shares of MS stock from his broker, who happens to be unavailable (being ill)
Exercise
Classify each of the following as a violation of confidentiality, of integrity, of availability, or of some combination:
Carol changes the amount of Angelo’s check from $100 to $1,000
John copies Mary’s homework
Eve registers the domain name “psu.edu” and refuses to let PSU buy or use that domain name
Other Security Objectives
Privacy
E.g., voter privacy
Non‐repudiation, or accountability
E.g., digital signatures
…
Another Way of Classifying Security Policies [Alpern & Schneider 1985]
Safety: something “bad” won’t happen
E.g., if Bob accesses my private files without my permission, that is something bad
Liveness: something “good” will eventually happen
E.g., if I have paid money for a service, then that service should eventually be delivered
• The good thing is the service
E.g., a program should eventually terminate
• The good thing is termination
Any property can be decomposed into the intersection of a safety property and a liveness property
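For reference, a sketch of the standard trace-based definitions from Alpern and Schneider; the notation below is the usual one from the literature, not from these slides:

```latex
% A property P is a set of infinite execution traces; \sigma[..i] is
% the length-i prefix of trace \sigma, and \cdot is concatenation.
%
% Safety: every violating trace goes wrong at some finite point, and
% no continuation can repair it:
\sigma \notin P \;\Longrightarrow\; \exists i.\ \forall \tau.\ \sigma[..i] \cdot \tau \notin P
%
% Liveness: every finite prefix \alpha can still be extended to a
% satisfying trace (the "good thing" can always still happen):
\forall \alpha.\ \exists \tau.\ \alpha \cdot \tau \in P
%
% Theorem (Alpern & Schneider 1985): every property P is the
% intersection S \cap L of a safety property S and a liveness
% property L.
```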
Note: there are security policies that fall outside of (trace) properties
E.g., information flow policies
Connecting Threat Models, Policy, and Enforcement
Enforcement should aim to enforce a policy given the threats in the threat model
E‐voting: Threat Model
Who are the principals?
Voters, admins, talliers, others
Who are the adversaries?
What may be threatened (the attack surface)?
Voting software (by voters)
Vote‐counting software (by talliers)
E‐voting: Security Policy
Confidentiality
How voters voted should be confidential
Integrity
Votes cannot be changed
Availability
An eligible voter can always vote
Other desirable properties
Prevent vote bribery and voter intimidation
• Even the voter herself shouldn’t be able to show how she voted
…
E‐voting: Security Enforcement
Voter‐verified paper ballots in e‐voting
The voter votes on an electronic voting machine
But the machine prints out a paper ballot
The voter checks the paper ballot and sends it to an optical‐scan machine
Integrity
No need to trust the electronic voting machine
Need to trust the optical‐scan machine
• Can check its integrity through random auditing
Prevention of vote bribery
Paper ballots cannot be taken away by voters
Design Principles of Computer Security
* Slides from Matt Bishop: Introduction to Computer Security, Chap 12
The Protection of Information in Computer Systems [Saltzer and Schroeder 1975]
Section 1.A of the paper laid out a set of common‐sense principles for computer security
Overview
Simplicity
Less to go wrong
Fewer possible inconsistencies
Easy to understand
Restriction
Minimize access
Inhibit communication
Principle of Open Design
Security should not depend on the secrecy of design or implementation
Avoid “security through obscurity”
This does not apply to information such as passwords or cryptographic keys
Principle of Least Privilege
A subject should be given only those privileges necessary to complete its task
Rights added as needed, discarded after use
Minimal protection domain
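A minimal C sketch of the pattern, using the classic UNIX privilege-dropping idiom; the port number and the "nobody" uid are illustrative assumptions:

```c
/* Sketch: acquire a privilege only for the step that needs it, then
 * permanently drop it (standard UNIX pattern; error handling kept short). */
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(80);        /* low port: needs root */

    if (bind(fd, (struct sockaddr *)&addr, sizeof addr) != 0) {
        perror("bind");
        return 1;
    }
    /* Root was needed only for bind(); discard it before doing anything
     * else.  For a process that drops to a non-root uid, setuid() is
     * irreversible, shrinking the protection domain. */
    if (setuid(65534) != 0) {                /* e.g., the "nobody" uid */
        perror("setuid");
        return 1;
    }
    /* ... now handle untrusted client input with no privileges ... */
    return 0;
}
```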
Principle of Fail‐Safe Defaults
The default action is to deny access
If an action fails, the system is as secure as when the action began
E.g., when taking untrusted data (such as input) that may contain meta‐characters, the rule of thumb is to specify the LEGAL characters and discard all others, rather than to specify the ILLEGAL characters and discard those
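A minimal C sketch of that whitelist rule of thumb; the particular set of legal characters chosen here is an illustrative assumption, not prescribed by the slides:

```c
/* Whitelist filter: keep only characters known to be safe and drop
 * everything else.  Safer than a blacklist, which fails open whenever
 * a dangerous meta-character is forgotten. */
#include <ctype.h>
#include <stdio.h>

void sanitize(const char *in, char *out, size_t outlen) {
    size_t j = 0;
    for (size_t i = 0; in[i] != '\0' && j + 1 < outlen; i++) {
        unsigned char c = (unsigned char)in[i];
        if (isalnum(c) || c == ' ' || c == '.' || c == '-')
            out[j++] = (char)c;    /* explicitly legal characters */
        /* anything not listed is discarded: the fail-safe default */
    }
    out[j] = '\0';
}

int main(void) {
    char clean[64];
    sanitize("rm -rf /; echo $(whoami)", clean, sizeof clean);
    printf("%s\n", clean);   /* shell meta-characters are gone */
    return 0;
}
```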
Principle of Economy of Mechanism
Keep it as simple as possible
The KISS principle
Simpler means less can go wrong And when errors occur, they are easier to understand and fix
This applies to interfaces and interactions between components as well
Principle of Complete Mediation
Check every access
Lack of complete mediation: the check is done once, on the first action
UNIX example: access is checked on open(), not on subsequent reads or writes
• If the owner then changes the permission bits, the process can still access the file
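A small C sketch of that UNIX behavior; the file name is hypothetical, and the process must initially be permitted to open the file for the demonstration to run:

```c
/* Demonstrates UNIX's incomplete mediation: permission is checked at
 * open() time only.  If access is later revoked with chmod(), the
 * already-open descriptor keeps working. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

int main(void) {
    int fd = open("secret.txt", O_RDONLY);   /* the check happens HERE */
    if (fd < 0) { perror("open"); return 1; }

    chmod("secret.txt", 0000);   /* owner revokes all permissions ... */

    char buf[64];
    ssize_t n = read(fd, buf, sizeof buf);   /* ... yet this still works:
                                                no re-check on read() */
    printf("read %zd bytes after revocation\n", n);
    close(fd);
    return 0;
}
```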
Principle of Separation of Privilege
Require multiple conditions to grant a privilege
Separation of duty
• The person who prints a check cannot be the person who signs the check
Defense in depth
Principle of Least Common Mechanism
Mechanisms used to access resources should not be shared
Information can flow along shared channels
• Covert channels; side channels
Isolation
• Virtual machines
• Sandboxes
Principle of Psychological Acceptability
Security mechanisms should not add to the difficulty of accessing a resource
Hide the complexity introduced by security mechanisms
Ease of installation, configuration, and use
Human factors are critical here
Key Points
Principles of secure design underlie all security‐related mechanisms
They require:
A good understanding of the goal of the mechanism and the environment in which it will be used
Careful analysis and design
Careful implementation