Trusted CI Webinar Series "Is Your Code Safe from Attack?" with Barton Miller and Elisa Heymann. Host: Jeannette Dopheide. The meeting will begin shortly. Participants are muted. Click the chat button to ask a question. This meeting will be recorded. The Trusted CI Webinar Series is supported by National Science Foundation grant #1920430. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the NSF.
58
Embed
Is Your Code Safe from Attack? Trusted CI Webinar Series
This document is posted to help you gain knowledge. Please leave a comment to let me know what you think about it! Share it to your friends and learn new things together.
Transcript
Trusted CI Webinar Series
"Is Your Code Safe from Attack?" with Barton Miller and Elisa Heymann.
Host: Jeannette Dopheide.
The meeting will begin shortly.
Participants are muted. Click the chat button to ask a question.
This meeting will be recorded.
The Trusted CI Webinar Series is supported by National Science Foundation grant #1920430.
The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the NSF.
We know that companies and governments need to worry about cybersecurity, but why does the science community?
We need our science to be:
Trustworthy: Are our data, computations and results free from tampering?
Productive: Can we operate our systems free from interference?
Safe: Can we prevent damage to people and property?
Cyberinfrastructure and Security
Above all other things, security is an economic issue.
You have to balance the cost of providing a certain level of security against your ability to absorb the costs of a successful attack:
• Loss of data
• Stopped progress
• Damaged reputation
• Loss of funding
• Harm to staff
• Harm to the public
Cyberinfrastructure and Software Security
Cybersecurity is a huge field, with systems security, identity management, network security, configuration security. So, why are we concerned about software development?
• The science community is developing and deploying a steady stream of bespoke software: services, infrastructure, data management, and computation.
• Most of that software comes from groups with no formal security team and little security training.
Thinking Like an Analyst
Things That We All Know
• All software has vulnerabilities.
• Critical infrastructure software is complex and large.
• Vulnerabilities can be exploited by both authorized users and outsiders.
The Unpleasant Asymmetry
Attacker chooses the time, place, method, …
Defender needs to protect against all possible attacks (currently known, and those yet to be discovered).
Key Issues for Security
Need independent assessment.
Software engineers have long known that testing groups must be independent of development groups.
Need an assessment process that is NOT based on known vulnerabilities.
Such approaches will not find new types and variations of attacks.
Key Issues for Security
You should use software scanning tools like SpotBugs, Coverity, or Infer, but …
… these tools have limitations:
– While they help find some local errors, they:
• Miss significant vulnerabilities (false negatives).
• Produce voluminous reports (false positives).
Programmers must be security-aware
– Designing for security and the use of secure practices and standards is important but does not guarantee security.
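One class of bug that scanning tools routinely miss is a time-of-check-to-time-of-use (TOCTOU) race. A minimal sketch in Python (the function names are ours, not from the talk):

```python
import os

def unsafe_read(path):
    # TOCTOU race: between the access() check and the open(), an
    # attacker can swap the file, e.g. for a symlink to a secret.
    if os.access(path, os.R_OK):
        with open(path) as f:
            return f.read()
    return None

def safer_read(path):
    # Close the check-then-use window: attempt the open directly and
    # handle failure; refuse to follow symlinks where supported.
    flags = os.O_RDONLY | getattr(os, "O_NOFOLLOW", 0)
    try:
        fd = os.open(path, flags)
    except OSError:
        return None
    with os.fdopen(fd) as f:
        return f.read()
```

A scanner sees two syntactically valid functions; only a reader who understands the check-then-use pattern sees the vulnerability in the first.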
What about Penetration Testing?
Bringing in an outside team to scan your environment and look for common mistakes is a great investment, but
• Won’t find previously unknown vulnerabilities.
• Won’t find application-specific issues.
• Won’t find design problems.
Addressing these Issues
• We must evaluate the security of our code.
– The vulnerabilities are there, and we want to find them first.
• In-depth code analysis isn’t cheap.
– Automated tools are only a partial solution.
• You can’t take shortcuts.
– Even if the development team is good at testing, they can’t do an effective assessment of their own code.
Addressing these Issues
Try First Principles Vulnerability Assessment.
– A strategy that focuses on critical resources.
– A strategy that is not based on known vulnerabilities.
The goal is to integrate in-depth code reviews and remediation into the software development process.
– We have to be prepared to respond to the vulnerabilities we find.
Goal of FPVA
Understand a software system to focus search for security problems.
Find vulnerabilities.
Make the software more secure.
“A vulnerability is a defect or weakness in system security procedures, design, implementation, or internal controls that can be exercised and result in a security breach or violation of security policy.”
- Gary McGraw, Software Security
(i.e., a bad thing)
First Principles Vulnerability Assessment
Step 1: Architectural Analysis.
Step 2: Resource Identification.
Step 3: Trust & Privilege Analysis.
Step 4: Component Evaluation.
Step 5: Dissemination of Results.
Step 1: Architectural Analysis
• Attack Surface: Interactions with users.
• Functionality and structure of the system:
– Major components:
– Hosts.
– Processes.
– Threads.
– Communication channels.
• Interactions among components.
User Supplied Data
All attacks ultimately arise from user/attacker supplied input.
Key term: Attack surface
The interfaces available to the attacker.
Important to know all the places where the system gets user supplied input.
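Every such input point needs validation before use. For example, a service whose attack surface includes a user-supplied file name must canonicalize it first. A minimal sketch (the function and parameter names are illustrative, not from the talk):

```python
import os

def resolve_user_path(base_dir, user_path):
    """Map a user-supplied relative path to a file under base_dir,
    rejecting traversal attempts such as '../../etc/passwd'."""
    base = os.path.realpath(base_dir)
    candidate = os.path.realpath(os.path.join(base, user_path))
    # realpath() resolves '..' and symlinks, so a prefix check on the
    # canonical path is sufficient to confine access to base_dir.
    if candidate != base and not candidate.startswith(base + os.sep):
        raise ValueError("path escapes the allowed directory")
    return candidate
```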
Step 1: Architectural Analysis
• Create a detailed big picture view of the system.
• Document and diagram:
o What host/processes exist and their function.
o How users interact with them.
o How processes/threads interact with each other.
Step 1: Architectural Analysis
Need to understand:
• What the system does.
• How it works.
• What documentation exists:
o End-user.
o Internal design documents.
o Often incomplete, out-of-date, or wrong.
• The software supply chain (SCRM): What frameworks, libraries, and packages are used.
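A first step toward tracking that supply chain is simply enumerating what is installed. As one hedged sketch, in Python the standard library can inventory installed distributions (this is a starting point for a software bill of materials, not a substitute for one):

```python
from importlib import metadata

def installed_packages():
    """Return a {distribution-name: version} inventory of the Python
    packages installed in the current environment."""
    return {dist.metadata["Name"]: dist.version
            for dist in metadata.distributions()}
```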
Step 1: Architectural Analysis
[Diagram: Condor architecture. A Condor Submit Host runs master, schedd, submit, procd, and switchboard processes; a Condor Execute Host runs master, startd, starter, procd, switchboard, and the user job. Each process is annotated with the OS privileges it runs under (root, condor, or the submitting user). Numbered arrows show process creation (create procd, create schedd, create startd, create starter, create switchboard, exec procd, submit job, exec user job) and communication through named pipes.]
Step 2: Resource Identification
A resource is an object that is useful to a user of the system and is controlled by the system.
Key term: Impact surface
The resources reached by an attacker.
Documenting Resources
What resources exist in the system?
What executables/hosts control the resource?
What operations are allowed?
What does an attacker gaining access to the resource imply?
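One lightweight way to capture this inventory is a small record per resource, with one field per question above. A sketch (the field names and the Condor example row are illustrative):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Resource:
    name: str              # what the resource is
    controlled_by: str     # executable/host that controls it
    operations: List[str]  # operations that are allowed on it
    impact: str            # what attacker access would imply

# Hypothetical inventory entry, modeled on the Condor example.
inventory = [
    Resource(name="daemon config file",
             controlled_by="condor_master",
             operations=["read"],
             impact="attacker controls daemon behavior"),
]
```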
Step 2: Resource Identification
[Diagram: resources on a Condor Execute Host, including Condor config files, the switchboard config file, Condor binaries and libraries, operational data and run-time config files, spool and log directories, Condor and procd log files, procd named pipes, the job execution root directory (execute), and per-user directories (user 1 … user N), each annotated with the OS privileges (root, condor, user) that control it.]
Step 3: Trust & Privilege Analysis
Privilege level at which each component runs.
How resources are protected and who can access them.
Trust delegation.
Privilege
Privilege is the authorization for a user to perform an operation on a resource:
• What privileges exist in the system?
• Do they map appropriately to operations on resources?
• Are they fine grained enough?
• How are they enforced?
• Does the least privilege principle apply?
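Least privilege applies down to individual resources. As a small sketch (the function name is ours): create a secrets file that only its owner can access, setting the permission atomically at creation rather than fixing it afterward.

```python
import os

def create_private_file(path):
    """Create `path` readable and writable only by its owner (0600).
    Setting the mode at creation avoids the window where a chmod()
    after open() would briefly expose the file; O_EXCL refuses to
    reuse a file an attacker may have pre-created."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
    os.close(fd)
```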
Step 4: Component Evaluation
Examine critical components in depth.
Guide search using:
• Diagrams from steps 1-3.
• Knowledge of vulnerabilities.
Focusing the Search
It's impossible to completely analyze a system for vulnerabilities.
Start from critical resources and try to think of ways an attack could be realized.
Start from places in the code where vulnerabilities can occur and trace how they reach resources.
Look for similar problems to prior security problems.
Categories of Vulnerabilities
Design Flaws:
Problems inherent in the design.
Hard to automate discovery.
Implementation Bugs:
Improper use of the programming language, or of a library API.
Localized in the code.
Operational vulnerabilities:
Configuration or environment.
Social Engineering:
Valid users tricked into attacking.
(These categories occur about equally often.)
Many Types of Vulnerabilities
• Buffer overflows
• Injection attacks
– Command injection
– SQL injection
• Cross-site scripting (XSS)
• Directory traversal
• Integer vulnerabilities
• Race conditions
• Not properly dropping privilege
• Insecure permissions
• Denial of service
• Information leaks
• Lack of integrity checks
• Lack of authentication
• Lack of authorization
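To make one of these concrete, consider SQL injection. A minimal sketch using Python's built-in sqlite3 module (the table and function names are ours):

```python
import sqlite3

def find_user(conn, username):
    # Parameterized query: the driver treats `username` strictly as
    # data, so input like "x' OR '1'='1" cannot change the SQL.
    cur = conn.execute("SELECT name FROM users WHERE name = ?",
                       (username,))
    return [row[0] for row in cur]

def find_user_vulnerable(conn, username):
    # For contrast only: string formatting lets crafted input rewrite
    # the query, matching every row instead of none.
    cur = conn.execute("SELECT name FROM users WHERE name = '%s'"
                       % username)
    return [row[0] for row in cur]
```

With a single user `alice` in the table, `find_user` returns nothing for the input `x' OR '1'='1`, while `find_user_vulnerable` returns every user.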
Step 5: Dissemination of Results
• Report vulnerabilities.
• Interaction with developers.
• Re-assessment of the problematic code.
• Disclosure of vulnerabilities.
Vulnerability Reports
One report per vulnerability.
Provide enough information for developers to reproduce.
Suggest mitigations.
Written so that an abstracted version of the report is still useful to users without revealing enough information to easily construct an attack.
Dissemination of Results
Vulnerability Report Items
Summary.
Affected version(s) and platform.
Fixed version(s).
Availability - is it publicly known or being exploited?
Access required - what type of access does an attacker require: local/remote host? Authenticated? Special privileges?
Effort required (low/med/high) - what skill level is needed and what is the probability of success?
Vulnerability Report Items
Impact/Consequences (low/med/high) - how does it affect the system: a minor information leak is low, gaining root access on the host is high.
Only in full report:
Full details - full description of vulnerability and how to exploit it.
Cause - root problem that allows it.
Proposed fix - proposal to eliminate problem.
Actual fix - how it was fixed.
Vulnerability Disclosure Process
First disclose vulnerability reports to developers.
Allows developers to mitigate problems.
Here’s the really hard part:
• Publish abstract disclosures in cooperation with developers. When?
• Publish full disclosures in cooperation with developers. When?
• More complex when dealing with open source software.
The Assessee Experience
The Assessee Side
Don’t panic! Have a plan.
Plan what, how, when, and to whom to announce.
Plan how to fix, and what versions.
Separate security release or combine with other changes?
Allow time for users to upgrade.
Session Objectives
What to expect:
• Getting started – there are many reasons to say “no”.
• The vulnerability assessment process –what makes our life easy or difficult.
• When the first vulnerability reports come in – what do you do?
Remember that we’re on your side.
Just say “no”.
(Nancy Reagan)
There are Lots of Reasons to Say No
Even the best programmer makes mistakes.
• The interaction between perfect components often can be imperfect: falling between the cracks.
• Even in the best of cases, such assurances hold only with formal specification and verification.
“We use best practices in secure software design, so such an effort is redundant.”
There's many a slip ‘twixt cup and lip…(old English proverb based on Erasmus)
There are Lots of Reasons to Say No
Yes, it is expensive.
And, yes, if you are successful, you will only see an expense.
However the cost to recover after a serious exploit is prohibitive.
The best defense is a good offense.
(old sports adage)
The only real defense is active defense.
(Mao)
“It’s too expensive.”
There are Lots of Reasons to Say No
Tools like Coverity and SpotBugs are worthwhile to use…
…however, don’t let them give you a false sense of security. Our study demonstrates their significant weaknesses:
J.A. Kupsch and B.P. Miller, “Manual vs. Automated Vulnerability Assessment: A Case Study”, First International Workshop on Managing Insider Security Threats, West Lafayette, IN, June 2009.
The era of procrastination, of half-measures, of soothing and baffling expedients, of delays is coming to its close. In its place we are entering a period of consequences.
(Winston Churchill, August 1941)
“I’ll just run some automatic tools.”
There are Lots of Reasons to Say No
All software has bugs.
If a project isn’t reporting the bugs, either they are not checking or not telling.
Our experience shows that users (and funding agencies) are more confident when you are checking and reporting.
A life spent making mistakes is not only more honorable, but more useful than a life spent doing nothing.
George Bernard Shaw (1856 - 1950)
“If we report bugs in our software, we will look incompetent.”
And the assessment team arrives…
During the Assessment
What makes our job harder:
• Incomplete or out-of-date documentation.
• Complex installation procedures, especially ones that are not portable and require weird configuration file magic.
• Lack of access to full source code.
• Lack of access to development team.
During the Assessment
What you can expect from us:
• Our assessments follow the FPVA methodology.
• We work independently: crucial for an unbiased assessment.
• We will ask you lots of questions.
• It will take longer than you think…
… we don’t report a vulnerability until we can construct an exploit.
And then the vulnerabilities arrive…
How do You Respond?
When In Danger, When In Doubt, Run In Circles, Scream And Shout
How do You Respond? (really)
• Denial: “That’s just not possible in our code!”
• Anger: “Why didn’t you tell me it could be so bad?!”
• Bargaining: “We don’t have to tell anyone, do we?”
• Depression: “We’re screwed. No one will use our software and our funding agencies will cut us off.”
• Acceptance: “Let’s figure out how to fix this.”
How do You Respond?
Identify a team member to handle vulnerability reports.
Develop a remediation strategy:
Study the vulnerability report.
Use your knowledge of the system to try to identify other places in the code where this might exist.
Study suggested remediation and formulate your response.
Get feedback from the assessment team on your fix – very important for the first few vulnerabilities.
• Develop a security patch release mechanism.
This mechanism must be separate from your feature/upgrade release mechanism.
You may have to target patches for more than one version.
How do You Respond?
Develop a notification strategy:
Who and what will you tell and when?
Users are nervous during the first reports, but then become your biggest fans.
Often a staged process:
1. Announce the vulnerability, without details at the time you release the patch.
2. Release full details after the user community has had a chance to update, perhaps 3-6 months later.
Open source makes this more complicated!
The first release of the patch reveals the details of the vulnerability.
How do You Respond?
A change of culture within the development team:
When security becomes a first-class task, and when reports start arriving, awareness is significantly increased.
This affects the way developers look at code and the way that they write code.
A major landmark: when your developers start reporting vulnerabilities that they’ve found on their own.
Study of scientific data security concerns and practices
The Trustworthy Data Working Group invites scientific researchers and the cyberinfrastructure professionals who support them to complete a short survey about scientific data security concerns and practices. Accepting responses until May 31st.
trustedci.org/trustworthy-data-survey
About the Trusted CI Webinar series
To view presentations, join the announcements mailing list, or submit requests