A Logic of Secure Systems with Tunable Adversary Models
Jason Franklin, with Anupam Datta, Deepak Garg, Dilsun Kaynar
CyLab, Carnegie Mellon University

Feb 22, 2016
Transcript
Page 1: A Logic of Secure Systems with Tunable Adversary Models

A Logic of Secure Systems with Tunable Adversary Models

Jason Franklin, with Anupam Datta, Deepak Garg, Dilsun Kaynar

CyLab, Carnegie Mellon University

Page 2: A Logic of Secure Systems with Tunable Adversary Models

Motivation: Secure Access to Financial Data

Goal: an end-to-end trusted path in the presence of local and network adversaries.

Page 3: A Logic of Secure Systems with Tunable Adversary Models

Secure System Designs

Security properties and adversaries (labels from the slide's diagrams):

- The secure system (BIOS, OS, web server) maintains the integrity of OS and web server code; adversary: a malicious thread.
- Communication between frames remains confidential; adversary: a malicious frame & server.
- The VMM maintains the confidentiality and integrity of data stored in honest VMs; adversary: a malicious virtual machine.

Page 4: A Logic of Secure Systems with Tunable Adversary Models

Logic-based Analysis of System Security

Inputs: a security property, a secure system, and a formal model of the adversary, where the adversary is defined by a set of capabilities. The analysis engine produces either a proof of the security property or identifies an attack.

A. Datta, J. Franklin, D. Garg, D. Kaynar. A Logic of Secure Systems and its Application to Trusted Computing. Oakland '09.

Page 5: A Logic of Secure Systems with Tunable Adversary Models

Method

- Secure system: modeled as a set of programs in a concurrent programming language containing primitives relevant to secure systems: cryptography, network communication, shared memory, access control, machine resets, dynamic code loading.
- Security property: specified as logical formulas in the Logic of Secure Systems (LS2).
- Adversary model: any set of programs running concurrently with the system.
- Analysis engine: a sound proof system for LS2.

Page 6: A Logic of Secure Systems with Tunable Adversary Models

Adversary Model

Adversary capabilities:

- Local process on a machine: e.g., change unprotected code and data, steal secrets, reset machines. In general, constrained only by system interfaces.
- Network adversary: e.g., create, read, delete, and inject messages. More later in Arnab Roy's talk.

These capabilities enable many common attacks:

- Network protocol attacks: freshness, MITM.
- Local systems attacks: TOCTTOU and other race conditions, violations of code integrity and of data confidentiality and integrity.
- Combinations of network and system attacks, e.g., web attacks.
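The TOCTTOU races named above can be made concrete with a small sketch. This is illustrative only; the shared-memory model and all names are invented for the example, not taken from the talk:

```python
import hashlib

# Shared "memory": code that will be measured, then later executed.
memory = {"code": "return 'good'"}

def measure(addr: str) -> str:
    # Time of check: hash the code currently in memory.
    return hashlib.sha256(memory[addr].encode()).hexdigest()

def adversary_write(addr: str, payload: str) -> None:
    # A local adversary thread may run between check and use.
    memory[addr] = payload

def load_and_run(addr: str) -> str:
    # Time of use: "execute" whatever is in memory *now*.
    return eval(memory[addr].replace("return ", ""))

digest = measure("code")                  # time of check
adversary_write("code", "return 'evil'")  # interleaved adversary action
result = load_and_run("code")             # time of use: runs unmeasured code
# The recorded digest no longer describes the code that actually ran.
```

The gap between `measure` and `load_and_run` is exactly the window a concurrently-executing local adversary exploits.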

Page 7: A Logic of Secure Systems with Tunable Adversary Models

Application: Case Study of the Trusted Computing Platform

- TCG specifications are an industry and ISO/IEC standard with over 100 million deployments. Applications include Microsoft's BitLocker and HP's ProtectTools.
- Formal model of parts of the TPM co-processor.
- First logical security proofs of two attestation protocols.

Results of analysis:

- Previously unknown incompatibility between the protocols (they cannot be used together without additional protection).
- 2 new weaknesses.
- Previously known TOCTTOU attacks [GCB+ (Oakland '06), SPD (Oakland '05)].
- Principled source code audit.

Page 8: A Logic of Secure Systems with Tunable Adversary Models

TCG Remote Attestation

Remote Verifier to Client: "Describe your software stack!"

Why should the client's answer be trusted?

Page 9: A Logic of Secure Systems with Tunable Adversary Models

Trusted Computing Platform Components

(Diagram: a Client containing a Trusted Platform Module (TPM) with a PCR, checked by a Remote Verifier.)

- TPM: co-processor for cryptographic operations; an industry standard developed by the Trusted Computing Group.
- Protected private key (AIK).
- PCR: append-only log; set to BOL on reset.
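The PCR's append-only behavior comes from hash chaining: each measurement is folded into the register rather than stored literally. A minimal sketch, assuming TPM 1.2 conventions (SHA-1, a 20-byte register, and an all-zero reset value standing in for BOL):

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    # TCG-style extend: new PCR = H(old PCR || H(measurement)).
    return hashlib.sha1(pcr + hashlib.sha1(measurement).digest()).digest()

BOL = b"\x00" * 20  # PCR reset value (stands in for BOL)

# Measure a boot chain; order matters, and nothing can be removed.
pcr = BOL
for component in [b"BIOS", b"bootloader", b"OS", b"web server"]:
    pcr = extend(pcr, component)

# A verifier who knows the expected components can recompute the
# same value and compare it against a signed quote of the PCR.
```

Because the old PCR value is an input to every extend, the final value commits to the entire sequence of measurements, which is what makes the log append-only.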

Page 10: A Logic of Secure Systems with Tunable Adversary Models

Dynamic Root of Trust for Measurement (DRTM)

(Diagram: an application (APP) on the Client invokes latelaunch; a program P then runs in an isolated environment. The TPM's dynamic PCR records BOL, P, the verifier's Nonce, and EOL; the Remote Verifier checks the result.)
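Assuming TPM 1.2-style SHA-1 extends and a zeroed dynamic PCR at late launch (assumptions for illustration, as are the marker names and function names below), the verifier's check of a DRTM quote can be sketched as follows:

```python
import hashlib
import os

def extend(pcr: bytes, measurement: bytes) -> bytes:
    # TCG-style extend: new PCR = H(old PCR || H(measurement)).
    return hashlib.sha1(pcr + hashlib.sha1(measurement).digest()).digest()

BOL = b"\x00" * 20  # dynamic PCR value right after late launch
EOL = b"EOL"        # end-of-launch marker extended by P before exiting

def verify_drtm(quoted_pcr: bytes, expected_P: bytes, nonce: bytes) -> bool:
    # Recompute what the dynamic PCR must contain if exactly the
    # expected program P ran in isolation with our fresh nonce.
    pcr = extend(BOL, expected_P)   # latelaunch measures P
    pcr = extend(pcr, nonce)        # P extends the verifier's nonce
    pcr = extend(pcr, EOL)          # P signals the end of its run
    return pcr == quoted_pcr

# Honest run: the client performs the same sequence of extends.
nonce = os.urandom(20)
honest_pcr = extend(extend(extend(BOL, b"P code"), nonce), EOL)
assert verify_drtm(honest_pcr, b"P code", nonce)
```

A fresh nonce per run rules out replay; a quote with the wrong program hash or a stale nonce fails the comparison.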

Page 11: A Logic of Secure Systems with Tunable Adversary Models

Security Skeleton of DRTM in LS2

(Diagram of interacting components: Remote Verifier, Operating System, Co-processor, Late Launch, Protected Program.)

Abstraction: the security skeleton models only security-relevant operations.

Page 12: A Logic of Secure Systems with Tunable Adversary Models

Challenge: Dynamic Code Loading

(Same diagram: Remote Verifier, Operating System, Co-processor, Late Launch, Protected Program P.)

Abstraction: the security skeleton models only security-relevant operations.

Typically, programs are proved correct assuming their code is known at the time of invocation. Reasoning about the security of dynamically loaded, unknown code requires a separate technique to identify the code of P.

What is P?

Page 13: A Logic of Secure Systems with Tunable Adversary Models

Proof of DRTM Security Property

(Proof timeline from the slide: NonceGenerated at tN, Jump P at tC, Eval f at tE, VerifierFinishes at te.)

Page 14: A Logic of Secure Systems with Tunable Adversary Models

Refining Trust Requirements between Systems

(Same DRTM diagram as before: APP invokes latelaunch; P runs in an isolated environment; the dynamic PCR records BOL, P, Nonce, EOL; the Remote Verifier checks.)

- P is provided by the application (APP).
- P has full access to the machine.
- What if P is malicious?

Page 15: A Logic of Secure Systems with Tunable Adversary Models

Backwards Incompatibility

(Diagram: the SRTM chain BL → OS → APP is measured into the static PCR as BOL, H(BL), H(OS), H(APP), while DRTM runs an SLB and P in an isolated environment; the Remote Verifier checks a signature over logs showing H(APP1), H(APP2).)

- The verifier believes (incorrectly) that APP1 was loaded on the Client.
- Insecure composition of DRTM and SRTM (not modular).

Page 16: A Logic of Secure Systems with Tunable Adversary Models

Principled Source Code Auditing

    int slb_dowork(unsigned long params) {
        unsigned char buffer[34], buffer2[34];
        if (slb_prepare_tpm() < 0) {
            goto tpm_error;
        }
        pal_enter((void *)params);
        memset(buffer2, 0x00, 20); /* Extend("bottom") */
        slb_TPM_Extend(buffer, 17, buffer2);
    tpm_error:
        build_page_tables();
        return 0;
    }

Toward secure refinement from design to code:

- Correspondence between system design and implementation.
- A small TCB aids the correspondence check (Flicker ~250 LOC) + abstractions.

Page 17: A Logic of Secure Systems with Tunable Adversary Models

In-progress Work: Towards an Interface-based Theory of System Security

(Diagram: an adversary outside a trust boundary drawn around the hardware and operating system.)

Page 18: A Logic of Secure Systems with Tunable Adversary Models

Tunable Adversary Models

- LS2 has a fixed adversary: a local, concurrently-executing malicious thread.
- Can we extend LS2 with tunable adversaries?
- Consider an adversary that is Constrained to System Interfaces (CSI): the adversary can interleave interface calls, combine outputs, and compose interfaces to produce new interfaces.
- The DRTM-adversary has the interface <skinit, extend, …, write>.

(Diagram: the adversary sits outside a trust boundary drawn around the hardware and operating system.)
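A CSI adversary of this kind can be sketched as a program that may call only the exposed interface, in any order. Everything below is an illustrative toy: the interface bodies are invented stand-ins, not the TPM's real semantics:

```python
from itertools import permutations

class DRTMInterface:
    """Toy system exposing only <skinit, extend, write> to the adversary."""
    def __init__(self):
        self.pcr = ["BOL"]
        self.mem = {}
    def skinit(self):
        self.pcr = ["BOL"]          # late launch resets the dynamic PCR
    def extend(self, v):
        self.pcr = self.pcr + [v]   # append-only measurement log
    def write(self, addr, v):
        self.mem[addr] = v          # unprotected memory is fair game

# A CSI adversary is any sequence (interleaving) of interface calls.
def run(calls):
    sys = DRTMInterface()
    for name, args in calls:
        getattr(sys, name)(*args)
    return sys

# Enumerate every ordering of a small call budget to see which
# PCR states the adversary can reach.
budget = [("skinit", ()), ("extend", ("evil",)), ("write", (0, "x"))]
reachable = {tuple(run(list(order)).pcr) for order in permutations(budget)}
```

Tuning the adversary then amounts to changing which methods the class exposes; the analysis question is what states remain unreachable under a given interface.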

Page 19: A Logic of Secure Systems with Tunable Adversary Models

Qualitative Comparison of Security

Comparing adversary models: given two S-system, interface-specified adversaries S-Adv1 and S-Adv2, is S-Adv2 more powerful than S-Adv1?

(Diagram: S-Adv1 and S-Adv2 both outside the trust boundary around the hardware and operating system.)
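One coarse, sufficient test for that question (an illustration, not the talk's definition) compares the adversaries' interfaces directly: if every call available to S-Adv1 is also available to S-Adv2, then anything S-Adv1 achieves by interleaving calls, S-Adv2 can achieve too:

```python
def at_least_as_powerful(adv2_iface: set, adv1_iface: set) -> bool:
    # Sufficient condition: adv2's interface subsumes adv1's.
    # (Not necessary: adv2 might simulate a missing call by
    # composing the calls it does have.)
    return adv1_iface <= adv2_iface

# Hypothetical interfaces for illustration.
drtm_adv = {"skinit", "extend", "write"}
net_adv = {"send", "receive"}
combined = drtm_adv | net_adv

assert at_least_as_powerful(combined, drtm_adv)
assert not at_least_as_powerful(net_adv, drtm_adv)
```

Interface inclusion gives only a partial order; incomparable interfaces (like the local and network adversaries above) need a finer, behavior-based comparison.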

Page 20: A Logic of Secure Systems with Tunable Adversary Models

Other Scientific Questions

Comparing systems: a system S1 is at least as secure as a system S2 … if ∀ S2-Adv. S2 is secure ⇒ ∀ S1-Adv. S1 is secure.

Modularity: if system S1 is secure against adversary S1-Adv and system S2 is secure against adversary S2-Adv, how can we reason modularly about the security of system S1||S2 against adversary S1-Adv||S2-Adv?

Page 21: A Logic of Secure Systems with Tunable Adversary Models

Conclusion

- A logic for reasoning about secure systems.
- Analysis of trusted computing attestation protocols:
  - Formal model of parts of the TPM co-processor.
  - First logical security proofs of two attestation protocols (SRTM and DRTM).
- Analysis identifies:
  - Previously known TOCTTOU attacks on SRTM [GCB+ (Oakland '06), SPD (Oakland '05)].
  - Previously unknown incompatibility between SRTM and DRTM (they cannot be used together without additional protection).
- In-progress work includes interface-based modeling and analysis.

Themes: adversary models, modular verification, secure refinement, design for verification.

Page 22: A Logic of Secure Systems with Tunable Adversary Models

Work Related to LS2

- Work on network protocol analysis: BAN, …, Protocol Composition Logic (PCL). An inspiration for LS2, but limited to protocols only; LS2 adds a model of local computation and a local adversary.
- Work on program correctness (no adversaries):
  - Concurrent Separation Logic: synchronization through locks is similar.
  - Higher-order extensions of Hoare Logic: the code being called has to be known in advance.
  - Temporal, dynamic logic: similar goals, different formal treatment.
- Formal analysis of trusted computing platforms: primarily using model checking.