
Foundations of Cryptography Volume 2, Basic Applications


This book offers an introduction to and an extensive survey of each of the three areas it treats: encryption, signatures, and general cryptographic protocols. It presents both the basic notions and the most important (and sometimes advanced) results. The presentation is focused on the essentials and does not elaborate on details. In some cases it offers a novel and illuminating perspective.
  • Foundations of Cryptography

    Cryptography is concerned with the conceptualization, definition, and construction of computing systems that address security concerns. The design of cryptographic systems must be based on firm foundations. Foundations of Cryptography presents a rigorous and systematic treatment of foundational issues: defining cryptographic tasks and solving new cryptographic problems using existing tools. The emphasis is on the clarification of fundamental concepts and on demonstrating the feasibility of solving several central cryptographic problems, as opposed to describing ad hoc approaches.

    This second volume contains a rigorous treatment of three basic applications: encryption, signatures, and general cryptographic protocols. It builds on the previous volume, which provides a treatment of one-way functions, pseudorandomness, and zero-knowledge proofs. It is suitable for use in a graduate course on cryptography and as a reference book for experts. The author assumes basic familiarity with the design and analysis of algorithms; some knowledge of complexity theory and probability is also useful.

    Oded Goldreich is Professor of Computer Science at the Weizmann Institute of Science and incumbent of the Meyer W. Weisgal Professorial Chair. An active researcher, he has written numerous papers on cryptography and is widely considered to be one of the world experts in the area. He is an editor of Journal of Cryptology and SIAM Journal on Computing and the author of Modern Cryptography, Probabilistic Proofs and Pseudorandomness.

  • Foundations of Cryptography

    II Basic Applications

    Oded Goldreich
    Weizmann Institute of Science

  • CAMBRIDGE UNIVERSITY PRESS

    Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore, São Paulo, Delhi

    Cambridge University Press

    The Edinburgh Building, Cambridge CB2 8RU, UK

    Published in the United States of America by Cambridge University Press, New York

    www.cambridge.org

    Information on this title: www.cambridge.org/9780521119917

    © Oded Goldreich 2004

    This publication is in copyright. Subject to statutory exception

    and to the provisions of relevant collective licensing agreements,

    no reproduction of any part may take place without the written

    permission of Cambridge University Press.

    First published 2004

    This digitally printed version 2009

    A catalogue record for this publication is available from the British Library

    ISBN 978-0-521-83084-3 hardback

    ISBN 978-0-521-11991-7 paperback

  • To Dana

  • Contents

    II Basic Applications

    List of Figures page xi
    Preface xiii
    Acknowledgments xxi

    5 Encryption Schemes 373

    5.1. The Basic Setting 374
        5.1.1. Private-Key Versus Public-Key Schemes 375
        5.1.2. The Syntax of Encryption Schemes 376

    5.2. Definitions of Security 378
        5.2.1. Semantic Security 379
        5.2.2. Indistinguishability of Encryptions 382
        5.2.3. Equivalence of the Security Definitions 383
        5.2.4. Multiple Messages 389
        5.2.5.* A Uniform-Complexity Treatment 394

    5.3. Constructions of Secure Encryption Schemes 403
        5.3.1.* Stream-Ciphers 404
        5.3.2. Preliminaries: Block-Ciphers 408
        5.3.3. Private-Key Encryption Schemes 410
        5.3.4. Public-Key Encryption Schemes 413

    5.4.* Beyond Eavesdropping Security 422
        5.4.1. Overview 422
        5.4.2. Key-Dependent Passive Attacks 425
        5.4.3. Chosen Plaintext Attack 431
        5.4.4. Chosen Ciphertext Attack 438
        5.4.5. Non-Malleable Encryption Schemes 470

    5.5. Miscellaneous 474
        5.5.1. On Using Encryption Schemes 474
        5.5.2. On Information-Theoretic Security 476
        5.5.3. On Some Popular Schemes 477


        5.5.4. Historical Notes 478
        5.5.5. Suggestions for Further Reading 480
        5.5.6. Open Problems 481
        5.5.7. Exercises 481

    6 Digital Signatures and Message Authentication 497

    6.1. The Setting and Definitional Issues 498
        6.1.1. The Two Types of Schemes: A Brief Overview 498
        6.1.2. Introduction to the Unified Treatment 499
        6.1.3. Basic Mechanism 501
        6.1.4. Attacks and Security 502
        6.1.5.* Variants 505

    6.2. Length-Restricted Signature Scheme 507
        6.2.1. Definition 507
        6.2.2. The Power of Length-Restricted Signature Schemes 508
        6.2.3.* Constructing Collision-Free Hashing Functions 516

    6.3. Constructions of Message-Authentication Schemes 523
        6.3.1. Applying a Pseudorandom Function to the Document 523
        6.3.2.* More on Hash-and-Hide and State-Based MACs 531

    6.4. Constructions of Signature Schemes 537
        6.4.1. One-Time Signature Schemes 538
        6.4.2. From One-Time Signature Schemes to General Ones 543
        6.4.3.* Universal One-Way Hash Functions and Using Them 560

    6.5.* Some Additional Properties 575
        6.5.1. Unique Signatures 575
        6.5.2. Super-Secure Signature Schemes 576
        6.5.3. Off-Line/On-Line Signing 580
        6.5.4. Incremental Signatures 581
        6.5.5. Fail-Stop Signatures 583

    6.6. Miscellaneous 584
        6.6.1. On Using Signature Schemes 584
        6.6.2. On Information-Theoretic Security 585
        6.6.3. On Some Popular Schemes 586
        6.6.4. Historical Notes 587
        6.6.5. Suggestions for Further Reading 589
        6.6.6. Open Problems 590
        6.6.7. Exercises 590

    7 General Cryptographic Protocols 599

    7.1. Overview 600
        7.1.1. The Definitional Approach and Some Models 601
        7.1.2. Some Known Results 607
        7.1.3. Construction Paradigms 609


    7.2.* The Two-Party Case: Definitions 615
        7.2.1. The Syntactic Framework 615
        7.2.2. The Semi-Honest Model 619
        7.2.3. The Malicious Model 626

    7.3.* Privately Computing (Two-Party) Functionalities 634
        7.3.1. Privacy Reductions and a Composition Theorem 636
        7.3.2. The OT^k_1 Protocol: Definition and Construction 640
        7.3.3. Privately Computing c1 + c2 = (a1 + a2) · (b1 + b2) 643
        7.3.4. The Circuit Evaluation Protocol 645

    7.4.* Forcing (Two-Party) Semi-Honest Behavior 650
        7.4.1. The Protocol Compiler: Motivation and Overview 650
        7.4.2. Security Reductions and a Composition Theorem 652
        7.4.3. The Compiler: Functionalities in Use 657
        7.4.4. The Compiler Itself 681

    7.5.* Extension to the Multi-Party Case 693
        7.5.1. Definitions 694
        7.5.2. Security in the Semi-Honest Model 701
        7.5.3. The Malicious Models: Overview and Preliminaries 708
        7.5.4. The First Compiler: Forcing Semi-Honest Behavior 714
        7.5.5. The Second Compiler: Effectively Preventing Abort 729

    7.6.* Perfect Security in the Private Channel Model 741
        7.6.1. Definitions 742
        7.6.2. Security in the Semi-Honest Model 743
        7.6.3. Security in the Malicious Model 746

    7.7. Miscellaneous 747
        7.7.1.* Three Deferred Issues 747
        7.7.2.* Concurrent Executions 752
        7.7.3. Concluding Remarks 755
        7.7.4. Historical Notes 756
        7.7.5. Suggestions for Further Reading 757
        7.7.6. Open Problems 758
        7.7.7. Exercises 759

    Appendix C: Corrections and Additions to Volume 1 765

    C.1. Enhanced Trapdoor Permutations 765
    C.2. On Variants of Pseudorandom Functions 768
    C.3. On Strong Witness Indistinguishability 768
        C.3.1. On Parallel Composition 769
        C.3.2. On Theorem 4.6.8 and an Afterthought 770
        C.3.3. Consequences 771

    C.4. On Non-Interactive Zero-Knowledge 772
        C.4.1. On NIZKs with Efficient Prover Strategies 772
        C.4.2. On Unbounded NIZKs 773
        C.4.3. On Adaptive NIZKs 774


    C.5. Some Developments Regarding Zero-Knowledge 775
        C.5.1. Composing Zero-Knowledge Protocols 775
        C.5.2. Using the Adversary's Program in the Proof of Security 780

    C.6. Additional Corrections and Comments 783
    C.7. Additional Mottoes 784

    Bibliography 785
    Index 795

    Note: Asterisks indicate advanced material.


  • List of Figures

    0.1 Organization of this work page xvi
    0.2 Rough organization of this volume xvii
    0.3 Plan for one-semester course on Foundations of Cryptography xviii
    5.1 Private-key encryption schemes: an illustration 375
    5.2 Public-key encryption schemes: an illustration 376
    6.1 Message-authentication versus signature schemes 500
    6.2 Collision-free hashing via block-chaining (for t = 7) 519
    6.3 Collision-free hashing via tree-chaining (for t = 8) 522
    6.4 Authentication-trees: the basic authentication step 546
    6.5 An authentication path for nodes 010 and 011 547
    7.1 Secure protocols emulate a trusted party: an illustration 601
    7.2 The functionalities used in the compiled protocol 658
    7.3 Schematic depiction of a canonical protocol 690


  • Preface

    It is possible to build a cabin with no foundations, but not a lasting building.

    Eng. Isidor Goldreich (1906–1995)

    Cryptography is concerned with the construction of schemes that withstand any abuse. Such schemes are constructed so as to maintain a desired functionality, even under malicious attempts aimed at making them deviate from their prescribed functionality.

    The design of cryptographic schemes is a very difficult task. One cannot rely on intuitions regarding the typical state of the environment in which the system operates. For sure, the adversary attacking the system will try to manipulate the environment into untypical states. Nor can one be content with countermeasures designed to withstand specific attacks because the adversary (which acts after the design of the system is completed) will try to attack the schemes in ways that are typically different from the ones envisioned by the designer. The validity of the foregoing assertions seems self-evident; still, some people hope that in practice, ignoring these tautologies will not result in actual damage. Experience shows that these hopes rarely come true; cryptographic schemes based on make-believe are broken, typically sooner than later.

    In view of these assertions, we believe that it makes little sense to make assumptions regarding the specific strategy that the adversary may use. The only assumptions that can be justified refer to the computational abilities of the adversary. Furthermore, it is our opinion that the design of cryptographic systems has to be based on firm foundations, whereas ad hoc approaches and heuristics are a very dangerous way to go. A heuristic may make sense when the designer has a very good idea about the environment in which a scheme is to operate, yet a cryptographic scheme has to operate in a maliciously selected environment that typically transcends the designer's view.

    This work is aimed at presenting firm foundations for cryptography. The foundations of cryptography are the paradigms, approaches, and techniques used to conceptualize, define, and provide solutions to natural security concerns. We will present some of these paradigms, approaches, and techniques, as well as some of the fundamental results obtained using them.


    Our emphasis is on the clarification of fundamental concepts and on demonstrating the feasibility of solving several central cryptographic problems.

    Solving a cryptographic problem (or addressing a security concern) is a two-stage process consisting of a definitional stage and a constructive stage. First, in the definitional stage, the functionality underlying the natural concern is to be identified, and an adequate cryptographic problem has to be defined. Trying to list all undesired situations is infeasible and prone to error. Instead, one should define the functionality in terms of operation in an imaginary ideal model, and require a candidate solution to emulate this operation in the real, clearly defined model (which specifies the adversary's abilities). Once the definitional stage is completed, one proceeds to construct a system that satisfies the definition. Such a construction may use some simpler tools, and its security is proven relying on the features of these tools. In practice, of course, such a scheme may also need to satisfy some specific efficiency requirements.

    This work focuses on several archetypical cryptographic problems (e.g., encryption and signature schemes) and on several central tools (e.g., computational difficulty, pseudorandomness, and zero-knowledge proofs). For each of these problems (resp., tools), we start by presenting the natural concern underlying it (resp., its intuitive objective), then define the problem (resp., tool), and finally demonstrate that the problem may be solved (resp., the tool can be constructed). In the last step, our focus is on demonstrating the feasibility of solving the problem, not on providing a practical solution. As a secondary concern, we typically discuss the level of practicality (or impracticality) of the given (or known) solution.

    Computational Difficulty

    The specific constructs mentioned earlier (as well as most constructs in this area) can exist only if some sort of computational hardness exists. Specifically, all these problems and tools require (either explicitly or implicitly) the ability to generate instances of hard problems. Such ability is captured in the definition of one-way functions (see further discussion in Section 2.1). Thus, one-way functions are the very minimum needed for doing most sorts of cryptography. As we shall see, one-way functions actually suffice for doing much of cryptography (and the rest can be done by augmentations and extensions of the assumption that one-way functions exist).

    Our current state of understanding of efficient computation does not allow us to prove that one-way functions exist. In particular, the existence of one-way functions implies that NP is not contained in BPP ⊇ P (not even on the average), which would resolve the most famous open problem of computer science. Thus, we have no choice (at this stage of history) but to assume that one-way functions exist. As justification for this assumption, we may only offer the combined beliefs of hundreds (or thousands) of researchers. Furthermore, these beliefs concern a simply stated assumption, and their validity follows from several widely believed conjectures that are central to various fields (e.g., the conjecture that factoring integers is hard is central to computational number theory).


    Since we need assumptions anyhow, why not just assume what we want (i.e., the existence of a solution to some natural cryptographic problem)? Well, first we need to know what we want: As stated earlier, we must first clarify what exactly we want; that is, we must go through the typically complex definitional stage. But once this stage is completed, can we just assume that the definition derived can be met? Not really. Once a definition is derived, how can we know that it can be met at all? The way to demonstrate that a definition is viable (and so the intuitive security concern can be satisfied at all) is to construct a solution based on a better-understood assumption (i.e., one that is more common and widely believed). For example, looking at the definition of zero-knowledge proofs, it is not a priori clear that such proofs exist at all (in a non-trivial sense). The non-triviality of the notion was first demonstrated by presenting a zero-knowledge proof system for statements regarding Quadratic Residuosity that are believed to be hard to verify (without extra information). Furthermore, contrary to prior beliefs, it was later shown that the existence of one-way functions implies that any NP-statement can be proven in zero-knowledge. Thus, facts that were not at all known to hold (and were even believed to be false) were shown to hold by reduction to widely believed assumptions (without which most of modern cryptography collapses anyhow). To summarize, not all assumptions are equal, and so reducing a complex, new, and doubtful assumption to a widely believed simple (or even merely simpler) assumption is of great value. Furthermore, reducing the solution of a new task to the assumed security of a well-known primitive typically means providing a construction that, using the known primitive, solves the new task. This means that we not only know (or assume) that the new task is solvable but also have a solution based on a primitive that, being well known, typically has several candidate implementations.

    Structure and Prerequisites

    Our aim is to present the basic concepts, techniques, and results in cryptography. As stated earlier, our emphasis is on the clarification of fundamental concepts and the relationship among them. This is done in a way independent of the particularities of some popular number-theoretic examples. These particular examples played a central role in the development of the field and still offer the most practical implementations of all cryptographic primitives, but this does not mean that the presentation has to be linked to them. On the contrary, we believe that concepts are best clarified when presented at an abstract level, decoupled from specific implementations. Thus, the most relevant background for this work is provided by basic knowledge of algorithms (including randomized ones), computability, and elementary probability theory. Background on (computational) number theory, which is required for specific implementations of certain constructs, is not really required here (yet a short appendix presenting the most relevant facts is included in the first volume so as to support the few examples of implementations presented here).

    Organization of the Work. This work is organized in two parts (see Figure 0.1): Basic Tools and Basic Applications. The first volume (i.e., [108]) contains an introductory chapter as well as the first part (Basic Tools), which consists of chapters on computational difficulty (one-way functions), pseudorandomness, and zero-knowledge proofs. These basic tools are used for the Basic Applications of the second part (i.e., the current volume), which consists of chapters on Encryption Schemes, Digital Signatures and Message Authentication, and General Cryptographic Protocols.


    Volume 1: Introduction and Basic Tools
        Chapter 1: Introduction
        Chapter 2: Computational Difficulty (One-Way Functions)
        Chapter 3: Pseudorandom Generators
        Chapter 4: Zero-Knowledge Proof Systems

    Volume 2: Basic Applications
        Chapter 5: Encryption Schemes
        Chapter 6: Digital Signatures and Message Authentication
        Chapter 7: General Cryptographic Protocols

    Figure 0.1: Organization of this work.

    The partition of the work into two parts is a logical one. Furthermore, it has offered us the advantage of publishing the first part before the completion of the second part. Originally, a third part, entitled "Beyond the Basics," was planned. That part was to have discussed the effect of Cryptography on the rest of Computer Science (and, in particular, complexity theory), as well as to have provided a treatment of a variety of more advanced security concerns. In retrospect, we feel that the first direction is addressed in [106], whereas the second direction is more adequate for a collection of surveys.

    Organization of the Current Volume. The current (second) volume consists of three chapters that treat encryption schemes, digital signatures and message authentication, and general cryptographic protocols, respectively. Also included is an appendix that provides corrections and additions to Volume 1. Figure 0.2 depicts the high-level structure of the current volume. Inasmuch as this volume is a continuation of the first (i.e., [108]), one numbering system is used for both volumes (and so the first chapter of the current volume is referred to as Chapter 5). This allows a simple referencing of sections, definitions, and theorems that appear in the first volume (e.g., Section 1.3 presents the computational model used throughout the entire work). The only exception to this rule is the use of different bibliographies (and consequently a different numbering of bibliographic entries) in the two volumes.

    Historical notes, suggestions for further reading, some open problems, and some exercises are provided at the end of each chapter. The exercises are mostly designed to help and test the basic understanding of the main text, not to test or inspire creativity. The open problems are fairly well known; still, we recommend a check on their current status (e.g., in our updated notices web site).

    Web Site for Notices Regarding This Work. We intend to maintain a web site listing corrections of various types. The location of the site is

    http://www.wisdom.weizmann.ac.il/~oded/foc-book.html


    Chapter 5: Encryption Schemes
        The Basic Setting (Sec. 5.1)
        Definitions of Security (Sec. 5.2)
        Constructions of Secure Encryption Schemes (Sec. 5.3)
        Advanced Material (Secs. 5.4 and 5.5.1–5.5.3)

    Chapter 6: Digital Signatures and Message Authentication
        The Setting and Definitional Issues (Sec. 6.1)
        Length-Restricted Signature Scheme (Sec. 6.2)
        Basic Constructions (Secs. 6.3 and 6.4)
        Advanced Material (Secs. 6.5 and 6.6.1–6.6.3)

    Chapter 7: General Cryptographic Protocols
        Overview (Sec. 7.1)
        Advanced Material (all the rest):
            The Two-Party Case (Secs. 7.2–7.4)
            The Multi-Party Case (Secs. 7.5 and 7.6)

    Appendix C: Corrections and Additions to Volume 1
    Bibliography and Index

    Figure 0.2: Rough organization of this volume.

    Using This Work

    This work is intended to serve as both a textbook and a reference text. That is, it is aimed at serving both the beginner and the expert. In order to achieve this aim, the presentation of the basic material is very detailed so as to allow a typical undergraduate in Computer Science to follow it. An advanced student (and certainly an expert) will find the pace (in these parts) far too slow. However, an attempt was made to allow the latter reader to easily skip details obvious to him/her. In particular, proofs are typically presented in a modular way. We start with a high-level sketch of the main ideas and only later pass to the technical details. Passage from high-level descriptions to lower-level details is typically marked by phrases such as "details follow."

        In a few places, we provide straightforward but tedious details in indented paragraphs such as this one. In some other (even fewer) places, such paragraphs provide technical proofs of claims that are of marginal relevance to the topic of the work.

    More advanced material is typically presented at a faster pace and with fewer details. Thus, we hope that the attempt to satisfy a wide range of readers will not harm any of them.

    Teaching. The material presented in this work, on the one hand, is way beyond what one may want to cover in a course and, on the other hand, falls very short of what one may want to know about Cryptography in general. To assist these conflicting needs, we make a distinction between basic and advanced material and provide suggestions for further reading (in the last section of each chapter). In particular, sections, subsections, and subsubsections marked by an asterisk (*) are intended for advanced reading.


    Depending on the class, each lecture consists of 50–90 minutes. Lectures 1–15 are covered by the first volume. Lectures 16–28 are covered by the current (second) volume.

    Lecture 1: Introduction, Background, etc. (depending on class)

    Lectures 2–5: Computational Difficulty (One-Way Functions)
        Main: Definition (Sec. 2.2), Hard-Core Predicates (Sec. 2.5)
        Optional: Weak Implies Strong (Sec. 2.3), and Secs. 2.4.2–2.4.4

    Lectures 6–10: Pseudorandom Generators
        Main: Definitional Issues and a Construction (Secs. 3.2–3.4)
        Optional: Pseudorandom Functions (Sec. 3.6)

    Lectures 11–15: Zero-Knowledge Proofs
        Main: Some Definitions and a Construction (Secs. 4.2.1, 4.3.1, 4.4.1–4.4.3)
        Optional: Secs. 4.2.2, 4.3.2, 4.3.3–4.3.4, 4.4.4

    Lectures 16–20: Encryption Schemes
        Main: Definitions and Constructions (Secs. 5.1, 5.2.1–5.2.4, 5.3.2–5.3.4)
        Optional: Beyond Passive Notions of Security (Overview, Sec. 5.4.1)

    Lectures 21–24: Signature Schemes
        Definitions and Constructions (Secs. 6.1, 6.2.1–6.2.2, 6.3.1.1, 6.4.1–6.4.2)

    Lectures 25–28: General Cryptographic Protocols
        The Definitional Approach and a General Construction (Overview, Sec. 7.1).

    Figure 0.3: Plan for one-semester course on Foundations of Cryptography.

    This work is intended to provide all material required for a course on Foundations of Cryptography. For a one-semester course, the teacher will definitely need to skip all advanced material (marked by an asterisk) and perhaps even some basic material; see the suggestions in Figure 0.3. Depending on the class, this should allow coverage of the basic material at a reasonable level (i.e., all material marked as "main" and some of the "optional"). This work can also serve as a textbook for a two-semester course. In such a course, one should be able to cover the entire basic material suggested in Figure 0.3, and even some of the advanced material.

    Practice. The aim of this work is to provide sound theoretical foundations for cryptography. As argued earlier, such foundations are necessary for any sound practice of cryptography. Indeed, practice requires more than theoretical foundations, whereas the current work makes no attempt to provide anything beyond the latter. However, given a sound foundation, one can learn and evaluate various practical suggestions that appear elsewhere (e.g., in [149]). On the other hand, lack of sound foundations results in an inability to critically evaluate practical suggestions, which in turn leads to unsound decisions.


    Nothing could be more harmful to the design of schemes that need to withstand adversarial attacks than misconceptions about such attacks.

    Relationship to Another Book by the Author

    A frequently asked question refers to the relationship of the current work to my text Modern Cryptography, Probabilistic Proofs and Pseudorandomness [106]. That text consists of three brief introductions to the related topics in its title. Specifically, Chapter 1 of [106] provides a brief (i.e., 30-page) summary of the current work. The other two chapters of [106] provide a wider perspective on two topics mentioned in the current work (i.e., Probabilistic Proofs and Pseudorandomness). Further comments on the latter aspect are provided in the relevant chapters of the first volume of the current work (i.e., [108]).

    A Comment Regarding the Current Volume

    There are no privileges without duties.

    Adv. Klara Goldreich-Ingwer (1912–2004)

    Writing the first volume was fun. In comparison to the current volume, the definitions, constructions, and proofs in the first volume were relatively simple and easy to write. Furthermore, in most cases, the presentation could safely follow existing texts. Consequently, the writing effort was confined to reorganizing the material, revising existing texts, and augmenting them with additional explanations and motivations.

    Things were quite different with respect to the current volume. Even the simplest notions defined in the current volume are more complex than most notions treated in the first volume (e.g., contrast secure encryption with one-way functions or secure protocols with zero-knowledge proofs). Consequently, the definitions are more complex, and many of the constructions and proofs are more complex. Furthermore, in most cases, the presentation could not follow existing texts. Indeed, most effort had to be (and was) devoted to the actual design of constructions and proofs, which were only inspired by existing texts.

    The mere fact that writing this volume required so much effort may imply that this volume will be very valuable: Even experts may be happy to be spared the hardship of trying to understand this material based on the original research manuscripts.


  • Acknowledgments

    . . . very little do we have and inclose which we can call our own in thedeep sense of the word. We all have to accept and learn, either from ourpredecessors or from our contemporaries. Even the greatest genius wouldnot have achieved much if he had wished to extract everything from insidehimself. But there are many good people, who do not understand this,and spend half their lives wondering in darkness with their dreams oforiginality. I have known artists who were proud of not having followedany teacher and of owing everything only to their own genius. Such fools!

    Goethe, Conversations with Eckermann, 17.2.1832

    First of all, I would like to thank three remarkable people who had a tremendous influence on my professional development: Shimon Even introduced me to theoretical computer science and closely guided my first steps. Silvio Micali and Shafi Goldwasser led my way in the evolving foundations of cryptography and shared with me their constant efforts for further developing these foundations.

    I have collaborated with many researchers, yet I feel that my collaboration with Benny Chor and Avi Wigderson had the most important impact on my professional development and career. I would like to thank them both for their indispensable contribution to our joint research and for the excitement and pleasure I had when collaborating with them.

    Leonid Levin deserves special thanks as well. I had many interesting discussions with Leonid over the years, and sometimes it took me too long to realize how helpful these discussions were.

    Special thanks also to four of my former students, from whom I have learned a lot (especially regarding the contents of this volume): to Boaz Barak for discovering the unexpected power of non-black-box simulations, to Ran Canetti for developing definitions and composition theorems for secure multi-party protocols, to Hugo Krawczyk for educating me about message authentication codes, and to Yehuda Lindell for significant simplification of the construction of a posteriori CCA (which enables a feasible presentation).


    Next, I'd like to thank a few colleagues and friends with whom I had significant interaction regarding Cryptography and related topics. These include Noga Alon, Hagit Attiya, Mihir Bellare, Ivan Damgård, Uri Feige, Shai Halevi, Johan Håstad, Amir Herzberg, Russell Impagliazzo, Jonathan Katz, Joe Kilian, Eyal Kushilevitz, Yoad Lustig, Mike Luby, Daniele Micciancio, Moni Naor, Noam Nisan, Andrew Odlyzko, Yair Oren, Rafail Ostrovsky, Erez Petrank, Birgit Pfitzmann, Omer Reingold, Ron Rivest, Alon Rosen, Amit Sahai, Claus Schnorr, Adi Shamir, Victor Shoup, Madhu Sudan, Luca Trevisan, Salil Vadhan, Ronen Vainish, Yacob Yacobi, and David Zuckerman.

    Even assuming I did not forget people with whom I had significant interaction on topics touching upon this book, the list of people I'm indebted to is far more extensive. It certainly includes the authors of many papers mentioned in the reference list. It also includes the authors of many Cryptography-related papers that I forgot to mention, and the authors of many papers regarding the Theory of Computation at large (a theory taken for granted in the current book).

    Finally, I would like to thank Boaz Barak, Alex Healy, Vlad Kolesnikov, Yehuda Lindell, and Minh-Huyen Nguyen for reading parts of this manuscript and pointing out various difficulties and errors.


  • CHAPTER FIVE

    Encryption Schemes

    Up to the 1970s, Cryptography was understood as the art of building encryption schemes, that is, the art of constructing schemes allowing secret data exchange over insecure channels. Since the 1970s, other tasks (e.g., signature schemes) have been recognized as falling within the domain of Cryptography (and even being at least as central to Cryptography). Yet the construction of encryption schemes remains, and is likely to remain, a central enterprise of Cryptography.

    In this chapter we review the well-known notions of private-key and public-key encryption schemes. More importantly, we define what is meant by saying that such schemes are secure. This definitional treatment is a cornerstone of the entire area, and much of this chapter is devoted to various aspects of it. We also present several constructions of secure (private-key and public-key) encryption schemes. It turns out that using randomness during the encryption process (i.e., not only at the key-generation phase) is essential to security.

    Organization. Our main treatment (i.e., Sections 5.1–5.3) refers to security under "passive" (eavesdropping) attacks. In contrast, in Section 5.4, we discuss notions of security under "active" attacks, culminating in robustness against chosen ciphertext attacks. Additional issues are discussed in Section 5.5.

    Teaching Tip. We suggest to focus on the basic definitional treatment (i.e., Sections 5.1 and 5.2.1–5.2.4) and on the feasibility of satisfying these definitions (as demonstrated by the simplest constructions provided in Sections 5.3.3 and 5.3.4.1). The overview to security under active attacks (i.e., Section 5.4.1) is also recommended. We assume that the reader is familiar with the material in previous chapters (and specifically with Sections 2.2, 2.4, 2.5, 3.2–3.4, and 3.6). This familiarity is important not only because we use some of the notions and results presented in these sections but also because we use similar proof techniques (and do so while assuming that this is not the reader's first encounter with these techniques).


    5.1. The Basic Setting

    Loosely speaking, encryption schemes are supposed to enable private exchange of information between parties that communicate over an insecure channel. Thus, the basic setting consists of a sender, a receiver, and an insecure channel that may be tapped by an adversary. The goal is to allow the sender to transfer information to the receiver, over the insecure channel, without letting the adversary figure out this information. Thus, we distinguish between the actual (secret) information that the receiver wishes to transmit and the message(s) sent over the insecure communication channel. The former is called the plaintext, whereas the latter is called the ciphertext. Clearly, the ciphertext must differ from the plaintext or else the adversary can easily obtain the plaintext by tapping the channel. Thus, the sender must transform the plaintext into a corresponding ciphertext such that the receiver can retrieve the plaintext from the ciphertext, but the adversary cannot do so. Clearly, something must distinguish the receiver (who is able to retrieve the plaintext from the corresponding ciphertext) from the adversary (who cannot do so). Specifically, the receiver knows something that the adversary does not know. This thing is called a key.

    An encryption scheme consists of a method of transforming plaintexts into ciphertexts and vice versa, using adequate keys. These keys are essential to the ability to effect these transformations. Formally, these transformations are performed by corresponding algorithms: an encryption algorithm that transforms a given plaintext and an adequate (encryption) key into a corresponding ciphertext, and a decryption algorithm that given the ciphertext and an adequate (decryption) key recovers the original plaintext. Actually, we need to consider a third algorithm, namely, a probabilistic algorithm used to generate keys (i.e., a key-generation algorithm). This algorithm must be probabilistic (or else, by invoking it, the adversary obtains the very same key used by the receiver). We stress that the encryption scheme itself (i.e., the aforementioned three algorithms) may be known to the adversary, and the scheme's security relies on the hypothesis that the adversary does not know the actual keys in use.1

    In accordance with these principles, an encryption scheme consists of three algorithms. These algorithms are public (i.e., known to all parties). The two obvious algorithms are the encryption algorithm, which transforms plaintexts into ciphertexts, and the decryption algorithm, which transforms ciphertexts into plaintexts. By these principles, it is clear that the decryption algorithm must employ a key that is known to the receiver but is not known to the adversary. This key is generated using a third algorithm, called the key-generator. Furthermore, it is not hard to see that the encryption process must also depend on the key (or else messages sent to one party can be read by a different party who is also a potential receiver). Thus, the key-generation algorithm is used to produce a pair of (related) keys, one for encryption and one for decryption. The encryption algorithm, given an encryption-key and a plaintext, produces a ciphertext that when fed to the decryption algorithm, together with the corresponding

    1 In fact, in many cases, the legitimate interest may be served best by publicizing the scheme itself, because this allows an (independent) expert evaluation of the security of the scheme to be obtained.


    [Figure: the sender's protected region contains the plaintext X and the encryption algorithm E, keyed by K; the resulting ciphertext crosses the channel, where it is visible to the ADVERSARY; the receiver's protected region contains the decryption algorithm D, keyed by K, which recovers the plaintext X.]

    The key K is known to both receiver and sender, but is unknown to the adversary. For example, the receiver may generate K at random and pass it to the sender via a perfectly-private secondary channel (not shown here).

    Figure 5.1: Private-key encryption schemes: an illustration.

    decryption-key, yields the original plaintext. We stress that knowledge of the decryption-key is essential for the latter transformation.

    5.1.1. Private-Key Versus Public-Key Schemes

    A fundamental distinction between encryption schemes refers to the relation between the aforementioned pair of keys (i.e., the encryption-key and the decryption-key). The simpler (and older) notion assumes that the encryption-key equals the decryption-key. Such schemes are called private-key (or symmetric).

    Private-Key Encryption Schemes. To use a private-key scheme, the legitimate parties must first agree on the secret key. This can be done by having one party generate the key at random and send it to the other party using a (secondary) channel that (unlike the main channel) is assumed to be secure (i.e., it cannot be tapped by the adversary). A crucial point is that the key is generated independently of the plaintext, and so it can be generated and exchanged prior to the plaintext even being determined. Assuming that the legitimate parties have agreed on a (secret) key, they can secretly communicate by using this key (see illustration in Figure 5.1): The sender encrypts the desired plaintext using this key, and the receiver recovers the plaintext from the corresponding ciphertext (by using the same key). Thus, private-key encryption is a way of extending a private channel over time: If the parties can use a private channel today (e.g., they are currently in the same physical location) but not tomorrow, then they can use the private channel today to exchange a secret key that they may use tomorrow for secret communication.

    A simple example of a private-key encryption scheme is the one-time pad. The secret key is merely a uniformly chosen sequence of n bits, and an n-bit long ciphertext is produced by XORing the plaintext, bit-by-bit, with the key. The plaintext is recovered from the ciphertext in the same way. Clearly, the one-time pad provides


    [Figure: the sender's protected region contains the plaintext X and the encryption algorithm E, keyed by the encryption-key e; the resulting ciphertext crosses the channel, where it is visible to the ADVERSARY (who also knows e); the receiver's protected region contains the decryption algorithm D, keyed by the decryption-key d, which recovers the plaintext X.]

    The key-pair (e, d) is generated by the receiver, who posts the encryption-key e on a public media, while keeping the decryption-key d secret.

    Figure 5.2: Public-key encryption schemes: an illustration.

    absolute security. However, its usage of the key is inefficient; or, put in other words, it requires keys of length comparable to the total length (or information contents) of the data being communicated. By contrast, the rest of this chapter will focus on encryption schemes in which n-bit long keys allow for the secure communication of data having an a priori unbounded (albeit polynomial in n) length. In particular, n-bit long keys allow for significantly more than n bits of information to be communicated securely.
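    As a concrete illustration of the one-time pad just described, here is a minimal Python sketch (the function names are ours, not the book's notation): key generation samples a uniformly random key of the required length, and encryption and decryption are the same bit-wise XOR operation.

        import secrets

        def otp_keygen(nbytes: int) -> bytes:
            """Sample a uniformly random key (to be used for at most one message)."""
            return secrets.token_bytes(nbytes)

        def otp_xor(key: bytes, text: bytes) -> bytes:
            """XOR the text with the key; the same operation encrypts and decrypts."""
            assert len(key) == len(text), "the one-time pad needs a key as long as the message"
            return bytes(k ^ t for k, t in zip(key, text))

        key = otp_keygen(16)
        plaintext = b"attack at dawn!!"          # 16 bytes, matching the key length
        ciphertext = otp_xor(key, plaintext)
        assert otp_xor(key, ciphertext) == plaintext

    Note that the key here is exactly as long as the message, which is the inefficiency discussed above.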

    Public-Key Encryption Schemes. A new type of encryption schemes emerged in the 1970s. In these so-called public-key (or asymmetric) encryption schemes, the decryption-key differs from the encryption-key. Furthermore, it is infeasible to find the decryption-key, given the encryption-key. These schemes enable secure communication without the use of a secure channel. Instead, each party applies the key-generation algorithm to produce a pair of keys. The party (denoted P) keeps the decryption-key, denoted d_P, secret and publishes the encryption-key, denoted e_P. Now, any party can send P private messages by encrypting them using the encryption-key e_P. Party P can decrypt these messages by using the decryption-key d_P, but nobody else can do so. (See illustration in Figure 5.2.)

    5.1.2. The Syntax of Encryption Schemes

    We start by defining the basic mechanism of encryption schemes. This definition says nothing about the security of the scheme (which is the subject of the next section).

    Definition 5.1.1 (encryption scheme): An encryption scheme is a triple, (G, E, D), of probabilistic polynomial-time algorithms satisfying the following two conditions:

    1. On input 1^n, algorithm G (called the key-generator) outputs a pair of bit strings.
    2. For every pair (e, d) in the range of G(1^n), and for every α ∈ {0,1}^*, algorithms E (encryption) and D (decryption) satisfy

        Pr[D(d, E(e, α)) = α] = 1

    where the probability is taken over the internal coin tosses of algorithms E and D.

    The integer n serves as the security parameter of the scheme. Each (e, d) in the range of G(1^n) constitutes a pair of corresponding encryption/decryption keys. The string E(e, α) is the encryption of the plaintext α ∈ {0,1}^* using the encryption-key e, whereas D(d, β) is the decryption of the ciphertext β using the decryption-key d.

    We stress that Definition 5.1.1 says nothing about security, and so trivial (insecure) algorithms may satisfy it (e.g., the identity maps E(e, α) = α and D(d, β) = β). Furthermore, Definition 5.1.1 does not distinguish private-key encryption schemes from public-key ones. The difference between the two types is introduced in the security definitions: In a public-key scheme the "breaking algorithm" gets the encryption-key (i.e., e) as an additional input (and thus e ≠ d follows), while in private-key schemes e is not given to the "breaking algorithm" (and thus, one may assume, without loss of generality, that e = d).

    We stress that this definition requires the scheme to operate for every plaintext, and specifically for plaintext of length exceeding the length of the encryption-key. (This rules out the information-theoretically secure "one-time pad" scheme mentioned earlier.)
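    To illustrate that Definition 5.1.1 concerns syntax only, the following Python sketch implements the trivial (and utterly insecure) identity scheme mentioned above; it satisfies Condition (2) with probability 1 while hiding nothing. The interface and names are ours, chosen merely to mirror the triple (G, E, D).

        import secrets
        from typing import Tuple

        def G(n: int) -> Tuple[bytes, bytes]:
            """Key-generation: on input n (standing for 1^n), output a key-pair (here e = d)."""
            k = secrets.token_bytes(n)
            return k, k

        def E(e: bytes, plaintext: bytes) -> bytes:
            """'Encryption' of the trivial scheme: output the plaintext unchanged."""
            return plaintext

        def D(d: bytes, ciphertext: bytes) -> bytes:
            """'Decryption' of the trivial scheme: output the ciphertext unchanged."""
            return ciphertext

        e, d = G(16)
        assert D(d, E(e, b"any plaintext")) == b"any plaintext"   # Condition (2) holds, yet nothing is hidden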

    Notation. In the rest of this text, we write E_e(α) instead of E(e, α) and D_d(β) instead of D(d, β). Sometimes, when there is little risk of confusion, we drop these subscripts. Also, we let G_1(1^n) (resp., G_2(1^n)) denote the first (resp., second) element in the pair G(1^n). That is, G(1^n) = (G_1(1^n), G_2(1^n)). Without loss of generality, we may assume that |G_1(1^n)| and |G_2(1^n)| are polynomially related to n, and that each of these integers can be efficiently computed from the other. (In fact, we may even assume that |G_1(1^n)| = |G_2(1^n)| = n; see Exercise 6.)

    Comments. Definition 5.1.1 may be relaxed in several ways without significantly harming its usefulness. For example, we may relax Condition (2) and allow a negligible decryption error (e.g., Pr[D_d(E_e(α)) ≠ α] < 2^{-n}). Alternatively, one may postulate that Condition (2) holds for all but a negligible measure of the key-pairs generated by G(1^n). At least one of these relaxations is essential for some suggestions of (public-key) encryption schemes.

    Another relaxation consists of restricting the domain of possible plaintexts (and ciphertexts). For example, one may restrict Condition (2) to α's of length ℓ(n), where ℓ : N → N is some fixed function. Given a scheme of the latter type (with plaintext length ℓ), we may construct a scheme as in Definition 5.1.1 by breaking plaintexts into blocks of length ℓ(n) and applying the restricted scheme separately to each block. (Note that security of the resulting scheme requires that the security of the length-restricted scheme be preserved under multiple encryptions with the same key.) For more details see Sections 5.2.4 and 5.3.2.
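    The block-by-block construction just described can be sketched as follows. Everything here is hypothetical scaffolding: `enc_block` and `dec_block` stand for the length-restricted scheme, `block_len` plays the role of ℓ(n), and zero-byte padding of the last block is our own simplification (the text itself does not prescribe a padding rule).

        from typing import Callable, List

        def encrypt_blockwise(enc_block: Callable[[bytes, bytes], bytes],
                              key: bytes, plaintext: bytes, block_len: int) -> List[bytes]:
            """Split the plaintext into blocks of length block_len and encrypt each block
            separately under the same key (padding the last block with zero bytes)."""
            padded = plaintext + b"\x00" * (-len(plaintext) % block_len)
            blocks = [padded[i:i + block_len] for i in range(0, len(padded), block_len)]
            return [enc_block(key, block) for block in blocks]

        def decrypt_blockwise(dec_block: Callable[[bytes, bytes], bytes],
                              key: bytes, ciphertext_blocks: List[bytes]) -> bytes:
            """Decrypt each block and concatenate; stripping the padding is omitted in this sketch."""
            return b"".join(dec_block(key, block) for block in ciphertext_blocks)

    As the parenthetical note stresses, this transformation preserves security only if the length-restricted scheme remains secure when many blocks are encrypted under the same key.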


    5.2. Definitions of Security

    In this section we present two fundamental definitions of security and prove their equivalence. The first definition, called semantic security, is the most natural one. Semantic security is a computational-complexity analogue of Shannon's definition of perfect privacy (which requires that the ciphertext yield no information regarding the plaintext). Loosely speaking, an encryption scheme is semantically secure if it is infeasible to learn anything about the plaintext from the ciphertext (i.e., impossibility is replaced by infeasibility). The second definition has a more technical flavor. It interprets security as the infeasibility of distinguishing between encryptions of a given pair of messages. This definition is useful in demonstrating the security of a proposed encryption scheme and for the analysis of cryptographic protocols that utilize an encryption scheme.

    We stress that the definitions presented in Section 5.2.1 go far beyond saying that it is infeasible to recover the plaintext from the ciphertext. The latter statement is indeed a minimal requirement for a secure encryption scheme, but we claim that it is far too weak a requirement. For example, one should certainly not use an encryption scheme that leaks the first part of the plaintext (even if it is infeasible to recover the entire plaintext from the ciphertext). In general, an encryption scheme is typically used in applications where even obtaining partial information on the plaintext may endanger the security of the application. The question of which partial information endangers the security of a specific application is typically hard (if not impossible) to answer. Furthermore, we wish to design application-independent encryption schemes, and when doing so it is the case that each piece of partial information may endanger some application. Thus, we require that it be infeasible to obtain any information about the plaintext from the ciphertext. Moreover, in most applications the plaintext may not be uniformly distributed, and some a priori information regarding it may be available to the adversary. We thus require that the secrecy of all partial information be preserved also in such a case. That is, given any a priori information on the plaintext, it is infeasible to obtain any (new) information about the plaintext from the ciphertext (beyond what is feasible to obtain from the a priori information on the plaintext). The definition of semantic security postulates all of this.

    Security of Multiple Plaintexts. Continuing the preceding discussion, the definitions are presented first in terms of the security of a single encrypted plaintext. However, in many cases, it is desirable to encrypt many plaintexts using the same encryption-key, and security needs to be preserved in these cases, too. Adequate definitions and discussions are deferred to Section 5.2.4.

    A Technical Comment: Non-Uniform Complexity Formulation. To simplify the exposition, we define security in terms of non-uniform complexity (see Section 1.3.3 of Volume 1). Namely, in the security definitions we expand the domain of efficient adversaries (and algorithms) to include (explicitly or implicitly) non-uniform polynomial-size circuits, rather than only probabilistic polynomial-time machines.


    Likewise, we make no computational restriction regarding the probability distribution from which messages are taken, nor regarding the a priori information available on these messages. We note that employing such a non-uniform complexity formulation (rather than a uniform one) may only strengthen the definitions, yet it does weaken the implications proven between the definitions because these (simpler) proofs make free usage of non-uniformity. A uniform-complexity treatment is provided in Section 5.2.5.

    5.2.1. Semantic Security

    A good disguise should not reveal the person's height.

    Shafi Goldwasser and Silvio Micali, 1982

    Loosely speaking, semantic security means that nothing can be gained by looking at a ciphertext. Following the simulation paradigm, this means that whatever can be efficiently learned from the ciphertext can also be efficiently learned from scratch (or from nothing).

    5.2.1.1. The Actual Definitions

    To be somewhat more accurate, semantic security means that whatever can be efficiently computed from the ciphertext can be efficiently computed when given only the length of the plaintext. Note that this formulation does not rule out the possibility that the length of the plaintext can be inferred from the ciphertext. Indeed, some information about the length of the plaintext must be revealed by the ciphertext (see Exercise 4). We stress that other than information about the length of the plaintext, the ciphertext is required to yield nothing about the plaintext.

    In the actual definitions, we consider only information regarding the plaintext (rather than information regarding the ciphertext and/or the encryption-key) that can be obtained from the ciphertext. Furthermore, we restrict our attention to functions (rather than randomized processes) applied to the plaintext. We do so because of the intuitive appeal of this special case, and are comfortable doing so because this special case implies the general one (see Exercise 13). We augment this formulation by requiring that the infeasibility of obtaining information about the plaintext remain valid even in the presence of other auxiliary partial information about the same plaintext. Namely, whatever can be efficiently computed from the ciphertext and additional partial information about the plaintext can be efficiently computed given only the length of the plaintext and the same partial information. In the definition that follows, the information regarding the plaintext that the adversary tries to obtain is represented by the function f, whereas the a priori partial information about the plaintext is represented by the function h. The infeasibility of obtaining information about the plaintext is required to hold for any distribution of plaintexts, represented by the probability ensemble {X_n}_{n∈N}.

    Security holds only for plaintexts of length polynomial in the security parameter. This is captured in the following definitions by the restriction |X_n| ≤ poly(n), where poly represents an arbitrary (unspecified) polynomial. Note that we cannot hope to provide computational security for plaintexts of unbounded length or for plaintexts of length that is exponential in the security parameter (see Exercise 3).


    Likewise, we restrict the functions f and h to be polynomially-bounded, that is, |f(z)|, |h(z)| ≤ poly(|z|).

    The difference between private-key and public-key encryption schemes is manifested in the definition of security. In the latter case, the adversary (which is trying to obtain information on the plaintext) is given the encryption-key, whereas in the former case it is not. Thus, the difference between these schemes amounts to a difference in the adversary model (considered in the definition of security). We start by presenting the definition for private-key encryption schemes.

    Definition 5.2.1 (semantic security – private-key): An encryption scheme, (G, E, D), is semantically secure (in the private-key model) if for every probabilistic polynomial-time algorithm A there exists a probabilistic polynomial-time algorithm A′ such that for every probability ensemble {X_n}_{n∈N}, with |X_n| ≤ poly(n), every pair of polynomially bounded functions f, h : {0,1}^* → {0,1}^*, every positive polynomial p and all sufficiently large n

        Pr[A(1^n, E_{G_1(1^n)}(X_n), 1^{|X_n|}, h(1^n, X_n)) = f(1^n, X_n)]
            < Pr[A′(1^n, 1^{|X_n|}, h(1^n, X_n)) = f(1^n, X_n)] + 1/p(n)

    (The probability in these terms is taken over X_n as well as over the internal coin tosses of either algorithms G, E, and A or algorithm A′.)

    We stress that all the occurrences of X_n in each of the probabilistic expressions refer to the same random variable (see the general convention stated in Section 1.2.1 in Volume 1). The security parameter 1^n is given to both algorithms (as well as to the functions h and f) for technical reasons.2 The function h provides both algorithms with partial information regarding the plaintext X_n. Furthermore, h also makes the definition implicitly non-uniform; see further discussion in Section 5.2.1.2. In addition, both algorithms get the length of X_n. These algorithms then try to guess the value f(1^n, X_n); namely, they try to infer information about the plaintext X_n. Loosely speaking, in a semantically secure encryption scheme the ciphertext does not help in this inference task. That is, the success probability of any efficient algorithm (i.e., algorithm A) that is given the ciphertext can be matched, up to a negligible fraction, by the success probability of an efficient algorithm (i.e., algorithm A′) that is not given the ciphertext at all.

    Definition 5.2.1 refers to private-key encryption schemes. To derive a definition of security for public-key encryption schemes, the encryption-key (i.e., G_1(1^n)) should be given to the adversary as an additional input.

    2 The auxiliary input 1^n is used for several purposes. First, it allows smooth transition to fully non-uniform formulations (e.g., Definition 5.2.3) in which the (polynomial-size) adversary depends on n. Thus, it is good to provide A (and thus also A′) with 1^n. Once this is done, it is natural to allow also h and f to depend on n. In fact, allowing h and f to explicitly depend on n facilitates the proof of Proposition 5.2.7. In light of the fact that 1^n is given to both algorithms, we may replace the input part 1^{|X_n|} by |X_n|, because the former may be recovered from the latter in poly(n)-time.
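    For intuition only, the two success probabilities compared in Definition 5.2.1 can be estimated empirically for any concrete choice of scheme, adversary A, and candidate algorithm A′. The sketch below is ours and all names are hypothetical: `keygen` and `enc` stand for G and E, `sample_X` samples the plaintext distribution X_n, and `f` and `h` are the target-information and partial-information functions. It is a Monte Carlo illustration of the definition, not a proof technique.

        from typing import Callable, Tuple

        def estimate_success_probs(trials: int, n: int,
                                   keygen: Callable[[int], Tuple[bytes, bytes]],
                                   enc: Callable[[bytes, bytes], bytes],
                                   sample_X: Callable[[int], bytes],
                                   f: Callable[[int, bytes], bytes],
                                   h: Callable[[int, bytes], bytes],
                                   A: Callable, A_prime: Callable) -> Tuple[float, float]:
            """Estimate Pr[A(1^n, E(X_n), |X_n|, h(1^n, X_n)) = f(1^n, X_n)] and the
            corresponding probability for A', which sees only |X_n| and h(1^n, X_n)."""
            hits_A = hits_A_prime = 0
            for _ in range(trials):
                x = sample_X(n)
                e, _d = keygen(n)                 # fresh key for each experiment
                target = f(n, x)
                if A(n, enc(e, x), len(x), h(n, x)) == target:
                    hits_A += 1
                if A_prime(n, len(x), h(n, x)) == target:
                    hits_A_prime += 1
            return hits_A / trials, hits_A_prime / trials

    Semantic security asserts that for every efficient A there is an efficient A′ for which the first quantity exceeds the second by at most a negligible amount (for all sufficiently large n).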


    Definition 5.2.2 (semantic security – public-key): An encryption scheme, (G, E, D), is semantically secure (in the public-key model) if for every probabilistic polynomial-time algorithm A, there exists a probabilistic polynomial-time algorithm A′ such that for every {X_n}_{n∈N}, f, h, p, and n as in Definition 5.2.1

        Pr[A(1^n, G_1(1^n), E_{G_1(1^n)}(X_n), 1^{|X_n|}, h(1^n, X_n)) = f(1^n, X_n)]
            < Pr[A′(1^n, 1^{|X_n|}, h(1^n, X_n)) = f(1^n, X_n)] + 1/p(n)

    Recall that (by our conventions) both occurrences of G_1(1^n), in the first probabilistic expression, refer to the same random variable. We comment that it is pointless to give the random encryption-key (i.e., G_1(1^n)) to algorithm A′ (because the task as well as the main inputs of A′ are unrelated to the encryption-key, and anyhow A′ could generate a random encryption-key by itself).

    Terminology. For sake of simplicity, we refer to an encryption scheme that is semantically secure in the private-key (resp., public-key) model as a semantically secure private-key (resp., public-key) encryption scheme.

    The reader may note that a semantically secure public-key encryption scheme cannot employ a deterministic encryption algorithm; that is, E_e(x) must be a random variable rather than a fixed string. This is more evident with respect to the equivalent Definition 5.2.4. See further discussion following Definition 5.2.4.

    5.2.1.2. Further Discussion of Some Definitional Choices

    We discuss several secondary issues regarding Definitions 5.2.1 and 5.2.2. The interested reader is also referred to Exercises 16, 17, and 19, which present additional variants of the definition of semantic security.

    Implicit Non-Uniformity of the Definitions. The fact that h is not required to be computable makes these definitions non-uniform. This is the case because both algorithms are given h(1^n, X_n) as auxiliary input, and the latter may account for arbitrary (polynomially bounded) advice. For example, letting h(1^n, ·) = a_n ∈ {0,1}^{poly(n)} means that both algorithms are supplied with (non-uniform) advice (as in one of the common formulations of non-uniform polynomial-time; see Section 1.3.3). In general, the function h can code both information regarding its main input and non-uniform advice depending on the security parameter (i.e., h(1^n, x) = (h′(x), a_n)). We comment that these definitions are equivalent to allowing A and A′ to be related families of non-uniform circuits, where by related we mean that the circuits in the family A′ = {A′_n}_{n∈N} can be efficiently computed from the corresponding circuits in the family A = {A_n}_{n∈N}. For further discussion, see Exercise 9.

Lack of Computational Restrictions Regarding the Function f. We do not even require that the function f be computable. This seems strange at first glance, because (unlike the situation with respect to the function h, which codes a priori information given to the algorithms) the algorithms are asked to guess the value of f (at a plaintext implicit in the ciphertext given only to A). However, as we shall see in the sequel (see also Exercise 13), the actual technical content of semantic security is that the probability ensembles $\{(1^n, E(X_n), 1^{|X_n|}, h(1^n, X_n))\}_n$ and $\{(1^n, E(1^{|X_n|}), 1^{|X_n|}, h(1^n, X_n))\}_n$ are computationally indistinguishable (and so whatever A can compute can also be computed by A'). Note that the latter statement does not refer to the function f, which explains why we need not make any restriction regarding f.

Other Modifications of No Impact. Actually, inclusion of a priori information regarding the plaintext (represented by the function h) does not affect the definition of semantic security: Definition 5.2.1 remains intact if we restrict h to depend only on the security parameter (and so provide only plaintext-oblivious non-uniform advice). (This can be shown in various ways; e.g., see Exercise 14.1.) Also, the function f can be restricted to be a Boolean function having polynomial-size circuits, and the random variable $X_n$ may be restricted to be very dull (e.g., have only two strings in its support): See the proof of Theorem 5.2.5. On the other hand, Definition 5.2.1 implies stronger forms discussed in Exercises 13, 17, and 18.

    5.2.2. Indistinguishability of Encryptions

A good disguise should not allow a mother to distinguish her own children.

Shafi Goldwasser and Silvio Micali, 1982

The following technical interpretation of security states that it is infeasible to distinguish the encryptions of two plaintexts (of the same length). That is, such ciphertexts are computationally indistinguishable as defined in Definition 3.2.7. Again, we start with the private-key variant.

Definition 5.2.3 (indistinguishability of encryptions – private-key): An encryption scheme, (G, E, D), has indistinguishable encryptions (in the private-key model) if for every polynomial-size circuit family $\{C_n\}$, every positive polynomial p, all sufficiently large n, and every $x, y \in \{0,1\}^{\mathrm{poly}(n)}$ (i.e., $|x| = |y|$),

$$\left|\,\Pr\left[C_n(E_{G_1(1^n)}(x)) = 1\right] - \Pr\left[C_n(E_{G_1(1^n)}(y)) = 1\right]\,\right| < \frac{1}{p(n)}$$

The probability in these terms is taken over the internal coin tosses of algorithms G and E.

Note that the potential plaintexts to be distinguished can be incorporated into the circuit $C_n$. Thus, the circuit models both the adversary's strategy and its a priori information: See Exercise 11.

Again, the security definition for public-key encryption schemes is derived by adding the encryption-key (i.e., $G_1(1^n)$) as an additional input to the potential distinguisher.


Definition 5.2.4 (indistinguishability of encryptions – public-key): An encryption scheme, (G, E, D), has indistinguishable encryptions (in the public-key model) if for every polynomial-size circuit family $\{C_n\}$, and every p, n, x, and y as in Definition 5.2.3

$$\left|\,\Pr\left[C_n(G_1(1^n), E_{G_1(1^n)}(x)) = 1\right] - \Pr\left[C_n(G_1(1^n), E_{G_1(1^n)}(y)) = 1\right]\,\right| < \frac{1}{p(n)}$$

Terminology. We refer to an encryption scheme that has indistinguishable encryptions in the private-key (resp., public-key) model as a ciphertext-indistinguishable private-key (resp., public-key) encryption scheme.

Failure of Deterministic Encryption Algorithms. A ciphertext-indistinguishable public-key encryption scheme cannot employ a deterministic encryption algorithm (i.e., $E_e(x)$ cannot be a fixed string). The reason is that, for a public-key encryption scheme with a deterministic encryption algorithm E, given an encryption-key e and a pair of candidate plaintexts (x, y), one can easily distinguish $E_e(x)$ from $E_e(y)$ (by merely applying $E_e$ to x and comparing the result to the given ciphertext). In contrast, in case the encryption algorithm itself is randomized, the same plaintext can be encrypted in exponentially many different ways, under the same encryption-key. Furthermore, the probability that applying $E_e$ twice to the same message (while using independent randomization in $E_e$) results in the same ciphertext may be exponentially vanishing. (Indeed, as shown in Section 5.3.4, public-key encryption schemes having indistinguishable encryptions can be constructed based on any trapdoor permutation, and these schemes employ randomized encryption algorithms.)
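To make this attack concrete, here is a minimal sketch (not from the book): `encrypt` stands for a hypothetical deterministic public-key encryption algorithm, and the `toy_encrypt` used in the demo is an insecure placeholder chosen only so the sketch runs.

```python
from typing import Callable

# A minimal sketch (not part of the book). `encrypt` stands for a hypothetical
# *deterministic* public-key encryption algorithm E_e(x); with such an E, the
# following adversary distinguishes E_e(x) from E_e(y) perfectly.

def distinguish(encrypt: Callable[[bytes, bytes], bytes],
                e: bytes, x: bytes, y: bytes, ciphertext: bytes) -> int:
    """Return 1 if the challenge ciphertext encrypts x, and 0 otherwise."""
    # Re-encrypt x under the public key e and compare; this is exactly the
    # attack described in the text, and it works because E_e is deterministic.
    return 1 if encrypt(e, x) == ciphertext else 0

# Toy usage with an (insecure, deterministic) stand-in encryption:
if __name__ == "__main__":
    toy_encrypt = lambda e, m: bytes(a ^ b for a, b in zip(m, e))  # placeholder only
    e, x, y = b"public-key bytes", b"attack at dawn!!", b"retreat at noon!"
    challenge = toy_encrypt(e, x)
    print(distinguish(toy_encrypt, e, x, y, challenge))  # prints 1
```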

5.2.3. Equivalence of the Security Definitions

The following theorem is stated and proven for private-key encryption schemes. A similar result holds for public-key encryption schemes (see Exercise 12).

Theorem 5.2.5 (equivalence of definitions – private-key): A private-key encryption scheme is semantically secure if and only if it has indistinguishable encryptions.

Let (G, E, D) be an encryption scheme. We formulate a proposition for each of the two directions of this theorem. Each proposition is in fact stronger than the corresponding direction stated in Theorem 5.2.5. The more useful direction is stated first: It asserts that the technical interpretation of security, in terms of ciphertext-indistinguishability, implies the natural notion of semantic security. Thus, the following proposition yields a methodology for designing semantically secure encryption schemes: Design and prove your scheme to be ciphertext-indistinguishable, and conclude (by applying the proposition) that it is semantically secure. The opposite direction (of Theorem 5.2.5) establishes the completeness of the latter methodology, and more generally asserts that requiring an encryption scheme to be ciphertext-indistinguishable does not rule out schemes that are semantically secure.


Proposition 5.2.6 (useful direction: indistinguishability implies security): Suppose that (G, E, D) is a ciphertext-indistinguishable private-key encryption scheme. Then (G, E, D) is semantically secure. Furthermore, Definition 5.2.1 is satisfied by using $A' = M^A$, where M is a fixed oracle machine; that is, there exists a single M such that for every A letting $A' = M^A$ will do.

Proposition 5.2.7 (opposite direction: security implies indistinguishability): Suppose that (G, E, D) is a semantically secure private-key encryption scheme. Then (G, E, D) has indistinguishable encryptions. Furthermore, the conclusion holds even if the definition of semantic security is restricted to the special case satisfying the following four conditions:

1. The random variable $X_n$ is uniformly distributed over a set containing two strings;

2. The value of h depends only on the length of its input, or alternatively $h(1^n, x) = h'(n)$, for some h';

3. The function f is Boolean and is computable by a family of (possibly non-uniform) polynomial-size circuits;

4. The algorithm A is deterministic.

In addition, no computational restrictions are placed on algorithm A' (i.e., A' can be any function), and moreover A' may depend on $\{X_n\}_{n\in\mathbb{N}}$, h, f, and A.

Observe that the four itemized conditions limit the scope of the four universal quantifiers in Definition 5.2.1, whereas the last sentence removes a restriction on the existential quantifier (i.e., removes the complexity bound on A') and reverses the order of quantifiers, allowing the existential quantifier to depend on all universal quantifiers (rather than only on the last one). Thus, each of these modifications makes the resulting definition potentially weaker. Still, combining Propositions 5.2.7 and 5.2.6, it follows that a weak version of Definition 5.2.1 implies (an even stronger version than) the one stated in Definition 5.2.1.

    5.2.3.1. Proof of Proposition 5.2.6

Suppose that (G, E, D) has indistinguishable encryptions. We will show that (G, E, D) is semantically secure by constructing, for every probabilistic polynomial-time algorithm A, a probabilistic polynomial-time algorithm A' such that the condition in Definition 5.2.1 holds. That is, for every $\{X_n\}_{n\in\mathbb{N}}$, f, and h, algorithm A' guesses $f(1^n, X_n)$ from $(1^n, 1^{|X_n|}, h(1^n, X_n))$ essentially as well as A guesses $f(1^n, X_n)$ from $E(X_n)$ and $(1^n, 1^{|X_n|}, h(1^n, X_n))$. Our construction of A' consists of merely invoking A on input $(1^n, E(1^{|X_n|}), 1^{|X_n|}, h(1^n, X_n))$, and returning whatever A does. That is, A' invokes A with a dummy encryption rather than with an encryption of $X_n$ (which A expects to get, but A' does not have). Intuitively, the indistinguishability of encryptions implies that A behaves nearly as well when invoked by A' (and given a dummy encryption) as when given the encryption of $X_n$, and this establishes that A' is adequate with respect to A. The main issue in materializing this plan is to show that the specific formulation of indistinguishability of encryptions indeed supports the implication (i.e., implies that A guesses $f(1^n, X_n)$ essentially as well when given a dummy encryption as when given the encryption of $X_n$). Details follow.

The construction of A': Let A be an algorithm that tries to infer partial information (i.e., the value $f(1^n, X_n)$) from the encryption of the plaintext $X_n$ (when also given $1^n$, $1^{|X_n|}$, and a priori information $h(1^n, X_n)$). Intuitively, on input $E(\alpha)$ and $(1^n, 1^{|\alpha|}, h(1^n, \alpha))$, algorithm A tries to guess $f(1^n, \alpha)$. We construct a new algorithm, A', that performs essentially as well without getting the input $E(\alpha)$. The new algorithm consists of invoking A on input $E_{G_1(1^n)}(1^{|\alpha|})$ and $(1^n, 1^{|\alpha|}, h(1^n, \alpha))$, and outputting whatever A does. That is, on input $(1^n, 1^{|\alpha|}, h(1^n, \alpha))$, algorithm A' proceeds as follows:

1. A' invokes the key-generator G (on input $1^n$), and obtains an encryption-key $e \leftarrow G_1(1^n)$.

2. A' invokes the encryption algorithm with key e and (dummy) plaintext $1^{|\alpha|}$, obtaining a ciphertext $\beta \leftarrow E_e(1^{|\alpha|})$.

3. A' invokes A on input $(1^n, \beta, 1^{|\alpha|}, h(1^n, \alpha))$, and outputs whatever A does.

Observe that A' is described in terms of an oracle machine that makes a single oracle call to (any given) A, in addition to invoking the fixed algorithms G and E. Furthermore, the construction of A' depends neither on the functions h and f nor on the distribution of plaintexts to be encrypted (represented by the probability ensemble $\{X_n\}_{n\in\mathbb{N}}$). Thus, A' is probabilistic polynomial-time whenever A is probabilistic polynomial-time (and regardless of the complexity of h, f, and $\{X_n\}_{n\in\mathbb{N}}$).
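As an illustration of the oracle-machine flavor of this construction, here is a minimal sketch (not the book's code); `keygen`, `encrypt`, and `adversary` are assumed interfaces standing for G, E, and the given algorithm A, and unary inputs are represented by plain integers.

```python
from typing import Callable

# A minimal sketch of A' = M^A (assumed interfaces, not the book's code).
#   keygen(n)              -- stands for G, returning an encryption-key e = G_1(1^n)
#   encrypt(e, m)          -- stands for the randomized E_e(m)
#   adversary(n, c, l, hv) -- stands for A on input (1^n, c, 1^l, hv)

def make_A_prime(keygen: Callable, encrypt: Callable, adversary: Callable) -> Callable:
    def A_prime(n: int, plaintext_length: int, h_value: bytes):
        e = keygen(n)                                    # step 1: obtain e <- G_1(1^n)
        dummy = encrypt(e, b"\x01" * plaintext_length)   # step 2: encrypt the dummy plaintext 1^{|alpha|}
        return adversary(n, dummy, plaintext_length, h_value)  # step 3: output whatever A outputs
    return A_prime
```

Note that the wrapper invokes A exactly once and never inspects h, f, or the plaintext distribution, which is why A' is probabilistic polynomial-time whenever A is.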

Indistinguishability of encryptions will be used to prove that A' performs essentially as well as A. Specifically, the proof will use a reducibility argument.

Claim 5.2.6.1: Let A' be as in the preceding construction. Then, for every $\{X_n\}_{n\in\mathbb{N}}$, f, h, and p as in Definition 5.2.1, and all sufficiently large n's,

$$\Pr\left[A(1^n, E_{G_1(1^n)}(X_n), 1^{|X_n|}, h(1^n, X_n)) = f(1^n, X_n)\right] < \Pr\left[A'(1^n, 1^{|X_n|}, h(1^n, X_n)) = f(1^n, X_n)\right] + \frac{1}{p(n)}$$

Proof: To simplify the notation, let us incorporate $1^{|\alpha|}$ into $h_n(\alpha) \stackrel{\mathrm{def}}{=} h(1^n, \alpha)$, and let $f_n(\alpha) \stackrel{\mathrm{def}}{=} f(1^n, \alpha)$. Also, we omit $1^n$ from the inputs given to A, shorthanding $A(1^n, c, v)$ by $A(c, v)$. Using the definition of A', we rewrite the claim as asserting

$$\Pr\left[A(E_{G_1(1^n)}(X_n), h_n(X_n)) = f_n(X_n)\right] < \Pr\left[A(E_{G_1(1^n)}(1^{|X_n|}), h_n(X_n)) = f_n(X_n)\right] + \frac{1}{p(n)} \tag{5.1}$$

Intuitively, Eq. (5.1) follows from the indistinguishability of encryptions. Otherwise, by fixing a violating value of $X_n$ and hardwiring the corresponding values of $h_n(X_n)$ and $f_n(X_n)$, we get a small circuit that distinguishes an encryption of this value of $X_n$ from an encryption of $1^{|X_n|}$. Details follow.


Assume toward the contradiction that for some polynomial p and infinitely many n's Eq. (5.1) is violated. Then, for each such n, we have $\mathrm{E}[\Delta_n(X_n)] > 1/p(n)$, where

$$\Delta_n(x) \stackrel{\mathrm{def}}{=} \Pr\left[A(E_{G_1(1^n)}(x), h_n(x)) = f_n(x)\right] - \Pr\left[A(E_{G_1(1^n)}(1^{|x|}), h_n(x)) = f_n(x)\right]$$

We use an averaging argument to single out a string $x_n$ in the support of $X_n$ such that $\Delta_n(x_n) \geq \mathrm{E}[\Delta_n(X_n)]$: That is, let $x_n \in \{0,1\}^{\mathrm{poly}(n)}$ be a string for which the value of $\Delta_n(\cdot)$ is maximal, and so $\Delta_n(x_n) > 1/p(n)$. Using this $x_n$, we introduce a circuit $C_n$, which incorporates the fixed values $f_n(x_n)$ and $h_n(x_n)$ and distinguishes the encryption of $x_n$ from the encryption of $1^{|x_n|}$. The circuit $C_n$ operates as follows. On input $\alpha = E(\beta)$, the circuit $C_n$ invokes $A(\alpha, h_n(x_n))$ and outputs 1 if and only if A outputs the value $f_n(x_n)$. Otherwise, $C_n$ outputs 0.
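In code, the reduction amounts to hardwiring $h_n(x_n)$ and $f_n(x_n)$ into a wrapper around A; the following is a minimal sketch under assumed interfaces (not the book's code):

```python
from typing import Callable

# Sketch of the circuit C_n (assumed interfaces): `adversary` stands for A at
# security parameter n, taking (ciphertext, h-value) and returning a guess;
# hardwired_h and hardwired_f stand for the hardwired strings h_n(x_n) and f_n(x_n).

def make_Cn(adversary: Callable[[bytes, bytes], bytes],
            hardwired_h: bytes, hardwired_f: bytes) -> Callable[[bytes], int]:
    def Cn(ciphertext: bytes) -> int:
        # Output 1 iff A, fed the hardwired partial information, outputs f_n(x_n).
        return 1 if adversary(ciphertext, hardwired_h) == hardwired_f else 0
    return Cn
```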

This circuit is indeed of polynomial size because it merely incorporates strings of polynomial length (i.e., $f_n(x_n)$ and $h_n(x_n)$) and emulates a polynomial-time computation (i.e., that of A). (Note that the circuit family $\{C_n\}$ is indeed non-uniform, since its definition is based on a non-uniform selection of $x_n$'s as well as on a hardwiring of (possibly uncomputable) corresponding strings $h_n(x_n)$ and $f_n(x_n)$.) Clearly, for every string $\beta$,

$$\Pr\left[C_n(E_{G_1(1^n)}(\beta)) = 1\right] = \Pr\left[A(E_{G_1(1^n)}(\beta), h_n(x_n)) = f_n(x_n)\right] \tag{5.2}$$

Combining Eq. (5.2) with the definition of $\Delta_n(x_n)$, we get

$$\Pr\left[C_n(E_{G_1(1^n)}(x_n)) = 1\right] - \Pr\left[C_n(E_{G_1(1^n)}(1^{|x_n|})) = 1\right] = \Delta_n(x_n) > \frac{1}{p(n)}$$

This contradicts our hypothesis that (G, E, D) has indistinguishable encryptions, and the claim follows.

We have just shown that A' performs essentially as well as A, and so Proposition 5.2.6 follows.

Comments. The fact that we deal with a non-uniform model of computation allows the preceding proof to proceed regardless of the complexity of f and h. All that our definition of $C_n$ requires is the hardwiring of the values of f and h on a single string, and this can be done regardless of the complexity of f and h (provided that $|f_n(x_n)|, |h_n(x_n)| \leq \mathrm{poly}(n)$).

When proving the public-key analogue of Proposition 5.2.6, algorithm A' is defined exactly as in the present proof, but its analysis is slightly different: The distinguishing circuit, considered in the analysis of the performance of A', obtains the encryption-key as part of its input and passes it to algorithm A (upon invoking the latter).

    5.2.3.2. Proof of Proposition 5.2.7

Intuitively, indistinguishability of encryptions (i.e., of the encryptions of $x_n$ and $y_n$) is a special case of semantic security in which f indicates one of the plaintexts and h does not distinguish them (i.e., $f(1^n, z) = 1$ iff $z = x_n$, and $h(1^n, x_n) = h(1^n, y_n)$). The only issue to be addressed by the actual proof is that semantic security refers to uniform (probabilistic polynomial-time) adversaries, whereas indistinguishability of encryptions refers to non-uniform polynomial-size circuits. This gap is bridged by using the function h to provide the algorithms in the semantic-security formulation with adequate non-uniform advice (which may be used by the machine in the indistinguishability-of-encryptions formulation).

The actual proof is by a reducibility argument. We show that if (G, E, D) has distinguishable encryptions, then it is not semantically secure (not even in the restricted sense mentioned in the furthermore-clause of the proposition). Toward this end, we assume that there exist a (positive) polynomial p and a polynomial-size circuit family $\{C_n\}$ such that for infinitely many n's there exist $x_n, y_n \in \{0,1\}^{\mathrm{poly}(n)}$ so that

$$\left|\,\Pr\left[C_n(E_{G_1(1^n)}(x_n)) = 1\right] - \Pr\left[C_n(E_{G_1(1^n)}(y_n)) = 1\right]\,\right| > \frac{1}{p(n)} \tag{5.3}$$

Using these sequences of $C_n$'s, $x_n$'s, and $y_n$'s, we define $\{X_n\}_{n\in\mathbb{N}}$, f, and h (referred to in Definition 5.2.1) as follows:

The probability ensemble $\{X_n\}_{n\in\mathbb{N}}$ is defined such that $X_n$ is uniformly distributed over $\{x_n, y_n\}$.

The (Boolean) function f is defined such that $f(1^n, x_n) = 1$ and $f(1^n, y_n) = 0$, for every n. Note that $f(1^n, X_n) = 1$ with probability 1/2 and equals 0 otherwise.

The function h is defined such that $h(1^n, X_n)$ equals the description of the circuit $C_n$. Note that $h(1^n, X_n) = C_n$ with probability 1, and thus $h(1^n, X_n)$ reveals no information on the value of $X_n$.

Note that $X_n$, f, and h satisfy the restrictions stated in the furthermore-clause of the proposition. Intuitively, Eq. (5.3) implies a violation of semantic security with respect to these $X_n$, h, and f. Indeed, we will present a (deterministic) polynomial-time algorithm A that, given $C_n = h(1^n, X_n)$, guesses the value of $f(1^n, X_n)$ from the encryption of $X_n$, and does so with probability non-negligibly greater than 1/2. This violates (even the restricted form of) semantic security, because no algorithm, regardless of its complexity, can guess $f(1^n, X_n)$ with probability greater than 1/2 when given only $1^{|X_n|}$ and $h(1^n, X_n)$ (because, given the constant values $1^{|X_n|}$ and $h(1^n, X_n)$, the value of $f(1^n, X_n)$ is uniformly distributed over $\{0,1\}$). Details follow.

Let us assume, without loss of generality, that for infinitely many n's

$$\Pr\left[C_n(E_{G_1(1^n)}(x_n)) = 1\right] > \Pr\left[C_n(E_{G_1(1^n)}(y_n)) = 1\right] + \frac{1}{p(n)} \tag{5.4}$$

Claim 5.2.7.1: There exists a (deterministic) polynomial-time algorithm A such that for infinitely many n's

$$\Pr\left[A(1^n, E_{G_1(1^n)}(X_n), 1^{|X_n|}, h(1^n, X_n)) = f(1^n, X_n)\right] > \frac{1}{2} + \frac{1}{2p(n)}$$

Proof: The desired algorithm A merely uses $C_n = h(1^n, X_n)$ to distinguish $E(x_n)$ from $E(y_n)$, and thus given $E(X_n)$ it produces a guess for the value of $f(1^n, X_n)$. Specifically, on input $\alpha = E(\beta)$ (where $\beta$ is in the support of $X_n$) and $(1^n, 1^{|\beta|}, h(1^n, \beta))$, algorithm A recovers $C_n = h(1^n, \beta)$, invokes $C_n$ on input $\alpha$, and outputs 1 if $C_n$ outputs 1 (otherwise, A outputs 0).3

It is left to analyze the success probability of A. Letting $m = |x_n| = |y_n|$, $h_n(\cdot) \stackrel{\mathrm{def}}{=} h(1^n, \cdot)$, and $f_n(\cdot) \stackrel{\mathrm{def}}{=} f(1^n, \cdot)$, we have

$$\begin{aligned}
\Pr&\left[A(1^n, E_{G_1(1^n)}(X_n), 1^{|X_n|}, h_n(X_n)) = f_n(X_n)\right] \\
&= \tfrac{1}{2}\cdot \Pr\left[A(1^n, E_{G_1(1^n)}(X_n), 1^{|X_n|}, h_n(X_n)) = f_n(X_n) \,\middle|\, X_n = x_n\right] \\
&\quad + \tfrac{1}{2}\cdot \Pr\left[A(1^n, E_{G_1(1^n)}(X_n), 1^{|X_n|}, h_n(X_n)) = f_n(X_n) \,\middle|\, X_n = y_n\right] \\
&= \tfrac{1}{2}\cdot \Pr\left[A(1^n, E_{G_1(1^n)}(x_n), 1^{|x_n|}, C_n) = 1\right] + \tfrac{1}{2}\cdot \Pr\left[A(1^n, E_{G_1(1^n)}(y_n), 1^{|y_n|}, C_n) = 0\right] \\
&= \tfrac{1}{2}\cdot \left(\Pr\left[C_n(E_{G_1(1^n)}(x_n)) = 1\right] + 1 - \Pr\left[C_n(E_{G_1(1^n)}(y_n)) = 1\right]\right) \\
&> \tfrac{1}{2} + \tfrac{1}{2p(n)}
\end{aligned}$$

where the inequality is due to Eq. (5.4).

In contrast, as aforementioned, no algorithm (regardless of its complexity) can guess $f(1^n, X_n)$ with success probability above 1/2 when given only $1^{|X_n|}$ and $h(1^n, X_n)$. That is, we have the following:

Fact 5.2.7.2: For every n and every algorithm A'

$$\Pr\left[A'(1^n, 1^{|X_n|}, h(1^n, X_n)) = f(1^n, X_n)\right] \leq \frac{1}{2} \tag{5.5}$$

Proof: Just observe that the output of A', on its constant input values $1^n$, $1^{|X_n|}$, and $h(1^n, X_n)$, is stochastically independent of the random variable $f(1^n, X_n)$, which in turn is uniformly distributed in $\{0,1\}$. Eq. (5.5) follows (and equality holds in case A' always outputs a value in $\{0,1\}$).

    Combining Claim 5.2.7.1 and Fact 5.2.7.2, we reach a contradiction to the hypothesisthat the scheme is semantically secure (even in the restricted sense mentioned in thefurthermore-clause of the proposition). Thus, the proposition follows.

Comment. When proving the public-key analogue of Proposition 5.2.7, algorithm A is defined as in the current proof, except that it passes the encryption-key, given to it as part of its input, to the circuit $C_n$. The rest of the proof remains intact.

3 We comment that the value 1 output by $C_n$ is an indication that $\beta$ is more likely to be $x_n$, whereas the output of A is a guess of $f(1^n, \beta)$. This point may be better stressed by redefining f such that $f(1^n, x_n) \stackrel{\mathrm{def}}{=} x_n$ and $f(1^n, x) \stackrel{\mathrm{def}}{=} y_n$ if $x \neq x_n$, and having A output $x_n$ if $C_n$ outputs 1 and output $y_n$ otherwise.


    5.2.4. Multiple Messages

Definitions 5.2.1–5.2.4 refer only to the security of an encryption scheme that is used to encrypt a single plaintext (per generated key). Since the plaintext may be longer than the key, these definitions are already non-trivial, and an encryption scheme satisfying them (even in the private-key model) implies the existence of one-way functions (see Exercise 2). Still, in many cases, it is desirable to encrypt many plaintexts using the same encryption-key. Loosely speaking, an encryption scheme is secure in the multiple-message setting if analogous definitions (to Definitions 5.2.1–5.2.4) also hold when polynomially many plaintexts are encrypted using the same encryption-key.

We show that in the public-key model, security in the single-message setting (discussed earlier) implies security in the multiple-message setting (defined in Section 5.2.4.1). We stress that this is not necessarily true for the private-key model.

5.2.4.1. Definitions

For a sequence of strings $\bar{x} = (x^{(1)}, \ldots, x^{(t)})$, we let $\bar{E}_e(\bar{x})$ denote the sequence of the t results that are obtained by applying the randomized process $E_e$ to the t strings $x^{(1)}, \ldots, x^{(t)}$, respectively. That is, $\bar{E}_e(\bar{x}) = (E_e(x^{(1)}), \ldots, E_e(x^{(t)}))$. We stress that in each of these t invocations, the randomized process $E_e$ utilizes independently chosen random coins. For the sake of simplicity, we consider the encryption of (polynomially) many plaintexts of the same (polynomial) length (rather than the encryption of plaintexts of various lengths, as discussed in Exercise 20). The number of plaintexts as well as their total length (in unary) are given to all algorithms either implicitly or explicitly.4
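As a small illustration of this convention (a sketch, not from the book; `encrypt` stands for a hypothetical randomized $E_e$), encrypting a sequence simply means encrypting each entry with fresh, independent coins:

```python
from typing import Callable, Sequence, Tuple

def encrypt_sequence(encrypt: Callable[[bytes, bytes], bytes],
                     e: bytes, plaintexts: Sequence[bytes]) -> Tuple[bytes, ...]:
    # Each call to `encrypt` is assumed to draw its own fresh random coins,
    # mirroring the independent invocations of E_e in the convention above.
    return tuple(encrypt(e, x) for x in plaintexts)
```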

Definition 5.2.8 (semantic security – multiple messages):

For private-key: An encryption scheme, (G, E, D), is semantically secure for multiple messages in the private-key model if for every probabilistic polynomial-time algorithm A, there exists a probabilistic polynomial-time algorithm A' such that for every probability ensemble $\{\bar{X}_n = (X_n^{(1)}, \ldots, X_n^{(t(n))})\}_{n\in\mathbb{N}}$, with $|X_n^{(1)}| = \cdots = |X_n^{(t(n))}| \leq \mathrm{poly}(n)$ and $t(n) \leq \mathrm{poly}(n)$, every pair of polynomially bounded functions $f, h: \{0,1\}^* \to \{0,1\}^*$, every positive polynomial p, and all sufficiently large n

$$\Pr\left[A(1^n, \bar{E}_{G_1(1^n)}(\bar{X}_n), 1^{|\bar{X}_n|}, h(1^n, \bar{X}_n)) = f(1^n, \bar{X}_n)\right] < \Pr\left[A'(1^n, t(n), 1^{|\bar{X}_n|}, h(1^n, \bar{X}_n)) = f(1^n, \bar{X}_n)\right] + \frac{1}{p(n)}$$

For public-key: An encryption scheme, (G, E, D), is semantically secure for multiple messages in the public-key model if for A, A', t, $\{\bar{X}_n\}_{n\in\mathbb{N}}$, f, h, p, and n, as in the private-key case, it holds that

$$\Pr\left[A(1^n, G_1(1^n), \bar{E}_{G_1(1^n)}(\bar{X}_n), 1^{|\bar{X}_n|}, h(1^n, \bar{X}_n)) = f(1^n, \bar{X}_n)\right] < \Pr\left[A'(1^n, t(n), 1^{|\bar{X}_n|}, h(1^n, \bar{X}_n)) = f(1^n, \bar{X}_n)\right] + \frac{1}{p(n)}$$

(The probability in these terms is taken over $\bar{X}_n$ as well as over the internal coin tosses of the relevant algorithms.)

4 For example, A can infer the number of plaintexts from the number of ciphertexts, whereas A' is given this number explicitly. Given the number of the plaintexts as well as their total length, both algorithms can infer the length of each plaintext.

We stress that the elements of $\bar{X}_n$ are not necessarily independent; they may depend on one another. Note that this definition also covers the case where the adversary obtains some of the plaintexts themselves. In this case it is still infeasible for him/her to obtain information about the missing plaintexts (see Exercise 22).

Definition 5.2.9 (indistinguishability of encryptions – multiple messages):

For private-key: An encryption scheme, (G, E, D), has indistinguishable encryptions for multiple messages in the private-key model if for every polynomial-size circuit family $\{C_n\}$, every positive polynomial p, all sufficiently large n, and every $x_1, \ldots, x_{t(n)}, y_1, \ldots, y_{t(n)} \in \{0,1\}^{\mathrm{poly}(n)}$, with $t(n) \leq \mathrm{poly}(n)$, it holds that

$$\left|\,\Pr\left[C_n(\bar{E}_{G_1(1^n)}(\bar{x})) = 1\right] - \Pr\left[C_n(\bar{E}_{G_1(1^n)}(\bar{y})) = 1\right]\,\right| < \frac{1}{p(n)}$$

where $\bar{x} = (x_1, \ldots, x_{t(n)})$ and $\bar{y} = (y_1, \ldots, y_{t(n)})$.

For public-key: An encryption scheme, (G, E, D), has indistinguishable encryptions for multiple messages in the public-key model if for t, $\{C_n\}$, p, n, and $x_1, \ldots, x_{t(n)}, y_1, \ldots, y_{t(n)}$ as in the private-key case

$$\left|\,\Pr\left[C_n(G_1(1^n), \bar{E}_{G_1(1^n)}(\bar{x})) = 1\right] - \Pr\left[C_n(G_1(1^n), \bar{E}_{G_1(1^n)}(\bar{y})) = 1\right]\,\right| < \frac{1}{p(n)}$$

The equivalence of Definitions 5.2.8 and 5.2.9 can be established analogously to the proof of Theorem 5.2.5.

Theorem 5.2.10 (equivalence of definitions – multiple messages): A private-key (resp., public-key) encryption scheme is semantically secure for multiple messages if and only if it has indistinguishable encryptions for multiple messages.

Thus, proving that single-message security implies multiple-message security for one definition of security yields the same for the other. We may thus concentrate on the ciphertext-indistinguishability definitions.

    5.2.4.2. The Effect on the Public-Key Model

We first consider public-key encryption schemes.


Theorem 5.2.11 (single-message security implies multiple-message security): A public-key encryption scheme has indistinguishable encryptions for multiple messages (i.e., satisfies Definition 5.2.9 in the public-key model) if and only if it has indistinguishable encryptions for a single message (i.e., satisfies Definition 5.2.4).

Proof: Clearly, multiple-message security implies single-message security as a special case. The other direction follows by adapting the proof of Theorem 3.2.6 to the current setting.

Suppose, toward the contradiction, that there exist a polynomial t, a polynomial-size circuit family $\{C_n\}$, and a polynomial p, such that for infinitely many n's there exist $x_1, \ldots, x_{t(n)}, y_1, \ldots, y_{t(n)} \in \{0,1\}^{\mathrm{poly}(n)}$ so that

$$\left|\,\Pr\left[C_n(G_1(1^n), \bar{E}_{G_1(1^n)}(\bar{x})) = 1\right] - \Pr\left[C_n(G_1(1^n), \bar{E}_{G_1(1^n)}(\bar{y})) = 1\right]\,\right| > \frac{1}{p(n)}$$

where $\bar{x} = (x_1, \ldots, x_{t(n)})$ and $\bar{y} = (y_1, \ldots, y_{t(n)})$. Let us consider such a generic n and the corresponding sequences $x_1, \ldots, x_{t(n)}$ and $y_1, \ldots, y_{t(n)}$. We use a hybrid argument. Specifically, define

$$\bar{h}^{(i)} \stackrel{\mathrm{def}}{=} (x_1, \ldots, x_i, y_{i+1}, \ldots, y_{t(n)}) \quad\text{and}\quad H_n^{(i)} \stackrel{\mathrm{def}}{=} \left(G_1(1^n), \bar{E}_{G_1(1^n)}(\bar{h}^{(i)})\right)$$

Since $H_n^{(0)} = (G_1(1^n), \bar{E}_{G_1(1^n)}(\bar{y}))$ and $H_n^{(t(n))} = (G_1(1^n), \bar{E}_{G_1(1^n)}(\bar{x}))$, it follows that there exists an $i \in \{0, \ldots, t(n) - 1\}$ so that

$$\left|\,\Pr\left[C_n(H_n^{(i)}) = 1\right] - \Pr\left[C_n(H_n^{(i+1)}) = 1\right]\,\right| > \frac{1}{t(n) \cdot p(n)} \tag{5.6}$$

We show that Eq. (5.6) yields a polynomial-size circuit that distinguishes the encryption of $x_{i+1}$ from the encryption of $y_{i+1}$, and thus derive a contradiction to security in the single-message setting. Specifically, we construct a circuit $D_n$ that incorporates the circuit $C_n$ as well as the index i and the strings $x_1, \ldots, x_{i+1}, y_{i+1}, \ldots, y_{t(n)}$. On input an encryption-key e and (corresponding) ciphertext $\alpha$, the circuit $D_n$ operates as follows:

For every $j \leq i$, the circuit $D_n$ generates an encryption of $x_j$ using the encryption-key e. Similarly, for every $j \geq i + 2$, the circuit $D_n$ generates an encryption of $y_j$ using the encryption-key e. Let us denote the resulting ciphertexts by $\alpha_1, \ldots, \alpha_i, \alpha_{i+2}, \ldots, \alpha_{t(n)}$. That is, $\alpha_j \leftarrow E_e(x_j)$ for $j \leq i$ and $\alpha_j \leftarrow E_e(y_j)$ for $j \geq i + 2$.

Finally, $D_n$ invokes $C_n$ on input the encryption-key e and the sequence of ciphertexts $\alpha_1, \ldots, \alpha_i, \alpha, \alpha_{i+2}, \ldots, \alpha_{t(n)}$, and outputs whatever $C_n$ does.

We stress that the construction of $D_n$ relies in an essential way on the fact that the encryption-key is given to $D_n$ as input.
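A sketch of $D_n$ in code (assumed interfaces, not the book's code): `encrypt` stands for the randomized $E_e$, `Cn` for the given multi-message distinguisher, and the index i together with the two plaintext sequences is hardwired.

```python
from typing import Callable, Sequence

# Sketch of the single-message distinguisher D_n built from the multi-message
# distinguisher C_n (assumed interfaces). xs and ys are the hardwired plaintext
# sequences x_1..x_t and y_1..y_t, and i is the hardwired hybrid index.

def make_Dn(encrypt: Callable[[bytes, bytes], bytes],
            Cn: Callable[[bytes, Sequence[bytes]], int],
            xs: Sequence[bytes], ys: Sequence[bytes], i: int) -> Callable[[bytes, bytes], int]:
    def Dn(e: bytes, alpha: bytes) -> int:
        # Positions 1..i carry fresh encryptions of x_j, position i+1 carries the
        # challenge ciphertext alpha, and positions i+2..t carry fresh encryptions
        # of y_j. This is possible only because the encryption-key e is public.
        ciphertexts = [encrypt(e, xs[j]) for j in range(i)]                # x_1, ..., x_i
        ciphertexts.append(alpha)                                          # challenge in slot i+1
        ciphertexts += [encrypt(e, ys[j]) for j in range(i + 1, len(ys))]  # y_{i+2}, ..., y_t
        return Cn(e, ciphertexts)
    return Dn
```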

We now turn to the analysis of the circuit $D_n$. Suppose that $\alpha$ is a (random) encryption of $x_{i+1}$ with (random) key e; that is, $\alpha = E_e(x_{i+1})$. Then, $D_n(e, \alpha) \equiv C_n(e, \bar{E}_e(\bar{h}^{(i+1)})) = C_n(H_n^{(i+1)})$, where $X \equiv Y$ means that the random variables X and Y are identically distributed. Similarly, for $\alpha = E_e(y_{i+1})$, we have $D_n(e, \alpha) \equiv C_n(e, \bar{E}_e(\bar{h}^{(i)})) = C_n(H_n^{(i)})$. Thus, by Eq. (5.6), we have

$$\left|\,\Pr\left[D_n(G_1(1^n), E_{G_1(1^n)}(y_{i+1})) = 1\right] - \Pr\left[D_n(G_1(1^n), E_{G_1(1^n)}(x_{i+1})) = 1\right]\,\right| > \frac{1}{t(n) \cdot p(n)}$$

in contradiction to our hypothesis that (G, E, D) is a ciphertext-indistinguishable public-key encryption scheme (in the single-message sense). The theorem follows.

Discussion. The fact that we are in the public-key model is essential to this proof. It allows the circuit $D_n$ to form encryptions relative to the same encryption-key used in the ciphertext given to it. In fact, as previously stated (and proven next), the analogous result does not hold in the private-key model.

    5.2.4.3. The Effect on the Private-Key Model

In contrast to Theorem 5.2.11, in the private-key model, ciphertext-indistinguishability for a single message does not necessarily imply ciphertext-indistinguishability for multiple messages.

Proposition 5.2.12: Suppose that there exist pseudorandom generators (robust against polynomial-size circuits). Then, there exists a private-key encryption scheme that satisfies Definition 5.2.3 but does not satisfy Definition 5.2.9.

Proof: We start with the construction of the desired private-key encryption scheme. The encryption/decryption key for security parameter n is a uniformly distributed n-bit-long string, denoted s. To encrypt a plaintext x, the encryption algorithm uses the key s as a seed for a (variable-output) pseudorandom generator, denoted g, that stretches seeds of length n into sequences of length |x|. The ciphertext is obtained by a bit-by-bit exclusive-or of x and g(s). Decryption is done in an analogous manner.
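A runnable sketch of this construction (not the book's code): SHAKE-256 is used here only as a convenient variable-output stand-in for the assumed pseudorandom generator g, and the key length is measured in bytes rather than bits.

```python
import hashlib
import os

# Sketch of the scheme above. SHAKE-256 serves merely as a variable-output
# stand-in for the assumed pseudorandom generator g; the argument in the text
# only requires that g be a pseudorandom generator.

def keygen(n: int) -> bytes:
    return os.urandom(n)                             # uniformly distributed n-byte seed s

def prg(seed: bytes, length: int) -> bytes:
    return hashlib.shake_256(seed).digest(length)    # g(s), stretched to |x| bytes

def encrypt(s: bytes, x: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(x, prg(s, len(x))))   # x XOR g(s)

def decrypt(s: bytes, c: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(c, prg(s, len(c))))   # same operation undoes it

if __name__ == "__main__":
    s = keygen(16)
    c = encrypt(s, b"single message is fine")
    assert decrypt(s, c) == b"single message is fine"
```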

We first show that this encryption scheme satisfies Definition 5.2.3. Intuitively, this follows from the hypothesis that g is a pseudorandom generator and the fact that $x \oplus U_{|x|}$ is uniformly distributed over $\{0,1\}^{|x|}$. Specifically, suppose toward the contradiction that for some polynomial-size circuit family $\{C_n\}$, a polynomial p, and infinitely many n's

$$\left|\,\Pr[C_n(x \oplus g(U_n)) = 1] - \Pr[C_n(y \oplus g(U_n)) = 1]\,\right| > \frac{1}{p(n)}$$

where $U_n$ is uniformly distributed over $\{0,1\}^n$ and $|x| = |y| = m = \mathrm{poly}(n)$. On the other hand,

$$\Pr[C_n(x \oplus U_m) = 1] = \Pr[C_n(y \oplus U_m) = 1]$$


Thus, without loss of generality,

$$\left|\,\Pr[C_n(x \oplus g(U_n)) = 1] - \Pr[C_n(x \oplus U_m) = 1]\,\right| > \frac{1}{2p(n)}$$

Incorporating x into the circuit $C_n$, we obtain a circuit that distinguishes $U_m$ from $g(U_n)$, in contradiction to our hypothesis (regarding the pseudorandomness of g).

Next, we observe that this encryption scheme does not satisfy Definition 5.2.9. Specifically, given the ciphertexts of two plaintexts, one may easily retrieve the exclusive-or of the corresponding plaintexts. That is,

$$E_s(x_1) \oplus E_s(x_2) = (x_1 \oplus g(s)) \oplus (x_2 \oplus g(s)) = x_1 \oplus x_2$$

This clearly violates Definition 5.2.8 (e.g., consider $f(x_1, x_2) = x_1 \oplus x_2$) as well as Definition 5.2.9 (e.g., consider any $\bar{x} = (x_1, x_2)$ and $\bar{y} = (y_1, y_2)$ such that $x_1 \oplus x_2 \neq y_1 \oplus y_2$). Viewed in a different way, note that any plaintext-ciphertext pair yields a corresponding prefix of the pseudorandom sequence, and knowledge of this prefix violates the security of additional plaintexts. That is, given the encryption of a known plaintext $x_1$ along with the encryption of an unknown plaintext $x_2$, we can retrieve $x_2$.5
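The attack can be demonstrated in a few lines (a self-contained sketch; the pad is drawn with `os.urandom`, since the attack depends only on the same pad being reused, not on how it was produced):

```python
import os

# Self-contained sketch of the two-message attack: the same pad g(s) is reused,
# so XORing two ciphertexts cancels it, and a known plaintext reveals the other.

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

pad = os.urandom(16)                 # stands for g(s), fixed by the shared key s
x1, x2 = b"known plaintext!", b"secret plaintext"
c1, c2 = xor(x1, pad), xor(x2, pad)  # the two ciphertexts under the same key

assert xor(c1, c2) == xor(x1, x2)    # the ciphertexts leak x1 XOR x2
recovered_pad = xor(c1, x1)          # footnote 5: the pad is c1 XOR x1
assert xor(c2, recovered_pad) == x2  # ...and then x2 is fully recovered
```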

Discussion. The single-message security of the scheme used in the proof of Proposition 5.2.12 was proven by considering an ideal version of the scheme in which the pseudorandom sequence is replaced by a truly random sequence. The latter scheme is secure in an information-theoretic sense, and the security of the actual scheme followed from the indistinguishability of the two sequences. As we show in Section 5.3.1, this construction can be modified to yield a private-key stream-cipher that is secure for multiple message encryptions. All that is needed in order to obtain multiple-message security is to make sure that (as opposed to this construction) the same portion of the pseudorandom sequence is never used twice.

An Alternative Proof of Proposition 5.2.12. Given an arbitrary private-key encryption scheme (G, E, D), consider the following private-key encryption scheme (G', E', D'):

$G'(1^n) = ((k, r), (k, r))$, where $(k, k) \leftarrow G(1^n)$ and r is uniformly selected in $\{0,1\}^{|k|}$;

$E'_{(k,r)}(x) = (E_k(x), k \oplus r)$ with probability 1/2 and $E'_{(k,r)}(x) = (E_k(x), r)$ otherwise;

and $D'_{(k,r)}(y, z) = D_k(y)$.

If (G, E, D) is secure, then so is (G', E', D') (with respect to a single message); however, (G', E', D') is not secure with respect to two messages. For further discussion see Exercise 21.
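A sketch of this construction and of the resulting two-message leak (not the book's code; the base encryption is replaced by a do-nothing placeholder, since the leak lives entirely in the second component of the ciphertext):

```python
import os
import random

# Sketch of (G', E', D') around an abstracted base scheme. The base encryption
# below is a placeholder: it plays no role in the leak being illustrated.

def base_encrypt(k: bytes, x: bytes) -> bytes:
    return x   # placeholder for E_k(x)

def keygen_prime(n: int):
    k, r = os.urandom(n), os.urandom(n)     # G'(1^n) outputs the key (k, r)
    return k, r

def encrypt_prime(key, x: bytes):
    k, r = key
    # Second component: k XOR r with probability 1/2, and r otherwise.
    second = bytes(a ^ b for a, b in zip(k, r)) if random.random() < 0.5 else r
    return base_encrypt(k, x), second

# Given two ciphertexts under the same key, XORing their second components
# yields k whenever one carried k XOR r and the other carried r, i.e., with
# probability 1/2 over the encryption coins -- enough to violate two-message security.
if __name__ == "__main__":
    key = keygen_prime(16)
    trials = 10_000
    hits = sum(
        bytes(a ^ b for a, b in zip(encrypt_prime(key, b"m1")[1],
                                    encrypt_prime(key, b"m2")[1])) == key[0]
        for _ in range(trials)
    )
    print("fraction of ciphertext pairs revealing k:", hits / trials)   # about 0.5
```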

5 On input the ciphertexts $c_1$ and $c_2$, knowing that the first plaintext is $x_1$, we first retrieve the pseudorandom sequence (i.e., it is just $r \stackrel{\mathrm{def}}{=} c_1 \oplus x_1$), and next retrieve the second plaintext (i.e., by computing $c_2 \oplus r$).


    5.2.5.* A Uniform-Complexity Treatment

As stated at the beginning of this section, the non-uniform complexity formulation was adopted in this chapter for the sake of simplicity. In contrast, in this subsection, we outline an alternative definitional treatment of security based on a uniform (rather than a non-uniform) complexity formulation. We stress that by uniform or non-uniform complexity treatment of cryptographic primitives, we refer merely to the modeling of the adversary. The honest (legitimate) parties are always modeled by uniform complexity classes (most commonly probabilistic polynomial-time).

The notion of efficiently constructible probability ensembles, defined in Section 3.2.3 of Volume 1, is central to the uniform-complexity treatment. Recall that an ensemble, $X = \{X_n\}_{n\in\mathbb{N}}$, is said to be polynomial-time constructible if there exists a proba