TOPICS IN ALGORITHMIC RANDOMNESS AND EFFECTIVE PROBABILITY

A Dissertation

Submitted to the Graduate School

of the University of Notre Dame

in Partial Fulfillment of the Requirements

for the Degree of

Doctor of Philosophy

by

Quinn Culver

Peter Cholak, Director

Graduate Program in Mathematics

Notre Dame, Indiana

April 2015


This document is in the public domain.


TOPICS IN ALGORITHMIC RANDOMNESS AND EFFECTIVE PROBABILITY

Abstract

by

Quinn Culver

This dissertation contains the results from three related projects, each within the fields of algorithmic randomness and probability theory.

The first project, which can be found in Chapter 2, contains the definition of a natural, computable Borel probability measure on the space of Borel probability measures over 2^ω that allows us to study algorithmically random measures. The main results are as follows. Every (algorithmically) random measure is atomless yet mutually singular with respect to the Lebesgue measure. The random reals of a random measure are random for the Lebesgue measure, and every random real for the Lebesgue measure is random for some random measure. However, for a fixed Lebesgue-random real, the set of random measures for which that real is random is small. Relatively random measures, though mutually singular, always share a random real that is in fact computable from the join of the measures. Random measures fail Kolmogorov's 0-1 law. The shift of a random real for a random measure is no longer random for that measure.

In our second project, which makes up Chapter 3, we study algorithmically random closed subsets of 2^ω, algorithmically random continuous functions from 2^ω to 2^ω, and the algorithmically random Borel probability measures on 2^ω from Chapter 2, with particular attention to the interplay among these three classes of objects. Our main tools are preservation of randomness and its converse, the "no randomness ex nihilo principle," which together say that given an almost-everywhere defined computable map from 2^ω to itself, a real is Martin-Löf random for the pushforward measure if and only if its preimage is random with respect to the measure on the domain. These tools allow us to prove new facts, some of which answer previously open questions, and to reprove some known results more simply.

The main results of Chapter 3 are the following. We answer an open question from [3] by showing that X ⊆ 2^ω is a random closed set if and only if it is the set of zeros of a random continuous function on 2^ω. As a corollary, we obtain the result that the collection of random continuous functions on 2^ω is not closed under composition. We construct a computable measure Q on the space of measures on 2^ω such that X ⊆ 2^ω is a random closed set if and only if X is the support of a Q-random measure. We also establish a correspondence between random closed sets and the random measures studied in Chapter 2. Lastly, we study the ranges of random continuous functions, showing that the Lebesgue measure of the range of a random continuous function is always strictly between 0 and 1.

In Chapter 4 we effectivize a theorem of Erdős and Rényi [11], which says that for c ≥ 1, if a fair coin is used to generate a length-N string of 1's and −1's, which are interpreted as gain and loss, then the maximal average gain over ⌊c log N⌋-length substrings converges almost surely (in N) to the same limit α(c). We show that if the 1's and −1's are determined by the bits of a Martin-Löf random, then the convergence holds.


CONTENTS

ACKNOWLEDGMENTS

CHAPTER 1: INTRODUCTION
  1.1 Summary of Chapter 2
  1.2 Summary of Chapter 3
  1.3 Summary of Chapter 4
  1.4 A word on notation

CHAPTER 2: ALGORITHMICALLY RANDOM MEASURES
  2.1 Introduction
  2.2 Preliminaries
    2.2.1 Basics
    2.2.2 General effective spaces
    2.2.3 The space of probability measures
    2.2.4 Algorithmic randomness
  2.3 Random measures
  2.4 Random measures and their randoms
  2.5 Random measures are atomless
  2.6 Random measures are mutually singular (with respect to the Lebesgue measure)
  2.7 Relatively random measures

CHAPTER 3: THE INTERPLAY OF CLASSES OF ALGORITHMICALLY RANDOM OBJECTS
  3.1 Introduction
  3.2 Background
    3.2.1 Some topological and measure-theoretic basics
    3.2.2 Some computability theory
  3.3 Algorithmically random objects
    3.3.1 Algorithmically random sequences
    3.3.2 Algorithmically random closed subsets of 2^ω
    3.3.3 Algorithmically random continuous functions on 2^ω
    3.3.4 Algorithmically random measures on 2^ω
  3.4 Applications of Randomness Preservation and No Randomness Ex Nihilo
  3.5 The support of a random measure
  3.6 The range of a random continuous function

CHAPTER 4: A NEW LAW OF LARGE NUMBERS EFFECTIVIZATION
  4.1 Introduction
  4.2 Stirling's Formula
  4.3 Maximal average gains over short subgames of a fair game

BIBLIOGRAPHY


ACKNOWLEDGMENTS

Thanks to Chris Porter, Laurent Bienvenu, Joe Miller, Uri Andrews, François Ledrappier, David Galvin, Greg Igusa, Benoît Monin, Pablo Lessa, Mathieu Hoyrup, Peter Cholak, Jason Rute, Gerard Misiolek, Julia Knight, Manfred Denker, and Mushfeq Khan.

Research partially supported by the National Science Foundation, EMSW21-RTG-0838506.


CHAPTER 1

INTRODUCTION

Algorithmic randomness was born out of an attempt to make precise the term "random" by distinguishing a set of random elements that satisfy every almost-sure property for the Lebesgue measure, λ, on 2^ω, the space of (one-way) infinite binary sequences (aka reals). However, being in the complement of a singleton is an almost-sure property, so satisfying every almost-sure property is impossible. Thus, the theory of computation was brought into the picture, and the only properties considered were those that were sufficiently computable. This gives rise to a distinguished set MLR_λ ⊆ 2^ω, called the Martin-Löf randoms or just randoms, with the property that λ(MLR_λ) = 1.

One trend in algorithmic randomness has been to code other mathematical objects by infinite binary sequences, declare an object to be (algorithmically) random if it has a random code, and then to investigate what those random objects look like and how they behave. In Chapters 2 and 3, the objects are Borel probability measures on 2^ω, continuous functions from 2^ω to itself, and closed subsets of 2^ω.

Another trend is the so-called effectivization of classical theorems of probability/measure theory. Many theorems of probability are "almost sure" results. To effectivize such a theorem, one assumes the objects in the hypothesis (e.g., functions/random variables) to be sufficiently computable and concludes that the result holds on the algorithmically randoms. In Chapter 4, we effectivize a 1970 result of Erdős and Rényi [11].


1.1 Summary of Chapter 2

In Chapter 2, the objects of study are measures¹ over 2^ω. This project started as a result of studying the Ergodic Decomposition Theorem (specifically the results in [14]), which can be viewed as a statement about measures on the space of measures on 2^ω. We wondered:

Question 1.1.1. Is there a natural measure on the space of measures over 2^ω?

Here the word "natural" corresponds to the fact that the Lebesgue measure is considered the most natural measure on 2^ω. Just as the randoms for the Lebesgue measure on 2^ω constitute the truly random real numbers, if there were a natural measure on the space of measures, then its random elements should constitute the truly random measures on 2^ω.

Definition 2.3.2 defines a measure P on the space of measures on 2^ω that we see as answering Question 1.1.1 affirmatively. Essentially, the measure P says that the conditional probability of going left from a given node in the full binary tree is uniformly distributed and independent of the other nodes. The measure P is natural in the sense that every measure on 2^ω comes from assigning such conditional probabilities according to some sequence of distributions, and taking that sequence to be IID-uniform is arguably the most natural choice.
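
The construction can be sketched concretely. The following Python approximation is our own illustration (the function name is not from the text): for every node of the binary tree up to a fixed depth, it draws an independent uniform conditional probability of branching left and propagates the resulting cylinder weights.

```python
import random

def sample_measure(depth, rng=None):
    """Approximate a measure on Cantor space down to a fixed depth:
    each node independently gets a uniform conditional probability
    of extending by 0, and cylinder weights are propagated down."""
    rng = rng or random.Random(0)
    mu = {"": 1.0}                       # the whole space has measure 1
    for level in range(depth):
        for sigma in [s for s in mu if len(s) == level]:
            p = rng.random()             # uniform conditional probability of going left
            mu[sigma + "0"] = p * mu[sigma]
            mu[sigma + "1"] = (1.0 - p) * mu[sigma]
    return mu

mu = sample_measure(3)
# The depth-3 cylinder weights sum to 1, as they must for a measure:
print(sum(w for s, w in mu.items() if len(s) == 3))
```

A genuinely P-random measure arises in exactly this IID-uniform fashion, except that the uniform draws are made at every node of the infinite tree.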

The measure P determines a collection MLR_P of P-random measures. The remainder of this section is a synopsis of the results we prove in Chapter 2 about these P-random measures.

The Lebesgue measure λ is the so-called barycenter of P; that is, ∫ μ(A) dP(μ) = λ(A) for any Borel set A. By results of Hoyrup [14], this implies that the Lebesgue randoms are exactly those reals that are random for some P-algorithmically-random measure: MLR_λ = ∪_{μ ∈ MLR_P} MLR_μ.

¹Here the term measure is short for Borel probability measure.


Every P-random measure is atomless (i.e., it gives zero probability to singletons) yet mutually singular with respect to the Lebesgue measure λ. (A measure μ is mutually singular with respect to λ if λ(A) = 1 and μ(A) = 0 for some Borel set A.)

We conjectured initially that if μ and ν are relatively random measures, then they share no random reals. Relative randomness is the algorithmic analog of independence, so this conjecture said that if measures μ and ν are generated independently, then they will not agree on any real's being random. This conjecture is almost true: relatively random measures are mutually singular, and, hence, MLR_μ ∩ MLR_ν has both μ- and ν-measure zero. Therefore any agreement between μ and ν on what is random is rare. Surprisingly, however, there actually is a real that is random for both. Moreover, there is a uniform construction of such a real using μ and ν as oracles.

Whenever μ is P-random, x ∈ MLR_μ, and y differs from x at only finitely many bits, then y ∉ MLR_μ; i.e., MLR_μ is an "anti-tailset". Thus P-random measures badly fail Kolmogorov's 0-1 law. We use this fact to prove that if x ∈ MLR_λ, then P{μ : x ∈ MLR_μ} = 0.

1.2 Summary of Chapter 3

Again, in this project, the focus is on algorithmic randomness in spaces other than 2^ω. Here, however, we focus not just on how the random objects behave, but also on how they behave with each other. Our main tools are the randomness preservation principle and its converse, the no randomness ex nihilo principle. This work is joint with Chris Porter.

The objects in play here are Borel probability measures on 2^ω (as in Chapter 2), nonempty closed subsets of 2^ω, and continuous functions from 2^ω to 2^ω. In each of the three cases, there is a surjective map that assigns to each x ∈ 2^ω an object O_x (a measure, a nonempty closed set, or a continuous function), and the object O_x is said to be (Martin-Löf) random if x is (Martin-Löf) random. So, we can talk about random measures, random closed sets (first defined in [2]), and random continuous functions (first defined in [3]).

In this project, we rely heavily on the preservation of randomness and no randomness ex nihilo principles. Together, they say that given a computable measure μ on 2^ω and a μ-a.e. defined computable map Φ : 2^ω → 2^ω, an element y ∈ 2^ω is Martin-Löf random for the pushforward measure μ ∘ Φ^{-1} if and only if y = Φ(x) for some x ∈ 2^ω that is Martin-Löf random for μ. These are powerful tools because they often allow one to draw conclusions about Martin-Löf randoms by showing that the pushforward measure is the desired one and then merely observing that the map at hand is computable. Using these tools, we reprove, in a much simpler way, the result (in [2]) that every random closed set contains an element that is Martin-Löf random for the Lebesgue measure and that every element that is Martin-Löf random for the Lebesgue measure is contained in some random closed set.

We also reprove the fact that if F is a random continuous function for which the zero set F^{-1}({0^ω}) is nonempty, then this set is, in fact, a random closed set. Our tools then give for free the previously open converse, which says that every random closed set is realized as the zero set of some random continuous function. The fact that the composition of random continuous functions need not be random follows as a corollary.

For a measure μ on 2^ω, the 1/3-support is defined to be the set of x ∈ 2^ω such that μ([x ↾ n+1] | [x ↾ n]) > 1/3 for every n, where [x ↾ n] denotes the set of all elements of 2^ω that agree with x on the first n bits. We show that a closed subset of 2^ω is random if and only if it is the 1/3-support of some random measure. We also show that there is a different way of defining a "random measure" (i.e., a different measure on the space of measures) so that the (regular) supports of these random measures are exactly the random closed sets.
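
The defining condition of the 1/3-support can be checked mechanically along any finite prefix. A small sketch (the function name and measure values below are hypothetical, for illustration only): a string survives exactly when every conditional probability along it exceeds 1/3.

```python
def in_one_third_support(mu, sigma):
    """Check the defining condition of the 1/3-support along a finite
    string: every conditional probability mu([sigma|i+1] | [sigma|i])
    must exceed 1/3.  `mu` maps strings to cylinder weights."""
    for i in range(len(sigma)):
        parent, child = sigma[:i], sigma[:i + 1]
        if mu[parent] == 0 or mu[child] / mu[parent] <= 1 / 3:
            return False
    return True

# A toy measure on strings of length <= 2 (hypothetical values):
mu = {"": 1.0, "0": 0.5, "1": 0.5, "00": 0.4, "01": 0.1, "10": 0.25, "11": 0.25}
print(in_one_third_support(mu, "00"))  # 0.5 > 1/3 and 0.4/0.5 = 0.8 > 1/3
print(in_one_third_support(mu, "01"))  # fails: 0.1/0.5 = 0.2 <= 1/3
```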


It was shown in [3] that random continuous functions are not necessarily injective or surjective. We extend this by showing that random continuous functions are never injective and never surjective. Moreover, we show that the Lebesgue measure of the range of a random continuous function is strictly between 0 and 1.

1.3 Summary of Chapter 4

Erdős and Rényi [11] proved that for any c ≥ 1, if a fair coin is used to generate a length-N string σ = σ₁σ₂⋯σ_N of 1's and −1's, then the maximal average

  max_{0 ≤ n ≤ N − ⌊c log₂ N⌋} (σ_{n+1} + σ_{n+2} + ⋯ + σ_{n+⌊c log₂ N⌋}) / ⌊c log₂ N⌋

converges almost surely (in N) to the same limit α = α(c), which is determined by the equation

  1/c = 1 − h((1 + α)/2),

where h : [0, 1] → [0, 1] is the binary entropy function

  h(x) = −x log₂ x − (1 − x) log₂(1 − x).

The 1's and −1's can be interpreted as the gain or loss of a player in a fair game, so this result says that the maximal average gain over appropriately-sized subgames converges almost surely to α.
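
Both quantities in the theorem are easy to compute in practice. The sketch below is our own code (the names are not from the text): it solves 1/c = 1 − h((1 + α)/2) for α by bisection, using that the right-hand side increases from 0 to 1 as α runs over [0, 1], and compares α(c) with the maximal average of a pseudorandomly generated string.

```python
import math
import random

def h(x):
    """Binary entropy h(x) = -x log2 x - (1-x) log2(1-x)."""
    if x in (0.0, 1.0):
        return 0.0
    return -x * math.log2(x) - (1.0 - x) * math.log2(1.0 - x)

def alpha(c):
    """Solve 1/c = 1 - h((1 + a)/2) for a in [0, 1] by bisection."""
    lo, hi = 0.0, 1.0
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if 1.0 - h((1.0 + mid) / 2.0) < 1.0 / c:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def max_average(sigma, k):
    """Maximal average gain over the length-k substrings of sigma."""
    return max(sum(sigma[n:n + k]) for n in range(len(sigma) - k + 1)) / k

N, c = 2 ** 14, 2.0
rng = random.Random(1)
sigma = [rng.choice((1, -1)) for _ in range(N)]
k = int(c * math.log2(N))              # window length floor(c * log2 N)
print(max_average(sigma, k), alpha(c))  # the sample value approaches alpha(c) as N grows
```

Note that α(1) = 1: with c = 1 the equation forces h((1 + α)/2) = 0, i.e., windows of length ⌊log₂ N⌋ almost surely contain an all-1's run.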

This result is a threshold theorem. As Erdős and Rényi note, if K(N) is an integer-valued function of N such that K(N)/log N → ∞, then the maximal average

  max_{0 ≤ n ≤ N − K(N)} (σ_{n+1} + σ_{n+2} + ⋯ + σ_{n+K(N)}) / K(N)

converges almost surely to 0. If K(N) is an integer-valued function of N such that K(N) ≤ c log N for some 0 < c < 1, then the maximal average is almost surely eventually 1 (and hence converges almost surely to 1). So, the theorem explains what happens in the only case left to consider: when K(N) grows like c log N for some c ≥ 1.

Chapter 4 contains an effectivization of this theorem: for any c ≥ 1, not only does the maximal average converge almost surely to α, but, in fact, the convergence holds on every infinite sequence of 1's and −1's that is Martin-Löf random.

1.4 A word on notation

Our notation is fairly standard. For various reasons, though, we use some different notation and conventions in each chapter. In order to make each chapter more self-contained, we also state some definitions more than once (but never more than once in a given chapter).


CHAPTER 2

ALGORITHMICALLY RANDOM MEASURES

2.1 Introduction

Algorithmic randomness attempts to make precise the notion of a random real number. Coding other objects (e.g., graphs) by real numbers allows for the study of the algorithmically random versions of these objects (e.g., algorithmically random graphs). This has been done, for example, in [6], [3], and [1]. Here we undertake a similar project, where the objects of study are Borel probability measures on 2^ω.

We define a natural, computable (in the sense of computable analysis) map from 2^ω to the space P(2^ω) of Borel probability measures on 2^ω. This map pushes the Lebesgue measure forward, yielding a natural, computable Borel probability measure P on P(2^ω). The construction of P is a special (and, we think, the most natural) case of a construction in [18]. We investigate the algorithmically P-random Borel probability measures and the algorithmically random reals for those measures.

2.2 Preliminaries

2.2.1 Basics

The set of natural numbers is denoted by ω. The function ⟨·,·⟩ : ω² → ω is a fixed computable bijection. The computably enumerable (c.e.) subsets of ω are effectively numbered as ⟨W_e⟩_{e∈ω}.

The set of finite binary strings is denoted by 2^{<ω}. It is effectively numbered via σ₀ = ∅ (the empty string), σ₁ = 0, σ₂ = 1, σ₃ = 00, σ₄ = 01, etc. For strings σ and τ, the notation σ ⪯ τ means that σ is an initial segment of τ, and σ ⊥ τ means that neither σ ⪯ τ nor τ ⪯ σ.

Cantor space, the space of all one-way infinite binary strings, is denoted by 2^ω. For σ ∈ 2^{<ω}, let [σ] = {x ∈ 2^ω : x ⪰ σ} (where x ⪰ σ means that σ is an initial segment of x); this is the cylinder set generated by σ. The collection of all cylinder sets forms a clopen basis for a topology on 2^ω. This topology is metrizable via

  d(x, y) = 2^{−min{n : x(n) ≠ y(n)}}.
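
On finite prefixes this metric is directly computable; a minimal sketch (our own helper function, not part of the text):

```python
def cantor_distance(x, y):
    """d(x, y) = 2^(-min{n : x(n) != y(n)}), evaluated on finite prefixes;
    returns 0.0 when the given prefixes agree everywhere."""
    for n, (a, b) in enumerate(zip(x, y)):
        if a != b:
            return 2.0 ** (-n)
    return 0.0

# Two sequences share a length-n prefix (i.e., lie in a common cylinder
# [sigma] with |sigma| = n) exactly when their distance is at most 2^(-n):
print(cantor_distance("0101", "0100"))  # first disagreement at n = 3, so 2^(-3) = 0.125
```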

2.2.2 General effective spaces

We assume familiarity with the basics of computability theory and computable analysis over 2^ω and ℝ. In order to do computable analysis and probability theory in spaces other than 2^ω and ℝ, we, following [13] and [8], work in an effective Polish space,¹ which is a complete metric space (X, d) with a countable dense subset Q = ⟨q_i⟩ such that d(q_i, q_j) is a computable real number uniformly in i and j. A representation of X is a partial surjective function ρ : 2^ω → X. Given a representation ρ, a ρ-name for x ∈ X is an element of ρ^{-1}{x}. Any effective Polish space is equipped with a representation called its standard fast Cauchy representation, ρ_C : 2^ω → X, defined by ρ_C(0^{n₀}1 0^{n₁}1 0^{n₂}1 ⋯) = x if d(x, q_{n_i}) ≤ 2^{-i}. Note that different elements of X cannot have the same ρ_C-name. We will simply say name when ρ is clear from the context.

Any effective Polish space admits an effective basis for its topology: B^X_k, with k = ⟨i, j⟩, is the ball centered at q_i with radius 2^{-j}. When no confusion will be caused, B^X_k will be written simply as B_k. A subset U ⊆ X is then effectively open, or Σ⁰₁, if U = ∪_{i∈W_e} B_i for some c.e. W_e; and C ⊆ X is effectively closed, or Π⁰₁, if X − C is effectively open. A compact subset K ⊆ X is effectively compact if the set of (indices for) finite covers by the B_i's is c.e.

¹The authors there use the term computable metric space.


A function f : X → R̄ := ℝ ∪ {∞} is left- (resp. right-) computable if f^{-1}((r, ∞]) (resp. f^{-1}([−∞, r))) is effectively open in X for each r ∈ ℚ. Let X and Y be effective Polish spaces. Then f : X → Y is computable if f^{-1}(U) is effectively open in X whenever U is effectively open in Y, uniformly in U. In particular, computability implies continuity. The following proposition is straightforward.

Proposition 2.2.1.

1. A function f : X → Y, where (Y, ρ, R) is an effective Polish space, is computable if and only if there is a computable function that outputs a name for f(x) ∈ Y whenever given a name for x ∈ X.

2. f : X → ℝ is left- (resp. right-) computable if and only if there is a computable function that outputs an increasing (resp. decreasing) sequence of rationals converging to f(x) whenever given a name for x ∈ X.

3. f : X → ℝ is computable if and only if it is both left- and right-computable.

Where it makes sense, all notions are relativizable. Thus for x ∈ 2^ω, we can speak of an x-computable function, an x-left-computable function, etc.

Proposition 2.2.2 ([14]). Let X and Y be effective Polish spaces.

1. An effectively compact set is effectively closed.

2. If f : X → Y is computable and K ⊆ X is effectively compact, then f(K) is effectively compact.

3. An effectively closed subset of 2^ω is effectively compact.

2.2.3 The space of probability measures

For an effective Polish space X, let B(X) be its Borel σ-algebra; that is, B(X) is the smallest class of subsets of X that contains the open sets and is closed under complementation and countable unions. A Borel probability measure, or just measure for short, on X is a function μ : B(X) → [0, 1] such that μ(X) = 1 and μ(∪_{i∈ω} A_i) = ∑_i μ(A_i) whenever the A_i's are pairwise disjoint elements of B(X). When X = 2^ω, Carathéodory's extension theorem [12] guarantees that the conditions

(a) μ([∅]) = 1 and

(b) μ([σ]) = μ([σ0]) + μ([σ1]) for all σ ∈ 2^{<ω}

uniquely determine a measure on 2^ω. Thus, a measure on Cantor space is identified with a function μ : 2^{<ω} → [0, 1] satisfying conditions (a) and (b). We may write μ(σ) instead of μ([σ]). The Lebesgue measure λ is defined by λ(σ) = 2^{-|σ|} for each string σ.
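
Conditions (a) and (b) are easy to verify mechanically for a measure given as a function on strings. A small sketch (our own code) checks them to a finite depth and confirms them for the Lebesgue measure:

```python
def is_premeasure(mu, depth):
    """Check conditions (a) and (b) on all strings of length < depth:
    mu(empty) = 1 and mu(sigma) = mu(sigma0) + mu(sigma1)."""
    if abs(mu("") - 1.0) > 1e-12:
        return False
    strings = [""]
    for _ in range(depth):
        if any(abs(mu(s) - mu(s + "0") - mu(s + "1")) > 1e-12 for s in strings):
            return False
        strings = [s + b for s in strings for b in "01"]
    return True

lebesgue = lambda sigma: 2.0 ** (-len(sigma))   # lambda(sigma) = 2^(-|sigma|)
print(is_premeasure(lebesgue, 4))  # True
```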

The space of all Borel probability measures on an effective Polish space X is denoted by P(X). It is itself an effective Polish space under the (metrizable) weak-* topology [16]. We do not need the details of the effective structure of P(X), but only the following proposition.

Proposition 2.2.3 ([16]).

1. A measure μ ∈ P(X) is computable if and only if μ(U) is uniformly left-computable on effectively open sets U ⊆ X.

2. A function f : X → P(Y) is computable if and only if a name for x ∈ X uniformly left-computes the value of f(x)(B^Y_k).

2.2.4 Algorithmic randomness

Let X be an effective Polish space endowed with a Borel probability measure μ, and let y ∈ 2^ω be a name for μ. A y-Martin-Löf test for μ-randomness is a uniformly Σ^{0,y}_1 sequence ⟨U_n⟩_{n∈ω} such that μ(U_n) ≤ 2^{-n}. An element x ∈ X passes the y-Martin-Löf test ⟨U_n⟩ if x ∉ ∩_n U_n. An element x ∈ X is y-Martin-Löf random for μ if it passes all y-Martin-Löf tests for μ-randomness.

An element x ∈ X is Martin-Löf random for μ, or just μ-random, if it is y-Martin-Löf random for μ for some name y ∈ 2^ω of μ. We write MLR_μ for the set of all μ-randoms. Note that because there are only countably many Σ^{0,y}_1 sets, μ(MLR_μ) = 1. Because the Lebesgue measure is special, we often write MLR for MLR_λ.


As noted in [8], for any name y of μ, there is a single y-Martin-Löf test for μ-randomness that suffices to define y-Martin-Löf randomness for μ. Such a test is called a universal y-Martin-Löf test for μ-randomness.

The following proposition allows us to work with a single name for μ in the cases we consider.

Proposition 2.2.4. If μ ∈ P(X) has a name y of least Turing degree, then x ∈ X is μ-random if and only if x is y-Martin-Löf random for μ.

Proof. Suppose z ∈ 2^ω is a name for μ with z ≥_T y. Then any y-Martin-Löf test for μ-randomness is also a z-Martin-Löf test for μ-randomness. Thus if x is z-Martin-Löf random for μ, it is also y-Martin-Löf random for μ.

When μ ∈ P(X) has a name y ∈ 2^ω of least Turing degree, we will call the universal y-Martin-Löf test for μ-randomness simply a universal μ-test.

An element x ∈ X is μ-Kurtz random if there is a name y of μ such that x ∈ U for every U ∈ Σ^{0,y}_1 with μ(U) = 1. We write KR_μ for the collection of all μ-Kurtz randoms.

Note that because there are only countably many effectively open sets, μ(KR_μ) = 1. Moreover, the following is true.

Proposition 2.2.5 ([17]). If x ∈ MLR_μ, then x ∈ KR_μ.

2.3 Random measures

Given x ∈ 2^ω, the nth column x_n ∈ 2^ω of x is defined by x_n(k) = 1 if and only if x(⟨n, k⟩) = 1 (recall that ⟨n, k⟩ is a fixed computable bijection between ω² and ω). We write x = ⊕_{n∈ω} x_n; this is the infinite join operation in the Turing degrees.

Define the map Φ : 2^ω → P(2^ω), with Φ(x) written μ_x, by μ_x(∅) = 1 and μ_x(σ_n 0) = x_n · μ_x(σ_n), where x_n is (the real number represented by) the nth column of x. This map is essentially in [18], but is independently due to Chris Porter. It is, as the next proposition shows, really just another representation of P(2^ω).

Proposition 2.3.1. The map Φ is computable.

Proof. By Proposition 2.2.3, it suffices to show that given x ∈ 2^ω we can uniformly compute μ_x(σ). Write μ_x(σ) = ∏_{i<|σ|} μ_x(σ ↾ i+1 | σ ↾ i), where μ_x(σ | τ) := μ_x([σ] ∩ [τ])/μ_x([τ]). Multiplication is computable, so it suffices to show that μ_x(σ ↾ i+1 | σ ↾ i) is uniformly computable from x. But this is clear, since μ_x(σ ↾ i+1 | σ ↾ i) is either a column or one minus a column of x, and computably so.
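
The computation in this proof can be carried out literally on finite approximations. In the sketch below (our own code; the Cantor pairing function and the 40-bit precision are illustrative choices, since the text only fixes some computable bijection ⟨·,·⟩), μ_x(σ) is the product of the conditional probabilities read off from the columns of x:

```python
def string_index(sigma):
    """Index of sigma in the length-lexicographic numbering:
    sigma_0 = empty, sigma_1 = 0, sigma_2 = 1, sigma_3 = 00, ..."""
    return 2 ** len(sigma) - 1 + (int(sigma, 2) if sigma else 0)

def column_real(x, n, bits=40):
    """The real in [0, 1] whose binary expansion is the nth column of x,
    where x is given as a 0/1 function on codes <n, k>."""
    pairing = lambda n, k: (n + k) * (n + k + 1) // 2 + n  # Cantor pairing
    return sum(x(pairing(n, k)) * 2.0 ** -(k + 1) for k in range(bits))

def mu_x(x, sigma):
    """mu_x(sigma) as a product of conditional probabilities: at node
    sigma|i the probability of 0 is the real coded by column number
    string_index(sigma|i)."""
    prob = 1.0
    for i in range(len(sigma)):
        p = column_real(x, string_index(sigma[:i]))
        prob *= p if sigma[i] == "0" else 1.0 - p
    return prob

# If every column is 1000... (the dyadic real 1/2), every conditional
# probability is 1/2, so mu_x is the Lebesgue measure:
half_bits = {n * (n + 3) // 2 for n in range(200)}   # codes <n, 0>
x_half = lambda code: 1 if code in half_bits else 0
print(mu_x(x_half, "0110"))  # 1/16 = 0.0625, matching lambda(sigma) = 2^(-4)
```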

Being computable implies being Borel (indeed continuous), so Φ pushes λ forward to a Borel probability measure on the space P(2^ω) of measures.

Definition 2.3.2. The measure P ∈ P(P(2^ω)) is the pushforward via Φ of the Lebesgue measure; that is,

  P(B) := λ(Φ^{-1}(B))

for Borel B ⊆ P(2^ω).

Proposition 2.3.3. The measure P is computable.

Proof. By Proposition 2.2.3, it suffices to show that the measure of an effectively open set U ⊆ P(2^ω) is uniformly left-computable. Since Φ is computable, Φ^{-1}(U) is uniformly effectively open. Because λ is computable, P(U) = λ(Φ^{-1}(U)) is left-computable.

The measure P was defined with the goal that the Martin-Löf random elements of P(2^ω) be exactly the images under Φ of the random elements of 2^ω.

Theorem 2.3.4 (Preservation of randomness and no randomness ex nihilo). ν ∈ MLR_P if and only if ν = μ_x for some (unique) x ∈ MLR_λ.


Proof. (Preservation of randomness) If ν ∉ MLR_P, then ν ∈ ∩_{n∈ω} U_n for some P-test ⟨U_n⟩_{n∈ω}. But Φ is computable, so Φ^{-1}(U_n) is effectively open in 2^ω for each n. Moreover, λ(Φ^{-1}(U_n)) = P(U_n) ≤ 2^{-n} for each n. Thus ⟨Φ^{-1}(U_n)⟩_{n∈ω} is a λ-test, and, hence, x ∉ MLR_λ if ν = Φ(x) = μ_x.

(No randomness ex nihilo) Now we show, following Shen (see [4, Theorem 3.5]), that if ν ∈ MLR_P, then ν = Φ(x) for some x ∈ MLR. Uniqueness follows because Φ(x) = Φ(y) for x ≠ y implies each of x and y has a dyadic rational column.

Fix a universal Martin-Löf test ⟨U_n⟩_{n∈ω} for λ-randomness, and set K_n := 2^ω − U_n. Define V_n = P(2^ω) − Φ(K_n). Since Φ is computable, Φ(K_n) ∈ Π⁰₁ by Proposition 2.2.2, parts (2) and (3), so V_n ∈ Σ⁰₁, and uniformly so. Now

  P(V_n) = 1 − P(Φ(K_n)) = 1 − λ(Φ^{-1}(Φ(K_n))) ≤ 1 − λ(K_n) ≤ 2^{-n}.

Thus ⟨V_n⟩_{n∈ω} is a P-Martin-Löf test, so if ν ∈ MLR_P, then ν ∉ V_n for some n; i.e., ν ∈ Φ(K_n). The proof is now complete since K_n ⊆ MLR_λ.

The next proposition shows that Φ is a measure-theoretic isomorphism (see [22]) between (2^ω, B(2^ω), λ) and (P(2^ω), B(P(2^ω)), P).

Proposition 2.3.5. For any Borel A ⊆ 2^ω, P(Φ(A)) = λ(A).

Proof. Note that Φ(MLR_λ) ∩ Φ(A) = Φ(MLR_λ ∩ A), since Φ(x) = Φ(x′) and x ∈ MLR imply x = x′. Thus,

  P(Φ(A)) = P(MLR_P ∩ Φ(A))
          = P(Φ(MLR_λ) ∩ Φ(A))
          = P(Φ(MLR_λ ∩ A))
          = λ(Φ^{-1}(Φ(MLR_λ ∩ A)))
          = λ(MLR_λ ∩ A)
          = λ(A).

We will need the next result, which is the analogue of Theorem 2.3.4 for Kurtz randomness.

Proposition 2.3.6. $\nu \in \mathrm{KR}_P$ if and only if $\nu = \mu_x$ for some (unique) $x \in \mathrm{KR}_\lambda$.

Proof. If $\nu \notin \mathrm{KR}_P$, then $\nu \in C$ for some $C \in \Pi^0_1$ with $P(C) = 0$. Because $\Phi$ is computable, $\Phi^{-1}(C) \in \Pi^0_1$, and by the definition of $P$, $\lambda(\Phi^{-1}(C)) = P(C) = 0$, so $\Phi^{-1}(\nu) \cap \mathrm{KR}_\lambda = \emptyset$.

Now, if $x \notin \mathrm{KR}_\lambda$, then $x \in C$ for some $C \in \Pi^0_1$ with $\lambda(C) = 0$. But then $C$ is effectively compact, so $\Phi(C) \in \Pi^0_1$. By Proposition 2.3.5, $P(\Phi(C)) = 0$, so $\Phi(x) \notin \mathrm{KR}_P$.

The last preliminaries we need concern relative randomness and a slight variation of van Lambalgen's Theorem.

Definition 2.3.7. A real $y$ is $\lambda$-random relative to a real $x$, written $y \in \mathrm{MLR}^x$, if $y \notin \bigcap_n U_n$ whenever $\langle U_n\rangle$ is a uniformly $\Sigma^{0,x}_1$ sequence with $\lambda(U_n) \le 2^{-n}$. We write $\mathrm{MLR}^\mu$ for $\mathrm{MLR}^x$, where $\mu = \mu_x$.

Theorem 2.3.8. In the product space $\mathcal{P}(2^\omega) \times 2^\omega$, with the product measure $P \otimes \lambda$, the pair $(\mu, y)$ is $(P \otimes \lambda)$-random if and only if $\mu \in \mathrm{MLR}_P^y$ and $y \in \mathrm{MLR}^\mu$.

Proof. By Theorem 2.3.4, $(\mu, y)$ is $(P \otimes \lambda)$-random if and only if $\mu = \mu_x$ and $(x, y)$ is $(\lambda \otimes \lambda)$-random in $2^\omega \times 2^\omega$. By van Lambalgen's Theorem [21], this happens if and only if $y \in \mathrm{MLR}^x = \mathrm{MLR}^\mu$ and $x \in \mathrm{MLR}^y$, i.e., $\mu \in \mathrm{MLR}_P^y$.

2.4 Random measures and their randoms

Now we begin an analysis of $\mathrm{MLR}_P$.

Proposition 2.4.1. If $\mu \in \mathrm{MLR}_P$, then $\mathrm{MLR}_\mu$ is dense in $2^\omega$.

Proof. If $\mu \in \mathrm{MLR}_P$ (indeed, if $\mu \in \mathrm{KR}_P$), then $\mu(\sigma) > 0$ for every $\sigma \in 2^{<\omega}$, so every cylinder $[\sigma]$ has positive $\mu$-measure and hence contains a $\mu$-random real.

So $\mathrm{MLR}_\mu$ is, in some sense, topologically large when $\mu \in \mathrm{MLR}_P$. Theorem 2.6.5 below shows, however, that $\mathrm{MLR}_\mu$ is (Lebesgue) measure-theoretically small.

Lemma 2.4.2 below says that $\lambda$ is the barycenter of $P$. From work of Hoyrup [14], this gives Corollary 2.4.4, which says, in particular, that the $\lambda$-randoms are exactly the $P$-randoms' randoms.

Lemma 2.4.2. For each Borel $A \subseteq 2^\omega$, $\lambda(A) = \int_{\mathcal{P}(2^\omega)} \mu(A)\, dP(\mu)$.

Proof. The function $A \mapsto \int_{\mathcal{P}(2^\omega)} \mu(A)\, dP(\mu)$ is a measure, so it suffices to consider sets of the form $A = [\sigma]$, where $\sigma \in 2^{<\omega}$:

\begin{align*}
\int_{\mathcal{P}(2^\omega)} \mu(\sigma)\, dP(\mu) &= \int_{\mathcal{P}(2^\omega)} \prod_{i<|\sigma|} \mu(\sigma(i) \mid \sigma \upharpoonright i)\, dP \\
&= \prod_{i<|\sigma|} \int_{\mathcal{P}(2^\omega)} \mu(\sigma(i) \mid \sigma \upharpoonright i)\, dP \qquad \text{(by independence)} \\
&= 2^{-|\sigma|} = \lambda(\sigma).
\end{align*}

Hoyrup proved the following result, which applies directly to the setting here.

Theorem 2.4.3 ([14, Theorem 3.1, relativized]). Let $Q \in \mathcal{P}(\mathcal{P}(2^\omega))$ be computable with barycenter $\mu$. Then for any $z \in 2^\omega$,

\[
\mathrm{MLR}^z_\mu = \bigcup_{\nu \in \mathrm{MLR}^z_Q} \mathrm{MLR}^z_\nu.
\]

Since our measure $P \in \mathcal{P}(\mathcal{P}(2^\omega))$ is computable and $\lambda$ is its barycenter, the following holds.

Corollary 2.4.4. For any $z \in 2^\omega$,

\[
\mathrm{MLR}^z_\lambda = \bigcup_{\mu \in \mathrm{MLR}^z_P} \mathrm{MLR}^z_\mu.
\]

2.5 Random measures are atomless

We now show that every random measure assigns each singleton set measure zero; i.e., random measures are atomless. An atom of a measure $\mu \in \mathcal{P}(2^\omega)$ is an $x \in 2^\omega$ such that $\mu(\{x\}) > 0$. Define $\mathcal{A} = \{\mu : \mu \text{ has an atom}\}$, so that $\mathcal{A} = \bigcup_n \mathcal{A}_n$, where $\mathcal{A}_n := \{\mu : \mu \text{ has an atom of measure} \ge 1/n\}$.

Lemma 2.5.1. $\mathcal{A}_n$ is effectively closed.

Proof. Let $A(n, \sigma) = \{\mu : \mu(\sigma) \ge 1/n\}$. By the proof of Proposition 2.3.1, the map $\varphi_\sigma(\mu) = \mu(\sigma)$ is computable, so $\mathcal{P}(2^\omega) - A(n, \sigma) = \varphi_\sigma^{-1}[0, 1/n) \in \Sigma^0_1$. Thus $\bigcup_{\sigma \in 2^k} A(n, \sigma) \in \Pi^0_1$, and, hence, $\mathcal{A}_n = \bigcap_k \bigcup_{\sigma \in 2^k} A(n, \sigma) \in \Pi^0_1$.

The notation $x =^* y$ for $x, y \in 2^\omega$ means that $x$ and $y$ differ on only finitely many bits; i.e., $x =^* y$ if and only if $|\{i : x(i) \ne y(i)\}| < \infty$.

Lemma 2.5.2 (Kolmogorov's 0-1 Law [10]). If $A \in \mathcal{B}(2^\omega)$ is closed under $=^*$ (i.e., for all $x \in A$, $y =^* x \Rightarrow y \in A$), then $\lambda(A) = 0$ or $\lambda(A) = 1$.

Corollary 2.5.3. If $A \in \mathcal{B}(2^\omega)$ is almost closed under $=^*$ (i.e., for $\lambda$-almost every $x$, $x \in A \Rightarrow y \in A$ whenever $x =^* y$), then $\lambda(A) = 0$ or $\lambda(A) = 1$.

Proof. The set $\hat{A} := \{x \in A : \forall y\, [x =^* y \Rightarrow y \in A]\}$ is closed under $=^*$ and has the same measure as $A$, since $A$ was already almost closed under $=^*$. Applying Lemma 2.5.2 to $\hat{A}$ then gives the result.

Lemma 2.5.4. $P(\mathcal{A}) = 0$ or $1$.

Proof. By Proposition 2.3.5 and Corollary 2.5.3, it suffices to show that the set $A = \{x : \mu_x \text{ has an atom}\}$ is almost closed under $=^*$. To that end, we prove that if $x \in \mathrm{MLR} \cap A$ and $x' =^* x$, then $x' \in A$. The key here is to notice that $y \in 2^\omega$ is an atom of $\mu_x$ if and only if $\prod_{i\in\omega} \mu_x(y(i) \mid y \upharpoonright i) > 0$. If $x' =^* x$, then there is $N$ such that $\mu_x(y(i) \mid y \upharpoonright i) = \mu_{x'}(y(i) \mid y \upharpoonright i)$ for all $i > N$. Thus, unless $\mu_{x'}(y(i) \mid y \upharpoonright i) = 0$ for some $i \le N$, $y$ is also an atom of $\mu_{x'}$. But $\mu_{x'}(y(i) \mid y \upharpoonright i)$ is either a column of $x'$ or one minus a column of $x'$, and since $x \in \mathrm{MLR}$, so is $x'$, which means that $\mu_{x'}(y(i) \mid y \upharpoonright i) = 0$ is impossible.

For $\sigma \in 2^{<\omega}$, we define a map $T_\sigma : \mathcal{P}(2^\omega) \to \mathcal{P}(2^\omega)$, and write $\mu_\sigma$ for $T_\sigma(\mu)$, by $T_\sigma(\mu)(\tau) = \mu_\sigma(\tau) = \mu(\tau \mid \sigma) := \mu(\sigma\tau)/\mu(\sigma)$. This map is like a shift map: it takes a measure $\mu$, which can be thought of as a tree of conditional probabilities, and outputs a new measure $\mu_\sigma$ whose tree of probabilities is the same as that of $\mu$ above $\sigma$.
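The conditional-shift map $T_\sigma$ is concrete enough to compute with. The following Python sketch (illustrative only, not part of the dissertation) represents a measure by its values on cylinders, i.e., as a function on finite binary strings, and implements $T_\sigma(\mu)(\tau) = \mu(\sigma\tau)/\mu(\sigma)$; the `biased` measure is a hypothetical example introduced purely for the demonstration.

```python
from fractions import Fraction

def lebesgue(s):
    """Lebesgue measure of the cylinder [s]: lambda(s) = 2^{-|s|}."""
    return Fraction(1, 2 ** len(s))

def shift(m, sigma):
    """T_sigma(m): the measure tau |-> m(sigma + tau) / m(sigma).
    Its tree of conditional probabilities is the subtree of m above sigma."""
    denom = m(sigma)
    assert denom > 0, "T_sigma is only defined when m(sigma) > 0"
    return lambda tau: m(sigma + tau) / denom

# Lebesgue is a fixed point of every T_sigma, since its conditional
# probabilities are 1/2 at every node.
mu = shift(lebesgue, "011")
assert mu("10") == Fraction(1, 4)

# A biased example (hypothetical): P(next bit = 0) is always 2/3.
def biased(s):
    p = Fraction(2, 3)
    out = Fraction(1)
    for b in s:
        out *= p if b == "0" else 1 - p
    return out

nu = shift(biased, "0")
assert nu("00") == Fraction(4, 9)   # (2/3)^2
```

Exact rational arithmetic (`Fraction`) is used so the identities hold on the nose rather than up to floating-point error.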

Lemma 2.5.5. For each $\sigma \in 2^{<\omega}$, $T_\sigma$ preserves $P$; i.e., $P(A) = P(T_\sigma^{-1}A)$ for every Borel $A \subseteq \mathcal{P}(2^\omega)$.

Proof. Since $T_\sigma = T_{\sigma(n-1)} \circ T_{\sigma(n-2)} \circ \cdots \circ T_{\sigma(1)} \circ T_{\sigma(0)}$ for $\sigma \in 2^n$, it suffices to consider only the maps $T_0$ and $T_1$. We prove only that $T_0$ preserves $P$, since the proof that $T_1$ does is essentially the same.

Because $\Phi$ is a measure isomorphism, it suffices to show that the map $\widetilde{T}_0 := \Phi^{-1} \circ T_0 \circ \Phi : 2^\omega \to 2^\omega$ preserves $\lambda$. For then

\begin{align*}
P(B) &= \lambda(\Phi^{-1}(B)) = \lambda(\widetilde{T}_0^{-1}(\Phi^{-1}(B))) \\
&= \lambda(\Phi^{-1}(T_0^{-1}(\Phi(\Phi^{-1}(B))))) \\
&= \lambda(\Phi^{-1}(T_0^{-1}(B))) \\
&= P(T_0^{-1}(B)).
\end{align*}

To show that $\widetilde{T}_0$ preserves $\lambda$, we first get a nice description of $\widetilde{T}_0$. With $x = \bigoplus_{i\in\omega} x_i$, we can write $\widetilde{T}_0(x) = x_1 \oplus x_3 \oplus x_4 \oplus x_7 \oplus x_8 \oplus x_9 \oplus x_{10} \oplus \cdots$. There is a 1-1 (computable) function $f : \omega \to \omega$ such that $\widetilde{T}_0(x)(n) = x(f(n))$. To show that $\widetilde{T}_0$ preserves $\lambda$, it suffices to show that $\lambda(\sigma) = \lambda(\widetilde{T}_0^{-1}(\sigma))$ for each $\sigma \in 2^{<\omega}$. But $\widetilde{T}_0^{-1}(\sigma) = \{x : \forall i < |\sigma|\, [x(f(i)) = \sigma(i)]\}$, so clearly $\lambda(\sigma) = \lambda(\widetilde{T}_0^{-1}(\sigma))$.

Lemma 2.5.6. $P(\mathcal{A}) = 0$.

Proof. Define $m : \mathcal{P}(2^\omega) \to [0, 1]$ by

\[
m(\mu) = \max\{r \in [0, 1] : \exists y \in 2^\omega\, (\mu\{y\} = r)\}.
\]

We want to show that $m(\mu) = 0$ for $P$-a.e. $\mu \in \mathcal{P}(2^\omega)$.

Notice that $m(\mu) = \max\{\mu(0)\, m(T_0(\mu)),\ \mu(1)\, m(T_1(\mu))\}$. Let

\[
M = \{\mu : \mu(0)\, m(T_0(\mu)) \ge \mu(1)\, m(T_1(\mu))\},
\]

so $M$ is the set of measures whose maximum-mass atom is to the left in the tree. Then, using standard facts about the integral and the fact that the functions $\mu \mapsto \mu(0)$ and $\mu \mapsto m(T_0(\mu))$ are $P$-independent, we have

\begin{align*}
\int_{\mathcal{P}(2^\omega)} m(\mu)\, dP(\mu)
&= \int_M \mu(0)\, m(T_0(\mu))\, dP + \int_{\mathcal{P}(2^\omega)-M} \mu(1)\, m(T_1(\mu))\, dP \\
&\le \int_{\mathcal{P}(2^\omega)} \mu(0)\, m(T_0(\mu))\, dP + \int_{\mathcal{P}(2^\omega)-M} \mu(1)\, m(T_1(\mu))\, dP \\
&= \int_{\mathcal{P}(2^\omega)} \mu(0)\, dP \int_{\mathcal{P}(2^\omega)} m(T_0(\mu))\, dP + \int_{\mathcal{P}(2^\omega)-M} \mu(1)\, m(T_1(\mu))\, dP \\
&= \frac{1}{2} \int_{\mathcal{P}(2^\omega)} m(T_0(\mu))\, dP + \int_{\mathcal{P}(2^\omega)-M} \mu(1)\, m(T_1(\mu))\, dP \\
&= \frac{1}{2} \int_{\mathcal{P}(2^\omega)} m(T_0(\mu))\, dP + \frac{1}{2} \int_{\mathcal{P}(2^\omega)} m(T_1(\mu))\, dP - \int_M \mu(1)\, m(T_1(\mu))\, dP \\
&= \frac{1}{2} \int_{\mathcal{P}(2^\omega)} m(\mu)\, dP + \frac{1}{2} \int_{\mathcal{P}(2^\omega)} m(\mu)\, dP - \int_M \mu(1)\, m(T_1(\mu))\, dP \\
&= \int_{\mathcal{P}(2^\omega)} m(\mu)\, dP - \int_M \mu(1)\, m(T_1(\mu))\, dP.
\end{align*}

Thus, either $P(M) = 0$ or $m(T_1(\mu)) = 0$ for $P$-a.e. $\mu \in M$ (because $\mu(1)$ is $P$-a.s. positive). Symmetric computations show that either $P(\mathcal{P}(2^\omega) - M) = 0$ or $m(T_0(\mu)) = 0$ for $P$-a.e. $\mu \in \mathcal{P}(2^\omega) - M$.

In the case where $P(M) = 0$, we have $P(\mathcal{P}(2^\omega) - M) = 1$, so $m(T_0(\mu)) = 0$ for $P$-a.e. $\mu \in \mathcal{P}(2^\omega)$. Similarly, in the case where $P(\mathcal{P}(2^\omega) - M) = 0$, we have $m(T_1(\mu)) = 0$ for $P$-a.e. $\mu \in \mathcal{P}(2^\omega)$. In either case, because $T_0$ and $T_1$ both preserve $P$, $m(\mu) = 0$ for $P$-a.e. $\mu \in \mathcal{P}(2^\omega)$.

In the case where $0 < P(M) < 1$, Lemma 2.5.4 implies that $P(\mathcal{A}) = 0$ or $P(\mathcal{A}) = 1$, so that $m(\mu)$, $m(T_0(\mu))$, and $m(T_1(\mu))$ are all $P$-a.s. $0$ or all $P$-a.s. strictly positive. Thus in this case $m(T_1(\mu)) = 0$ on a $P$-positive-measure set, and hence on $P$-almost all of $\mathcal{P}(2^\omega)$ as well. Therefore $m(\mu) = 0$ for $P$-a.e. $\mu \in \mathcal{P}(2^\omega)$.

Now we arrive at the main result of this section: random measures are atomless.

Theorem 2.5.7. Every $\mu \in \mathrm{KR}_P$ is atomless. In particular, every $\mu \in \mathrm{MLR}_P$ is atomless.

Proof. By Lemma 2.5.6, $\mathcal{A} = \bigcup_n \mathcal{A}_n$ is $P$-null. Hence $\mathcal{A}_n$ is $P$-null for each $n$. By Lemma 2.5.1, each $\mathcal{A}_n$ is $\Pi^0_1$, so no Kurtz random measure belongs to any $\mathcal{A}_n$; i.e., every $\mu \in \mathrm{KR}_P$ is atomless.

2.6 Random measures are mutually singular (with respect to the Lebesgue measure)

A measure $\mu$ is absolutely continuous with respect to another measure $\nu$, written $\mu \ll \nu$, if $\nu(A) = 0 \Rightarrow \mu(A) = 0$. The property $\mu \ll \lambda$ implies atomlessness (recall that $\lambda$ is the Lebesgue measure), so it is natural to ask whether $\mu \in \mathrm{MLR}_P$ implies $\mu \ll \lambda$. This is far from the case. We show now, in fact, that every $\mu \in \mathrm{MLR}_P$ is mutually singular with $\lambda$, in symbols $\mu \perp \lambda$, which means that $\mu(A) = 1$ for some $A \in \mathcal{B}(2^\omega)$ with $\lambda(A) = 0$. We will actually prove a stronger result: that $\mathrm{MLR}^\mu \cap \mathrm{MLR}_\mu = \emptyset$.²

To show that $\mathrm{MLR}^\mu \cap \mathrm{MLR}_\mu = \emptyset$, we will employ the well-studied notion of selection functions. A selection function is a partial function $f : 2^{<\omega} \to \{\text{select}, \text{exclude}\}$; $f$ determines which bits are selected for entry into a subsequence. The next lemma tells us that a certain selection function we use later will select infinitely often.

Lemma 2.6.1. Let $\mu \in \mathrm{MLR}_P$, let $x \in \mathrm{MLR}^\mu$, and let $0 < \alpha < 1$. Then there are infinitely many $n$ such that $\mu(0 \mid x \upharpoonright n) > \alpha$.

Proof. In the product space $\mathcal{P}(2^\omega) \times 2^\omega$, the set

\[
E_N = \{(\mu, x) : \forall n \ge N\, [\mu(0 \mid x \upharpoonright n) \le \alpha]\}
\]

is $\Pi^0_1$, so it suffices to prove that $(P \otimes \lambda)(E_N) = 0$, where $P \otimes \lambda$ denotes the product measure, because then for every $(\mu, x) \in \mathrm{MLR}_{P\otimes\lambda}$ (actually, for every $(\mu, x) \in \mathrm{KR}_{P\otimes\lambda}$) there are infinitely many $n$ such that $\mu(0 \mid x \upharpoonright n) > \alpha$. By Theorem 2.3.8, $(\mu, x) \in \mathrm{MLR}_{P\otimes\lambda}$ if and only if $\mu \in \mathrm{MLR}_P$ and $x \in \mathrm{MLR}^\mu$.

Now, with $F_N$ the complement (in $\mathcal{P}(2^\omega) \times 2^\omega$) of $E_N$, it is clear that for a fixed $x$, $P$-a.e. $\mu$ has the property that $(\mu, x) \in F_N$. Thus

\begin{align*}
(P \otimes \lambda)(F_N) &= \int_{\mathcal{P}(2^\omega) \times 2^\omega} \mathbf{1}_{F_N}\, d(P \otimes \lambda) \\
&= \int_{2^\omega} \int_{\mathcal{P}(2^\omega)} \mathbf{1}_{F_N}\, dP\, d\lambda \qquad \text{(by Fubini's Theorem [12])} \\
&= \int_{2^\omega} 1\, d\lambda = 1.
\end{align*}

² The results in this section were proven with Laurent Bienvenu.

Lemma 2.6.2. Let $\mu \in \mathrm{MLR}_P$, let $x \in \mathrm{MLR}^\mu$, let $0 < \alpha < 1$ be rational, and let $n_1 < n_2 < \cdots$ be the sequence of all $n$ such that $\mu(0 \mid x \upharpoonright n) > \alpha$, which is infinite by Lemma 2.6.1. Then $y_x \in 2^\omega$ defined by $y_x(i) = x(n_i)$ satisfies the law of large numbers; i.e.,

\[
\lim_{n\to\infty} \frac{1}{n} \sum_{i<n} y_x(i) = \frac{1}{2}.
\]

Proof. This proof is a relativization of the proof of Theorem 7.4.2 in Downey and Hirschfeldt's book [10]. The point is that $y_x$ is the result of a $\mu$-computable selection strategy that simply selects the bit $x(n)$ from $x$ whenever $\mu(0 \mid x \upharpoonright n) > \alpha$. Since $x \in \mathrm{MLR}^\mu$, and the $\mathrm{MLR}^\mu$ sequences are among those from which it is impossible to $\mu$-computably select a subsequence that violates the law of large numbers, $y_x$ must satisfy the law of large numbers.
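The $\mu$-computable selection strategy in this proof can be sketched directly. The Python below is an illustration only (not the dissertation's construction): a genuinely $P$-random measure is not computable, so the hypothetical function `cond0` stands in for the conditional probability $\mu(0 \mid \cdot)$.

```python
def select_subsequence(x, cond0, alpha, length):
    """Selection strategy from the proof: scan x bit by bit and select
    bit x(n) whenever cond0(prefix) -- the stand-in for mu(0 | x
    restricted to n) -- exceeds alpha."""
    y, prefix = [], ""
    for n in range(length):
        if cond0(prefix) > alpha:
            y.append(x[n])
        prefix += x[n]
    return "".join(y)

# Toy stand-in for mu(0 | sigma): depends only on the parity of |sigma|,
# so exactly the even-indexed bits of x get selected.
cond0 = lambda sigma: 0.9 if len(sigma) % 2 == 0 else 0.1

x = "0110100110010110"   # some finite prefix of a real
y = select_subsequence(x, cond0, 0.75, len(x))
assert y == "01101001"   # bits x(0), x(2), x(4), ...
```

The point of the lemma is that when `cond0` is the true $\mu(0\mid\cdot)$ and $x$ is random, the selected subsequence still looks statistically fair.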

The next lemma gives us an effective bound for the proof of Lemma 2.6.4. We state it in slightly simplified form.

Lemma 2.6.3 (Hoeffding's inequality [23]). Let $y(1), \ldots, y(n)$ be independent random variables on the probability space $(2^\omega, \mu)$ taking values in $[0, 1]$. Then

\[
\mu\left(\frac{1}{n}\sum_{i=1}^n y(i) - \int_{2^\omega} \frac{1}{n}\sum_{i=1}^n y(i)\, d\mu \ge \epsilon\right) \le e^{-2n\epsilon^2}
\]

for every $\epsilon > 0$.
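Hoeffding's bound can be checked numerically in the simplest case of fair coin flips, where the exact tail probability is a binomial sum. The Python below is an illustrative check, assuming the $\ge \epsilon$ form of the inequality stated above:

```python
import math
from fractions import Fraction

def hoeffding_bound(n, eps):
    """Right-hand side of Hoeffding's inequality: exp(-2 n eps^2)."""
    return math.exp(-2 * n * float(eps) ** 2)

def exact_tail(n, eps):
    """Exact P( (1/n)*S_n - 1/2 >= eps ) for S_n a sum of n fair bits,
    computed as a binomial tail in exact rational arithmetic."""
    k0 = math.ceil(n * (Fraction(1, 2) + eps))
    return sum(math.comb(n, k) for k in range(k0, n + 1)) / Fraction(2) ** n

# The bound dominates the exact binomial tail at several sizes.
for n in (10, 50, 200):
    assert exact_tail(n, Fraction(1, 10)) <= hoeffding_bound(n, Fraction(1, 10))

assert exact_tail(10, Fraction(1, 10)) == Fraction(193, 512)
```

Note the bound is crude for small $n$ (here $e^{-0.2} \approx 0.82$ against an exact tail of $193/512 \approx 0.38$ at $n = 10$) but decays exponentially, which is exactly what the effective convergence in Lemma 2.6.4 needs.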

Lemma 2.6.4. Let $\mu \in \mathrm{MLR}_P$, let $x \in \mathrm{MLR}_\mu$, let $0 < \alpha < 1$ be rational, and let $n_1 < n_2 < \cdots$ be the sequence of all $n$ such that $\mu(0 \mid x \upharpoonright n) > \alpha$. If $\langle n_i\rangle_{i\in\omega}$ is infinite, then $y_x \in 2^\omega$ defined by $y_x(i) = x(n_i)$ satisfies

\[
\liminf_{n\to\infty} \frac{1}{n} \sum_{i<n} y_x(i) \le 1 - \alpha.
\]

Proof. Let $\beta = 1 - \alpha$ and $W^\epsilon_k = \{x : \frac{1}{k}\sum_{i<k} y_x(i) - \beta > \epsilon\}$ for $\epsilon \in \mathbb{Q}^+$. Then $W^\epsilon_k$ is uniformly $\Sigma^{0,\mu}_1$ and

\[
\Big\{x : \frac{1}{n}\sum_{i<n} y_x(i) - \beta > \epsilon \text{ for infinitely many } n\Big\} = \bigcap_N \bigcup_{k>N} W^\epsilon_k.
\]

Thus it suffices to show that $\mu\big(\bigcup_{k>N} W^\epsilon_k\big) \to 0$ effectively in $N$.

Noticing that

\[
\int_{2^\omega} \frac{1}{k}\sum_{i<k} y_x(i)\, d\mu = \frac{1}{k}\sum_{i<k} \int_{2^\omega} y_x(i)\, d\mu \le \frac{1}{k}\sum_{i<k} \beta = \beta
\]

and

\[
W^\epsilon_k \subseteq \Big\{x : \frac{1}{k}\sum_{i<k} y_x(i) - \int_{2^\omega} \frac{1}{k}\sum_{i<k} y_x(i)\, d\mu > \epsilon\Big\},
\]

we can apply Lemma 2.6.3 to conclude that $\mu\big(\bigcup_{k>N} W^\epsilon_k\big) \le \sum_{k>N} e^{-2k\epsilon^2} \to 0$ effectively in $N$.

Theorem 2.6.5. If $\mu \in \mathrm{MLR}_P$, then $\mathrm{MLR}^\mu \cap \mathrm{MLR}_\mu = \emptyset$, so $\mu \perp \lambda$.

Proof. Lemmas 2.6.2 and 2.6.4 (applied with any rational $\alpha > 1/2$, so that $1 - \alpha < 1/2$) together imply that $\mathrm{MLR}^\mu \cap \mathrm{MLR}_\mu = \emptyset$. But since $\lambda(\mathrm{MLR}^\mu) = 1$, it must be the case that $\lambda(\mathrm{MLR}_\mu) = 0$; as $\mu(\mathrm{MLR}_\mu) = 1$, this gives $\mu \perp \lambda$.

2.7 Relatively random measures

Recall that $\mu \in \mathrm{MLR}^\nu$ means that $y \in \mathrm{MLR}^x$, where $\mu = \Phi(y)$ and $\nu = \Phi(x)$. If both $\mu \in \mathrm{MLR}^\nu$ and $\nu \in \mathrm{MLR}^\mu$, we say that $\mu$ and $\nu$ are relatively random.

It was conjectured initially that relatively random measures would share no randoms. In fact, an immediate consequence of Corollary 2.4.4 and Theorem 2.6.5 is the following, which shows the conjecture to be almost true.

Theorem 2.7.1. If $\mu$ and $\nu$ are relatively random, then $\mathrm{MLR}^\nu_\mu \cap \mathrm{MLR}_\nu = \emptyset = \mathrm{MLR}^\mu_\nu \cap \mathrm{MLR}_\mu$. In particular, $\mu \perp \nu$.

Interestingly, though, relatively random measures do share a random real, and in a rather strong way. Before proving this, we need a lemma about universal tests for $\mu$-randomness.

Lemma 2.7.2. Each $\mu \in \mathrm{MLR}_P$ has a name of least Turing degree and hence admits a universal $\mu$-test.

Proof. The point here is that $x := \Phi^{-1}(\mu)$ is essentially a name for $\mu$, and the one of least Turing degree. The proof of Proposition 2.3.1 shows that $x$ computes a name for $\mu$. Also, any name for $\mu$ computes $\mu(\sigma)$ for every $\sigma \in 2^{<\omega}$, and hence must also be able to compute $\mu(0 \mid \sigma)$ for each $\sigma \in 2^{<\omega}$; this is the same as computing $x$.

Theorem 2.7.3. There is a computable function $G : \mathcal{P}(2^\omega) \times \mathcal{P}(2^\omega) \to 2^\omega$ such that if $\mu$ and $\nu$ are relatively random measures, then $G(\mu, \nu) \in \mathrm{MLR}_\mu \cap \mathrm{MLR}_\nu$.³

Proof. The algorithm we are about to define is a greedy one. It builds the common random by asking the two measures "Which of you cares the most about which direction I go?" and then acting accordingly.

³ This result was proven with Joe Miller and Uri Andrews.

Define the function $G : \mathcal{P}(2^\omega) \times \mathcal{P}(2^\omega) \to 2^\omega$ by

\[
G(\mu, \nu)(i) = 0 \iff \mu(0 \mid G(\mu,\nu) \upharpoonright i) > \nu(1 \mid G(\mu,\nu) \upharpoonright i).
\]

Note that also

\[
G(\mu, \nu)(i) = 1 \iff \mu(1 \mid G(\mu,\nu) \upharpoonright i) > \nu(0 \mid G(\mu,\nu) \upharpoonright i).
\]

The key fact here is that $P\{\nu : G(\mu, \nu) \succeq \sigma\} = \mu(\sigma)$ for each $\mu \in \mathcal{P}(2^\omega)$ and $\sigma \in 2^{<\omega}$. Indeed, by induction, if $P\{\nu : G(\mu, \nu) \succeq \sigma\} = \mu(\sigma)$, then by independence

\begin{align*}
P\{\nu : G(\mu, \nu) \succeq \sigma^\frown i\} &= P\{\nu : G(\mu, \nu) \succeq \sigma\} \cdot P\{\nu : G(\mu, \nu)(|\sigma|) = i \mid G(\mu, \nu) \succeq \sigma\} \\
&= \mu(\sigma) \cdot P\{\nu : \nu(1 - i \mid \sigma) < \mu(i \mid \sigma)\} \\
&= \mu(\sigma) \cdot \mu(i \mid \sigma) \\
&= \mu(\sigma^\frown i).
\end{align*}

Let $\langle U^\mu_n\rangle_{n\in\omega}$ be a universal $\mu$-test. The sets $V_n := \{\nu : G(\mu, \nu) \in U^\mu_n\}$ are uniformly $\Sigma^{0,\mu}_1$ and, with $S_n$ a prefix-free set of generators of $U^\mu_n$, the above calculation gives

\[
P(V_n) = P\Big(\bigcup_{\sigma \in S_n} \{\nu : G(\mu,\nu) \succeq \sigma\}\Big) = \sum_{\sigma\in S_n} P(\{\nu : G(\mu,\nu) \succeq \sigma\}) = \sum_{\sigma\in S_n} \mu(\sigma) = \mu(U^\mu_n) \le 2^{-n}.
\]

Thus $\langle V_n\rangle_{n\in\omega}$ is a $P$-test for Martin-Löf randomness relative to $\mu$, and so if $\nu$ is relatively random to $\mu$, then $G(\mu, \nu) \in \mathrm{MLR}_\mu$. By symmetry, $G(\mu, \nu) = G(\nu, \mu) \in \mathrm{MLR}_\nu$ as well.
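The greedy construction of $G$ admits a direct finite-precision sketch. The Python below is illustrative only: genuinely relatively random measures are not computable, so hypothetical computable surrogates stand in for the two conditional-probability maps.

```python
def greedy_common(mu_cond, nu_cond, length):
    """Greedy construction from the proof of Theorem 2.7.3: at each node,
    extend by the bit preferred by the 'more opinionated' measure.
    mu_cond(prefix) and nu_cond(prefix) return the conditional
    probability of a 0 given the prefix built so far (stand-ins)."""
    g = ""
    for _ in range(length):
        # G(mu,nu)(i) = 0  iff  mu(0 | g) > nu(1 | g) = 1 - nu(0 | g)
        g += "0" if mu_cond(g) > 1 - nu_cond(g) else "1"
    return g

# Surrogates: mu strongly prefers 0; nu mildly prefers 1.
mu_cond = lambda g: 0.9   # mu(0 | g) = 0.9 at every node
nu_cond = lambda g: 0.4   # nu(0 | g) = 0.4, so nu(1 | g) = 0.6

assert greedy_common(mu_cond, nu_cond, 5) == "00000"  # 0.9 > 0.6 each step
```

With the preferences reversed (e.g. $\mu(0\mid\cdot) = 0.2$, $\nu(0\mid\cdot) = 0.3$), the construction takes the 1-branch at every step instead.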

The random element shared by the relatively random measures in Theorem 2.7.3 was derandomized (indeed, computed) by the join of those measures. Theorem 2.7.1 shows that, in fact, any $x \in \mathrm{MLR}_\mu \cap \mathrm{MLR}_\nu$ must be derandomized (with respect to either measure) by $\mu \oplus \nu$. This raises the following question.

Question 2.7.4. If $\mu$ and $\nu$ are relatively random and $x \in \mathrm{MLR}_\mu \cap \mathrm{MLR}_\nu$, must $x$ be computed by $\mu \oplus \nu$?

Theorem 2.7.1 also leads to Theorem 2.7.7, a special case of which says that the randoms for a random measure $\mu$ form an "anti-tailset": change even a single bit and the real loses its $\mu$-randomness.

Recall that for $\sigma \in 2^{<\omega}$, $\mu_\sigma$ is defined by $\mu_\sigma(\tau) = \mu(\tau \mid \sigma)$.

Lemma 2.7.5. $\nu$ and $\xi$ are relatively random if and only if $\nu = \mu_\sigma$ and $\xi = \mu_\tau$ for some $\mu \in \mathrm{MLR}_P$ and incompatible $\sigma, \tau \in 2^{<\omega}$.

Proof. Clearly $\mu_\sigma$ and $\mu_\tau$ are relatively random whenever $\mu \in \mathrm{MLR}_P$ and $\sigma \perp \tau$. Given two relatively random measures $\nu$ and $\xi$, by taking $p \in \mathrm{MLR}^{\nu\oplus\xi}$ and defining $\mu$ by $\mu(0) = p$, $\mu_0 = \nu$, and $\mu_1 = \xi$, we have $\mu \in \mathrm{MLR}_P$.

Lemma 2.7.6. If $\sigma x \in \mathrm{MLR}_\mu$, then $x \in \mathrm{MLR}_{\mu_\sigma}$.

Proof. If $x \notin \mathrm{MLR}_{\mu_\sigma}$, then $x \in U^{\mu_\sigma}_n$ for each $n$, where $\langle U^{\mu_\sigma}_n\rangle_{n\in\omega}$ is a universal $\mu_\sigma$-test. Then $V_n := \{\sigma y : y \in U^{\mu_\sigma}_n\}$ is $\Sigma^{0,\mu}_1$, since $\mu \ge_T \mu_\sigma$, and $\mu(V_n) = \mu[\sigma]\,\mu_\sigma(U^{\mu_\sigma}_n) \le 2^{-n}$. Therefore $\langle V_n\rangle_{n\in\omega}$ is a $\mu$-test capturing $\sigma x$, and hence $\sigma x \notin \mathrm{MLR}_\mu$.

The next result shows that the $\mu$-random elements in one part of the full binary-branching tree look much different from those in another part. Contrast this with the Lebesgue measure, for which randomness does not depend on prefixes (i.e., randomness is a "tail event").

Theorem 2.7.7. If $\sigma x \in \mathrm{MLR}_\mu$ and $\tau \perp \sigma$, then $\tau x \notin \mathrm{MLR}_\mu$.

Proof. Suppose $\sigma x \in \mathrm{MLR}_\mu$ and $\tau x \in \mathrm{MLR}_\mu$. Then $x \in \mathrm{MLR}_{\mu_\sigma} \cap \mathrm{MLR}_{\mu_\tau}$ by Lemma 2.7.6. Since $\sigma \perp \tau$, the measures $\mu_\sigma$ and $\mu_\tau$ are relatively random, and hence by Theorem 2.7.1, $x \notin \mathrm{MLR}^{\mu_\sigma}_{\mu_\tau}$. Since $\mu \ge_T \mu_\sigma$, $x \notin \mathrm{MLR}^{\mu}_{\mu_\tau}$, so by Lemma 2.7.6, $\tau x \notin \mathrm{MLR}^\mu_\mu = \mathrm{MLR}_\mu$, a contradiction.

We used Kolmogorov's 0-1 law earlier (Lemma 2.5.2). It seems that random measures should fail to satisfy Kolmogorov's 0-1 law, since changing finitely many bits of a real puts it in a part of the tree whose conditional probabilities are wildly different (for a fixed random $\mu$). We now confirm this intuition.

Corollary 2.7.8. Random measures fail to satisfy Kolmogorov's 0-1 law.

Proof. Let $\mu \in \mathrm{MLR}_P$. By Theorem 2.7.7, closing the set $\mathrm{MLR}_\mu \cap [0]$ under tails adds no randoms and hence no measure. The result is therefore a tailset of measure $\mu(0) \in (0, 1)$.

We can also use Theorem 2.7.7 to show that, given a random $x$, the probability of choosing a measure that thinks $x$ is random is zero.

Corollary 2.7.9. If $x \in \mathrm{MLR}_\lambda$, then $P(\{\mu : x \in \mathrm{MLR}_\mu\}) = 0$.

Proof. Changing only finitely many bits of (the preimage under $\Phi$ of) any $P$-random $\mu$ does not affect whether $x \in \mathrm{MLR}_\mu$, so by Kolmogorov's 0-1 law, either

\[
P(\{\mu : x \in \mathrm{MLR}_\mu\}) = 0 \quad \text{or} \quad P(\{\mu : x \in \mathrm{MLR}_\mu\}) = 1.
\]

But $P(\{\mu : x \in \mathrm{MLR}_\mu\}) = 1$ if and only if $P(\{\mu : x' \in \mathrm{MLR}_\mu\}) = 1$, where $x'(i) = x(i)$ for $i > 0$ and $x'(0) = 1 - x(0)$. So if $P(\{\mu : x \in \mathrm{MLR}_\mu\}) = 1$, then there is $\mu$ such that $x, x' \in \mathrm{MLR}_\mu$, contrary to Theorem 2.7.7.

Let $T : 2^\omega \to 2^\omega$ be the shift map, so $Tx(i) = x(i + 1)$. This map preserves the Lebesgue measure. Clearly, no element of $\mathrm{MLR}_P$ is preserved by $T$, since that would introduce dependence amongst the conditional probabilities of $\mu$. Moreover, the following holds.

Theorem 2.7.10. If $\mu \in \mathrm{MLR}_P$ and $x \in \mathrm{MLR}_\mu$, then $Tx \notin \mathrm{MLR}_\mu$.

Proof. If $x \in \mathrm{MLR}_\mu$ and $Tx \in \mathrm{MLR}_\mu$, then there are incompatible strings $\sigma \prec x$ and $\tau \prec Tx$ such that $x = \sigma y$ and $Tx = \tau y$. But $\mu_\sigma$ and $\mu_\tau$ are relatively random with $y \in \mathrm{MLR}^{\mu_\tau}_{\mu_\sigma} \cap \mathrm{MLR}^{\mu_\sigma}_{\mu_\tau}$, contradicting Theorem 2.7.1.

We believe Theorem 2.7.10 can be generalized. Recall (see [19]) that if $f : \omega \to \omega$ is 1-1 and computable, then $x \circ f \in \mathrm{MLR}_\lambda$ whenever $x \in \mathrm{MLR}_\lambda$. Our final conjecture states that this fails for random elements of random measures.

Conjecture 2.7.11. Suppose $\mu \in \mathrm{MLR}_P$ and $x \in \mathrm{MLR}_\mu$. If $f : \omega \to \omega$ is 1-1, computable, and not the identity, then $x \circ f \notin \mathrm{MLR}_\mu$.

CHAPTER 3

THE INTERPLAY OF CLASSES OF ALGORITHMICALLY RANDOM OBJECTS

Work in this chapter was done jointly with Chris Porter.

3.1 Introduction

In this chapter, we have two primary goals: (1) to study the interplay between algorithmically random closed sets on $2^\omega$, algorithmically random continuous functions on $2^\omega$, and algorithmically random measures on $2^\omega$; and (2) to apply two central results, namely the preservation of randomness principle and the no randomness ex nihilo principle, to the study of the algorithmically random objects listed above.

Barmpalias, Brodhead, Cenzer, Dashti, and Weber initiated the study of algorithmically random closed subsets of $2^\omega$ in [2]. Algorithmically random closed sets were further studied in, for instance, [1], [9], and [7]. In the spirit of their definition of algorithmically random closed sets, Barmpalias, Brodhead, Cenzer, Dashti, and Weber also defined a notion of algorithmically random continuous function on $2^\omega$ in [3]. The connection between random closed sets and effective capacities was explored in [6]. Algorithmically random measures on $2^\omega$ were first studied in Chapter 2.

One of the central results in [3] is that the set of zeroes of a random continuous function on $2^\omega$ is a random closed subset of $2^\omega$. Inspired by this result, we here investigate similar "bridge results," which allow us to transfer information about one class of algorithmically random objects to another.

Two tools that are central to our investigation, mentioned in (2) above, are the preservation of randomness principle and the no randomness ex nihilo principle. In $2^\omega$, the space of infinite binary sequences, the preservation of randomness principle tells us that if $\Phi : 2^\omega \to 2^\omega$ is an effective map and $\mu$ is a computable probability measure on $2^\omega$ such that the domain of $\Phi$ has $\mu$-measure 1, then $\Phi$ maps $\mu$-random members of $2^\omega$ to members of $2^\omega$ that are random with respect to the measure $\nu$ obtained by pushing $\mu$ forward via $\Phi$. Furthermore, the no randomness ex nihilo principle tells us that any sequence that is random with respect to $\nu$ is the image of some $\mu$-random sequence under $\Phi$. Used in tandem, these two principles allow us to conclude that the image of the $\mu$-random sequences under $\Phi$ is precisely the set of $\nu$-random sequences.

With the exception of our work in Chapter 2, the studies listed above do not make use of these two tools in tandem. As we will show, they not only allow for the simplification of a number of proofs in the above-listed studies, but they also allow us to answer a number of questions that were left open in those studies.

The outline of the remainder of this chapter is as follows. In Section 3.2, we provide the requisite background for the rest of the chapter. In Section 3.3, we review the basics of algorithmic randomness, including the preservation of randomness principle and the no randomness ex nihilo principle. We also provide the definitions of algorithmically random closed sets in $2^\omega$, random continuous functions on $2^\omega$, and random measures on $2^\omega$, and we list some basic properties of these objects. Section 3.4 contains simplified proofs of some previously obtained results from [2] and [3], as well as a proof of a conjecture from [3] that every random closed subset of $2^\omega$ is the set of zeros of a random continuous function on $2^\omega$. We study the support of a certain class of random measures in Section 3.5, and we establish a correspondence between random closed sets and the random measures studied in Chapter 2. Lastly, in Section 3.6, we prove that the Lebesgue measure of the range of a random continuous function on $2^\omega$ is always nonzero, from which it follows that no random continuous function is injective (which had not been previously established). We also strengthen a result in [3] (namely, that not every random continuous function is surjective) by proving that no random continuous function is surjective, from which it follows that the Lebesgue measure of the range of a random continuous function is never equal to one.

3.2 Background

3.2.1 Some topological and measure-theoretic basics

For $n = \{0, 1, \ldots, n-1\} \in \omega$, the set of all finite strings over the alphabet $n$ is denoted $n^{<\omega}$. When $n = 2$, we let $\sigma_0, \sigma_1, \sigma_2, \ldots$ be the canonical length-lexicographic enumeration of $2^{<\omega}$, so that $\sigma_0 = \epsilon$ (the empty string), $\sigma_1 = 0$, $\sigma_2 = 1$, etc.

The space of all infinite sequences over the alphabet $n$ is denoted $n^\omega$. The elements of $n^\omega$ are also called reals. The product topology on $n^\omega$ is generated by the clopen sets

\[
⟦\sigma⟧ = \{x \in n^\omega : x \succ \sigma\},
\]

where $\sigma \in n^{<\omega}$ and $x \succ \sigma$ means that $\sigma$ is an initial segment of $x$. When $x$ is a real and $k \in \omega$, $x \upharpoonright k$ denotes the initial segment of $x$ of length $k$.

For $\sigma, \tau \in n^{<\omega}$, $\sigma^\frown\tau$ denotes the concatenation of $\sigma$ and $\tau$. In some cases, we will write this concatenation as $\sigma\tau$.

A tree is a subset of $n^{<\omega}$ that is closed under initial segments; i.e., $T \subseteq n^{<\omega}$ is a tree if $\sigma \in T$ whenever $\tau \in T$ and $\sigma \preceq \tau$. A path through a tree $T \subseteq n^{<\omega}$ is a real $x \in n^\omega$ satisfying $x \upharpoonright k \in T$ for every $k$. The set of all paths through a tree $T$ is denoted $[T]$. Recall the correspondence between closed sets and trees.

Proposition 3.2.1. A set $C \subseteq n^\omega$ is closed if and only if $C = [T]$ for some tree $T \subseteq n^{<\omega}$. Moreover, $C$ is nonempty if and only if $T$ is infinite.

A measure $\mu$ on $n^\omega$ is a function that assigns to each Borel subset of $n^\omega$ a number in the unit interval $[0, 1]$ and satisfies $\mu(\bigcup_{i\in\omega} B_i) = \sum_{i\in\omega} \mu(B_i)$ whenever the $B_i$'s are pairwise disjoint. Carathéodory's extension theorem guarantees that the conditions

• $\mu(⟦\epsilon⟧) = 1$, and

• $\mu(⟦\sigma⟧) = \mu(⟦\sigma^\frown 0⟧) + \mu(⟦\sigma^\frown 1⟧) + \cdots + \mu(⟦\sigma^\frown(n-1)⟧)$ for all $\sigma \in n^{<\omega}$

uniquely determine a measure on $n^\omega$. Thus, a measure is identified with a function $\mu : n^{<\omega} \to [0, 1]$ satisfying the above conditions, and $\mu(\sigma)$ is often written instead of $\mu(⟦\sigma⟧)$. The Lebesgue measure $\lambda$ on $n^\omega$ is defined by $\lambda(\sigma) = n^{-|\sigma|}$ for each string $\sigma \in n^{<\omega}$.

Given a measure $\mu$ on $n^\omega$ and $\sigma, \tau \in n^{<\omega}$, $\mu(\sigma\tau \mid \sigma)$ is defined to be

\[
\mu(\sigma\tau \mid \sigma) = \frac{\mu(⟦\sigma\tau⟧)}{\mu(⟦\sigma⟧)}.
\]
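Viewing a measure as a function on strings, the two Carathéodory conditions above are easy to verify mechanically. A minimal Python sketch (illustrative only; exact arithmetic via `Fraction` avoids rounding issues):

```python
from fractions import Fraction
from itertools import product

def is_premeasure(mu, n, depth):
    """Check the two conditions on strings of length < depth:
    mu(empty string) = 1, and mu(sigma) equals the sum of mu over
    the n one-symbol extensions of sigma."""
    if mu("") != 1:
        return False
    alphabet = [str(i) for i in range(n)]
    for d in range(depth):
        for sigma in map("".join, product(alphabet, repeat=d)):
            if mu(sigma) != sum(mu(sigma + a) for a in alphabet):
                return False
    return True

# Lebesgue measure on 3^omega: lambda(sigma) = 3^{-|sigma|}.
lam = lambda sigma: Fraction(1, 3 ** len(sigma))
assert is_premeasure(lam, 3, 4)

# The same formula with the wrong alphabet size fails the additivity check.
assert not is_premeasure(lambda s: Fraction(1, 2 ** len(s)), 3, 2)
```

Of course the check only inspects finitely many strings; the extension theorem is what turns such a function on all of $n^{<\omega}$ into a genuine Borel measure.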

3.2.2 Some computability theory

A $\Sigma^0_1$ class $S \subseteq n^\omega$ is an effectively open set, i.e., an effective union of basic clopen subsets of $n^\omega$. $P \subseteq n^\omega$ is a $\Pi^0_1$ class if $n^\omega \setminus P$ is a $\Sigma^0_1$ class.

A partial function $\Phi : \subseteq n^\omega \to m^\omega$ is computable if the preimage of a $\Sigma^0_1$ subset of $m^\omega$ is a $\Sigma^0_1$ subset of the domain of $\Phi$, uniformly; that is, if for every $\Sigma^0_1$ class $U \subseteq m^\omega$, there is a $\Sigma^0_1$ class $V \subseteq n^\omega$ such that $\Phi^{-1}(U) = V \cap \mathrm{dom}(\Phi)$, and an index for $V$ can be uniformly computed from an index for $U$. Equivalently, $\Phi : \subseteq n^\omega \to m^\omega$ is computable if there is an oracle Turing machine that, when given $x \in n^\omega$ (as an oracle) and $k \in \omega$, outputs $\Phi(x)(k)$. We can relativize the notion of a computable function $\Phi : \subseteq n^\omega \to m^\omega$ to any oracle $z \in 2^\omega$ to obtain a $z$-computable function.

A measure $\mu$ on $n^\omega$ is computable if $\mu(\sigma)$ is a computable real number, uniformly in $\sigma \in n^{<\omega}$. Clearly, the Lebesgue measure $\lambda$ is computable.

If $\mu$ is a computable measure on $n^\omega$ and $\Phi : \subseteq n^\omega \to m^\omega$ is a computable function defined on a set of $\mu$-measure one, then the pushforward measure $\mu_\Phi$, defined by

\[
\mu_\Phi(\sigma) = \mu(\Phi^{-1}(⟦\sigma⟧))
\]

for each $\sigma \in m^{<\omega}$, is a computable measure.
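On cylinders, the pushforward is just a sum of $\mu$-values over a cover of the preimage. A small Python sketch (illustrative; the helper `preimage` is hypothetical and must return a finite prefix-free cover of $\Phi^{-1}⟦\sigma⟧$ by cylinders, which exists for the shift map used here):

```python
from fractions import Fraction

def pushforward_cyl(mu, preimage, sigma):
    """mu_Phi(sigma) = mu(Phi^{-1}[sigma]), where `preimage` returns a
    prefix-free list of strings whose cylinders cover Phi^{-1}[sigma]."""
    return sum(mu(tau) for tau in preimage(sigma))

lam = lambda s: Fraction(1, 2 ** len(s))

# Phi = the shift map T(x)(k) = x(k+1); the preimage of [sigma] is
# [0 sigma] union [1 sigma], so the pushforward of Lebesgue is Lebesgue.
shift_pre = lambda sigma: ["0" + sigma, "1" + sigma]
assert pushforward_cyl(lam, shift_pre, "0110") == lam("0110")
assert pushforward_cyl(lam, shift_pre, "") == 1
```

For a general computable $\Phi$ the preimage of a cylinder is only effectively open, so $\mu_\Phi(\sigma)$ is approximated from below by enumerating such covers; the sketch shows the clopen case.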

3.3 Algorithmically random objects

3.3.1 Algorithmically random sequences

Definition 3.3.1. Let $\mu$ be a computable measure on $n^\omega$ and $z \in m^\omega$. Then $\mathrm{MLR}^z_\mu$ is the set of all $x \in n^\omega$ such that $x \notin \bigcap_n U_n$ whenever $U_0, U_1, \ldots$ is a uniformly $\Sigma^{0,z}_1$ sequence of subsets of $n^\omega$ with $\mu(U_n) \le 2^{-n}$. Such an $x$ is said to be $\mu$-random relative to $z$, and such a sequence $U_0, U_1, \ldots$ is called a $\mu$-test relative to $z$. When $z$ is computable, we simply write $\mathrm{MLR}_\mu$, say $x$ is $\mu$-random, and call $U_0, U_1, \ldots$ a $\mu$-test.

The following is well known and straightforward.

Proposition 3.3.2. Let $\mu$ be a computable measure on $n^\omega$ and let $z \in m^\omega$. If $C \subseteq n^\omega$ is $\Pi^{0,z}_1$ and $\mu(C) = 0$, then $C \cap \mathrm{MLR}^z_\mu = \emptyset$.

The following is likely folklore, but it was at least observed in [4].

Proposition 3.3.3. Let $\mu$ be a computable measure on $n^\omega$. If $T : \subseteq n^\omega \to m^\omega$ is computable with $\mu(\mathrm{dom}(T)) = 1$, then $\mathrm{MLR}_\mu \subseteq \mathrm{dom}(T)$.

Lemma 3.3.4 (Folklore). Let $T : \subseteq 2^\omega \to 2^\omega$ be computable, and suppose $C$ is a $\Pi^0_1$ subset of $\mathrm{dom}(T)$. Then $T(C) \in \Pi^0_1$, uniformly.

The next theorem represents our main tool here.

Theorem 3.3.5 (Preservation of Randomness and No Randomness Ex Nihilo). Let $T : \subseteq 2^\omega \to 2^\omega$ be computable with $\lambda(\mathrm{dom}(T)) = 1$.

(i) If $x \in \mathrm{MLR}_\lambda$, then $T(x) \in \mathrm{MLR}_{\lambda\circ T^{-1}}$.

(ii) If $y \in \mathrm{MLR}_{\lambda\circ T^{-1}}$, then there exists $x \in \mathrm{MLR}_\lambda$ such that $T(x) = y$.

Proof.

(i) If $T(x) \notin \mathrm{MLR}_{\lambda\circ T^{-1}}$, then $T(x) \in \bigcap_n V_n$ for some $\lambda\circ T^{-1}$-test $\langle V_n\rangle$. Then $x \in \bigcap_n T^{-1}(V_n)$ and $\lambda(T^{-1}(V_n)) \le 2^{-n}$. Moreover, because $T$ is computable (on its domain), $T^{-1}(V_n) = U_n \cap \mathrm{dom}(T)$ for some $\Sigma^0_1$ class $U_n$. Since $\lambda(\mathrm{dom}(T)) = 1$, $\lambda(U_n) \le 2^{-n}$. Thus $x \notin \mathrm{MLR}_\lambda$.

(ii) Let $\langle U_n\rangle$ be a universal test for $\lambda$-randomness, and set $K_n = 2^\omega - U_n$. Then $T(K_n)$ is uniformly $\Pi^0_1$ by Lemma 3.3.4, so $2^\omega - T(K_n)$ is uniformly $\Sigma^0_1$. Because $\lambda\circ T^{-1}(2^\omega - T(K_n)) = 1 - \lambda\circ T^{-1}(T(K_n)) \le 1 - \lambda(K_n) \le 2^{-n}$, the sets $2^\omega - T(K_n)$ form a test for $\lambda\circ T^{-1}$-randomness. So if $y \in \mathrm{MLR}_{\lambda\circ T^{-1}}$, then $y \notin 2^\omega - T(K_n)$ for some $n$; i.e., $y \in T(K_n)$. The proof is now complete, since $K_n \subseteq \mathrm{MLR}_\lambda$.

We will also use a relativization of Theorem 3.3.5.

Corollary 3.3.6. Let $T : \subseteq 2^\omega \to 2^\omega$ be computable relative to $z \in 2^\omega$ with $\lambda(\mathrm{dom}(T)) = 1$.

(i) If $x \in \mathrm{MLR}^z_\lambda$, then $T(x) \in \mathrm{MLR}^z_{\lambda\circ T^{-1}}$.

(ii) If $y \in \mathrm{MLR}^z_{\lambda\circ T^{-1}}$, then there is $x \in \mathrm{MLR}^z_\lambda$ such that $T(x) = y$.

Lastly, the following result, known as van Lambalgen's Theorem, will be useful to us.

Theorem 3.3.7 ([21]). Let $\mu$ and $\nu$ be computable measures on $m^\omega$ and $n^\omega$, respectively. Then for $(x, y) \in m^\omega \times n^\omega$, $(x, y) \in \mathrm{MLR}_{\mu\otimes\nu}$ if and only if $x \in \mathrm{MLR}^y_\mu$ and $y \in \mathrm{MLR}_\nu$.

3.3.2 Algorithmically random closed subsets of 2!

Let $\mathcal{C}(2^\omega)$ denote the collection of all nonempty closed subsets of $2^\omega$. As noted in Proposition 3.2.1, these are the sets of paths through infinite binary trees. Thus, to randomly generate a nonempty closed set, it suffices to randomly generate an infinite tree. We'll code infinite trees by reals in $3^\omega$, so we can reduce the process of randomly generating infinite trees to randomly generating reals.

Given $x \in 3^\omega$, define a tree $T_x \subseteq 2^{<\omega}$ inductively as follows. First, $\varepsilon$, the empty string, is automatically in $T_x$. Now, suppose $\sigma_i \in T_x$. Then

• $\sigma_i{}^\frown 0 \in T_x$ and $\sigma_i{}^\frown 1 \notin T_x$ if $x(i) = 0$;

• $\sigma_i{}^\frown 0 \notin T_x$ and $\sigma_i{}^\frown 1 \in T_x$ if $x(i) = 1$;

• $\sigma_i{}^\frown 0 \in T_x$ and $\sigma_i{}^\frown 1 \in T_x$ if $x(i) = 2$.

Under this coding, $T_x$ has no dead ends and hence is always infinite. This coding can be thought of as a labeling of the nodes of $2^{<\omega}$ by the digits of $x$: a 0 at a node means branch only left, a 1 means branch only right, and a 2 means branch both ways. Note that every tree without dead ends except $2^{<\omega}$ itself has infinitely many codes.
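The decoding just described can be sketched on finite data. This is an illustrative sketch, not from the dissertation; it assumes one natural breadth-first reading of the node enumeration of $T_x$, and all names are ours.

```python
# Decode a finite prefix of a code x in 3^omega into the first levels of the
# tree T_x.  Nodes of T_x consume digits of x in breadth-first order; digit
# 0 = branch left only, 1 = branch right only, 2 = branch both ways.
from collections import deque

def tree_levels(x, depth):
    """Return the nodes of T_x up to the given depth, as strings over {0,1}."""
    nodes = [""]              # the empty string is always in T_x
    queue = deque([""])       # breadth-first order of the nodes of T_x
    digits = iter(x)
    while queue:
        node = queue.popleft()
        if len(node) >= depth:
            continue
        d = next(digits)
        children = {0: ["0"], 1: ["1"], 2: ["0", "1"]}[d]
        for c in children:
            child = node + c
            nodes.append(child)
            queue.append(child)
    return nodes

# Code 2,0,1,...: branch both ways at the root, then left-only, then right-only.
print(tree_levels([2, 0, 1, 2, 2, 2, 2], 2))   # ['', '0', '1', '00', '11']
```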

Definition 3.3.8. A nonempty closed set $C \in \mathcal{C}(2^\omega)$ is a random closed set if $C = [T_x]$ for some $x \in \mathrm{MLR}_\lambda$.

The main facts about random closed sets that we will use in the sequel are as follows.

Theorem 3.3.9 ([2]). Every random closed set has Lebesgue measure zero.

Theorem 3.3.10 ([2]). Every random closed set is perfect.

3.3.3 Algorithmically random continuous functions on $2^\omega$

Let $\mathcal{F}(2^\omega)$ denote the collection of all continuous $F :\subseteq 2^\omega \to 2^\omega$. To define a random continuous function, we code each element of $\mathcal{F}(2^\omega)$ by a real $x \in 3^\omega$. The coding is a labeling of the edges of the full binary tree (or equivalently, of all nodes in $2^{<\omega}$ except $\varepsilon$) by the digits of $x$. Having labeled the edges according to $x$, the function $F_x$ coded by $x$ is defined by $F_x(y) = z$ if $z$ is the element of $2^\omega$ left over after following $y$ through the labeled tree and removing the 2's. (In the case where only finitely many 0's and 1's remain after removing the 2's, $F_x(y)$ is undefined.)

Formally, define a labeling function $\ell_x : 2^{<\omega} \setminus \{\varepsilon\} \to 3$ by $\ell_x(\sigma_i) = x(i-1)$. Now $F_x \in \mathcal{F}(2^\omega)$ is defined by $F_x(y) = z$ if and only if $z$ is the result of removing the 2's from the sequence $\ell_x(y{\restriction}1), \ell_x(y{\restriction}2), \ell_x(y{\restriction}3), \ldots$.
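The evaluation just described can be sketched on finite prefixes. This is an illustrative sketch, not from the dissertation; the length-lexicographic indexing of nodes and all names are ours.

```python
# Compute an initial segment of F_x(y): read the labels l_x(y|1), l_x(y|2), ...
# along y and delete the 2's.  Labels are looked up via the length-lex index
# of the node, with sigma_0 the empty string and l_x(sigma_i) = x(i-1).

def node_index(sigma):
    """Length-lex index of a binary string: sigma_0 = '', sigma_1 = '0', ..."""
    return (1 << len(sigma)) - 1 + int(sigma, 2) if sigma else 0

def F_prefix(x, y):
    """Labels along y with 2's removed; x is a long enough list over {0,1,2}."""
    out = []
    for k in range(1, len(y) + 1):
        label = x[node_index(y[:k]) - 1]   # l_x(sigma_i) = x(i-1); root unlabeled
        if label != 2:
            out.append(label)
    return out

# Follow y = 010 through an (arbitrary) labeled tree:
x = [2, 0, 1, 1, 2, 0, 2, 1, 0, 2, 0, 1, 1, 0]   # labels of sigma_1, sigma_2, ...
print(F_prefix(x, "010"))   # [1, 0]
```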

Definition 3.3.11. A function $F \in \mathcal{F}(2^\omega)$ is a random continuous function if $F = F_x$ for some $x \in \mathrm{MLR}_\lambda$.

Remark 3.3.1. $F_x$ is continuous (on its domain) because it is computable relative to some oracle, namely $x$. Since $2^\omega$ is compact and Hausdorff, it follows that $F_x$ is a closed map and, hence, that $\mathrm{ran}(F)$ is $\Pi^{0,F}_1$.

We will make use of the following facts about random continuous functions.

Theorem 3.3.12 ([3]). If $F \in \mathcal{F}(2^\omega)$ is random and $x \in 2^\omega$ is computable, then $F(x) \in 2^\omega$ is random.

Theorem 3.3.13 ([3]). If $F \in \mathcal{F}(2^\omega)$ is random, then $F$ is total.

3.3.4 Algorithmically random measures on $2^\omega$

Let $P(2^\omega)$ be the space of probability measures on $2^\omega$. Given $x \in 2^\omega$, the $n$th column $x_n$ of $x$ is defined by $x_n(k) = 1$ if and only if $x(\langle n, k\rangle) = 1$, where $\langle n, k\rangle$ is some fixed computable bijection between $\omega^2$ and $\omega$. We write $x = \bigoplus_{n\in\omega} x_n$. Let $(\sigma_i)_{i\in\omega}$ be the canonical enumeration of $2^{<\omega}$ in the length-lexicographical order. We define a map $2^\omega \to P(2^\omega)$ that sends a real $x$ to the measure $\mu_x$ satisfying (i) $\mu_x(\varepsilon) = 1$ and (ii) $\mu_x(\sigma_n{}^\frown 0) = x_n \cdot \mu_x(\sigma_n)$, where $x_n$ is the real number corresponding to the $n$th column of $x$.
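The recursion $\mu_x(\sigma_n{}^\frown 0) = x_n \cdot \mu_x(\sigma_n)$ can be unfolded into a product of conditional probabilities along $\sigma$. The following is an illustrative sketch, not from the dissertation; for simplicity it takes the conditional probabilities directly as a dict (rather than decoding them from binary columns), and all names are ours.

```python
# Recover mu_x of a cylinder [sigma] from the conditional probabilities
# p[node] = mu_x(node^0 | node) at each node along sigma.

def mu(sigma, p):
    """mu_x([sigma]) as a product of conditional probabilities along sigma."""
    m = 1.0                               # mu_x of the empty string is 1
    for k, bit in enumerate(sigma):
        node = sigma[:k]
        m *= p[node] if bit == "0" else 1.0 - p[node]
    return m

# Conditional probabilities at the nodes '', '0', '1' (arbitrary example values):
p = {"": 0.5, "0": 0.25, "1": 1.0}
print(mu("00", p))   # 0.5 * 0.25 = 0.125
print(mu("10", p))   # 0.5 * 1.0  = 0.5
```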

Definition 3.3.14. A measure $\mu \in P(2^\omega)$ is a random measure if $\mu = \mu_x$ for some $x \in \mathrm{MLR}_\lambda$.


Let $P$ be the pushforward measure on $P(2^\omega)$ induced by $\lambda$ and the map $x \mapsto \mu_x$. Then we have the following.

Theorem 3.3.15. Let $\nu \in P(2^\omega)$. Then $\nu \in \mathrm{MLR}_P$ if and only if $\nu = \mu_x$ for some $x \in \mathrm{MLR}_\lambda$.

The support of a measure $\mu$ on $2^\omega$ is defined to be
$$\mathrm{Supp}(\mu) = \{x \in 2^\omega : (\forall n)[\mu(x{\restriction}n) > 0]\}.$$
It is not hard to see that $\mathrm{Supp}(\mu) = 2^\omega$ for every random measure $\mu$.

In Chapter 2, it was shown that random measures are atomless and that the reals that are random with respect to some random measure are precisely the reals in $\mathrm{MLR}_\lambda$.

3.4 Applications of Randomness Preservation and No Randomness Ex Nihilo

In this section, we demonstrate the usefulness of preservation of randomness and

the no randomness ex nihilo principle in the study of algorithmically random objects

such as closed sets and continuous functions.

The following is a new, simpler proof of a known result from [2].

Theorem 3.4.1. Every random closed set contains an element of $\mathrm{MLR}_\lambda$, and every element of $\mathrm{MLR}_\lambda$ is contained in some random closed set.

Proof. We define a computable map $T : \mathcal{C}(2^\omega) \times 2^\omega \to 2^\omega$ that pushes forward the product measure $\lambda_{\mathcal{C}} \otimes \lambda$ to $\lambda$ and satisfies $T(C, x) \in C$ for every $(C, x) \in \mathcal{C}(2^\omega) \times 2^\omega$. Once we have done this, preservation of randomness and no randomness ex nihilo imply that the image of a $\lambda_{\mathcal{C}} \otimes \lambda$-random pair is $\lambda$-random and any $\lambda$-random is the image of some $\lambda_{\mathcal{C}} \otimes \lambda$-random pair. The result then follows because, by van Lambalgen's Theorem (Theorem 3.3.7), a pair $(C, x)$ is $\lambda_{\mathcal{C}} \otimes \lambda$-random if and only if $C$ is $\lambda_{\mathcal{C}}$-random and $x$ is $\lambda$-random relative to $C$.

The map works by using $x$ to tell us which way to go through $C$ (viewed as a tree) when we have a choice to make. Specifically, having defined $T(C, x){\restriction}n = \sigma$ such that $\llbracket\sigma\rrbracket \cap C \neq \emptyset$, we define $T(C, x)(n) = 0$ if $\llbracket\sigma 1\rrbracket \cap C = \emptyset$ and $T(C, x)(n) = 1$ if $\llbracket\sigma 0\rrbracket \cap C = \emptyset$. If neither $\llbracket\sigma 0\rrbracket \cap C = \emptyset$ nor $\llbracket\sigma 1\rrbracket \cap C = \emptyset$, then $T(C, x)(n) := x(n)$.

The map $T$ is clearly computable. It pushes $\lambda_{\mathcal{C}} \otimes \lambda$ forward to $\lambda$ because if $T$ has output $\sigma \in 2^n$, then $T$ outputs a next bit of 0 if and only if either $\llbracket\sigma 1\rrbracket \cap C = \emptyset$ or both $\llbracket\sigma 1\rrbracket \cap C \neq \emptyset \neq \llbracket\sigma 0\rrbracket \cap C$ and $x(n) = 0$. The former happens with probability $\frac{1}{3}$, and the latter happens with probability $\frac{1}{3} \cdot \frac{1}{2}$ by independence. The proof is now complete since $\frac{1}{3} + \frac{1}{6} = \frac{1}{2}$.
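The choice procedure in this proof can be sketched on finite data. This is an illustrative sketch, not from the dissertation: the finite prefix-closed tree, the advice bits, and all names are our own simplifying assumptions.

```python
# Walk down a closed set C (given as a finite prefix-closed set of strings with
# no dead ends up to the depth used): forced at pruned branches, and consult
# the advice bit x(n) only when both children are available.

def select(tree, x, depth):
    """A finite-depth version of T(C, x) from the proof."""
    sigma = ""
    for n in range(depth):
        left, right = sigma + "0" in tree, sigma + "1" in tree
        if left and right:
            sigma += str(x[n])      # free choice: use the advice bit x(n)
        elif left:
            sigma += "0"            # [sigma 1] misses C: forced left
        else:
            sigma += "1"            # [sigma 0] misses C: forced right
    return sigma

tree = {"", "0", "1", "00", "10", "11"}   # the node "01" is pruned
print(select(tree, [1, 0, 1], 2))   # "10": both steps are free choices
print(select(tree, [0, 1, 1], 2))   # "00": second step is forced left
```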

Let $F \in \mathcal{F}(2^\omega)$. We define $Z_F = \{x : F(x) = 0^\omega\}$. This is clearly a closed subset of $2^\omega$. In [3], the following was shown.

Theorem 3.4.2 ([3]). Let $F \in \mathcal{F}(2^\omega)$ be random. Then $Z_F$ is a random closed set provided it is nonempty.

In [3], it was conjectured that the converse also holds, but this was left open. We

prove this conjecture. To do so, we provide a new proof of Theorem 3.4.2, from which

the converse follows immediately. We also make use of an alternative characterization

of random closed sets, due to Diamondstone and Kjøs-Hanssen [9].

Just as a binary tree with no dead ends is coded by a sequence in $3^\omega$ (see the paragraph preceding Definition 3.3.8), an arbitrary binary tree is coded by a sequence in $4^\omega$, except now a 3 at a node indicates that the tree is dead above that node. That is, given $x \in 4^\omega$, we define a tree $S_x \subseteq 2^{<\omega}$ inductively as follows. First, $\varepsilon$, the empty string, is included in $S_x$ by default. Now suppose that $\sigma_i \in S_x$. Then

• $\sigma_i{}^\frown 0 \in S_x$ and $\sigma_i{}^\frown 1 \notin S_x$ if $x(i) = 0$;

• $\sigma_i{}^\frown 0 \notin S_x$ and $\sigma_i{}^\frown 1 \in S_x$ if $x(i) = 1$;

• $\sigma_i{}^\frown 0 \in S_x$ and $\sigma_i{}^\frown 1 \in S_x$ if $x(i) = 2$;

• $\sigma_i{}^\frown 0 \notin S_x$ and $\sigma_i{}^\frown 1 \notin S_x$ if $x(i) = 3$.

This coding can be thought of as a labeling of the nodes of $2^{<\omega}$ by the digits of $x$: a 0 at a node means that only the left branch is included, a 1 means that only the right branch is included, a 2 means that both branches are included, and a 3 means that neither branch is included. Note that every tree except $2^{<\omega}$ itself has infinitely many codes.

Let µGW

be the measure on 4! induced by setting, for each � 2 4<!,

µGW

(�0 | �) = µGW

(�1 | �) = 2/9, µGW

(�2 | �) = 4/9, and µGW

(�3 | �) = 1/9

Via this coding, we can also think of µGW

as a measure on Tree, the space of

binary trees. Then the probability of extending a string in a tree by only 0 is 2/9,

by only 1 is 2/9, by both 0 and 1 is 4/9, and by neither is 1/9. We call a tree T

GW-random if it has a random code; i.e., there is x 2 MLRµGW such that T = S

x

.
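The branching behavior under $\mu_{GW}$ can be simulated directly. This is an illustrative sketch, not from the dissertation; it assumes Python's `random.choices`, and all names are ours.

```python
# Grow the first levels of a Galton-Watson tree S_x by sampling node digits
# with the mu_GW probabilities: 0 and 1 each with probability 2/9, 2 with 4/9,
# and 3 with 1/9 (a 3 kills both children).
import random

def sample_gw_levels(depth, rng):
    """Sample the nodes of a GW tree up to the given depth (may die out)."""
    level, tree = [""], [""]
    for _ in range(depth):
        nxt = []
        for node in level:
            d = rng.choices([0, 1, 2, 3], weights=[2, 2, 4, 1])[0]
            if d in (0, 2):
                nxt.append(node + "0")
            if d in (1, 2):
                nxt.append(node + "1")
        tree += nxt
        level = nxt
        if not level:           # the tree died out at this level
            break
    return tree

rng = random.Random(0)
print(sample_gw_levels(4, rng))
```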

Lemma 3.4.3 (Diamondstone and Kjøs-Hanssen [9]). A closed set $C$ is random if and only if $C$ is the set of paths through an infinite GW-random tree.

Theorem 3.4.4.

(i) For every random $F \in \mathcal{F}(2^\omega)$, $Z_F$ is a random closed set provided that it is nonempty.

(ii) For every random $C \in \mathcal{C}(2^\omega)$, there is some random $F \in \mathcal{F}(2^\omega)$ such that $C = Z_F$.

Proof. We define a computable map $\Psi : \mathcal{F}(2^\omega) \to \mathrm{Tree}$ that pushes forward $\lambda_{\mathcal{F}}$ to $\mu_{GW}$ such that the set of paths through $\Psi(F)$, intersected with $\mathrm{dom}(F)$, is exactly $Z_F$. Given our representation of functions as members of $3^\omega$ and binary trees as members of $4^\omega$, we are really defining a computable map $\widehat{\Psi} : 3^\omega \to 4^\omega$ that pushes forward $\lambda$ to $\mu_{GW}$.

Given $F \in \mathcal{F}(2^\omega)$, which we think of as a $\{0, 1, 2\}$-labeling of the edges of the full binary tree, we build the desired tree by declaring that $\sigma \in \Psi(F)$ if and only if the labels by $F$ of the edges along $\sigma$ consist only of 0's and 2's. More formally, as in the paragraph preceding Definition 3.3.11, $F$ comes with a labeling function $\ell_F : 2^{<\omega} \setminus \{\varepsilon\} \to 3$ defined by $\ell_F(\sigma_i) = x(i-1)$, where $x$ is the given code for $F$. So $\sigma \in \Psi(F)$ if and only if $\ell_F(\sigma{\restriction}k) \in \{0, 2\}$ for every $0 < k \le |\sigma|$. Clearly, this map is computable.

Now we show that the map pushes $\lambda_{\mathcal{F}}$ forward to $\mu_{GW}$. Suppose $\sigma \in \Psi(F)$, which, as stated above, means that $\ell_F(\sigma{\restriction}k) \in \{0, 2\}$ for every $0 < k \le |\sigma|$. Then
$$\sigma 0 \in \Psi(F) \;\&\; \sigma 1 \notin \Psi(F) \iff \ell_F(\sigma 0) \in \{0, 2\} \;\&\; \ell_F(\sigma 1) = 1.$$
The right-hand side of the equivalence occurs with probability $(2/3)(1/3) = 2/9$. Similarly,
$$\sigma 0 \notin \Psi(F) \;\&\; \sigma 1 \in \Psi(F) \iff \ell_F(\sigma 0) = 1 \;\&\; \ell_F(\sigma 1) \in \{0, 2\},$$
where this latter event also occurs with probability $2/9$. Next,
$$\sigma 0 \in \Psi(F) \;\&\; \sigma 1 \in \Psi(F) \iff \ell_F(\sigma 0) \in \{0, 2\} \;\&\; \ell_F(\sigma 1) \in \{0, 2\},$$
with the latter event occurring with probability $(2/3)(2/3) = 4/9$. Lastly,
$$\sigma 0 \notin \Psi(F) \;\&\; \sigma 1 \notin \Psi(F) \iff \ell_F(\sigma 0) = \ell_F(\sigma 1) = 1,$$
where the event on the right-hand side occurs with probability $(1/3)(1/3) = 1/9$.

Now, by construction, it follows immediately that any path through the tree $\Psi(F)$ is a sequence $X$ such that either $F(X) = 0^\omega$ (in the case that $\ell_F(X{\restriction}n) = 0$ for infinitely many $n$) or $F(X){\uparrow}$ (in the case that $\ell_F(X{\restriction}n) = 0$ for only finitely many $n$).

By preservation of randomness and no randomness ex nihilo, a tree is GW-random if and only if it is the image of some random continuous function $F$. The conclusion then follows by Lemma 3.4.3.
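The tree built in this proof keeps exactly the nodes all of whose edge labels lie in {0, 2}. A finite sketch (illustrative only; the label table and names are ours):

```python
# A node sigma survives in the zero-set tree iff every edge label along sigma
# lies in {0, 2}.  Labels are given as a dict from nonempty binary strings to
# {0, 1, 2}.

def in_zero_tree(sigma, labels):
    return all(labels[sigma[:k]] in (0, 2) for k in range(1, len(sigma) + 1))

labels = {"0": 0, "1": 2, "00": 1, "01": 2, "10": 2, "11": 1}
nodes = ["", "0", "1", "00", "01", "10", "11"]
print([s for s in nodes if in_zero_tree(s, labels)])   # ['', '0', '1', '01', '10']
```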

One consequence of Theorem 3.3.12 and Theorem 3.4.4(ii), not noted in [3], is

that the composition of two random continuous functions need not be random.

Corollary 3.4.5. For every random $F \in \mathcal{F}(2^\omega)$, there is some random $G \in \mathcal{F}(2^\omega)$ such that $G \circ F$ is not random.

Proof. By Theorem 3.3.12, there is some $R \in \mathrm{MLR}$ such that $F(0^\omega) = R$. By Theorem 3.4.1, there is some random $C \in \mathcal{C}(2^\omega)$ containing $R$. By Theorem 3.4.4(ii), there is a random $G \in \mathcal{F}(2^\omega)$ such that $G^{-1}(\{0^\omega\}) = C$. It follows that $G(F(0^\omega)) = 0^\omega$. This, together with Theorem 3.3.12, implies that $G \circ F$ is not random.

Another consequence of Theorem 3.4.4 lets us answer an open question from [3] involving random pseudo-distance functions. Given a closed set $C \in \mathcal{C}(2^\omega)$, a function $\delta : 2^\omega \to 2^\omega$ is a pseudo-distance function for $C$ if $C$ is the set of zeroes of $\delta$. In [3] it was shown that if $\delta$ is a random pseudo-distance function for some $C \in \mathcal{C}(2^\omega)$, then $C$ is a random closed set, but the converse was left open. By Theorem 3.4.4, the converse immediately follows.

Corollary 3.4.6. Let $C \in \mathcal{C}(2^\omega)$. Then $C$ has a random pseudo-distance function if and only if $C$ is a random closed set.

3.5 The support of a random measure

In the previous section, we established a correspondence between random closed sets and random continuous functions: a closed set $C$ is random if and only if it is the set of zeroes of some random continuous function. In this section, we establish similar correspondences between random closed sets and random measures.


Since the support of a measure $\mu$, i.e., the set $\mathrm{Supp}(\mu) = \{x \in 2^\omega : (\forall n)\,\mu(x{\restriction}n) > 0\}$, is a closed set, one might hope to establish such a correspondence by considering the supports of random measures. However, it is not hard to see that for each random measure $\mu$, $\mathrm{Supp}(\mu) = 2^\omega$.

If we consider a computable measure on $P(2^\omega)$ different from the measure $P$ defined in Section 3.3.4, then such a correspondence can be given. In the first place, we want a measure $Q$ on $P(2^\omega)$ with the property that no $Q$-random measure has full support. In fact, we can choose a measure $Q$ such that each $Q$-random measure is supported on a random closed set.

Theorem 3.5.1. There is a computable measure $Q$ on $P(2^\omega)$ such that

(i) every $Q$-random measure is supported on a random closed set, and

(ii) for every random closed set $C \subseteq 2^\omega$, there is a $Q$-random measure $\mu$ such that $\mathrm{Supp}(\mu) = C$.

Proof. We will define the measure $Q$ so that each $Q$-random measure is obtained by restricting Lebesgue measure to a random closed set. That is, each $Q$-random measure will be uniform on all of the branching nodes of its support.

We define $Q$ in terms of an almost total functional $\Phi : 3^\omega \to 2^\omega$. On input $x \in 3^\omega$, $\Phi$ will treat $x$ as the code for a closed set and will output the sequence $y = \bigoplus_{i\in\omega} y_i$ defined as follows. For each $i \in \omega$, we set
$$y_i = \begin{cases} 1^\omega & \text{if } x_i = 0 \\ 0^\omega & \text{if } x_i = 1 \\ 10^\omega & \text{if } x_i = 2. \end{cases}$$

If we think of the columns of $y$ as encoding the conditional probabilities of a measure $\mu_y$, then if $(\sigma_i)_{i\in\omega}$ is the standard enumeration of $2^{<\omega}$, these conditional probabilities are given by
$$p_{\sigma_i} = \begin{cases} 1 & \text{if } x_i = 0 \\ 0 & \text{if } x_i = 1 \\ 1/2 & \text{if } x_i = 2. \end{cases}$$
That is, $\Phi(x) = y$, where $y$ represents the unique measure $\mu_y$ such that $\mu_y(\sigma 0 \mid \sigma) = p_\sigma$ for each $\sigma \in 2^{<\omega}$. Let $Q$ be the measure on $P(2^\omega)$ induced by the composition of $\Phi$ and the representation map $2^\omega \to P(2^\omega)$ defined in Section 3.3.4.
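The functional in this proof replaces each digit of a closed-set code by a conditional probability. A minimal illustrative sketch (names are ours; the column reals $1^\omega$, $0^\omega$, $10^\omega$ are represented directly as the values 1, 0, 1/2):

```python
# Map each digit of a closed-set code x in {0,1,2} to the conditional
# probability mu(sigma_i 0 | sigma_i) it induces: branch-left-only nodes get
# probability 1, branch-right-only nodes get 0, and branching nodes get 1/2.

def phi_conditionals(x):
    table = {0: 1.0, 1: 0.0, 2: 0.5}
    return [table[d] for d in x]

print(phi_conditionals([2, 0, 1, 2]))   # [0.5, 1.0, 0.0, 0.5]
```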

We now verify (i) by showing that $\Phi$ maps each $x \in \mathrm{MLR}$ to a $Q$-random measure supported on a random closed set. Let $x \in \mathrm{MLR}$ and set $\Phi(x) = y$. By preservation of randomness, the measure $\mu_y$ represented by $\Phi(x)$ is $Q$-random.

Next, since $x \in \mathrm{MLR}$, $[T_x]$ is a random closed set. We claim that $\mathrm{Supp}(\mu_y) = [T_x]$.

Suppose that $\sigma \in 2^{<\omega}$ is the $(n + 1)$-st extendible node of $T_x$. Then one of the following holds:

(a) $\sigma 0 \in T_x$ and $\sigma 1 \notin T_x$;

(b) $\sigma 0 \notin T_x$ and $\sigma 1 \in T_x$; or

(c) $\sigma 0 \in T_x$ and $\sigma 1 \in T_x$.

Moreover, we have

• Condition (a) holds iff $x(n) = 0$ iff $\mu_y(\sigma 0 \mid \sigma) = 1$ and $\mu_y(\sigma 1 \mid \sigma) = 0$.

• Condition (b) holds iff $x(n) = 1$ iff $\mu_y(\sigma 0 \mid \sigma) = 0$ and $\mu_y(\sigma 1 \mid \sigma) = 1$.

• Condition (c) holds iff $x(n) = 2$ iff $\mu_y(\sigma 0 \mid \sigma) = \mu_y(\sigma 1 \mid \sigma) = 1/2$.

One can readily verify that $\mu_y(\sigma{}^\frown i \mid \sigma) > 0$ if and only if $\sigma{}^\frown i \in T_x$. Thus,
$$\begin{aligned} Z \in \mathrm{Supp}(\mu_y) &\iff \mu_y(Z{\restriction}n) > 0 \text{ for every } n \\ &\iff \mu_y(Z{\restriction}(n+1) \mid Z{\restriction}n) > 0 \text{ for every } n \\ &\iff Z{\restriction}(n+1) \in T_x \text{ for every } n \\ &\iff Z \in [T_x]. \end{aligned}$$


We have thus established that $\mu_y$ is supported on a random closed set.

To show (ii), let $C \subseteq 2^\omega$ be a random closed set. By no randomness ex nihilo, there is some Martin-Löf random $z \in 3^\omega$ such that $C = [T_z]$. Hence, the measure represented by $\Phi(z)$ is a $Q$-random measure $\nu$. By the definition of $\Phi$, $\nu$ has support $[T_z] = C$, which establishes the claim.

Instead of changing the measure on $P(2^\omega)$, we can also establish a correspondence between random closed sets and random measures by considering not the support of a random measure but what we refer to as its $1/3$-support.

Definition 3.5.2. Let $\mu \in P(2^\omega)$ and set
$$T_\mu = \{\sigma : (\forall i < |\sigma|)\,[\,\mu(\sigma{\restriction}(i+1) \mid \sigma{\restriction}i) > 1/3\,]\} \cup \{\varepsilon\}.$$
Then the $1/3$-support of the measure $\mu$ is the closed set $[T_\mu]$.
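The tree of the $1/3$-support can be computed level by level from the conditional probabilities of $\mu$. An illustrative sketch (not from the dissertation; names and the dict representation are ours):

```python
# Keep a node sigma exactly when every conditional probability along sigma
# exceeds 1/3; p[node] = mu(node^0 | node) supplies the conditionals.

def one_third_support(p, depth):
    """Nodes of the 1/3-support tree of mu, up to the given depth."""
    keep = [""]            # the empty string is always kept
    frontier = [""]
    for _ in range(depth):
        nxt = []
        for node in frontier:
            for bit, q in (("0", p[node]), ("1", 1.0 - p[node])):
                if q > 1 / 3:
                    nxt.append(node + bit)
        keep += nxt
        frontier = nxt
    return keep

p = {"": 0.9, "0": 0.5, "1": 0.5}
print(one_third_support(p, 2))   # ['', '0', '00', '01']: the node '1' is cut
```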

Theorem 3.5.3. A closed set $C \in \mathcal{C}(2^\omega)$ is random if and only if it is the $1/3$-support of some random measure $\mu \in P(2^\omega)$.

Proof. We define an almost-total, computable, and Lebesgue-measure-preserving map $\Gamma : 2^\omega \to 3^\omega$ that induces a map $\widehat{\Gamma} : P(2^\omega) \to \mathcal{C}(2^\omega)$ such that $\widehat{\Gamma}(\mu) = [T_\mu]$. Suppose $x = \bigoplus x_i \in 2^\omega$ is such that $\mu(\sigma_i{}^\frown 0 \mid \sigma_i) = x_i$ for each $i$. Then for $\sigma \in T_\mu$ (which must exist since $\varepsilon \in T_\mu$),

• if $\mu(\sigma 0 \mid \sigma) \in [0, 1/3)$, then $\sigma 1 \in T_\mu$ and $\sigma 0 \notin T_\mu$;

• if $\mu(\sigma 0 \mid \sigma) \in (2/3, 1]$, then $\sigma 0 \in T_\mu$ and $\sigma 1 \notin T_\mu$;

• if $\mu(\sigma 0 \mid \sigma) \in (1/3, 2/3)$, then $\sigma 0 \in T_\mu$ and $\sigma 1 \in T_\mu$; and

• if $\mu(\sigma 0 \mid \sigma) = 1/3$ or $\mu(\sigma 0 \mid \sigma) = 2/3$, then $\Gamma(x)$ is undefined.

Clearly $\Gamma$ is defined on a set of measure one, since it is defined on all sequences $x$ such that $x_i \neq 1/3$ and $x_i \neq 2/3$ for all $i$. Observe that each $\sigma \in T_\mu$ extends to an infinite path in $[T_\mu]$. Thus, if $\sigma$ is the $(n+1)$-st extendible node in $T_\mu$, then each of the events

• $\sigma 0 \in T_\mu$ and $\sigma 1 \notin T_\mu$,

• $\sigma 0 \notin T_\mu$ and $\sigma 1 \in T_\mu$, and

• $\sigma 0 \in T_\mu$ and $\sigma 1 \in T_\mu$

occurs with probability $1/3$, since each event corresponds to whether $\mu(\sigma 0 \mid \sigma) \in (2/3, 1]$, $\mu(\sigma 0 \mid \sigma) \in [0, 1/3)$, or $\mu(\sigma 0 \mid \sigma) \in (1/3, 2/3)$, respectively. It thus follows that the pushforward measure induced by $\Gamma$ and $\lambda$ is the Lebesgue measure on $3^\omega$. By preservation of randomness, each random measure $\mu$ is mapped to a random closed set, and by no randomness ex nihilo, each random closed set is the image of a random measure under $\widehat{\Gamma}$. This establishes the theorem.

3.6 The range of a random continuous function

In [3], it was shown that for each $y \in 2^\omega$,
$$\lambda(\{x \in 2^\omega : y \in \mathrm{ran}(F_x)\}) = 3/4.$$

From this, it follows that every $y \in 2^\omega$ is in the range of some random $F \in \mathcal{F}(2^\omega)$. In this section, we prove that $\lambda(\mathrm{ran}(F)) \in (0, 1)$ for every random function $F$. First, we will prove that $\lambda(\mathrm{ran}(F)) > 0$ for each random function. This implies that no random function is injective and that the range of a random function is never a random closed set. These improve two results of [3] according to which (i) not every random function is injective and (ii) the range of a random function is not necessarily a random closed set. Our proof requires us to prove some auxiliary facts about the measure induced by a random function.

To prove that $\lambda(\mathrm{ran}(F)) < 1$ for every random $F \in \mathcal{F}(2^\omega)$, we will show that no random function is surjective, from which the result immediately follows. Our result on surjectivity also improves a result of [3] according to which not every random function is surjective.


We begin by proving the following, which is similar to Lemma 2.4.2 in Chapter 2 for random measures.

Lemma 3.6.1. Let $\lambda_{\mathcal{F}}$ be the natural measure on $\mathcal{F}(2^\omega)$. Then the measure $P_{\mathcal{F}}$ on $P(2^\omega)$ induced by the map $F \mapsto \lambda \circ F^{-1}$ has barycenter $\lambda$; i.e.,
$$\lambda(\sigma) = \int_{P(2^\omega)} \mu(\sigma)\, dP_{\mathcal{F}}(\mu)$$
for each $\sigma \in 2^{<\omega}$.

Proof. By change of variables, it suffices to show that
$$2^{-|\sigma|} = \int_{\mathcal{F}(2^\omega)} \lambda(F^{-1}\llbracket\sigma\rrbracket)\, d\lambda_{\mathcal{F}} \qquad (3.1)$$
for each $\sigma \in 2^{<\omega}$. Without loss of generality, we assume $\sigma = 0^n$. We proceed then by induction on $n$.

Equation (3.1) holds when $\sigma = \varepsilon$, since each random $F$ is total by Theorem 3.3.13. Now, supposing that Equation (3.1) holds for $0^n$, we show it also holds for $0^{n+1}$. Suppose that $\int_{\mathcal{F}(2^\omega)} \lambda(F^{-1}\llbracket 0^n\rrbracket)\, d\lambda_{\mathcal{F}} = 2^{-n}$. To compute $\int_{\mathcal{F}(2^\omega)} \lambda(F^{-1}\llbracket 0^{n+1}\rrbracket)\, d\lambda_{\mathcal{F}}$, we note that by symmetry
$$\int_{\mathcal{F}(2^\omega)} \lambda(F^{-1}\llbracket 0^{n+1}\rrbracket)\, d\lambda_{\mathcal{F}} = 2 \cdot \int_{\mathcal{F}(2^\omega)} \lambda(\llbracket 0\rrbracket \cap F^{-1}\llbracket 0^{n+1}\rrbracket)\, d\lambda_{\mathcal{F}},$$
and we proceed to compute $s_{n+1} := \int_{\mathcal{F}(2^\omega)} \lambda(\llbracket 0\rrbracket \cap F^{-1}\llbracket 0^{n+1}\rrbracket)\, d\lambda_{\mathcal{F}}$.

Recall that any $F \in \mathcal{F}(2^\omega)$ can be viewed as a labeling by 0's, 1's, and 2's of the nodes of the full binary branching tree (where the root node is unlabeled). We compute $s_{n+1}$ by considering the three equiprobable cases for the label of the node 0 for an arbitrary $F \in \mathcal{F}(2^\omega)$. The point is that the label 0 contributes to producing an output beginning with $0^{n+1}$, the label 1 rules out the possibility of producing an output beginning with $0^{n+1}$, and the label 2 neither contributes to nor rules out the possibility of producing an output beginning with $0^{n+1}$.


Case 1: If the node 0 is labeled with a 0, then the measure of all sequences extending the node 0 that (after removing 2's) yield an output extending $0^{n+1}$ is equal to the measure of all sequences that yield an output extending $0^n$ times $1/2$ (the measure determined by the initial label 0), i.e., $1/2 \cdot 2^{-n}$.

Case 2: If the node 0 is labeled with a 1, then the measure of all sequences extending the node 0 that (after removing 2's) yield an output extending $0^{n+1}$ is equal to 0.

Case 3: If the node 0 is labeled with a 2, then the measure of all sequences extending the node 0 that (after removing 2's) yield an output extending $0^{n+1}$ is equal to the measure of all sequences that yield an output extending $0^{n+1}$ times $1/2$ (the measure determined by the initial label 2), i.e., $1/2 \cdot 2s_{n+1}$.

Putting this all together gives
$$s_{n+1} = \frac{1}{3} \cdot \frac{1}{2} \cdot 2^{-n} + \frac{1}{3} \cdot 0 + \frac{1}{3} \cdot \frac{1}{2} \cdot 2s_{n+1},$$
which yields $s_{n+1} = 2^{-n}/4$, as desired.

Lemma 3.6.2 (Hoyrup [15], relativized). Let $Q$ be a computable measure on $P(2^\omega)$ with barycenter $\mu$. Then for any $z \in 2^\omega$,
$$\mathrm{MLR}^z_\mu = \bigcup_{\nu \in \mathrm{MLR}^z_Q} \mathrm{MLR}^z_\nu.$$

Theorem 3.6.3. If $F \in \mathcal{F}(2^\omega)$ is random, then $\lambda(\mathrm{ran}(F)) > 0$.

Proof. Fix a random $F \in \mathcal{F}(2^\omega)$. We show that $\mathrm{ran}(F)$ always contains an element of $\mathrm{MLR}^F$. Since $\mathrm{ran}(F)$ is $\Pi^{0,F}_1$ by Remark 3.3.1, it follows by Proposition 3.3.2 that $\lambda(\mathrm{ran}(F)) > 0$.


By preservation of randomness relative to $F$, if $x \in \mathrm{MLR}^F$, then $F(x) \in \mathrm{MLR}^F_{\lambda \circ F^{-1}}$. By Lemmas 3.6.1 and 3.6.2, $\mathrm{MLR}^F_{\lambda \circ F^{-1}} \subseteq \mathrm{MLR}^F$, so $F(x) \in \mathrm{MLR}^F$, as desired.

Corollary 3.6.4. If $F \in \mathcal{F}(2^\omega)$ is random, then $F$ is not injective.

Proof. For any $y \in 2^\omega$, a relativization of Theorem 3.4.4(i) shows that $F^{-1}(\{y\})$, if nonempty, is a random closed set relative to $y$, provided that $F$ is random relative to $y$. Since $\mathrm{ran}(F)$ has positive Lebesgue measure, there is $y \in \mathrm{ran}(F)$ that is random relative to $F$. Then by van Lambalgen's Theorem, $F$ is also random relative to $y$. So $F^{-1}(\{y\})$ is a nonempty random closed set and, hence, has size continuum by Theorem 3.3.10. Thus, $F$ is not injective.

Corollary 3.6.5. If $F \in \mathcal{F}(2^\omega)$ is random, then $\mathrm{ran}(F)$ is not a random closed set.

Proof. By Theorem 3.3.9, every random closed set has Lebesgue measure 0. But by Theorem 3.6.3, the range of a random $F \in \mathcal{F}(2^\omega)$ has positive Lebesgue measure. This gives the conclusion.

From the proof of Corollary 3.6.4, we can also obtain the following.

Corollary 3.6.6. The measures induced by random functions are atomless.

Proof. Let $F \in \mathcal{F}(2^\omega)$ be random and suppose that $z \in 2^\omega$ is an atom of $\lambda \circ F^{-1}$, i.e., $(\lambda \circ F^{-1})(\{z\}) > 0$. It follows that $z \in \mathrm{MLR}^F_{\lambda \circ F^{-1}}$, since $z$ is not contained in any $\lambda \circ F^{-1}$-null sets. As we argued in the proof of Corollary 3.6.4, $F^{-1}(\{z\})$ is a nonempty random closed set and, thus, has Lebesgue measure zero, by Theorem 3.3.9. This contradicts our assumption.

We now turn to showing that $\lambda(\mathrm{ran}(F)) < 1$ for every random $F \in \mathcal{F}(2^\omega)$. Instead of proving this directly, we will first prove the following.

Theorem 3.6.7. If $F \in \mathcal{F}(2^\omega)$ is surjective, then $F$ is not random.


To prove Theorem 3.6.7, we provide a careful analysis of the result from [3] stated at the beginning of this section, namely, that for each $y \in 2^\omega$,
$$\lambda(\{x \in 2^\omega : y \in \mathrm{ran}(F_x)\}) = 3/4.$$
This result is obtained by showing that the strictly decreasing sequence $(q_n)_{n\in\omega}$ defined by
$$q_n = \lambda(\{x \in 2^\omega : \mathrm{ran}(F_x) \cap \llbracket 0^n\rrbracket \neq \emptyset\})$$
converges to $3/4$ and using the fact that
$$\lambda(\{x \in 2^\omega : \mathrm{ran}(F_x) \cap \llbracket 0^n\rrbracket \neq \emptyset\}) = \lambda(\{x \in 2^\omega : \mathrm{ran}(F_x) \cap \llbracket\sigma\rrbracket \neq \emptyset\})$$
for each $\sigma \in 2^{<\omega}$ of length $n$. The sequence $(q_n)_{n\in\omega}$ is obtained by using a case analysis to derive the following recursive formula:
$$q_{n+1} = \frac{3}{2}\sqrt{1 + 4q_n} - \frac{3}{2} - q_n. \qquad (3.2)$$
For details, see [3, Theorem 2.12].
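Recursion (3.2) can be checked numerically. An illustrative sketch (not from the dissertation): starting from $q_1 = (\sqrt{45} - 5)/2$, the iterates decrease toward $3/4$.

```python
# Iterate q_{n+1} = (3/2) * sqrt(1 + 4 q_n) - 3/2 - q_n from q_1 = (sqrt(45)-5)/2
# and observe the strictly decreasing convergence to 3/4.
from math import sqrt

q = (sqrt(45) - 5) / 2
qs = [q]
for _ in range(30):
    q = 1.5 * sqrt(1 + 4 * q) - 1.5 - q
    qs.append(q)

assert all(a > b for a, b in zip(qs, qs[1:]))   # strictly decreasing
print(qs[0], qs[10], qs[-1])                    # tends to 0.75 from above
```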

For $F \in \mathcal{F}(2^\omega)$ and $\sigma \in 2^{<\omega}$, let us say that $F$ hits $\llbracket\sigma\rrbracket$ if $\mathrm{ran}(F) \cap \llbracket\sigma\rrbracket \neq \emptyset$. Thus, $q_n$ is the probability that a random $F \in \mathcal{F}(2^\omega)$ hits $\llbracket\sigma\rrbracket$ for some fixed $\sigma \in 2^{<\omega}$ such that $|\sigma| = n$. It is worth noting that the function $T(\llbracket\sigma\rrbracket) = q_n$ for each $\sigma$ of length $n$ induces an effective capacity on $\mathcal{C}(2^\omega)$; see [6] for details on effective capacities.

We will proceed by proving a series of lemmas. First, for each $n \in \omega$, let $\epsilon_n$ satisfy $q_n = 3/4 + \epsilon_n$. Since

(i) $q_n > q_{n+1}$ for every $n$, and

(ii) $\lim_{n\to\infty} q_n = 3/4$,

we know that each $\epsilon_n$ is non-negative and $\lim_{n\to\infty} \epsilon_n = 0$. Moreover, we have the following.

Lemma 3.6.8. For each $n \ge 1$,

(a) $\epsilon_{n+1} \le \frac{1}{2}\epsilon_n$,

(b) $\epsilon_n \le 2^{-(n+2)}$,

(c) $\epsilon_{n+1} \ge \frac{1}{2}\epsilon_n - 2^{-(2n+5)}$, and

(d) $\epsilon_n \ge \dfrac{1}{2^{n+5} - 1}$.

Proof. First, let $n \ge 1$. If we substitute $3/4 + \epsilon_{n+1}$ and $3/4 + \epsilon_n$ for $q_{n+1}$ and $q_n$, respectively, into Equation (3.2), we obtain (after simplification)
$$\epsilon_{n+1} = 3\sqrt{1 + \epsilon_n} - 3 - \epsilon_n. \qquad (3.3)$$
Since $\sqrt{1 + x} \le 1 + \frac{x}{2}$ on $[0, 1]$, from (3.3) we can conclude
$$\epsilon_{n+1} \le 3\left(1 + \frac{\epsilon_n}{2}\right) - 3 - \epsilon_n = \frac{1}{2}\epsilon_n,$$
thereby establishing (a). To show (b), we proceed by induction. Using the fact from [3] that $q_1 = \frac{\sqrt{45} - 5}{2}$, it follows by direct calculation that
$$\epsilon_1 = \frac{\sqrt{45} - 5}{2} - \frac{3}{4} \le 2^{-3}.$$
Next, assuming that $\epsilon_n \le 2^{-(n+2)}$, it follows from (a) that
$$\epsilon_{n+1} \le \frac{1}{2}\epsilon_n \le \frac{1}{2} \cdot 2^{-(n+2)} = 2^{-(n+3)}.$$

To show (c), for each fixed $n \ge 1$, we use a different approximation of $\sqrt{1 + x}$ from below. By (b), since $\epsilon_n \le 2^{-(n+2)}$, we use the Taylor series approximation $1 + \frac{x}{2}$ of $\sqrt{1 + x}$ on $[0, 2^{-(n+2)}]$, with error term
$$\max_{c \in [0,\, 2^{-(n+2)}]} \frac{1}{4(1 + c)^{3/2}} \cdot \frac{x^2}{2} = \frac{x^2}{8}.$$
Thus,
$$\sqrt{1 + x} \ge 1 + \frac{x}{2} - \left(2^{-(n+2)}\right)^2/8 = 1 + \frac{x}{2} - 2^{-(2n+7)}$$
on $[0, 2^{-(n+2)}]$. Combining this with Equation (3.3) yields
$$\epsilon_{n+1} \ge 3\left(1 + \frac{\epsilon_n}{2} - 2^{-(2n+7)}\right) - 3 - \epsilon_n \ge \frac{1}{2}\epsilon_n - 2^{-(2n+5)}.$$

Lastly, to prove (d), first observe that
$$\epsilon_1 = \frac{\sqrt{45} - 5}{2} - \frac{3}{4} \ge 2^{-4}, \qquad (3.4)$$
and thus it certainly follows that
$$\epsilon_1 \ge \frac{1}{2^6 - 1}.$$
Next, using (c), we verify by induction that for $n \ge 2$,
$$\epsilon_n \ge \frac{1}{2^{n-1}}\epsilon_1 - \left(2^{-(n+5)} + \ldots + 2^{-(2n+3)}\right). \qquad (3.5)$$

For $n = 2$, by part (c) we have
$$\epsilon_2 \ge \frac{1}{2}\epsilon_1 - 2^{-7}.$$
Supposing that
$$\epsilon_n \ge \frac{1}{2^{n-1}}\epsilon_1 - \left(2^{-(n+5)} + \ldots + 2^{-(2n+3)}\right),$$
again, by part (c), we have
$$\begin{aligned} \epsilon_{n+1} &\ge \frac{1}{2}\epsilon_n - 2^{-(2n+5)} \ge \frac{1}{2}\left(\frac{1}{2^{n-1}}\epsilon_1 - \left(2^{-(n+5)} + \ldots + 2^{-(2n+3)}\right)\right) - 2^{-(2n+5)} \\ &= \frac{1}{2^n}\epsilon_1 - \left(2^{-(n+6)} + \ldots + 2^{-(2n+4)}\right) - 2^{-(2n+5)} \\ &= \frac{1}{2^n}\epsilon_1 - \left(2^{-(n+6)} + \ldots + 2^{-(2n+5)}\right), \end{aligned}$$
which establishes Equation (3.5).

Combining Equations (3.4) and (3.5) yields
$$\begin{aligned} \epsilon_n &\ge \frac{1}{2^{n-1}} \cdot 2^{-4} - \left(2^{-(n+5)} + \ldots + 2^{-(2n+3)}\right) \\ &= \frac{1}{2^{n+3}} - 2^{-(n+4)}\left(2^{-1} + \ldots + 2^{-(n-1)}\right) \\ &= \frac{1}{2^{n+3}} - 2^{-(n+4)}\left(1 - 2^{-(n-1)}\right) \\ &\ge 2^{-(n+3)} - 2^{-(n+4)} \\ &= 2^{-(n+4)} \\ &\ge \frac{1}{2^{n+5} - 1}. \end{aligned}$$
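As a sanity check, the four bounds of Lemma 3.6.8 can be verified numerically for small $n$ by iterating $\epsilon_{n+1} = 3\sqrt{1 + \epsilon_n} - 3 - \epsilon_n$ from $\epsilon_1 = (\sqrt{45} - 5)/2 - 3/4$. An illustrative sketch (not from the dissertation; the range of $n$ is limited to keep floating-point slack comfortable):

```python
# Verify (a)-(d) of Lemma 3.6.8 numerically for n = 1, ..., 12.
from math import sqrt

eps = (sqrt(45) - 5) / 2 - 0.75                    # epsilon_1
for n in range(1, 13):
    assert eps <= 2 ** -(n + 2)                    # (b)
    assert eps >= 1 / (2 ** (n + 5) - 1)           # (d)
    nxt = 3 * sqrt(1 + eps) - 3 - eps              # recursion (3.3)
    assert nxt <= eps / 2                          # (a)
    assert nxt >= eps / 2 - 2 ** -(2 * n + 5)      # (c)
    eps = nxt
print("bounds verified for n = 1, ..., 12")
```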

Lemma 3.6.9. For $n \ge 1$, we have
$$\frac{q_{n+1}}{q_n} \le 1 - 2^{-(n+6)}.$$

Proof. By Lemma 3.6.8(d),
$$\epsilon_n \ge \frac{1}{2^{n+5} - 1} = \frac{2^{-(n+5)}}{1 - 2^{-(n+5)}},$$
which implies
$$\left(1 - 2^{-(n+5)}\right)\epsilon_n \ge 2^{-(n+5)} = 4 \cdot 2^{-(n+7)} \ge 3 \cdot 2^{-(n+7)}.$$
Multiplying both sides by $1/2$ yields
$$\frac{1}{2}\left(1 - 2^{-(n+5)}\right)\epsilon_n \ge \frac{3}{4} \cdot 2^{-(n+6)}.$$
Expanding the left-hand side, adding $\frac{1}{2}\epsilon_n$ to it, and using the fact from Lemma 3.6.8(a) that $\frac{1}{2}\epsilon_n \ge \epsilon_{n+1}$, we have
$$\frac{1}{2}\epsilon_n + \left(\frac{1}{4} + \ldots + 2^{-(n+6)}\right)\epsilon_n \ge \frac{3}{4} \cdot 2^{-(n+6)} + \epsilon_{n+1},$$
which is equivalent to
$$\left(1 - 2^{-(n+6)}\right)\epsilon_n + \frac{3}{4}\left(1 - 2^{-(n+6)}\right) \ge \frac{3}{4} + \epsilon_{n+1}.$$
This yields the inequality
$$\left(1 - 2^{-(n+6)}\right)q_n \ge q_{n+1},$$
from which the conclusion follows.

Lemma 3.6.10. For $n \ge 1$, we have
$$\left(2\left(\frac{q_{n+1}}{q_n}\right) - 1\right)^{2^n} \le \frac{1}{\sqrt[32]{e}} < 1.$$

Proof. First, it follows from Lemma 3.6.9 that
$$2\left(\frac{q_{n+1}}{q_n}\right) - 1 \le 1 - 2^{-(n+5)}$$
and hence
$$\left(2\left(\frac{q_{n+1}}{q_n}\right) - 1\right)^{2^n} \le \left(1 - 2^{-(n+5)}\right)^{2^n}. \qquad (3.6)$$
Next, it is straightforward to verify by cross-multiplication that
$$\frac{2^{n+5} - 1}{2^{n+5}} \le \frac{2^{n+6} - 1}{2^{n+6}}$$
and
$$\frac{2^{n+5} - 1}{2^{n+5}} \le \left(\frac{2^{n+6} - 1}{2^{n+6}}\right)^2,$$
from which it follows that
$$\left(\frac{2^{n+5} - 1}{2^{n+5}}\right)^{2^n} \le \left(\frac{2^{n+6} - 1}{2^{n+6}}\right)^{2^{n+1}}.$$
Lastly, we have
$$\lim_{n\to\infty} \left(1 - 2^{-(n+5)}\right)^{2^n} = \frac{1}{\sqrt[32]{e}}.$$
From Equation (3.6) and the fact that the sequence $\left(\left(1 - 2^{-(n+5)}\right)^{2^n}\right)_{n\in\omega}$ is non-decreasing and converges to $1/\sqrt[32]{e}$, the claim immediately follows.

The proof of the following result is essentially the proof of the effective Choquet Capacity Theorem in [6]. We reproduce the proof here for the sake of completeness.

Lemma 3.6.11. The probability that a random continuous function $F$ hits both $\llbracket 0\rrbracket$ and $\llbracket 1\rrbracket$ is $2q_1 - 1$, and the probability that $F$ hits both $\llbracket\sigma 0\rrbracket$ and $\llbracket\sigma 1\rrbracket$ for a fixed $\sigma \in 2^{<\omega}$ of length $n \ge 1$, given that $F$ hits $\llbracket\sigma\rrbracket$, is equal to $2\left(\frac{q_{n+1}}{q_n}\right) - 1$.

Proof. We write the probability that $F$ hits $\llbracket\sigma\rrbracket$ for some fixed $\sigma$ as $\mathbb{P}(F \in H_\sigma)$. Now since $\mathbb{P}(F \in H_0) = q_1$, it follows that $\mathbb{P}(F \in H_1 \setminus H_0) = 1 - q_1$ (here we use the fact that every random function is total). By symmetry, we have $\mathbb{P}(F \in H_0 \setminus H_1) = 1 - q_1$. Since $F$ is total with probability one, it follows that
$$\mathbb{P}(F \in H_0 \cap H_1) = 1 - \left(\mathbb{P}(F \in H_0 \setminus H_1) + \mathbb{P}(F \in H_1 \setminus H_0)\right),$$
and thus
$$\mathbb{P}(F \in H_0 \cap H_1) = 1 - ((1 - q_1) + (1 - q_1)) = 2q_1 - 1.$$
Next, let $\sigma$ be a string of length $n \ge 1$ and let $i \in \{0, 1\}$. Since $\mathbb{P}(F \in H_\sigma) = q_n$ and $\mathbb{P}(F \in H_{\sigma^\frown i}) = q_{n+1}$, it follows that
$$\mathbb{P}(F \in H_{\sigma^\frown i} \mid F \in H_\sigma) = \frac{\mathbb{P}(F \in H_{\sigma^\frown i} \;\&\; F \in H_\sigma)}{\mathbb{P}(F \in H_\sigma)} = \frac{\mathbb{P}(F \in H_{\sigma^\frown i})}{\mathbb{P}(F \in H_\sigma)} = \frac{q_{n+1}}{q_n}.$$
Consequently,
$$\mathbb{P}(F \in H_{\sigma 1} \setminus H_{\sigma 0} \mid F \in H_\sigma) = \mathbb{P}(F \in H_{\sigma 0} \setminus H_{\sigma 1} \mid F \in H_\sigma) = 1 - \frac{q_{n+1}}{q_n}.$$
Thus,
$$\begin{aligned} \mathbb{P}(F \in H_{\sigma 0} \cap H_{\sigma 1} \mid F \in H_\sigma) &= 1 - \left(\mathbb{P}(F \in H_{\sigma 0} \setminus H_{\sigma 1} \mid F \in H_\sigma) + \mathbb{P}(F \in H_{\sigma 1} \setminus H_{\sigma 0} \mid F \in H_\sigma)\right) \\ &= 1 - \left(\left(1 - \frac{q_{n+1}}{q_n}\right) + \left(1 - \frac{q_{n+1}}{q_n}\right)\right) \\ &= 2\left(\frac{q_{n+1}}{q_n}\right) - 1. \end{aligned}$$

To complete the proof of Theorem 3.6.7, we now define a Martin-Löf test on $\mathcal{F}(2^\omega)$ that covers all surjective functions. We say that a function $F \in \mathcal{F}(2^\omega)$ is onto up to level $n$ if $F \in H_\sigma$ for every $\sigma \in 2^n$. By Lemma 3.6.11, the probability that a function is onto up to level $n$ is
$$(2q_1 - 1)\prod_{i=1}^{n-1}\left(2\left(\frac{q_{i+1}}{q_i}\right) - 1\right)^{2^i} \le \left(\frac{1}{\sqrt[32]{e}}\right)^n.$$
Thus, if we set
$$U_n = \{F \in \mathcal{F}(2^\omega) : F \text{ is onto up to level } n\}$$
and
$$f(n) = \min\{k : (\sqrt[32]{e})^{-k} \le 2^{-n}\},$$
which is clearly computable, then $(U_{f(n)})_{n\in\omega}$ is a Martin-Löf test with the property that $F \in \mathcal{F}(2^\omega)$ is onto if and only if $F \in \bigcap_{n\in\omega} U_{f(n)}$. This completes the proof.

Corollary 3.6.12. If $F \in \mathcal{F}(2^\omega)$ is random, then $\lambda(\operatorname{ran}(F)) < 1$.

Proof. Suppose $\lambda(\operatorname{ran}(F)) = 1$. Since $\operatorname{ran}(F)$ is closed, it follows that $\operatorname{ran}(F) = 2^\omega$. Then $F$ is onto, so it cannot be random.

We also have the following consequence.

Theorem 3.6.13. No measure induced by a random function is a random measure in the sense of Definition 3.3.14.

Proof. Let $F \in \mathcal{F}(2^\omega)$ be random. Then by Corollary 3.6.12, $\lambda(\operatorname{ran}(F)) < 1$. It follows that $2^\omega \setminus \operatorname{ran}(F)$ is non-empty and open, so $[\sigma] \subseteq 2^\omega \setminus \operatorname{ran}(F)$ for some $\sigma \in 2^{<\omega}$. Thus, $\lambda_F(\sigma) = 0$. By contrast, for every random measure $\mu$, we have $\mu(\sigma) > 0$, and the result follows.

CHAPTER 4

A NEW LAW OF LARGE NUMBERS EFFECTIVIZATION

4.1 Introduction

To effectivize a classical theorem of mathematics roughly means to make all the objects mentioned in the theorem (sufficiently) computable and gauge the effectivity of the conclusion. When the classical theorem comes from probability, the effectivity of the conclusion can usually be gauged by whether the conclusion holds of algorithmically random points. For example, Birkhoff's Ergodic Theorem (a generalization of the Strong Law of Large Numbers) says that if $f$ is an integrable function on the probability space $(X, \mu)$ and $T : X \to X$ is measure-preserving, then
$$\frac{1}{n}\sum_{i<n} f(T^i(x)) \to \int f \, d\mu$$
for $\mu$-almost every $x \in X$. This theorem was effectivized in [5], where it was concluded that if $X = 2^\omega$ and $\mu$, $f$, and $T$ are all computable, then the convergence holds on every Martin-Löf random.

This chapter contains an effectivization of a theorem of Erdős and Rényi [11], which says that the maximal average winnings over short subgames of a fair game converge almost surely to a certain constant depending on the length of the subgame. We basically follow their proof, injecting effectivity wherever necessary.


4.2 Stirling’s Formula

We recall Stirling’s formula, which is a crucial part of the proof the main theorem

of this chapter.

Theorem 4.2.1 ([20]).

n! = (1 + o(1))p2⇡nnne�n. (4.1)

In other words, there is a function R(n) such that R(n) ! 0 as n ! 1 and

n! = (1 +R(n))p2⇡nnne�n. (4.2)
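The behavior of the relative error $R(n)$ can be checked numerically; the sketch below is our illustration (not part of the text) and uses `math.lgamma` to avoid overflow for large $n$:

```python
import math

# R(n) = n! / (sqrt(2*pi*n) * n^n * e^-n) - 1 should be positive and tend to 0.
def stirling_error(n: int) -> float:
    log_stirling = 0.5 * math.log(2 * math.pi * n) + n * math.log(n) - n
    return math.exp(math.lgamma(n + 1) - log_stirling) - 1  # lgamma(n+1) = ln(n!)

errors = [stirling_error(n) for n in (1, 10, 100, 1000)]
assert all(e > 0 for e in errors)                        # Stirling underestimates n!
assert all(errors[i] > errors[i + 1] for i in range(3))  # the error decreases
assert errors[-1] < 1e-4                                 # R(1000) is about 1/(12*1000)
```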

The next lemma is a consequence of Stirling's formula and gives an effective bound for the probability that a Bernoulli-$1/2$ random variable has at least $\beta n$ successes, where $\beta$ is some parameter between $1/2$ and $1$.

Lemma 4.2.2. For all $\beta \in (1/2, 1)$ there are positive reals $A$ and $B$, uniformly computable from $\beta$, such that for all $n$,
$$A n^{-1/2}\, 2^{n(h(\beta)-1)} \leq 2^{-n} \sum_{n\beta \leq k \leq n} \binom{n}{k} \leq B n^{-1/2}\, 2^{n(h(\beta)-1)}, \tag{4.3}$$
where $h(\beta) := -\beta \log_2 \beta - (1 - \beta)\log_2(1 - \beta)$.

Proof. First we prove the upper bound in (4.3). Let $m = \lceil n\beta \rceil$. For every $m \leq k \leq n$,
$$\binom{n}{k} = \binom{n}{m}\prod_{i=m}^{k-1}\frac{n-i}{i+1} \leq \binom{n}{m}\left(\frac{n-m}{m+1}\right)^{k-m}.$$

Note that $\frac{n-m}{m+1} < 1$ since $n \leq 2m$. This allows for a first bound using a geometric series:
$$2^{-n}\sum_{m \leq k \leq n}\binom{n}{k} \leq 2^{-n}\binom{n}{m}\sum_{m \leq k \leq n}\left(\frac{n-m}{m+1}\right)^{k-m} < 2^{-n}\binom{n}{m}\sum_{m \leq k}\left(\frac{n-m}{m+1}\right)^{k-m} = 2^{-n}\binom{n}{m}\frac{m+1}{2m+1-n}.$$

Now we use Stirling’s formula to bound�n

m

�from above. Since R(n) ! 0, there is

a natural number R such that R � R(n) for all n. We will use the fact that � m/n,

together with the following claim.

Claim 1. For any � 2 (�, 1), there is an N , computable from � and �, such that

m/n < � whenever n � N .

Proof of claim. Since n� m n� + 1, � m/n � + 1/n. So, with � and � as

oracles, we can compute N such that � + 1/n < � whenever n � N . This proves the

claim.

Fix such a � computable from � (e.g. � = (� + 1)/2) and the corresponding N .

Then for n � N ,

$$\begin{aligned}
\binom{n}{m} &= \frac{(1 + R(n))\sqrt{2\pi n}\, n^n e^{-n}}{(1 + R(m))\sqrt{2\pi m}\, m^m e^{-m}\,(1 + R(n-m))\sqrt{2\pi(n-m)}\,(n-m)^{n-m} e^{-(n-m)}} \\
&= \frac{1}{\sqrt{n}}\frac{1}{\sqrt{2\pi}}\,\frac{1 + R(n)}{(1 + R(m))(1 + R(n-m))}\,\frac{1}{\sqrt{(m/n)(1 - m/n)}}\, 2^{nh(m/n)} \quad (4.4) \\
&\leq \frac{1}{\sqrt{n}}\frac{1}{\sqrt{2\pi}}\,(1 + R)\,\frac{1}{\sqrt{(m/n)(1 - m/n)}}\, 2^{nh(m/n)} \\
&\leq \frac{1}{\sqrt{n}}\frac{1}{\sqrt{2\pi}}\,(1 + R)\,\frac{1}{\sqrt{(m/n)(1 - m/n)}}\, 2^{nh(\beta)} \\
&\leq \frac{1}{\sqrt{n}}\frac{1}{\sqrt{2\pi}}\,(1 + R)\,\frac{1}{\sqrt{\beta}}\,\frac{1}{\sqrt{1 - m/n}}\, 2^{nh(\beta)} \\
&\leq \frac{1}{\sqrt{n}}\frac{1}{\sqrt{2\pi}}\,(1 + R)\,\frac{1}{\sqrt{\beta}}\,\frac{1}{\sqrt{1 - \delta}}\, 2^{nh(\beta)}. \quad (4.5)
\end{aligned}$$

It remains only to bound $\frac{m+1}{2m+1-n}$. We again use the fact that $\beta \leq m/n < \delta$ for $n \geq N$:
$$\frac{m+1}{2m+1-n} = \frac{m/n + 1/n}{2m/n + 1/n - 1} \leq \frac{\delta + 1}{2\beta - 1}.$$

Thus $B := \frac{1}{\sqrt{2\pi}}(1 + R)\frac{1}{\sqrt{\beta}}\frac{1}{\sqrt{1-\delta}}\cdot\frac{\delta + 1}{2\beta - 1}$ works, where $\delta = (1 + \beta)/2$ and $R \geq R(n)$ for all $n$. (Actually $B$ only works for $n \geq N$, but we can compute $B_1, B_2, \ldots, B_{N-1}$ that work for $n < N$, and then compute $\max\{B, B_1, B_2, \ldots, B_{N-1}\}$.)

Now to finding $A$. This is a bit easier, since we will actually show that $\binom{n}{\lceil n\beta\rceil} \geq A n^{-1/2}\, 2^{nh(\beta)}$ for some positive constant $A$. We will use (4.4), still with $m = \lceil n\beta \rceil$, and the following facts:

• there is $N$ such that $\frac{1 + R(n)}{(1 + R(m))(1 + R(n-m))} \geq 1/2$ whenever $n \geq N$,

• $\frac{1}{m/n} \geq \frac{1}{\beta + 1}$ for $n \geq 1$,

• $\frac{1}{1 - m/n} \geq \frac{1}{1 - \beta}$ for all $n$.

These facts, together with (4.4), yield
$$\binom{n}{\lceil n\beta\rceil} \geq \frac{1}{2\sqrt{2\pi(\beta+1)(1-\beta)}}\, n^{-1/2}\, 2^{nh(\lceil n\beta\rceil/n)}.$$

It is sufficient then to show that $2^{nh(\lceil n\beta\rceil/n)} \geq C\, 2^{nh(\beta)}$ for some positive constant $C$ computable from $\beta$. To do this, we start by noting that $2^{nh(\lceil n\beta\rceil/n)} \geq 2^{nh(\beta + 1/n)}$, since $\lceil n\beta\rceil/n \leq \beta + 1/n$ and $h$ is decreasing on $[1/2, 1]$. Using the Taylor expansion of $h$ centered at $\beta$, evaluated at $\beta + 1/n$, we get
$$h(\beta + 1/n) = h(\beta) + \frac{h'(\beta)}{n} + \frac{h''(\beta)}{2n^2} + \cdots,$$
so that
$$nh(\beta + 1/n) = nh(\beta) + h'(\beta) + \frac{h''(\beta)}{2n} + \cdots.$$

The tail of that series,
$$\frac{h''(\beta)}{2n} + \cdots,$$
goes to $0$ as $n \to \infty$; moreover, we can use Taylor's inequality to compute an $N$ after which
$$\frac{h''(\beta)}{2n} + \cdots \geq -1.$$

Putting this all together, we have
$$2^{nh(\lceil n\beta\rceil/n)} \geq 2^{nh(\beta+1/n)} = 2^{nh(\beta)}\, 2^{h'(\beta)}\, 2^{h''(\beta)/(2n) + \cdots} \geq 2^{h'(\beta)-1}\, 2^{nh(\beta)}.$$

So, in summary, $A = \frac{2^{h'(\beta)-1}}{2\sqrt{2\pi(\beta+1)(1-\beta)}}$ works. (Again, $A$ actually only works for sufficiently large $n$ (computable from $\beta$), but we can check the first finitely many $n$'s to find an $A$ that works for all $n$.)
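A numerical spot-check of Lemma 4.2.2 (our illustration; the constants $0.1$ and $10$ are loose stand-ins for the actual $A$ and $B$):

```python
import math

# The ratio of the normalized binomial tail 2^-n * sum_{k >= n*beta} C(n, k)
# to the envelope n^(-1/2) * 2^(n*(h(beta)-1)) should stay bounded between
# positive constants as n grows, as the lemma asserts.
def h(b: float) -> float:
    return -b * math.log2(b) - (1 - b) * math.log2(1 - b)

def tail(n: int, beta: float) -> float:
    m = math.ceil(n * beta)
    return sum(math.comb(n, k) for k in range(m, n + 1)) / 2 ** n

beta = 0.7
ratios = [tail(n, beta) / (n ** -0.5 * 2 ** (n * (h(beta) - 1)))
          for n in (50, 100, 200, 400)]
assert all(0.1 < r < 10 for r in ratios)
```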

4.3 Maximal average gains over short subgames of a fair game

Let $X_1, X_2, \ldots$ be a sequence of IID random variables taking on the values $\pm 1$ with probability $1/2$. Each $X_n$ represents the winnings in a fair game. Let $S_n = \sum_{i \leq n} X_i$ and
$$\#(N, k) = \max_{0 \leq n \leq N-k} \frac{S_{n+k} - S_n}{k} = \max_{0 \leq n \leq N-k} \frac{X_{n+1} + X_{n+2} + \cdots + X_{n+k}}{k}.$$

So $\#(N, k)$ represents the maximal average gain over length-$k$ subgames of the length-$N$ game.

For those who prefer the non-probabilistic point of view, we are working in the Cantor space $\{-1, 1\}^\omega$ with the Lebesgue (fair-coin) measure. $S_n$ and $\#(N, k)$ are then both functions from $\{-1, 1\}^\omega$ to $\mathbb{R}$; thus we will write $S_n(x)$ and $\#(N, k)(x)$, where $x \in \{-1, 1\}^\omega$.
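The definition of $\#(N, k)$ translates directly into a sliding-window computation; a small sketch (the function name is our own, not the dissertation's):

```python
# Maximal average gain over length-k subgames of a finite +/-1 game x,
# i.e. the max over 0 <= n <= N-k of (S_{n+k} - S_n)/k.
def max_avg_gain(x, k):
    window = sum(x[:k])                      # S_k - S_0
    best = window
    for n in range(1, len(x) - k + 1):
        window += x[n + k - 1] - x[n - 1]    # slide the window by one position
        best = max(best, window)
    return best / k

assert max_avg_gain([1] * 10, 3) == 1.0        # all wins: every window averages 1
assert max_avg_gain([1, -1] * 5, 2) == 0.0     # alternating: every window averages 0
assert max_avg_gain([-1, 1, 1, -1], 2) == 1.0  # the middle window (1, 1) is best
```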

Lemma 4.3.1. Let $c \geq 1$ and define $\alpha \in (0, 1]$ via $1/c = 1 - h\left(\frac{1+\alpha}{2}\right)$. Then for any $\epsilon > 0$, with $\alpha' = \alpha + \epsilon$, there are positive constants $B$ and $\delta$, depending only on and uniformly computable from $c$ and $\epsilon$, such that
$$P\left(\#(N, \lfloor c \log_2 N\rfloor) \geq \alpha'\right) \leq B N^{-\delta},$$
where $P(\ldots)$ denotes the probability of the event in parentheses.

Proof. We begin by noticing that $\#(N, \lfloor c \log_2 N\rfloor) \geq \alpha'$ if and only if $\frac{S_{n + \lfloor c\log_2 N\rfloor} - S_n}{\lfloor c\log_2 N\rfloor} \geq \alpha'$ for some $n \in \{0, 1, \ldots, N - \lfloor c\log_2 N\rfloor\}$. When there are exactly $d$ many $1$'s among $X_{n+1}, X_{n+2}, \ldots, X_{n+\lfloor c\log_2 N\rfloor}$,
$$\frac{S_{n+\lfloor c\log_2 N\rfloor} - S_n}{\lfloor c\log_2 N\rfloor} = \frac{d - (\lfloor c\log_2 N\rfloor - d)}{\lfloor c\log_2 N\rfloor}.$$

This is $\geq \alpha'$ if and only if $d \geq \lfloor c\log_2 N\rfloor\,\frac{\alpha' + 1}{2}$. The probability that $d \geq \lfloor c\log_2 N\rfloor\,\frac{\alpha' + 1}{2}$ can then be bounded above using (4.3) with $\beta = \frac{\alpha' + 1}{2}$:
$$\begin{aligned}
P\left(\#(N, \lfloor c\log_2 N\rfloor) \geq \alpha'\right) &= P\left(\bigcup_{0 \leq n \leq N - \lfloor c\log_2 N\rfloor}\left\{\frac{S_{n+\lfloor c\log_2 N\rfloor} - S_n}{\lfloor c\log_2 N\rfloor} \geq \alpha'\right\}\right) \\
&\leq \sum_{0 \leq n \leq N - \lfloor c\log_2 N\rfloor} P\left(\frac{S_{n+\lfloor c\log_2 N\rfloor} - S_n}{\lfloor c\log_2 N\rfloor} \geq \alpha'\right) \\
&\leq \sum_{0 \leq n \leq N - \lfloor c\log_2 N\rfloor} B\,\lfloor c\log_2 N\rfloor^{-1/2}\, 2^{\lfloor c\log_2 N\rfloor(h(\beta)-1)} \\
&\leq N B\,\lfloor c\log_2 N\rfloor^{-1/2}\, 2^{\lfloor c\log_2 N\rfloor(h(\beta)-1)} \\
&\leq N B\, 2^{\lfloor c\log_2 N\rfloor(h(\beta)-1)} \\
&\leq N B\, 2^{1-h(\beta)}\, 2^{(c\log_2 N)(h(\beta)-1)} \\
&= 2^{1-h(\beta)}\, B\, N^{c(h(\beta)-1)+1},
\end{aligned}$$
where the second-to-last inequality uses $\lfloor c\log_2 N\rfloor \geq c\log_2 N - 1$ together with $h(\beta) - 1 < 0$; the harmless constant $2^{1-h(\beta)} \leq 2$ can be absorbed into $B$.

Since $h$ is decreasing on $[1/2, 1]$, $h((1 + \alpha + \epsilon)/2) < h((1 + \alpha)/2)$, which is equivalent to $c(h(\beta) - 1) + 1 < 0$; so $\delta = -(c(h(\beta) - 1) + 1)$ and (a constant multiple of) the $B$ from Lemma 4.2.2 work.
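Since $h$ is decreasing on $[1/2, 1]$, the defining equation $1/c = 1 - h\bigl(\frac{1+\alpha}{2}\bigr)$ can be solved for $\alpha(c)$ by bisection. A small sketch (the helper names are ours):

```python
import math

def h(b: float) -> float:
    # binary entropy, with the convention 0*log(0) = 0 so that h(1) = 0
    if b in (0.0, 1.0):
        return 0.0
    return -b * math.log2(b) - (1 - b) * math.log2(1 - b)

def alpha(c: float, tol: float = 1e-12) -> float:
    # alpha -> 1 - h((1 + alpha)/2) is increasing on [0, 1], so bisection applies
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if 1 - h((1 + mid) / 2) < 1 / c:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

assert abs(alpha(1.0) - 1.0) < 1e-9           # c = 1 forces alpha = 1
assert alpha(2.0) < alpha(1.5) < alpha(1.0)   # longer windows give smaller alpha
```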

In Chapters 2 and 3, we primarily used the Martin-Löf test definition of Martin-Löf randomness. Here, however, we use the Solovay test definition. Recall also from those chapters that $S \subseteq \{-1, 1\}^\omega$ is effectively open if it is a c.e. union of cylinder sets, and that the preimage of an effectively open set via a computable map is effectively open.

Definition 4.3.2 ([10]). A sequence $x \in \{-1, 1\}^{\mathbb{N}}$ is Martin-Löf random relative to $c$ if $x$ is not in infinitely many of the $A_i$'s whenever $\langle A_i\rangle_{i\in\mathbb{N}}$ is a sequence of subsets of $\{-1, 1\}^{\mathbb{N}}$, uniformly effectively open relative to $c$, such that $\sum_i P(A_i) < \infty$.

Lemma 4.3.3. Let $c \geq 1$ be Martin-Löf random (relative to $\emptyset$) and define $\alpha \in (0, 1]$ via $1/c = 1 - h\left(\frac{1+\alpha}{2}\right)$. Then
$$\limsup_{N\to\infty}\, \#(N, \lfloor c\log_2 N\rfloor)(x) \leq \alpha$$
for every $x \in \{-1, 1\}^{\mathbb{N}}$ that is Martin-Löf random relative to $c$.

Proof. Let $\epsilon > 0$ be a (small) computable number, and set $\alpha' = \alpha + \epsilon$. It is straightforward to show that $\lfloor c\log_2(2^{(j+1)/c} - 1)\rfloor = j$, so by Lemma 4.3.1, the series
$$\sum_{j=1}^{\infty} P\left(\#(2^{(j+1)/c} - 1,\, j) > \alpha'\right)$$
converges.

The hypothesis that $c$ is random means that the random variables $\#(2^{(j+1)/c} - 1, j)$ are uniformly computable relative to $c$ (note that if $c$ were not random, determining $\lfloor 2^{(j+1)/c} - 1\rfloor$ could be noncomputable relative to $c$, since the floor function is not computable). Thus, the events $\{\#(2^{(j+1)/c} - 1, j) > \alpha'\}$ are effectively open relative to $c$. It follows that for every $x$ that is Martin-Löf random relative to $c$, the inequality $\#(2^{(j+1)/c} - 1, j)(x) \leq \alpha'$ holds for all but finitely many $j$.

Since $\lfloor c\log_2 N\rfloor = j$ for $2^{j/c} \leq N \leq 2^{(j+1)/c} - 1$, the random variables $\#(N, \lfloor c\log_2 N\rfloor)$ and $\#(2^{(j+1)/c} - 1, j)$ take the maximum over windows of the same length, and since the latter has more windows,
$$\#(N, \lfloor c\log_2 N\rfloor) \leq \#(2^{(j+1)/c} - 1,\, j)$$
when $2^{j/c} \leq N \leq 2^{(j+1)/c} - 1$. Thus, we now know that for any $x$ that is Martin-Löf random relative to $c$,
$$\#(N, \lfloor c\log_2 N\rfloor)(x) \leq \alpha'$$
for all but finitely many $N$. Since $\epsilon$ is arbitrary, the proof is complete.
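The identity $\lfloor c\log_2(2^{(j+1)/c} - 1)\rfloor = j$ used above can be spot-checked numerically. The check below is ours: we stay in a range of $j$ where double-precision floats are reliable, noting that for $c > 1$ the identity needs $j$ large enough that $2^{j/c}(2^{1/c} - 1) \geq 1$:

```python
import math

# Left-hand side of the floor identity from the proof of Lemma 4.3.3.
def lhs(c: float, j: int) -> int:
    return math.floor(c * math.log2(2 ** ((j + 1) / c) - 1))

for c in (1.0, 1.3, 2.0, 3.7):
    assert all(lhs(c, j) == j for j in range(10, 40))
```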

Lemma 4.3.4. Let $c \geq 1$ be Martin-Löf random (relative to $\emptyset$) and define $\alpha \in (0, 1]$ via $1/c = 1 - h\left(\frac{1+\alpha}{2}\right)$. Then
$$\liminf_{N\to\infty}\, \#(N, \lfloor c\log_2 N\rfloor)(x) \geq \alpha$$
for every $x \in \{-1, 1\}^{\mathbb{N}}$ that is Martin-Löf random relative to $c$.

Proof. Let $0 < \epsilon < \alpha$ be computable and set $\alpha'' = \alpha - \epsilon$.

If $\#(N, k) \leq \alpha''$, then $\frac{S_{(r+1)k} - S_{rk}}{k} \leq \alpha''$ for each $0 \leq r \leq N/k - 1$. The random variables $S_{(r+1)k} - S_{rk}$ are IID for different $r$'s, so
$$\begin{aligned}
P\left(\#(N, \lfloor c\log_2 N\rfloor) \leq \alpha''\right) &\leq P\left(\frac{S_{(r+1)\lfloor c\log_2 N\rfloor} - S_{r\lfloor c\log_2 N\rfloor}}{\lfloor c\log_2 N\rfloor} \leq \alpha'' \text{ for all } 0 \leq r \leq \frac{N}{\lfloor c\log_2 N\rfloor} - 1\right) \\
&= P\left(\frac{S_{\lfloor c\log_2 N\rfloor}}{\lfloor c\log_2 N\rfloor} \leq \alpha''\right)^{\lfloor N/\lfloor c\log_2 N\rfloor - 1\rfloor + 1} \\
&= P\left(\frac{S_{\lfloor c\log_2 N\rfloor}}{\lfloor c\log_2 N\rfloor} \leq \alpha''\right)^{\lfloor N/\lfloor c\log_2 N\rfloor\rfloor} \\
&\leq P\left(\frac{S_{\lfloor c\log_2 N\rfloor}}{\lfloor c\log_2 N\rfloor} \leq \alpha''\right)^{N/\lfloor c\log_2 N\rfloor - 1}.
\end{aligned}$$

If there are d many 1’s among X1

, . . . , Xbc log2

Nc, then

↵00 �Sbc log

2

Nc

bc log2

Nc =2d� bc log

2

Ncbc log

2

Nc () d bc log2

Nc1 + ↵00

2.

Now, we use the lower bound in (4.3) to estimate P�d bc log

2

Nc1+↵

00

2

�, with

� := 1+↵

00

2

.

$$\begin{aligned}
P\left(d \leq \lfloor c\log_2 N\rfloor\,\frac{1+\alpha''}{2}\right) &= 1 - P\left(d > \lfloor c\log_2 N\rfloor\,\frac{1+\alpha''}{2}\right) \\
&= 1 - P\left(d \geq \lfloor c\log_2 N\rfloor\,\frac{1+\alpha''}{2}\right) \\
&\leq 1 - A\,\lfloor c\log_2 N\rfloor^{-1/2}\, 2^{\lfloor c\log_2 N\rfloor(h(\beta)-1)} \\
&\leq 1 - A\,\lfloor c\log_2 N\rfloor^{-1/2}\, 2^{(c\log_2 N)(h(\beta)-1)} \\
&= 1 - A\,\lfloor c\log_2 N\rfloor^{-1/2}\, N^{c(h(\beta)-1)}.
\end{aligned}$$
(The second equality holds because the threshold is in general not an integer, and the exceptional case can be avoided by perturbing $\epsilon$; the last inequality uses $\lfloor c\log_2 N\rfloor \leq c\log_2 N$ together with $h(\beta) - 1 < 0$.)

Because the function $h$ is decreasing on $[1/2, 1]$, $h(\beta) > h((1+\alpha)/2)$, so $c(h(\beta) - 1) > c(h((1 + \alpha)/2) - 1) = -1$; say $c(h(\beta) - 1) = -1 + \delta$. Thus,
$$1 - A\,\lfloor c\log_2 N\rfloor^{-1/2}\, N^{c(h(\beta)-1)} = 1 - A\,\lfloor c\log_2 N\rfloor^{-1/2}\, N^{-1+\delta} \leq 1 - A N^{-1+\delta/2}.$$

The last inequality holds because $\lfloor c\log_2 N\rfloor \leq N^{\delta}$ for sufficiently large $N$, so that $\lfloor c\log_2 N\rfloor^{-1/2} \geq N^{-\delta/2}$.

Putting these inequalities together, with $\delta_1 := \delta/2$, and then using the inequality $1 - x \leq e^{-x}$, we get
$$\begin{aligned}
P\left(\#(N, \lfloor c\log_2 N\rfloor) \leq \alpha''\right) &\leq \left(1 - \frac{AN^{\delta_1}}{N}\right)^{N/\lfloor c\log_2 N\rfloor - 1} \leq \left(e^{-\frac{AN^{\delta_1}}{N}}\right)^{N/\lfloor c\log_2 N\rfloor - 1} \\
&= e^{-AN^{\delta_1}\left(\frac{1}{\lfloor c\log_2 N\rfloor} - \frac{1}{N}\right)} \leq e^{-\frac{AN^{\delta_1}}{2c\log_2 N}} \quad\text{(eventually)} \\
&\leq e^{-N^{\delta_1/2}} \quad\text{(eventually)} \\
&\leq N^{-2} \quad\text{(eventually)}.
\end{aligned}$$

Thus,
$$\sum_{N=1}^{\infty} P\left(\#(N, \lfloor c\log_2 N\rfloor) < \alpha''\right)$$
converges.

Because $c$ is random, the random variables $\#(N, \lfloor c\log_2 N\rfloor)$ are uniformly computable relative to $c$ (note that if $c$ weren't random, the floor function could be problematic), so the sets $\{x \in \{-1, 1\}^\omega : \#(N, \lfloor c\log_2 N\rfloor)(x) < \alpha''\}$ are uniformly effectively open. Thus, if $x$ is Martin-Löf random relative to $c$, it is in only finitely many of those sets; i.e., there is $M$ such that $\#(N, \lfloor c\log_2 N\rfloor)(x) \geq \alpha'' = \alpha - \epsilon$ whenever $N \geq M$. Since $\epsilon$ is arbitrary, we are done.

Putting Lemmas 4.3.3 and 4.3.4 together, we have so far shown:

Lemma 4.3.5. For any Martin-Löf random $c \geq 1$,
$$\#(N, \lfloor c\log_2 N\rfloor)(x) \to \alpha$$
for every $x$ that is Martin-Löf random relative to $c$.

We strengthen this now by showing that in fact the convergence holds for any $c \geq 1$ and any Martin-Löf random $x$, even when $x$ is not random relative to $c$.

Theorem 4.3.6. Let $c \geq 1$, and define $\alpha = \alpha(c) \in (0, 1]$ via $1/c = 1 - h\left(\frac{1+\alpha}{2}\right)$. Then
$$\lim_{N\to\infty} \#(N, \lfloor c\log_2 N\rfloor)(x) = \alpha$$
for every $x \in \{-1, 1\}^{\mathbb{N}}$ that is Martin-Löf random (relative to $\emptyset$).

Proof of Theorem 4.3.6. Fix $c \geq 1$ and a Martin-Löf random $x \in \{-1, 1\}^\omega$, and let $\epsilon > 0$. By van Lambalgen's Theorem, Lemma 4.3.5, and the continuity of $\alpha(c)$, there is $c_1 \geq c$ that is Martin-Löf random relative to $x$, and there is $M_1 \in \omega$ such that $\#(N, \lfloor c_1\log_2 N\rfloor)(x) > \alpha(c) - \epsilon$ whenever $N \geq M_1$. Further, we assume $M_1$ is large enough to guarantee that for every integer $n \geq \lfloor c_1\log_2 M_1\rfloor$, there is $N \geq M_1$ such that $\lfloor c_1\log_2 N\rfloor = n$.

Let $M$ be such that $\lfloor c\log_2 M\rfloor \geq \lfloor c_1\log_2 M_1\rfloor$ and let $N \geq M$. By the hypothesis on $M_1$, there is $N_1 \in [M_1, N]$ such that $\lfloor c\log_2 N\rfloor = \lfloor c_1\log_2 N_1\rfloor$. Then $\#(N, \lfloor c\log_2 N\rfloor) \geq \#(N_1, \lfloor c_1\log_2 N_1\rfloor) > \alpha(c) - \epsilon$. Since $\epsilon$ is arbitrary, we have that
$$\liminf_{N\to\infty} \#(N, \lfloor c\log_2 N\rfloor)(x) \geq \alpha(c).$$

For the lim sup direction, let $c_2 \leq c$ be random relative to $x$ and let $M_2 \in \omega$ be such that $\#(N, \lfloor c_2\log_2 N\rfloor)(x) < \alpha(c) + \epsilon$ whenever $N \geq M_2$. Again we assume $M_2$ is large enough to guarantee that for every integer $n \geq \lfloor c_2\log_2 M_2\rfloor$, there is $N \geq M_2$ such that $\lfloor c_2\log_2 N\rfloor = n$.

If $N \geq M_2$, then $\lfloor c\log_2 N\rfloor = \lfloor c_2\log_2 K\rfloor$ for some $K \geq N$. But then
$$\#(N, \lfloor c\log_2 N\rfloor) \leq \#(K, \lfloor c_2\log_2 K\rfloor) < \alpha(c) + \epsilon.$$

Since $\epsilon$ is arbitrary, we have that
$$\limsup_{N\to\infty} \#(N, \lfloor c\log_2 N\rfloor)(x) \leq \alpha(c),$$
and the proof is now complete.
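Although the convergence in Theorem 4.3.6 is only logarithmic in $N$, it can already be seen loosely in simulation. The following Monte Carlo sketch is ours (the sample size, seed, and tolerance are arbitrary choices); it uses $c = 2$, for which solving $1/c = 1 - h\bigl(\frac{1+\alpha}{2}\bigr)$ numerically gives $\alpha(2) \approx 0.78$:

```python
import math
import random

random.seed(0)                       # fixed seed for reproducibility
N = 2 ** 15
c = 2.0
k = math.floor(c * math.log2(N))     # window length: 30

x = [random.choice((-1, 1)) for _ in range(N)]

# sliding-window computation of #(N, k)
window = sum(x[:k])
best = window
for n in range(1, N - k + 1):
    window += x[n + k - 1] - x[n - 1]
    best = max(best, window)
max_avg = best / k

alpha_2 = 0.78                       # alpha(2), solved numerically
assert abs(max_avg - alpha_2) < 0.3  # loose: convergence is only in log N
```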

We do not know whether Theorem 4.3.6 holds for other notions of randomness. It would be interesting to know, for example, whether the theorem is satisfied by all Kurtz randoms (recall from Chapter 2 that a real is Kurtz random if it avoids all null effectively closed sets). It would be even more interesting to know whether Theorem 4.3.6 characterizes a certain notion of randomness, so that the theorem holds of a real $x$ if and only if $x$ is that type of random.


BIBLIOGRAPHY

1. L. M. Axon. Algorithmically random closed sets and probability. Thesis (Ph.D.), University of Notre Dame, 2010. URL http://gateway.proquest.com.lp.hscl.ufl.edu/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3441570.

2. G. Barmpalias, P. Brodhead, D. Cenzer, S. Dashti, and R. Weber. Algorithmic randomness of closed sets. J. Logic Comput., 17(6):1041–1062, 2007. ISSN 0955-792X. doi: 10.1093/logcom/exm033.

3. G. Barmpalias, P. Brodhead, D. Cenzer, J. B. Remmel, and R. Weber. Algorithmic randomness of continuous functions. Arch. Math. Logic, 46(7-8):533–546, 2008. ISSN 0933-5846. doi: 10.1007/s00153-007-0060-4.

4. L. Bienvenu and C. Porter. Strong reductions in effective randomness. Theoret. Comput. Sci., 459:55–68, 2012. ISSN 0304-3975. doi: 10.1016/j.tcs.2012.06.031.

5. L. Bienvenu, A. R. Day, M. Hoyrup, I. Mezhirov, and A. Shen. A constructive version of Birkhoff's ergodic theorem for Martin-Löf random points. Inform. and Comput., 210:21–30, 2012. ISSN 0890-5401. doi: 10.1016/j.ic.2011.10.006.

6. P. Brodhead, D. Cenzer, F. Toska, and S. Wyman. Algorithmic randomness and capacity of closed sets. Log. Methods Comput. Sci. (Special issue: 7th International Conference on Computability and Complexity in Analysis (CCA 2010)), 3:16, 16, 2011. ISSN 1860-5974.

7. D. Cenzer and R. Weber. Effective randomness of unions and intersections. Theory Comput. Syst., 52(1):48–64, 2013. ISSN 1432-4350. doi: 10.1007/s00224-012-9416-1.

8. A. R. Day and J. S. Miller. Randomness for non-computable measures. Trans. Amer. Math. Soc., 365(7):3575–3591, 2013. ISSN 0002-9947. doi: 10.1090/S0002-9947-2013-05682-6.

9. D. Diamondstone and B. Kjos-Hanssen. Martin-Löf randomness and Galton-Watson processes. Ann. Pure Appl. Logic, 163(5):519–529, 2012. ISSN 0168-0072. doi: 10.1016/j.apal.2011.06.010.

10. R. G. Downey and D. R. Hirschfeldt. Algorithmic Randomness and Complexity. Theory and Applications of Computability. Springer-Verlag New York, Inc., New York, 2010. ISBN 978-0-387-95567-4.

11. P. Erdős and A. Rényi. On a new law of large numbers. J. Analyse Math., 23:103–111, 1970. ISSN 0021-7670.

12. G. B. Folland. Real Analysis: Modern Techniques and Their Applications. Pure and Applied Mathematics (New York). John Wiley & Sons Inc., New York, second edition, 1999. ISBN 0-471-31716-0.

13. P. Gács, M. Hoyrup, and C. Rojas. Randomness on computable probability spaces: a dynamical point of view. In S. Albers and J.-Y. Marion, editors, STACS, volume 3 of LIPIcs, pages 469–480. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, Germany, 2009. ISBN 978-3-939897-09-5.

14. M. Hoyrup. Randomness and the ergodic decomposition. In B. Löwe, D. Normann, I. N. Soskov, and A. A. Soskova, editors, CiE, volume 6735 of Lecture Notes in Computer Science, pages 122–131. Springer, 2011. ISBN 978-3-642-21874-3.

15. M. Hoyrup. Computability of the ergodic decomposition. Ann. Pure Appl. Logic, 164(5):542–549, 2013. ISSN 0168-0072. doi: 10.1016/j.apal.2012.11.005.

16. M. Hoyrup and C. Rojas. Computability of probability measures and Martin-Löf randomness over metric spaces. Inform. and Comput., 207(7):830–847, 2009. ISSN 0890-5401. doi: 10.1016/j.ic.2008.12.009.

17. S. A. Kurtz. Randomness and genericity in the degrees of unsolvability. Thesis (Ph.D.), University of Illinois at Urbana-Champaign. ProQuest LLC, Ann Arbor, MI, 1981. URL http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:8203509.

18. R. D. Mauldin and M. G. Monticino. Randomly generated distributions. Israel J. Math., 91(1-3):215–237, 1995. ISSN 0021-2172. doi: 10.1007/BF02761647.

19. A. Nies. Computability and Randomness. Oxford Logic Guides, xv + 443. Oxford University Press, 2009.

20. T. Tao. 254A, Notes 0a: Stirling's formula, 2010. URL http://terrytao.wordpress.com/2010/01/02/254a-notes-0a-stirlings-formula/. [Online; accessed 18-February-2014].

21. M. van Lambalgen. The axiomatization of randomness. The Journal of Symbolic Logic, 55(3):1143–1167, 1990.

22. P. Walters. An Introduction to Ergodic Theory, volume 79 of Graduate Texts in Mathematics. Springer-Verlag, New York, 1982. ISBN 0-387-90599-5.

23. Wikipedia. Hoeffding's inequality, Wikipedia, The Free Encyclopedia, 2013. URL http://en.wikipedia.org/w/index.php?title=Hoeffding%27s_inequality&oldid=541070900. [Online; accessed 11-April-2013].

This document was prepared & typeset with pdfLaTeX, and formatted with the nddiss2e class file (v3.2013 [2013/04/16]) provided by Sameer Vijay and updated by Megan Patnott.
