
Course Notes for EE 87021

Advanced Topics in Random Wireless Networks

Martin Haenggi

November 1, 2010


© Martin Haenggi, 2010


Contents

I Point Process Theory

1 Introduction
  1.1 Motivation
  1.2 Asymptotic Notation

2 Description of Point Processes
  2.1 One-dimensional Point Processes
  2.2 General Point Processes
  2.3 Basic Point Processes
    2.3.1 One-dimensional Poisson processes
    2.3.2 Spatial Poisson processes
    2.3.3 General Poisson point processes
  2.4 Distributional Characterization
    2.4.1 The distribution of a point process
    2.4.2 Comparison with numerical random variables
    2.4.3 Distribution of a point process viewed as a random set
    2.4.4 Finite-dimensional distributions and capacity functional
    2.4.5 Measurable decomposition
    2.4.6 Intensity measure
  2.5 Properties of Point Processes
  2.6 Point Process Transformations
    2.6.1 Shaping an inhomogeneous Poisson point process
  2.7 Distances
  2.8 Marked point processes

3 Sums over Point Processes
  3.1 Notation
  3.2 Campbell's Theorem for the Mean
  3.3 The Probability Generating Functional
    3.3.1 The moment-generating function of the sum
    3.3.2 The characteristic or Laplace functional for the Poisson point process
    3.3.3 The probability generating functional for the Poisson point process
    3.3.4 Relationship between moment-generating function and the functionals
  3.4 Applications
    3.4.1 Mean interference in stationary point process
    3.4.2 Variance of the interference in stationary Poisson point process
    3.4.3 Interference from nearest transmitters
    3.4.4 Interference distribution without fading
    3.4.5 Interference distribution with fading
    3.4.6 Outage in Poisson networks with Rayleigh fading
  3.5 Stable Distributions
    3.5.1 Definition
    3.5.2 LePage Series representation
    3.5.3 Shot noise

4 Moment Measures of Point Processes
  4.1 Introduction
  4.2 The First-Order Moment Measure
    4.2.1 Constant density vs. stationarity
  4.3 Second Moment Measures
  4.4 Second Moment Density
  4.5 Second Moments for Stationary Processes

5 Conditioning and Palm Theory
  5.1 Introduction and Basic Concepts for Stationary Point Processes
    5.1.1 The local approach
    5.1.2 The global approach
  5.2 The Palm Distribution
    5.2.1 Heuristic introduction
    5.2.2 A first definition of the Palm distribution (stationary point processes)
    5.2.3 A second definition of the Palm distribution (general point processes)
    5.2.4 Alternative interpretation and conditional intensity
  5.3 The Reduced Palm Distribution
    5.3.1 Definition
    5.3.2 Palm distribution for PPPs and proof of Slivnyak's theorem
    5.3.3 Isotropy of Palm distributions
    5.3.4 Palm expectations
  5.4 Second Moment Measures and Palm Distributions for Stationary Processes

II Percolation Theory, Connectivity, and Coverage

6 Introduction
  6.1 Motivation
  6.2 What is Percolation?

7 Bond and Site Percolation
  7.1 Random Trees and Branching Processes
    7.1.1 Percolation on regular branching trees
    7.1.2 Generalization to branching processes
    7.1.3 Site percolation on the branching tree
    7.1.4 Mean cluster size
  7.2 Preliminaries for Bond Percolation on the Lattice
  7.3 General Behavior of the Percolation Probability
    7.3.1 The d-dimensional case
    7.3.2 Simple bounds on the percolation probability for the two-dimensional case
    7.3.3 Generalization of the lower bound
  7.4 Basic Techniques
    7.4.1 The FKG Inequality
    7.4.2 The BK Inequality
    7.4.3 Russo's Formula
    7.4.4 The square-root trick
  7.5 Critical Threshold for Bond Percolation on the Square Lattice
    7.5.1 Subcritical phase: Exponential decrease of the radius of the mean cluster size
    7.5.2 Supercritical phase: Uniqueness of the infinite open cluster
    7.5.3 Critical threshold
    7.5.4 Ingredients used
    7.5.5 The missing theorem
  7.6 Further Results
    7.6.1 At the critical point
    7.6.2 Generalization to d dimensions
  7.7 Site Percolation
    7.7.1 Results
    7.7.2 Numerical bounds

8 Continuum Percolation
  8.1 Gilbert's Disk Graph
    8.1.1 Bounding using bond percolation on the square lattice
    8.1.2 Bounding using site percolation on the triangular lattice
    8.1.3 A better lower bound
  8.2 Other Percolation Models
    8.2.1 Interference Graph
    8.2.2 Secrecy Graph

9 Connectivity
  9.1 Full Connectivity
  9.2 More General Continuum Models
    9.2.1 Connectivity: Random connection model (RCM)
  9.3 (Abstract) Random Graphs

10 Coverage
  10.1 Germ-grain (Boolean) models
    10.1.1 Uniqueness of the infinite component


Part I

Point Process Theory


Chapter 1

Introduction

In the first part of the course, we give an introduction to stochastic geometry and, in particular, point process theory.

1.1 Motivation

Basic questions:

• How to describe a (random) collection of points in two or three dimensions?

• How about in one dimension? (In one dimension, renewal theory helps deal with processes with independent increments.)

• How to generalize from one dimension to two dimensions? What is the main difference?

• Other random geometric objects: How to describe a “random line” or a “random triangle”?

Stochastic geometry (sometimes used synonymously with geometric probability) deals with random spatial patterns. Random point patterns are the most important objects, hence point process theory is often considered to be the main sub-field of stochastic geometry.

We will use point processes to model the distributions of nodes (users, wireless terminals) in a wireless network where node locations are subject to uncertainty.

1.2 Asymptotic Notation

Let x tend to a. We write

f(x) = O(g(x))  if the ratio f(x)/g(x) remains bounded;
f(x) = o(g(x))  if the ratio f(x)/g(x) goes to 0;
f(x) = Θ(g(x))  if f(x) = O(g(x)) and g(x) = O(f(x));
f(x) ∼ g(x)     if the ratio f(x)/g(x) approaches 1.

More formally:

f(x) = O(g(x)) as x → a  iff  lim sup_{x→a} |f(x)/g(x)| < ∞

Always indicate the limit point.


Examples:

• ln x = O(x), x^4 = O(e^x), sin x = O(1) as x → ∞.

• x^2 = O(x), sin x = O(x) as x → 0.

• Taylor expansion: e^x = 1 + x + O(x^2) = 1 + x + o(x) = 1 + x + o(x^{3/2}) = 1 + x + Θ(x^2) (as x → 0).

• As x → 0, is cosh x = 1 + Θ(x)? Or cosh x = 1 + O(x)? And/or cosh x = 1 + Θ(x^2)? For the sinh, Maple says: taylor(sinh(x),x,3) = x + O(x^3).

• Let x → ∞. What do we know about f(x) = o(1/x)? Is 1/log x = o(1)? Is x^{O(2)} = O(e^x)?

Note that o(·) is stronger than and implies O(·), i.e., f(x) = o(g(x)) ⇒ f(x) = O(g(x)).
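The first two bullet examples can be checked numerically. The following Python sketch (not part of the notes; function names and sample points are ours) simply evaluates the ratios f(x)/g(x) that the definitions refer to:

```python
import math

# Numerical look at two of the example relations (not a proof):
# ln x = O(x) as x -> infinity, since ln(x)/x stays bounded (it even -> 0),
# and x^2 = O(x) as x -> 0, since x^2/x = x stays bounded near 0.

def ratio_ln_over_x(x):
    return math.log(x) / x

def ratio_x2_over_x(x):
    return (x * x) / x

# ln(x)/x shrinks monotonically over x = 10, 100, ..., 10^5.
large = [ratio_ln_over_x(10.0 ** k) for k in range(1, 6)]
assert all(a > b for a, b in zip(large, large[1:]))

# x^2/x = x -> 0 as x -> 0, so x^2 = o(x), hence also x^2 = O(x).
small = [ratio_x2_over_x(10.0 ** (-k)) for k in range(1, 6)]
assert all(0 < r < 0.2 for r in small)
```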


Chapter 2

Description of Point Processes

2.1 One-dimensional Point Processes

There are a number of ways to describe a point process {x_1, x_2, . . .} in one dimension. Here are four of them:

1. Direct characterization of the points (or arrival times, if the axis is a time axis) x_i. The problem is the dependency due to the ordering.

2. Using the inter-arrival times S_i = x_{i+1} − x_i. In the case of renewal processes, these increments are independent.

3. Counting the number of points falling in a set B:

   N(B) = Σ_{i=1}^∞ 1{x_i ∈ B}

4. Using vacancies:

   V(B) = 1{N(B) = 0}

2.2 General Point Processes

In the general case, a point process Φ = {x_1, x_2, . . .} is a countable collection of randomly placed points in a general space. We will mostly consider the Euclidean spaces R^d, such that Φ ⊂ R^d is a random closed set.

Due to the lack of a natural ordering in higher-dimensional spaces, Methods 1 and 2 are cumbersome if not impossible to use in the general case. Methods 3 and 4, however, generalize in a straightforward manner, since B can be taken to be a subset of any higher-dimensional space.

Such a point process can be described using two formalisms:

Random measure formalism. For a stochastic point process, we may define a point process as a collection of (integer-valued) random variables N(B) indexed by the Borel sets B ⊂ R^d. N(B) denotes the number of points in B, i.e., N(B) = #(Φ ∩ B). The vacancy indicators V(B) are defined as V(B) = 1{N(B) = 0}.


Random set formalism. A simple point process can be regarded as a random set Φ = {x_1, x_2, . . .}, consisting of random variables x_i as its elements.

If N_1(B) and N_2(B) are the numbers of nodes in B for two point processes Φ_1 and Φ_2, then N(B) = N_1(B) + N_2(B) is the number of nodes in B of the superimposed process (additive property).

Similarly, if V_1(B) and V_2(B) are the vacancy indicators, then V(B) = V_1(B)V_2(B) (multiplicative property).

In this course we are focusing on simple and locally finite processes, i.e.,

N({x}) ≤ 1 for all x ∈ R^d

and

|B| < ∞ ⟹ N(B) < ∞ w.p. 1.

If we know the value of the vacancy function V(B) for all sets B, the PP is determined by the complement of the union of all vacant regions:

Φ = R^2 \ ∪{B ⊂ R^2 : V(B) = 1}

2.3 Basic Point Processes

2.3.1 One-dimensional Poisson processes

Definition 2.1 (One-dimensional Poisson process) The one-dimensional Poisson point process (PPP) with uniform intensity β is a point process in R such that

• for every bounded interval (a, b], N(a, b] has a Poisson distribution with mean β(b − a);

• if (a_1, b_1], (a_2, b_2], . . . , (a_m, b_m] are disjoint bounded intervals, then N(a_1, b_1], N(a_2, b_2], . . . , N(a_m, b_m] are independent random variables.

The first property alone is sufficient to characterize a PP: Alfred Rényi proved that the second property is a consequence of the first one if it holds for all Borel sets and not just intervals.

Other useful properties of the one-dimensional homogeneous PP are:

1. The inter-arrival times Si are iid, and follow an exponential distribution with parameter β.

2. Let R_{k,i} = x_{k+i} − x_i for k > 0. Then R_{k,i} is Erlang (gamma) distributed with parameters k and β, for all i.

3. The probability of two or more arrivals in a given interval is asymptotically of smaller order than the length of the interval:

   P(N(a, a + h] ≥ 2) = o(h), h → 0.
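Property 1 together with Definition 2.1 can be illustrated by a short simulation. The sketch below (our construction; parameter choices are arbitrary) builds the process from iid Exponential(β) inter-arrival times and checks that N(0, T] has mean and variance close to βT, as a Poisson(βT) count should:

```python
import math
import random

# Sketch: build a 1D Poisson process on (0, T] from iid Exponential(beta)
# inter-arrival times (Property 1) and check that the count N(0, T] behaves
# like a Poisson(beta * T) random variable (Definition 2.1).

def ppp_1d(beta, T, rng):
    """Return the arrival times of a rate-beta Poisson process in (0, T]."""
    points, t = [], 0.0
    while True:
        t += rng.expovariate(beta)   # Exponential(beta) inter-arrival time
        if t > T:
            return points
        points.append(t)

rng = random.Random(0)
beta, T, runs = 2.0, 50.0, 400
counts = [len(ppp_1d(beta, T, rng)) for _ in range(runs)]
mean = sum(counts) / runs
var = sum((c - mean) ** 2 for c in counts) / runs

# For a Poisson(beta * T) count, mean and variance both equal beta * T = 100.
assert abs(mean - beta * T) < 5.0
assert abs(var - beta * T) < 30.0
```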

We may also define the inhomogeneous PP, where the intensity function β(t) is not a constant, in a similar way. For this process, the average number of points in any bounded interval (a, b] is

E N(a, b] = ∫_a^b β(t) dt,

and the numbers of arrivals in disjoint intervals are independent random variables.


2.3.2 Spatial Poisson processes

Definition 2.2 (Spatial point process) The spatial PPP with intensity β is a point process in R^2 such that

• for every bounded closed set B, N(B) has a Poisson distribution with mean β λ_2(B);

• if B_1, B_2, . . . , B_m are disjoint bounded sets, then N(B_1), N(B_2), . . . , N(B_m) are independent random variables.

Here, λ_2(B) denotes the area of the set B, and β is the expected number of points of the process per unit area.

Theorem 2.1 (Conditional property) Consider a homogeneous PPP in R^2 with intensity β > 0. Let W ⊂ R^2 be any region with 0 < |W| < ∞. Given that N(W) = n, the conditional distribution of N(B) for B ⊆ W is binomial:

P(N(B) = k | N(W) = n) = (n choose k) p^k (1 − p)^{n−k},

where p = |B|/|W|.

Proof: See [1, Lemma 1.1].

The only distinction between a binomial process and a PPP in W is that different realizations of the PPP consist of different numbers of points.
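The conditional property can be verified by simulation. Since, given N(W) = n, the n points are iid uniform on W, the sketch below (window and parameters are our choice) checks that the count in a half-window is close to Binomial(n, 1/2) in mean and variance:

```python
import random

# Sketch of Theorem 2.1: for a homogeneous PPP on W = [0,1]^2, conditioned on
# N(W) = n, the count in B = [0, 0.5] x [0, 1] should be Binomial(n, p) with
# p = |B|/|W| = 0.5. Given n, the n points are iid uniform on W.

rng = random.Random(1)
n, runs = 20, 2000
p = 0.5  # area fraction |B| / |W|

counts = []
for _ in range(runs):
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    counts.append(sum(1 for (x, y) in pts if x < 0.5))   # points landing in B

mean = sum(counts) / runs
var = sum((c - mean) ** 2 for c in counts) / runs

# Binomial(20, 0.5): mean n*p = 10, variance n*p*(1-p) = 5.
assert abs(mean - n * p) < 0.3
assert abs(var - n * p * (1 - p)) < 0.8
```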

2.3.3 General Poisson point processes

A uniform (homogeneous) PPP in R^d, d ≥ 1, an inhomogeneous PPP in R^d, or a PPP on some other space S, is defined as follows:

Definition 2.3 (Poisson point process (PPP)) The PPP on a space S with intensity measure Λ is a point process such that

• for every compact set B ⊂ S, N(B) has a Poisson distribution with mean Λ(B):

  P(N(B) = k) = exp(−∫_B λ(x) dx) · (∫_B λ(x) dx)^k / k!

• if B_1, B_2, . . . , B_m are disjoint compact sets, then N(B_1), N(B_2), . . . , N(B_m) are independent.

2.4 Distributional Characterization

Like any random process, a point process can be described in statistical terms by defining the space of possible outcomes and then specifying the probabilities of different events.


2.4.1 The distribution of a point process

The space of realizations of a point process in R^d is N, the set of all counting measures on R^d, where a counting measure is non-negative, integer-valued, and finite on compact sets.

Basic events: A basic event E_{B,k} about the point process is the event that there are exactly k points in the region B, for compact B ⊂ R^d and non-negative integer k = 0, 1, 2, . . .:

E_{B,k} = {N(B) = k} = {N ∈ N : N(B) = k}.

Canonical space: Let N be the set of all counting measures on R^d and N be the σ-field of subsets of N generated by all events of the form E_{B,k}. The space N equipped with its σ-field N is called the canonical space or outcome space for a point process in R^d.

A point process Φ may now be defined formally, using its counting measure N = N_Φ, as a measurable map N : Ω → N from a probability space (Ω, A, P) to the outcome space (N, N). Thus, each elementary outcome ω ∈ Ω determines an outcome N_ω ∈ N for the entire point process.

The distribution of NΦ is

P(E) = P(N^{−1}(E)) = P(N ∈ E) for all E ∈ N.

Measurability requires that N^{−1}(E) ∈ A. Note that the counting random variables are indexed by the Borel sets B.

A general RV (not necessarily numerical) is also referred to as a random element. So, a point process is a random element (Ω, A, P) → (N, N).

Note:

• A possible source of confusion is the fact that in the case of numerical RVs, the Borel sets B are events. In the case of PPs, the Borel sets are indices of the counting random variables, or arguments if the counting RVs are viewed as functions on Borel sets.

• In principle we should distinguish the random point set Φ and the counting measure N_Φ. However, in the literature, the point process random element Φ is often itself used for its associated counting measure, for notational convenience.

2.4.2 Comparison with numerical random variables

Numerical random variables. Let X be a RV on (Ω, A, P), i.e., an A-measurable function on Ω. The distribution of X is the measure

P ≜ P ∘ X^{−1}

on (R, B), defined by

P(B) = P(X^{−1}(B)) = P(X ∈ B) for all B ∈ B,

where the pre-image of B is the set X^{−1}(B) = {ω ∈ Ω : X(ω) ∈ B}.

Elementary events in B are the intervals (a, b], b > a, and B includes all intersections and unions of countably many intervals. Measurability is the requirement that X^{−1}(B) ∈ A for all B ∈ B.

For a numerical random variable X : Ω → R, we almost always focus on the distribution function of X, where B = (−∞, x]:

F(x) ≜ P((−∞, x]) = P(X ≤ x)  (right-continuous)

The distribution function always exists and fully describes the distribution.


Comparison with point processes:

                            Numerical random variable             Point process
  Probability space         (Ω, A, P)                             (Ω, A, P)
  Measurable space          (R, B)                                (N, N)
  Random element            X ∈ R                                 N ∈ N
  Events                    B ∈ B                                 E ∈ N
  Distribution              P(B) = P(X^{−1}(B)) = P(X ∈ B)        P(E) = P(N^{−1}(E)) = P(N ∈ E)
  Measurability condition   X^{−1}(B) = {ω ∈ Ω : X(ω) ∈ B} ∈ A    N^{−1}(E) = {ω ∈ Ω : N_ω ∈ E} ∈ A
  Measure space             (R, B, P)                             (N, N, P)
  Distribution function     F(x) = P((−∞, x])                     —
  Range of counting measure —                                     N(B) ∈ N_0

We write N_ω, since N(ω) may be confused with N(B). Most often, the underlying probability space can be taken to be the generic one on the unit interval: (Ω, A, P) = ([0, 1], B, | · |), where | · | denotes the Lebesgue measure.

The number of points in B can be expressed as

N(B) = ∫_B N(dx).

2.4.3 Distribution of a point process viewed as a random set

As mentioned previously, a point process can be interpreted as a random measure or as a random set or sequence. Let ϕ be a simple, locally finite countable subset of R^d. Then the outcome space N is the set of all such ϕ, and a point process Φ is a random choice of one of the ϕ in N. We also have

∫_N P(dϕ) = 1,

in the same way that for numerical random variables with distribution function (cdf) F(x) and pdf f(x) we have

∫_R P(dx) = ∫_R dF(x) = ∫_R f(x) dx = 1.

We will make extensive use of this notation later.

2.4.4 Finite-dimensional distributions and capacity functional

Definition 2.4 (Finite-dimensional distribution) The finite-dimensional distributions (fidis) of a point process are the joint probability distributions of

(N(B_1), . . . , N(B_m))

for all finite integers m > 0 and all compact B_1, B_2, . . . , B_m.

Equivalently, the fidis specify the probabilities of all events of the form

{N(B_1) = k_1, . . . , N(B_m) = k_m}

involving finitely many regions.


Definition 2.5 (Capacity functional) The capacity functional of a simple point process Φ is the functional

T (K) = P(N(K) > 0) , K compact .

Facts:

• If the fidis of two point processes X, Y are identical, then X and Y have the same distribution.

• If the capacity functionals of two simple point processes X, Y are identical, then they have the same distribution.¹

• A simple point process is a homogeneous PPP of intensity λ on R^d if and only if its capacity functional is

T(K) = 1 − exp(−λ|K|) for all compact K ⊂ R^d.

• X ⊂ R^d is stationary if for any fixed v ∈ R^d the distribution of X + v is the same as that of X.

• A simple point process is stationary if T(K) = T(K + v) for all v ∈ R^d and all compact K.

Theorem 2.2 (Rényi's Theorem) Let Φ be a PP and λ : R^d → R be a non-negative function such that Λ(B) = ∫_B λ(x) dx < ∞ for all bounded B. If

P(Φ ∩ B = ∅) = exp(−Λ(B))

for any Borel B, then Φ is Poisson with intensity function λ.

Proof: See Grimmett/Stirzaker, p. 288f.

Idea: Partition B into a finite number n of subsets of equal and decreasing size such that in the limit n → ∞, the union of the disjoint subsets is exactly B. From the condition of the theorem it follows that the indicator RVs of the events that the subsets are not empty are independent. Use that independence to calculate the probability generating function, bound it, and use monotone convergence to show that the pgf is that of a Poisson process.

2.4.5 Measurable decomposition

Consider point processes with an infinite number of points a.s. Any N can be decomposed measurably as

N = Σ_{i=1}^∞ δ_{x_i},

where δ_x(B) = 1_B(x) is the Dirac measure for B ∈ B.

The Dirac measure, and thus N, is an atomic measure, i.e., it is concentrated on a countably infinite collection of points. A counting measure is a special atomic measure that gives each point a mass of zero or one. In contrast, the Lebesgue measure is a diffuse measure since it gives zero mass to every point. Both are Radon measures since they are finite on compact sets. In fact, every translation-invariant Radon measure on (R^d, B^d) is a constant multiple of the Lebesgue measure.

¹This holds more generally for random closed sets.


2.4.6 Intensity measure

Definition 2.6 (Intensity (mean) measure) The intensity or mean measure of a point process Φ is defined as

Λ(B) ≜ E(Φ(B)).

If there is an intensity function λ(x), we have

Λ(B) = ∫_B λ(x) dx.

Using the distribution of the point process, we can also write

Λ(B) = ∫_N N_ϕ(B) P(dϕ).

In general, we have

Λ({x}) = 0 for all x ∈ R^d.

This means that we do not expect to have a point at a specific location (unless the PP has at least one deterministically placed point). In other words, Λ is a diffuse measure. It cannot be used for conditioning on points at specific locations without some precaution.

The intensity function λ(x) may not exist. If it does, we can retrieve it from the intensity measure by calculating the limit

λ(x) = lim_{r→0} E Φ(b(x, r)) / |b(o, r)|.

The denominator is the volume of the d-dimensional ball of radius r, which is c_d r^d, where

c_d = π^{d/2} / Γ(d/2 + 1)

is the volume of the unit ball.

is the volume of the unit ball.While stationarity implies that the intensity function is constant, the converse is not true: A constant

intensity function does not imply stationarity.

Theorem 2.3 (Conditional property) Let Φ be a general PPP in R^d with intensity function λ. Take B such that 0 < Λ(B) < ∞. Conditioned on Φ(B) = n, the n points in B are iid with probability density

λ(x) / Λ(B).

Proof: In the homogeneous case (constant λ), we have seen that, conditioned on Φ(B) = n, the n points form a binomial point process. This is a straightforward generalization to the inhomogeneous case: the intensity λ(x) is a density that needs to be normalized by Λ(B) to be a proper probability density.
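Theorem 2.3 also gives a recipe for sampling an inhomogeneous PPP on a bounded set: draw N ~ Poisson(Λ(B)), then place N iid points with density λ(x)/Λ(B). A minimal sketch follows; the intensity λ(x) = 2x on B = [0, 1] is our illustrative choice, and the Poisson sampler is Knuth's classical method:

```python
import math
import random

# Sketch of Theorem 2.3 as a sampler: draw N ~ Poisson(Lambda(B)), then N iid
# points with density lambda(x)/Lambda(B). Here B = [0, 1], lambda(x) = 2x,
# so Lambda(B) = 1 and the normalized density 2x has cdf x^2 (inverted below).

def sample_poisson(mean, rng):
    """Knuth's method for a Poisson variate (adequate for small means)."""
    L, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def inhomogeneous_ppp(rng):
    n = sample_poisson(1.0, rng)        # Lambda(B) = int_0^1 2x dx = 1
    return [math.sqrt(rng.random()) for _ in range(n)]  # inverse cdf of 2x

rng = random.Random(2)
runs = 4000
# Count points in (0.5, 1]; expected Lambda((0.5, 1]) = 1 - 0.25 = 0.75.
total = sum(sum(1 for x in inhomogeneous_ppp(rng) if x > 0.5)
            for _ in range(runs))
assert abs(total / runs - 0.75) < 0.06
```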

2.5 Properties of Point Processes

Let Φ_x = {x_1 + x, x_2 + x, . . .} be the point process translated by x ∈ R^d.


Stationarity. A PP on R^d is stationary if its distribution is translation-invariant, i.e., P(E) = P(E + x) for all E ∈ N and x ∈ R^d, where E + x (or E_x) is defined as

E + x = E_x = {ϕ ∈ N : ϕ − x ∈ E}, E ∈ N.

Equivalently, stationarity implies P(Φ ∈ E) = P(Φ_x ∈ E), again for all E and x.

Isotropy. A PP on R^d is isotropic if its distribution is rotationally invariant with respect to rotations about the origin o, i.e., P(E) = P(rE), where r is an arbitrary rotation about o in R^d, and

rE = {ϕ ∈ N : r^{−1}ϕ ∈ E}, E ∈ N.

Motion-invariance. A stationary and isotropic PP is called motion-invariant.

Remarks. The class of motion-invariant PPs is by far the most important for our purposes. The one exception is inhomogeneous PPPs, which are analytically tractable due to their independence property.

For stationary PPs, the intensity measure is translation-invariant:

Λ(B) = E Φ(B) = E Φ_x(B) = E Φ(B − x) = Λ(B − x) for all x.

It follows that the intensity measure is then just a multiple of the Lebesgue measure, with λ being the proportionality constant:

Λ(B) = λ|B|.

Example 2.1

• Isotropic, non-stationary PP: an inhomogeneous PPP over R^d with λ(x) = f(‖x‖) (i.e., a radially symmetric intensity function). Also, a binomial point process of constant intensity defined on b(o, r).

• Stationary, non-isotropic PP: a randomly translated 2D integer lattice Ψ ≜ Z^2 + (U, V), where U and V are uniform random variables over the unit interval (0, 1). To demonstrate anisotropy, let B be a square of side length 1.1. Then P(Ψ(B) = 0) = 0, while for the square of the same area rotated by π/4, denoted by B♦, we have P(Ψ(B♦) = 0) > 0, since the rotated square can fit in the lattice without intersecting any lattice point.
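The anisotropy argument in the second example can be checked by Monte Carlo. The sketch below (geometry and sample sizes are our choice) confirms that the axis-aligned square of side 1.1 always contains a point of Ψ, while the rotated square is sometimes empty:

```python
import math
import random

# Sketch of Example 2.1's anisotropy argument: for the translated lattice
# Psi = Z^2 + (U, V), an axis-aligned square of side 1.1 always contains a
# lattice point, while the same square rotated by pi/4 can be empty.

def lattice_near(u, v, radius=3):
    """Points of Z^2 + (u, v) within a few cells of the origin."""
    return [(i + u, j + v) for i in range(-radius, radius + 1)
            for j in range(-radius, radius + 1)]

s = 1.1
h = s / math.sqrt(2)     # half-width of the rotated square along x+y, x-y
rng = random.Random(3)
runs = 2000

empty_axis = empty_rot = 0
for _ in range(runs):
    u, v = rng.random(), rng.random()
    pts = lattice_near(u, v)
    # B = [0, s]^2 (axis-aligned): side > 1 guarantees a lattice point.
    if not any(0 <= x <= s and 0 <= y <= s for (x, y) in pts):
        empty_axis += 1
    # B rotated by pi/4, centered at o: |x + y| <= h and |x - y| <= h.
    if not any(abs(x + y) <= h and abs(x - y) <= h for (x, y) in pts):
        empty_rot += 1

assert empty_axis == 0    # never empty: P(Psi(B) = 0) = 0
assert empty_rot > 0      # empty with positive probability
```

(The rotated square is placed at the origin; by stationarity of Ψ, its location does not matter.)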

2.6 Point Process Transformations

2.6.1 Shaping an inhomogeneous Poisson point process

Theorem 2.4 (Mapping theorem) Let Φ be a non-homogeneous PPP on R^d with intensity function λ. Let f : R^d → R^s be measurable with Λ(f^{−1}{y}) = 0 for all y ∈ R^s (f does not shrink a compact set to a point). Let

μ(B) ≜ Λ(f^{−1}(B)) = ∫_{f^{−1}(B)} λ(x) dx,

and assume μ(B) < ∞ for all compact B. Then

f(Φ) = {f(x) : x ∈ Φ}

is a PPP with intensity measure μ.


Corollary 2.5 (Linear mapping) Consider a stationary PPP Φ of intensity λ, and let A : R^d → R^d be a non-singular linear mapping, given by a transformation matrix A ∈ R^{d×d}. Then A(Φ) = {Ax : x ∈ Φ} is a stationary PPP with intensity λ det(A^{−1}).

Example 2.2 If Φ = {xᵢ} is a homogeneous PPP of intensity σ on R², what is the intensity function λ(x) of the one-dimensional PP Ψ = {‖xᵢ‖}? What is the intensity function of {‖xᵢ‖²}?

In the first case, f(x) = ‖x‖. Let B = [0, r). Then f⁻¹(B) = b(o, r), and we have

µ([0, r)) = Λ(b(o, r)) = ∫_{b(o,r)} σ dx = σπr² ,

from which the density follows as

λ(x) = d/dx µ([0, x)) = 2σπx , x ≥ 0 .

Hence Ψ is an inhomogeneous one-dimensional PPP with intensity function λ(x) = 2σπx · 1{x ≥ 0}.

In the second case, f(x) = ‖x‖². The preimage of the interval [0, r) is the ball b(o, √r), and we find

µ([0, r)) = σπr and λ(x) = σπ · 1{x ≥ 0} ,

i.e., a homogeneous one-dimensional PPP of intensity σπ.
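The first part of Example 2.2 is easy to check numerically. A minimal sketch (assuming numpy is available): by the mapping theorem, the expected number of mapped points in [0, 1/2) should equal Λ(b(o, 1/2)) = σπ/4, and given the count in b(o, R), each radius falls in [0, 1/2) independently with probability (1/2R)².

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, R, trials = 1.0, 1.0, 20000

# Phi(b(o,R)) is Poisson with mean sigma*pi*R^2; given the count, the radii are
# iid with cdf (r/R)^2, so each falls in [0, 0.5) with probability (0.5/R)^2
n = rng.poisson(sigma * np.pi * R**2, size=trials)
counts = rng.binomial(n, (0.5 / R) ** 2)

emp = counts.mean()
print(emp, sigma * np.pi * 0.25)   # mapping theorem: E Psi([0, 0.5)) = sigma*pi/4
```

The agreement (up to Monte Carlo error) confirms that the mapped process has intensity measure µ([0, r)) = σπr².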

2.6.2 Thinning of a Poisson point process

Theorem 2.6 (Independent thinning) Let g : Rd → [0, 1] be a thinning function: given a realization of a stationary PPP Φ of intensity λ, delete each point x with probability 1 − g(x), independently of all other points. This procedure, called independent thinning, generates an inhomogeneous PPP with intensity function λg(x).

Proof: Let Φ be the original process and Φ′ the process after thinning. Independence follows from the construction. The distribution of Φ′(B) is computed as follows:

P(Φ′(B) = k) = Σ_{i=k}^∞ P(Φ(B) = i) P(Φ′(B) = k | Φ(B) = i) .

Given Φ(B) = i, the i points of Φ in B are iid uniformly distributed on B. Thus the conditional probability that a single point is retained is

P(Φ′(B) = 1 | Φ(B) = 1) = |B|⁻¹ ∫_B g(x) dx ,

and the complete conditional distribution is binomial:

P(Φ′(B) = k | Φ(B) = i) = C(i, k) (|B|⁻¹ ∫_B g(x) dx)^k (1 − |B|⁻¹ ∫_B g(x) dx)^{i−k} ,

where C(i, k) denotes the binomial coefficient. Hence, with G ≜ ∫_B g(x) dx,

P(Φ′(B) = k) = Σ_{i=k}^∞ e^{−λ|B|} ((λ|B|)^i / i!) C(i, k) (|B|⁻¹G)^k (1 − |B|⁻¹G)^{i−k}

= e^{−λ|B|} ((λG)^k / k!) Σ_{i=k}^∞ (λ|B|(1 − |B|⁻¹G))^{i−k} / (i − k)!

= e^{−λ|B|} ((λG)^k / k!) e^{λ|B|(1−|B|⁻¹G)}

= ((λG)^k / k!) e^{−λG} .
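The thinning result can be checked by simulation. A minimal sketch (assuming numpy): take Φ a uniform PPP of intensity λ on the unit square B = [0, 1]² and retention probability g(x) = x₁ (the first coordinate), so that G = ∫_B g(x) dx = 1/2 and the thinned count should be Poisson with mean λG = λ/2, i.e., mean and variance both equal to λ/2.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, trials = 10.0, 20000

counts = np.empty(trials, dtype=int)
for t in range(trials):
    n = rng.poisson(lam)                 # Phi([0,1]^2) ~ Poisson(lam)
    x = rng.random((n, 2))               # given the count, points are iid uniform
    keep = rng.random(n) < x[:, 0]       # retain x with probability g(x) = x_1
    counts[t] = keep.sum()

G = 0.5                                  # integral of g over the unit square
print(counts.mean(), counts.var())       # both should be close to lam*G = 5
```

That the empirical mean and variance agree is the Poisson signature of the thinned process.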


2.7 Distances

Definition 2.7 (Contact distance) The contact distance dist(u, Φ), u ∈ Rd, Φ ⊂ Rd, is the distance from the point u to the nearest point of Φ:

dist(u, Φ) ≜ min{‖x − u‖ : x ∈ Φ} , u ∈ Rd .

Example 2.3 Let Φ be a homogeneous PPP of intensity λ. Then P(dist(u, Φ) ≤ r) = P(N(b(u, r)) > 0) = 1 − e^{−λc_d r^d}, where c_d = |b(o, 1)| is the volume of the d-dimensional unit ball. So V = c_d · dist(u, Φ)^d is the volume of the largest ball centered at u that fits before we hit a point of Φ. We have P(V ≤ v) = 1 − e^{−λv}, i.e., V is exponential with mean 1/λ.
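A Monte Carlo sanity check of this cdf (a sketch, assuming numpy): simulate the PPP on a disk large enough that the nearest point essentially never lies outside it, and compare the empirical contact distribution at r = 1 with 1 − e^{−λπr²} for d = 2.

```python
import numpy as np

rng = np.random.default_rng(2)
lam, trials = 1.0, 20000
dists = np.empty(trials)
for t in range(trials):
    n = rng.poisson(lam * np.pi * 25)    # PPP restricted to b(o, 5)
    r = 5 * np.sqrt(rng.random(n))       # radii of iid uniform points on the disk
    dists[t] = r.min() if n > 0 else np.inf

r0 = 1.0
emp = np.mean(dists <= r0)
print(emp, 1 - np.exp(-lam * np.pi * r0**2))
```

The truncation to b(o, 5) is an assumption; the probability that the contact distance exceeds 5 is e^{−25πλ} and thus negligible here.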

Definition 2.8 (Contact distribution function or empty space function) The contact distribution function or empty space function F is the cdf of dist(u, Φ):

F_u(r) ≜ P(dist(u, Φ) ≤ r) = P(N(b(u, r)) > 0) = T(b(u, r)) .

Remarks. If Φ is stationary, F_u does not depend on u. F_u(r) = T(b(u, r)) is the special case of the capacity functional where balls are used as its argument.

Definition 2.9 (Nearest-neighbor distance) The nearest-neighbor distance distNN(x, Φ), x ∈ Φ, is the distance from a point of Φ to its nearest neighbor:

distNN(x, Φ) ≜ min{‖x − y‖ : y ∈ Φ \ {x}} , x ∈ Φ .

The corresponding distribution function is the nearest-neighbor distance distribution function.

For the PPP, the contact distribution function and the nearest-neighbor distance distribution function are identical in the following sense:

dist(u, Φ) =_d distNN(x, Φ | x ∈ Φ)   (equality in distribution).

This follows from the independence property of the PPP: conditioning on the PPP having a point at x does not affect the distribution of the other points. A rigorous discussion of conditioning on such events of probability 0 follows in Chapter 5.

2.8 Marked point processes

In a marked point process, a mark m_i ∈ M, where M is the mark space, is added to each point x_i.

Example 2.4 (Marked point processes)

• In a forest, m may denote the diameter of the trees.

• In a sensor network, m may denote the battery level or transmission power, etc.


• In ALOHA, m_i ∈ {0, 1} could be the transmit or receive state of the node.

Definition 2.10 (Marked point process) A marked point process in Rd with marks in the space M is a PP Φ on Rd × M such that Φ(K × M) < ∞ for all compact K ⊂ Rd. So Φ = {(x_i, m_i)} ⊂ Rd × M.

Example 2.5 A PPP on R³ cannot be viewed as a marked PP on R² × R (with points in R² and the third coordinate as the mark). The reason is the following: in a general marked PP, the mark space need not be compact; it can be all of M, and the count Φ(K × M) must still be finite for compact K. If we use the third dimension as the mark space, then a compact set K ⊂ R² contains an infinite number of marked points, i.e., Φ(K × R) = ∞ a.s.

Example 2.6 On the other hand, a PPP on R² × [0, a] of intensity λ can be interpreted as a marked point process on R² with marks from M = [0, a]. The projected point process on R² has intensity λa, since Λ(B × [0, a]) = λa|B| for B ⊂ R². The marks attached to each point are uniformly distributed on [0, a].

The mark space can be R or Z, or a subset thereof, but it can also be more complicated, such as Rⁿ or a function space. For example, the fading states from a node to all other nodes in the process may be attached as a mark.

It is always possible to interpret a marked PP as an ordinary PP on the product space Rd × M. So why do we need marked PPs? The reason is that Euclidean motions affect the positions of the points but leave the marks unchanged. Thus it is often more intuitive to deal with marks than with PPs on product spaces.

Theorem 2.7 (Marking theorem for Poisson point processes) Let Φ̂ = {(x_i, m_i)} be a marked point process on Rd × M, and let Φ be the projected point process (without marks). Then the following two statements are equivalent:

1. Φ is a Poisson process on Rd with intensity measure Λ, and, given Φ, the marks m_i are iid with distribution Q on M.

2. Φ̂ is a Poisson process on Rd × M with intensity measure Λ ⊗ Q.

Proof: See [2, Sec. 5.2].


Chapter 3

Sums over Point Processes

3.1 Notation

Let Φ = {x₁, x₂, . . .} = {x_i} be a point process. A sum of f over Φ can be written in several equivalent ways:

Σ_{x∈Φ} f(x) = ∫_{Rd} f(x) Φ(dx) = ∫_{Rd} f(x) p(x) dx ,

where

p(x) = Σ_{y∈Φ} δ(x − y)

and δ(x) is the Dirac delta function. The mean value of the sum can alternatively be written as

E[Σ_{x∈Φ} f(x)] = ∫_N Σ_{x∈ϕ} f(x) P(dϕ) = ∫_N ∫_{Rd} f(x) ϕ(dx) P(dϕ) ,

where N is the space of counting measures and P is the distribution of Φ.

The different ways of writing these expressions reflect the variety of approaches to the theory.

Example 3.1 (Number of points) The number of points in B is written as

Φ(B) = Σ_{x∈Φ} 1_B(x) = ∫_B Φ(dx) .

Its mean value can be written in the following ways:

EΦ(B) = E[Σ_{x∈Φ} 1_B(x)]

= ∫_N ϕ(B) P(dϕ)

= ∫_N Σ_{x∈ϕ} 1_B(x) P(dϕ)

= ∫_N ∫_{Rd} 1_B(x) ϕ(dx) P(dϕ)

= ∫_N ∫_B ϕ(dx) P(dϕ) .


3.2 Campbell’s Theorem for the Mean

Theorem 3.1 (Campbell’s formula for the mean) Let Φ be a PP on Rd and let f : Rd → R be a measurable function. Then the random sum

S = Σ_{x∈Φ} f(x)

is a random variable with mean

ES = ∫_{Rd} f(x) Λ(dx) ,

provided the RHS is finite. If Φ is a PP on Rd with intensity function λ,

ES = ∫_{Rd} f(x) λ(x) dx .

This formula also applies to non-simple PPs.

Proof: We first prove that the theorem holds for simple f, i.e., if f is a step function

f = Σ_{i=1}^m c_i 1_{B_i}

for compact B_i ⊂ Rd and c_i ∈ R. We have

S = Σ_{x∈Φ} f(x) = Σ_{x∈Φ} Σ_{i=1}^m c_i 1_{B_i}(x) = Σ_{i=1}^m c_i Φ(B_i)

and thus

ES = E[Σ_{i=1}^m c_i Φ(B_i)] = Σ_{i=1}^m c_i EΦ(B_i) = Σ_{i=1}^m c_i Λ(B_i) = ∫_{Rd} f(x) Λ(dx) .

The result for general f follows by monotone approximation. Written differently, Campbell’s formula says

∫_N ∫_{Rd} f(x) ϕ(dx) P(dϕ) = ∫_{Rd} f(x) Λ(dx) .

The formula is reminiscent of the expectation of a numerical random variable X with pdf p_X:

Ef(X) = ∫ f(x) p_X(x) dx .

Focusing on a PP on a finite domain W with a fixed number of points n and averaging the contribution per point in the sum, we have

(1/n) E[Σ_{x∈Φ} f(x)] = ∫_W f(x) (λ(x)/n) dx .

In the case of a BPP, λ(x)/n is indeed the pdf of the node distribution, which shows that there is an analogy between the mean contribution of a point in a sum over a PP and the expectation of a numerical random variable.


Corollary 3.2 (Campbell’s formula for stationary point processes) If Φ ⊂ Rd is stationary with intensity λ, the sum S = Σ_{x∈Φ} f(x) is a random variable with mean

ES = λ ∫_{Rd} f(x) dx .
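Campbell's formula for the mean is easy to verify numerically. A sketch (assuming numpy) with f(x) = exp(−‖x‖²) on R², for which λ∫f = λπ; the PPP is truncated to a disk of radius 5, where the neglected mass of f is negligible.

```python
import numpy as np

rng = np.random.default_rng(3)
lam, R, trials = 2.0, 5.0, 20000
tot = 0.0
for _ in range(trials):
    n = rng.poisson(lam * np.pi * R**2)   # points in b(o, R)
    r = R * np.sqrt(rng.random(n))        # f is radially symmetric, radii suffice
    tot += np.exp(-r**2).sum()            # S = sum of f over the realization

emp = tot / trials
print(emp, lam * np.pi)                   # Campbell: E S = lam * ∫ f = 2*pi
```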

3.3 The Probability Generating Functional

Definition 3.1 (Probability generating functional of a point process) Let v : Rd → [0, 1] be a measurable function. The generating functional of the point process Φ is

G[v] ≜ E[Π_{x∈Φ} v(x)] = ∫_N Π_{x∈ϕ} v(x) P(dϕ) .

G[v] can also be written as

G[v] = E exp(Σ_{x∈Φ} log v(x)) = E exp(∫_{Rd} log v(x) Φ(dx)) .

Letting f(x) ≜ log v(x) and S = Σ_{x∈Φ} f(x), the product G[v] can, as a function of the sum S, be viewed as the moment-generating function of S,

M_S(t) ≜ E(e^{tS}) ,

evaluated at t = 1, or its Laplace transform

L_S(s) ≜ E(e^{−sS}) ,

evaluated at s = −1, i.e., G[v] = M_S(1) = L_S(−1). In the Poisson case, there is a simple formula for these expressions, given by another theorem by Campbell.

3.3.1 The moment-generating function of the sum

Theorem 3.3 (Campbell’s theorem for Poisson point processes) Let Φ be a uniform PPP of intensity λ on Rd and f : Rd → R be measurable. Then the sum

S = Σ_{x∈Φ} f(x)

is absolutely convergent with probability 1 iff

∫_{Rd} min(|f(x)|, 1) dx < ∞ . (3.3.1)

If it is, then

E(e^{tS}) = exp(λ ∫_{Rd} (e^{tf(x)} − 1) dx)

for any complex t for which the integral converges, and, in particular, when t is purely imaginary.


Proof: Consider (again) a simple function (step function) f that assumes finitely many non-zero values f₁, . . . , f_k and is zero outside some bounded region. Let

A_j = {x : f(x) = f_j} , j = 1, . . . , k .

Since the A_j are disjoint, the RVs N_j = Φ(A_j) are independent Poisson with means λ|A_j|. We also have

S = Σ_{j=1}^k f_j N_j .

We know that for a Poisson RV X with mean µ,

E(e^{tX}) = exp(µ(e^t − 1)) .

So the moment-generating function is

E(e^{tS}) = Π_{j=1}^k E(e^{t f_j N_j})

= Π_{j=1}^k exp(λ|A_j|(e^{t f_j} − 1))

= exp(Σ_{j=1}^k ∫_{A_j} λ(e^{tf(x)} − 1) dx)

= exp(∫_{Rd} λ(e^{tf(x)} − 1) dx) .

This Campbell theorem also holds for non-stationary processes. In this case, the convergence condition for the sum is

∫_{Rd} min(|f(x)|, 1) Λ(dx) < ∞ ,

and the result is

E(e^{tS}) = exp(∫_{Rd} (e^{tf(x)} − 1) Λ(dx)) .

Since the moment-generating function defines the moments, we have the following corollary:

Corollary 3.4 (Mean and variance for PPPs) Let S = Σ_{x∈Φ} f(x) for a Poisson point process Φ on Rd. Then

ES = ∫_{Rd} f(x) Λ(dx)

and

var(S) = ∫_{Rd} f²(x) Λ(dx) .


Proof: Using the properties of the moment-generating function,

ES = ∂/∂t E exp(tS) |_{t=0}

and

var S = ∂²/∂t² E exp(tS) |_{t=0} − (ES)² .

The expression for the mean ES was obtained for general processes before, but the formula for the variance is new. It only holds for PPPs.

3.3.2 The characteristic or Laplace functional for the Poisson point process

Specializing Campbell’s expression for the moment-generating function to the case f ≥ 0 (to ensure convergence) and t = −1, we obtain

E(e^{−S[f]}) = exp(−∫_{Rd} (1 − e^{−f(x)}) Λ(dx)) ,

where S[f] is the sum of the function f over the point process. This is a functional since the argument is the function f. It is called the characteristic functional or the Laplace functional, sometimes denoted by L[f]. It is characteristic in the sense that if it holds for all step functions f, the process is Poisson.

3.3.3 The probability generating functional for the Poisson point process

Theorem 3.5 (Probability generating functional for the Poisson point process) Let v : Rd → [0, 1] be measurable and Φ a Poisson process with intensity measure Λ. Then

G[v] ≜ E[Π_{x∈Φ} v(x)] = exp(−∫_{Rd} [1 − v(x)] Λ(dx)) .

Proof: This follows from the characteristic functional by setting v(x) = e^{−f(x)}. An alternative, direct proof is as follows: Consider

v(x) = 1 − Σ_{i=1}^n (1 − z_i) 1_{C_i}(x)

for z_i ∈ [0, 1] and C₁, . . . , C_n pairwise disjoint and compact. Then

G[v] = E[Π_{x∈Φ} v(x)] = ∫_N z₁^{ϕ(C₁)} · . . . · z_n^{ϕ(C_n)} P(dϕ)

= E[z₁^{Φ(C₁)} · . . . · z_n^{Φ(C_n)}]

= E[z₁^{Φ(C₁)}] · . . . · E[z_n^{Φ(C_n)}] .

With E[z₁^{Φ(C₁)}] = exp(−Λ(C₁)(1 − z₁)) by the moment-generating (or probability-generating) function of the Poisson distribution,

G[v] = exp(−Λ(C₁)(1 − z₁)) · . . . · exp(−Λ(C_n)(1 − z_n))

= exp(−Σ_{i=1}^n ∫_{C_i} (1 − z_i) Λ(dx))

= exp(−∫_{Rd} (1 − v(x)) Λ(dx)) .

This holds for piecewise constant v. The result for general v follows again by standard arguments from measure theory.

Example 3.2 (Void probability) Consider a uniform PPP of intensity λ. Let v(x) = 1 − 1_B(x). Then the event {Φ(B) = 0} can be expressed as

Π_{x∈Φ} v(x) = 1 .

It occurs with probability

P(Φ(B) = 0) = E[Π_{x∈Φ} v(x)] = G[v] = exp(−λ|B|) ,

as expected.
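The pgfl can also be checked numerically for a v that is not an indicator. A sketch (assuming numpy): with v(x) = 1/2 on a set B of unit volume and v = 1 elsewhere, the product over Φ reduces to (1/2)^{Φ(B)}, and the pgfl predicts G[v] = exp(−λ|B|/2).

```python
import numpy as np

rng = np.random.default_rng(4)
lam, trials = 2.0, 20000
# v = 1/2 on B (|B| = 1), v = 1 elsewhere, so prod_{x in Phi} v(x) = (1/2)^{Phi(B)}
n = rng.poisson(lam, size=trials)        # Phi(B) ~ Poisson(lam)
emp = np.mean(0.5 ** n)
print(emp, np.exp(-lam * 0.5))           # pgfl: G[v] = exp(-lam*|B|/2) = e^{-1}
```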

3.3.4 Relationship between moment-generating function and the functionals

Here, the relationships between the different expectations and functionals in this section are explained. For notational simplicity, when discussing PPPs, we focus on the homogeneous case.

1. LetS[f ] =

x∈Φ

f(x) ; P [v] =

x∈Φ

v(x)

be the sum of f(x) over a point process Φ and P be the product of v(x) over Φ, respectively. ThenS[f ] and P [v] are related as

P [v] = eS[log v] ; S[f ] = log P [ef ] .

The same holds for their mean values, e.g.,

EP [v] = E(eS[log v]) .

2. The expected value E(eS[f ]) is the moment-generating function of S for t = 1 or the Laplace trans-form of S for s = −1:

E(eS[f ]) = MS(1) = LS(−1)

The moment-generating function for PPPs is given by Campbell’s theorem.


3. The expected product EP[v] is called the probability generating functional (pgfl) of the point process:

G[v] ≡ EP[v] .

4. Campbell’s theorem says that for PPPs,

E(e^{S[f]}) = exp(λ ∫_{Rd} (e^{f(x)} − 1) dx)

if the integral converges. This is the mgf at t = 1.

5. If the integral diverges to −∞, the sum S = −∞ a.s., and the mgf is 0. So the mgf is well defined unless the integral diverges to +∞. A sufficient condition to avoid this is to focus on f ≤ 0.

6. Equivalently, focus on f ≥ 0 but consider the mgf at t = −1, i.e., use

E(e^{S[f]}) ≡ E(e^{−S[−f]}) .

This way, we obtain the Laplace functional or characteristic functional, defined as L[f] ≜ E(e^{−S[f]}) for f ≥ 0. For the PPP,

L[f] = E(e^{−S[f]}) = exp(−λ ∫_{Rd} (1 − e^{−f(x)}) dx) , f ≥ 0 .

7. The Laplace functional is related to the pgfl by setting v = e^{−f}, i.e.,

L[f] ≡ G[e^{−f}] .

This yields

G[v] = EP[v] = exp(−λ ∫_{Rd} (1 − v(x)) dx) ,

where v : Rd → [0, 1].

3.4 Applications

3.4.1 Mean interference in stationary point process

Take a stationary point process Φ ⊂ Rd of intensity λ. Assuming that each node (point) transmits at unit power and that the path loss law is ℓ(x) = ‖x‖^{−α} for a path loss exponent α > 0, we want to find

EI = E[Σ_{x∈Φ} ‖x‖^{−α}] .

By Campbell’s formula for the mean,

EI = λ ∫_{Rd} ‖x‖^{−α} dx .


In two dimensions,

λ ∫_{R²} ‖x‖^{−α} dx = λ ∫_0^{2π} ∫_0^∞ r^{−α} r dr dβ = 2πλ [r^{2−α}/(2 − α)]_0^∞ , α ≠ 2 .

For α = 2,

λ ∫_{R²} ‖x‖^{−2} dx = 2πλ [log r]_0^∞ .

For this to be finite, we would need 2 − α < 0 due to the upper integration bound and 2 − α > 0 due to the lower, and for α = 2 the integral does not converge either. So the mean interference for the power path loss law is infinite in two dimensions for all values of the path loss exponent.

How about in three dimensions? We find

EI = λ ∫_{R³} ‖x‖^{−α} dx = 4πλ [r^{3−α}/(3 − α)]_0^∞ .

Again, the same problem. For larger numbers of dimensions d, these integrals may get tedious. We can use a mapping first, as follows:

1. Map to one dimension using f(x) = ‖x‖. The new point process Φ′ has intensity measure Λ([0, r)) = λc_d r^d and intensity function λ(r) = λc_d d r^{d−1}.

2. Apply Campbell’s theorem:

EI = E[Σ_{r∈Φ′} r^{−α}] = ∫_0^∞ r^{−α} λ(r) dr = λc_d d [r^{d−α}/(d − α)]_0^∞ , α ≠ d .

So for all d, there is no α for which the mean exists. If the upper bound is the culprit (α ≤ d), the interferers far away contribute most of the interference; choosing α > d solves this problem. The problem at r = 0 is due to the singularity of the path loss law near 0. This is clearly a modeling artefact, since no receiver ever gets more power than was transmitted. If the path loss function is replaced by a more accurate one, for example

g(x) = min{1, ‖x‖^{−α}} or g(x) = (1 + ‖x‖)^{−α} ,

the issue at r → 0 is solved, and the mean interference is finite as soon as α > d.

Comparing the interference sum with condition (3.3.1), we notice that for f(x) = ‖x‖^{−α} the condition is not violated, since the integrand is min{|f(x)|, 1}. So the interference is an absolutely convergent sum with a proper distribution, even though its mean may not be finite. If the condition is violated, then I has no proper distribution, since I = ∞ a.s.

Example 3.3 (Mean interference with bounded path loss) Let the path loss law be g(x) = min{1, ‖x‖^{−α}}. Then the mean interference in a PPP of intensity λ on the plane is

EI = λπ + 2λπ/(α − 2) , α > 2 .


If the diameter of the network is bounded by R > 1,

EI = λπ + (2λπ/(α − 2))(1 − R^{2−α}) , ∀α > 0 (α ≠ 2),

which is finite for all path loss exponents.

Remarks.

• Why can we not use ‖x‖^{−α} directly as the mapping function? Check the intensity measure: in this case Λ([0, 1]) = ∞, since the preimage of [0, 1] is the complement of the unit ball. This violates the finiteness of the intensity measure on compact sets required by the mapping theorem.

• The mean is identical for all point processes with the same intensity. Also, due to stationarity, the mean interference is the same no matter where on Rd we measure it.

3.4.2 Variance of the interference in stationary Poisson point process

For a general stationary point process, we are not in a position to find the variance. But in the Poisson case, we can apply Campbell’s theorem. Let g(x) = min{r₀^{−α}, ‖x‖^{−α}} for r₀ > 0. For a homogeneous PPP on Rd,

var I = λ ∫_{Rd} g²(x) dx = λc_d r₀^{d−2α} + λc_d d [r^{d−2α}/(d − 2α)]_{r₀}^∞ = λc_d r₀^{d−2α} · 2α/(2α − d)

if 2α > d (which is usually the case). For a finite variance as r₀ → 0 (and a finite upper integration bound), we would need 2α < d, i.e., α < d/2, which would be highly unusual.

3.4.3 Interference from nearest transmitters

Let I₁ be the interference from the nearest transmitter, i.e., I₁ = R^{−α} with R the contact distance. We find

P(I₁ ≤ x) = P(R^{−α} ≤ x) = P(R ≥ x^{−1/α}) = exp(−λc_d x^{−δ}) ,

where δ = d/α. We get

EI₁ = (λc_d)^{1/δ} Γ(1 − 1/δ) .

If δ < 1, this does not exist. So with the singular path loss law, the mean interference from even a single interferer is already infinite. The tail P(I₁ > x) of this distribution is

P(I₁ > x) ∼ λc_d x^{−δ} , x → ∞ .

If δ < 1, ∫ P(I₁ > x) dx diverges and thus the mean does not exist: the distribution is heavy-tailed. Generally, E(I₁^p) exists for p < δ. Analogously, we have for the pdf

f_{I₁}(x) ∼ λc_d δ x^{−δ−1} , x → ∞ .
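This distribution is easy to sample exactly, since in two dimensions R² is exponential. A sketch (assuming numpy; d = 2, α = 4) comparing the empirical ccdf of I₁ with the exact expression and with the tail approximation λc_d x^{−δ}:

```python
import numpy as np

rng = np.random.default_rng(5)
lam, alpha, trials = 1.0, 4.0, 200000
delta = 2.0 / alpha

# contact distance in 2D: P(R > r) = exp(-lam*pi*r^2), so R^2 ~ Exp(mean 1/(lam*pi))
R = np.sqrt(rng.exponential(1.0 / (lam * np.pi), size=trials))
I1 = R ** -alpha                          # interference from the nearest transmitter

x = 1000.0
emp = np.mean(I1 > x)
exact = 1 - np.exp(-lam * np.pi * x ** -delta)
tail = lam * np.pi * x ** -delta
print(emp, exact, tail)
```

Even at x = 1000 the tail approximation is only within a few percent of the exact ccdf, which illustrates how slowly the asymptote is reached for heavy-tailed distributions.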

Now, let I_n denote the interference from the n-th nearest interferer. What is the distribution of I_n for general n?

The ccdf of the distance R_n to the n-th nearest neighbor is

P(R_n > r) = Γ_ic(n, λc_d r^d)/Γ(n) ,

where Γ_ic denotes the upper incomplete gamma function.


So for n = 2,

P(I₂ < x) = exp(−λc_d x^{−δ})(1 + λc_d x^{−δ})

and

P(I₂ > x) ∼ (1/2)(λc_d)² x^{−2δ} .

So we need 2δ > 1 for EI₂ to exist. For general n:

P(I_n < x) = exp(−λc_d x^{−δ}) Σ_{i=0}^{n−1} (λc_d x^{−δ})^i / i! .

For the tail probability we need to sum from n to ∞, so the dominant term as x → ∞ will be the one for i = n. Therefore

P(I_n > x) ∼ (1/n!)(λc_d)^n x^{−nδ} .

This means that E(I_n^p) exists for p < nδ.

So for d = 2 (two dimensions), we need to cancel k > α interferers to have a finite second moment. Although we can find the distribution of I_n for all n, it is difficult to obtain the distribution of the total interference this way, since the I_n are neither independent nor identically distributed. We proceed differently.

3.4.4 Interference distribution without fading

In this subsection we focus on the case of two-dimensional networks and assume there is no fading, i.e., I = Σ_{x∈Φ} g(x). Since g is assumed isotropic, we use ℓ : R⁺ → R⁺, defined by ℓ(‖x‖) ≡ g(x). It is assumed that ℓ is strictly monotonically decreasing (hence invertible) and that lim_{r→∞} ℓ(r) = 0.

Our goal is to find the characteristic function of the interference and from there, if possible, the distribution. We follow a basic yet powerful technique as it was used, for example, in [3]. It consists of two steps:

1. Consider first a finite network, say on a disk of radius a centered at the origin, and condition on having a fixed number of nodes in this finite area. The nodes’ locations are then iid.

2. Then de-condition on the (Poisson) number of nodes and let the disk radius go to infinity.

Step 1. Consider the interference from the nodes located within distance a of the origin:

I_a = Σ_{x∈Φ∩b(o,a)} ℓ(‖x‖) . (3.4.1)

In the limit a → ∞, I_a → I. Let F_{I_a} be the characteristic function (Fourier transform) of I_a, i.e.,

F_{I_a}(ω) ≜ E(e^{jωI_a}) . (3.4.2)

Conditioning on having k nodes in the disk of radius a,

F_{I_a}(ω) = E[E(e^{jωI_a} | Φ(b(o, a)) = k)] . (3.4.3)


Given that there are k points in b(o, a), these points are iid uniformly distributed on the disk with radial density

f_R(r) = 2r/a² for 0 ≤ r ≤ a, and 0 otherwise, (3.4.4)

and the conditional characteristic function is the product of the k individual characteristic functions:

E(e^{jωI_a} | Φ(b(o, a)) = k) = (∫_0^a (2r/a²) e^{jωℓ(r)} dr)^k . (3.4.5)

Step 2. The probability of finding k nodes in b(o, a) is given by the Poisson distribution, hence

F_{I_a}(ω) = Σ_{k=0}^∞ e^{−λπa²} ((λπa²)^k / k!) E(e^{jωI_a} | Φ(b(o, a)) = k) . (3.4.6)

Inserting (3.4.5), summing over k, and interpreting the sum as the Taylor expansion of the exponential function, we obtain

F_{I_a}(ω) = exp(λπa² (−1 + ∫_0^a (2r/a²) e^{jωℓ(r)} dr)) . (3.4.7)

Integration by parts, substituting r → ℓ⁻¹(x), where ℓ⁻¹ is the inverse of ℓ, and letting a → ∞ yields

lim_{a→∞} a² (−1 + ∫_0^a (2r/a²) e^{jωℓ(r)} dr) = ∫_0^∞ (ℓ⁻¹(x))² jω e^{jωx} dx ,

so that

F_I(ω) = exp(jλπω ∫_0^∞ (ℓ⁻¹(x))² e^{jωx} dx) . (3.4.8)

To get more concrete results, we need to specify the path loss law. For the standard power law ℓ(r) = r^{−α}, we obtain

F_I(ω) = exp(jλπω ∫_0^∞ x^{−2/α} e^{jωx} dx) . (3.4.9)

For α ≤ 2, the integral diverges, indicating that the interference is infinite almost surely. For α > 2,

F_I(ω) = exp(−λπΓ(1 − 2/α) ω^{2/α} e^{−jπ/α}) , ω ≥ 0 . (3.4.10)

The values for negative ω are determined by the symmetry condition F_I*(−ω) = F_I(ω). For α = 4,

F_I(ω) = exp(−λπ^{3/2} e^{−jπ/4} √ω) . (3.4.11)

This case is of particular interest, since it is the only one where a closed-form expression for the density exists:

f_I(x) = (πλ / (2x^{3/2})) exp(−π³λ²/(4x)) . (3.4.12)

This is the so-called Lévy distribution, which can also be viewed as an inverse gamma distribution, or as the inverse Gaussian distribution with infinite mean. For other values of α, the densities may be expressed in an infinite series [3, Eqn. (22)].

The characteristic function (3.4.10) indicates that the interference distribution is a stable distribution with characteristic exponent 2/α < 1, drift 0, skew parameter β = 1, and dispersion λπΓ(1 − 2/α) cos(π/α). Details on stable distributions are available in [4]. The corresponding Laplace transform is

L_I(s) = exp(−λπΓ(1 − 2/α) s^{2/α}) . (3.4.13)

Stable distributions with characteristic exponents less than one do not have any finite moments. In particular, the mean interference diverges, which is due to the singularity of the path loss law at the origin. This also follows immediately from the fact that E(I) = −(d/ds) log L_I(s)|_{s=0} = lim_{s→0} c s^{2/α−1} = ∞.

The method of conditioning on a fixed number of nodes, using the iid property of the node locations, and de-conditioning with respect to the Poisson distribution is applicable to many other problems.
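The density (3.4.12) can be cross-checked against the standard Lévy distribution, whose scale parameter here works out to c = π³λ²/2 and whose cdf is erfc(√(c/(2x))). A standard-library sketch integrating the pdf numerically:

```python
import math

lam = 0.5
c = math.pi ** 3 * lam ** 2 / 2        # Levy scale parameter matching (3.4.12)

def pdf(x):
    # f_I(x) from (3.4.12) with alpha = 4
    return math.pi * lam / (2 * x ** 1.5) * math.exp(-math.pi ** 3 * lam ** 2 / (4 * x))

def cdf(x):
    # standard Levy cdf with scale c
    return math.erfc(math.sqrt(c / (2 * x)))

x0, steps = 50.0, 200000
h = x0 / steps
num = sum(pdf((i + 0.5) * h) * h for i in range(steps))   # midpoint rule on [0, x0]
print(num, cdf(x0))
```

The two numbers agreeing confirms that (3.4.12) is a proper (sub-unit-mass up to x₀) density of Lévy form.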

3.4.5 Interference distribution with fading

Here we consider the sum

I = Σ_{x∈Φ} h_x g(x) ,

where the h_x are iid. In fact, the random variables h_x can be viewed as the marks in a marked point process.

Let

L(s) = E[e^{−sI}] = E[Π_{x∈Φ} e^{−s h_x g(x)}]

be the Laplace transform of the interference. Since the fading is iid,

L(s) = E_Φ[Π_{x∈Φ} E_h(e^{−shg(x)})] .

Mapping the PPP to one dimension, we know that λ(r) = λc_d d r^{d−1}. Again let ℓ(‖x‖) ≡ g(x). Now we use the pgfl with v = E_h(e^{−shℓ(r)}) to obtain for the one-dimensional PPP

L(s) = exp(−∫_0^∞ E_h[1 − e^{−shℓ(r)}] λ(r) dr) .

Conditioned on h, we have

∫_0^∞ (1 − e^{−shℓ(r)}) λ(r) dr = λc_d ∫_0^∞ (1 − e^{−shr^{−α}}) d r^{d−1} dr

= λc_d ∫_0^∞ (1 − e^{−sh/y}) δ y^{δ−1} dy (subst. y = r^α)

= λc_d ∫_0^∞ (1 − e^{−shx}) δ x^{−δ−1} dx (subst. x = y⁻¹)

= λc_d ∫_0^∞ x^{−δ} sh e^{−shx} dx (integration by parts)

= λc_d (hs)^δ Γ(1 − δ) , 0 < δ < 1 .


Taking the expectation over h, we obtain

L(s) = exp(−λc_d E[h^δ] Γ(1 − δ) s^δ) .

So the interference has a stable distribution with characteristic exponent δ and dispersion λc_d E[h^δ] Γ(1 − δ).

In the Rayleigh fading case (exponential h), E[h^δ] = Γ(1 + δ), and

L(s) = exp(−λc_d Γ(1 + δ)Γ(1 − δ) s^δ) = exp(−λc_d s^δ πδ/sin(πδ)) .

As δ → 1, sin(πδ) ≈ π(1 − δ), so in the limit we have

L(s) ≈ exp(−λc_d s^δ δ/(1 − δ)) .

As δ → 1, L(s) → 0 for all s > 0, so I = ∞ a.s.
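The identity Γ(1+δ)Γ(1−δ) = πδ/sin(πδ) used in the last step (Euler's reflection formula) is easy to sanity-check numerically:

```python
import math

for delta in (0.25, 0.5, 0.75):
    lhs = math.gamma(1 + delta) * math.gamma(1 - delta)
    rhs = math.pi * delta / math.sin(math.pi * delta)
    print(delta, lhs, rhs)   # lhs equals rhs up to floating-point error
```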

3.4.6 Outage in Poisson networks with Rayleigh fading

With Rayleigh fading, the desired signal strength S at the receiver is exponential with mean r^{−α}. Let r = 1 first. What is the success probability P(SIR > θ) in a Poisson field of interferers of intensity λ that are themselves subject to Rayleigh fading?

This is exactly the Laplace transform:

p_s ≜ P(SIR > θ) = P(S > Iθ) = E_I(exp(−θI)) = exp(−c_d λ θ^δ Γ(1 + δ)Γ(1 − δ)) .

If the desired link has distance r,

p_s(r) = E_I(exp(−θr^α I)) = exp(−c_d λ θ^δ r^d Γ(1 + δ)Γ(1 − δ)) .

So, in two dimensions, the success probability decays only with r², although the path loss is r^α. The success probability is the ccdf of the SIR, so we have a closed-form expression for the SIR distribution in Poisson networks with Rayleigh fading! The pdf is

f_SIR(x) = cδ x^{δ−1} e^{−cx^δ} ,

where c = c_d λ r^d Γ(1 + δ)Γ(1 − δ). This is a Weibull distribution. Its mean is

E(SIR) = Γ(1/δ)/(δ c^{1/δ}) = (1/r^α) Γ(1 + 1/δ)/(λC(α))^{1/δ} , C(α) = c_d Γ(1 + δ)Γ(1 − δ) .

While the success probability is a function of r^d, the mean SIR is proportional to r^{−α}, as expected, since the received signal power S decays as r^{−α}, while the interference does not depend on r.

The outage expression is also valid when the interferers are not subject to Rayleigh fading. As long as the desired signal is exponentially distributed, we have

p_s = E(exp(−θr^α I)) = exp(−c_d λ θ^δ r^d E(h^δ)Γ(1 − δ)) ,

where the interferers’ fading distribution F_h can be arbitrary (as long as it has a finite δ-th moment).
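The closed-form success probability can be validated by simulation. A sketch (assuming numpy; d = 2, α = 4, so δ = 1/2 and c₂Γ(1+δ)Γ(1−δ) = π²/2), where the interferers are truncated to a disk of radius 30, an assumption that introduces only a tiny bias. Averaging exp(−θr^αI) integrates out the exponential signal power, which reduces the Monte Carlo variance compared to simulating S directly.

```python
import numpy as np

rng = np.random.default_rng(6)
lam, alpha, r, theta = 0.1, 4.0, 1.0, 1.0
delta = 2.0 / alpha
A, trials = 30.0, 20000                       # network radius, Monte Carlo runs

ps_emp = 0.0
for _ in range(trials):
    n = rng.poisson(lam * np.pi * A**2)       # interferer count in b(o, A)
    d = A * np.sqrt(rng.random(n))            # distances of uniform points on disk
    h = rng.exponential(size=n)               # Rayleigh fading powers
    I = np.sum(h * d ** -alpha)
    ps_emp += np.exp(-theta * r**alpha * I)   # P(S > theta*I | I) for exponential S
ps_emp /= trials

C = np.pi * np.pi * delta / np.sin(np.pi * delta)   # c_2*Gamma(1+d)*Gamma(1-d)
ps_theory = np.exp(-lam * C * theta**delta * r**2)
print(ps_emp, ps_theory)
```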


3.5 Stable Distributions

3.5.1 Definition

Definition 3.2 (Stable distribution) A RV X is said to have a stable distribution if for all a, b > 0 there exist c > 0 and d such that

aX₁ + bX₂ =_d cX + d ,

where X₁, X₂ are iid with the same distribution as X.

Theorem 3.6 For any stable RV X, there is 0 < δ ≤ 2 such that the number c in the definition satisfies c^δ = a^δ + b^δ.

δ is the characteristic exponent. For Gaussian RVs, δ = 2, since

aX₁ + bX₂ ∼ N((a + b)µ, (a² + b²)σ²) ,

i.e., the definition holds with c = √(a² + b²) and d = (a + b − c)µ. The Laplace transform of a stable RV is

E(e^{−sX}) = exp(−(σ^δ/cos(πδ/2)) s^δ) if δ < 1 .

Sometimes κ ≜ σ^δ/cos(πδ/2) is called the dispersion.

3.5.2 LePage Series representation

Theorem 3.7 (Series representation) Let τ_i be the arrival times of a PPP of intensity λ and let h_i be iid RVs, independent of {τ_i}. If the infinite sum

Σ_{i=1}^∞ τ_i^{−1/δ} h_i

converges a.s., then it converges to a stable RV.

This is exactly what we need, since in our case the {r_i^d} (the d-th powers of the ordered distances) constitute a homogeneous one-dimensional PPP of intensity λc_d. So our sum is

Σ_{i=1}^∞ (r_i^d)^{−α/d} ,

and δ = d/α, as expected.

3.5.3 Shot noise

The sum

I(t) = Σ_{τ∈Φ} g(t − τ)

for Φ a PPP and g an impulse response is called Poisson shot noise. While shot noise has been studied since 1909 for one-dimensional processes, it is easily generalized to d dimensions. If g(x) = ‖x‖^{−a}, the shot noise is more specifically called power-law Poisson shot noise (with exponent a). So the interference I(x) in a Poisson field is a power-law Poisson shot noise process.


Chapter 4

Moment Measures of Point Processes

4.1 Introduction

Point process theory provides useful analogues to the mean, variance, and higher-order statistics of numerical random variables. Since point processes are random elements in the space of locally finite counting measures, the moments are replaced by moment measures. In this chapter, we define such moment measures and study applications.

4.2 The First-Order Moment Measure

The first moment of a point process is its intensity measure Λ(B) = EΦ(B), which we are already familiar with. It corresponds to the mean of a numerical random variable.

Example 4.1 (Stationary lattice) Let

Φ = {(U₁ + ms, U₂ + ns) : m, n ∈ Z} , U₁, U₂ ∼ U[0, s) .

We find

Λ(B) = EΦ(B) = (1/s²)|B| .

If Φ is stationary on Rd, then Λ(B) ≡ Λ(B + v) for all v ∈ Rd, indicating that the intensity measure is invariant under translations. But only multiples of the Lebesgue measure have this property, so:

Theorem 4.1 If ν is a translation-invariant measure on Rd, then ν(B) = c|B| for some c > 0.

Corollary 4.2 If Φ is a stationary PP on Rd, then its intensity measure Λ is a constant multiple of the Lebesgue measure. The constant is called the intensity of Φ.

If the intensity measure Λ satisfies

Λ(B) = ∫_B λ(x) dx

for some λ(x), we call λ the intensity function. It has the following interpretation: for a small region dx ⊂ Rd,

P(Φ(dx) > 0) ∼ EΦ(dx) = Λ(dx) = λ(x) dx .


4.2.1 Constant density vs. stationarity

We know that the density is constant for stationary point processes, i.e., EΦ(B) = c|B|. Does the converse hold, i.e., does EΦ(B) = c|B| imply stationarity?

No; we can construct counterexamples. One is a Poisson cluster process with parent intensity λ(r) = min{1, r⁻¹} and average number of nodes per cluster µ(r) = max{1, r}, so that their product is constant. Another is a mixed BPP/PPP: a BPP with one node on [0, 1]², combined with a PPP of intensity 1 on R² \ [0, 1]².

4.3 Second Moment Measures

(See also the Baddeley handout, p. 32ff.)The variance of the point counts Φ(B) is

var Φ(B) = E(Φ(B)²) − (EΦ(B))²

and the covariance is

cov(Φ(A),Φ(B)) = E(Φ(A)Φ(B))− EΦ(A)EΦ(B) .

Definition 4.1 (Second moment measure) Let Φ be a PP on Rd. Then Φ^(2) ≔ Φ × Φ is a point process on Rd × Rd consisting of all ordered pairs (x, x′) of points x, x′ ∈ Φ. The intensity measure µ^(2) of Φ × Φ is a measure on R^2d defined as

µ^(2)(A × B) = E(Φ(A)Φ(B)) .

If A = B, then µ^(2)(A²) = E(Φ(A)²), so µ^(2) contains all information about variances and covariances. So we can write

cov(Φ(A), Φ(B)) = µ^(2)(A × B) − Λ(A)Λ(B)

and

var(Φ(A)) = µ^(2)(A²) − (Λ(A))² .

Campbell’s formula for the mean also applies to Φ × Φ, hence

E[ Σ_{x∈Φ} Σ_{y∈Φ} f(x, y) ] = ∫_{Rd} ∫_{Rd} f(x, y) µ^(2)(dx, dy) .

Example 4.2 (Uniform PPP) For the uniform PPP of intensity λ in Rd, the second moment measure satisfies

µ^(2)(A × B) = λ²|A||B| + λ|A ∩ B| .

To see this, let’s write A, B as

A = (A ∩ B) ∪ (A \ B) ;  B = (A ∩ B) ∪ (B \ A) .



Figure 4.1: Illustration for the second moment measure. Here Φ ⊂ R⁺ is a PPP of intensity 1 on R⁺. Shown is the product PP Φ^(2) = Φ × Φ on (R⁺)². The points on the diagonal line are the points (x, x) for x ∈ Φ. The box is the rectangle C = A × B with A = [2, 6] and B = [4, 6]. For this realization, Φ^(2)(C) = 6. If A ∩ B = ∅, then no points on the diagonal lie in C. The expected number of points in C is µ^(2)(C) = 4 · 2 + 2 = 10.

Then

µ^(2)(A × B) = E(Φ(A)Φ(B))
 = E[(Φ(A ∩ B) + Φ(A \ B)) · (Φ(A ∩ B) + Φ(B \ A))]
 = E(Φ(A \ B))E(Φ(B \ A)) + E(Φ(A ∩ B))E(Φ(A \ B)) + E(Φ(A ∩ B))E(Φ(B \ A)) + E((Φ(A ∩ B))²)
 = E(Φ(A))E(Φ(B)) + E((Φ(A ∩ B))²) − (E(Φ(A ∩ B)))²
 = Λ(A)Λ(B) + var Φ(A ∩ B)
 = Λ(A)Λ(B) + Λ(A ∩ B)
 = λ²|A||B| + λ|A ∩ B| ,

where the second step uses the independence of the Poisson counts on disjoint sets, and var Φ(A ∩ B) = Λ(A ∩ B) since Φ(A ∩ B) is Poisson.

So there is a constant density λ² on all of Rd × Rd, plus a positive mass on the diagonal {(x, x) : x ∈ Rd}.

The fact that points inside A ∩ B are of special importance can also be explained as follows: consider what happens if a point is added. If the additional point falls in A \ B or B \ A, the change is only linear in |A| or |B|, respectively. If it falls into A ∩ B, however, the change is quadratic.

Fig. 4.1 shows a realization of the PP Φ^(2), where Φ is a PPP on R⁺. If A = B: µ^(2)(A²) = E(Φ(A)²), whereas (EΦ(A))² = λ²|A|². The difference is the mass on the diagonal, and this difference is exactly the variance. So

var(Φ(A)) = λ|A| .


We already know that—Φ(A) is Poisson after all.
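Both identities above — µ^(2)(A × B) = λ²|A||B| + λ|A ∩ B| and var Φ(A) = λ|A| — can be checked by Monte Carlo. The following sketch (assuming numpy; names are illustrative) simulates the 1-D PPP of Fig. 4.1 with A = [2, 6] and B = [4, 6]:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, T = 1.0, 10.0
A, B = (2.0, 6.0), (4.0, 6.0)          # the sets of Fig. 4.1

def counts(rng):
    """One PPP realization on [0, T]; returns (Φ(A), Φ(B))."""
    pts = rng.uniform(0.0, T, size=rng.poisson(lam * T))
    nA = int(np.sum((pts >= A[0]) & (pts < A[1])))
    nB = int(np.sum((pts >= B[0]) & (pts < B[1])))
    return nA, nB

samp = np.array([counts(rng) for _ in range(50000)])
mu2_hat = np.mean(samp[:, 0] * samp[:, 1])   # estimates µ^(2)(A×B) = λ²|A||B| + λ|A∩B| = 10
varA_hat = np.var(samp[:, 0])                # estimates var Φ(A) = λ|A| = 4
print(mu2_hat, varA_hat)
```

The extra λ|A ∩ B| = 2 in the mixed moment is exactly the diagonal mass discussed above.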

In many instances, it is needed or makes sense to remove the mass on the diagonal. This leads to the second factorial moment measure.

Definition 4.2 (Second factorial moment measure) The second factorial moment measure is the intensity measure of the process of all ordered pairs of distinct points of Φ:

α^(2)(A × B) = E(Φ(A)Φ(B)) − E(Φ(A ∩ B)) .

The set of points of this process can be written as

{(x, y) ∈ Φ × Φ : x ≠ y} .

Again from Campbell, we know that the second factorial moment measure satisfies

E[ Σ≠_{x,y∈Φ} f(x, y) ] = ∫_{Rd} ∫_{Rd} f(x, y) α^(2)(dx, dy) .

The notation Σ≠ indicates a multi-sum over a set where none of the elements of the set may be taken more than once. For example,

Σ_{m,n∈[5]} 1 = 25 , whereas Σ≠_{m,n∈[5]} 1 = 20 .

If A and B are disjoint, then µ^(2)(A × B) = α^(2)(A × B). Generally we have

µ^(2)(A × B) = α^(2)(A × B) + Λ(A ∩ B) .

The circles in Fig. 4.1 show the points of the product process of distinct pairs. We can express α^(2) as follows:

α^(2)(A × B) = E(#{(x, y) : x ∈ Φ ∩ A, y ∈ Φ ∩ B, x ≠ y})
 = E[ Σ≠_{x,y∈Φ} 1_A(x) 1_B(y) ] .

The difference between µ^(2)(A × B) and α^(2)(A × B) lies in the expectation of the sum

Σ_{x,y∈Φ : x=y} 1_A(x) 1_B(y) = Σ_{x∈Φ} 1_{A∩B}(x) .

The name “factorial” comes from the fact that

α^(2)(A × A) = E(Φ(A)²) − E(Φ(A)) = E[Φ(A)(Φ(A) − 1)] .

Also:

var Φ(A) = α^(2)(A × A) + Λ(A) − (Λ(A))² .


Example 4.3 (PPP) For a Poisson point process,

α^(2)(A × B) = Λ(A)Λ(B) .

In the uniform case,

α^(2)(A × B) = λ²|A||B| .

Usually there exists a density (with respect to the Lebesgue measure) of the second factorial moment measure, the second moment density, discussed in the next section. In contrast, the second moment measure µ^(2) does not have a density, due to its mass on the diagonal. This is another principal motivation for using α^(2).
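The relationship between µ^(2) and α^(2) — and the PPP value α^(2)(A × B) = λ²|A||B| — can be illustrated numerically by counting ordered pairs with and without the diagonal. A sketch (numpy assumed, names illustrative), reusing the sets of Fig. 4.1:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, T = 1.0, 10.0
A, B = (2.0, 6.0), (4.0, 6.0)

def pair_counts(rng):
    """Ordered pairs (x, y) with x ∈ A, y ∈ B: all pairs, and distinct pairs only."""
    pts = rng.uniform(0.0, T, size=rng.poisson(lam * T))
    inA = (pts >= A[0]) & (pts < A[1])
    inB = (pts >= B[0]) & (pts < B[1])
    all_pairs = int(inA.sum()) * int(inB.sum())    # includes diagonal pairs (x, x)
    distinct = all_pairs - int((inA & inB).sum())  # subtract points paired with themselves
    return all_pairs, distinct

s = np.array([pair_counts(rng) for _ in range(50000)])
# theory: µ^(2)(A×B) = λ²|A||B| + λ|A∩B| = 10 and α^(2)(A×B) = λ²|A||B| = 8
print(s[:, 0].mean(), s[:, 1].mean())
```

The two averages differ by the estimate of Λ(A ∩ B) = 2, in line with µ^(2) = α^(2) + Λ(A ∩ B).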

4.4 Second Moment Density

Definition 4.3 (Second moment density (or second-order product density)) A point process Φ on Rd is said to have second moment density ρ^(2) if

α^(2)(C) = α^(2)(A × B) = ∫_A ∫_B ρ^(2)(x, y) dy dx = ∫_C ρ^(2)(x, y) dx dy

for any compact C = A × B ⊂ Rd × Rd.

In differential form, α^(2)(dx, dy) = ρ^(2)(x, y) dx dy. Informally, ρ^(2)(x, y) is the joint probability that there are two points of Φ at the two specified locations x and y:

P(Φ(dx) > 0, Φ(dy) > 0) ∼ E(Φ(dx)Φ(dy)) = ρ^(2)(x, y) dx dy .

Another interpretation is the following: if C₁ and C₂ are disjoint spheres with centers x₁ and x₂ and infinitesimal volumes dV₁ and dV₂, then ρ^(2)(x₁, x₂) dV₁ dV₂ is the probability that there is a point of Φ in both C₁ and C₂.

Example 4.4 (PPP) The uniform PPP of intensity λ has ρ^(2)(x, y) = λ². In the non-uniform case, ρ^(2)(x, y) = λ(x)λ(y).

If Φ is stationary then ρ^(1) ≡ λ, and ρ^(2) depends only on the difference of its arguments, i.e., there exists a ρ^(2)_st : Rd → R⁺ such that

ρ^(2)(x, y) ≡ ρ^(2)_st(x − y)  ∀x, y ∈ Rd .

If Φ is motion-invariant then ρ^(2)_st depends only on the distance r = ‖x − y‖, i.e., there exists a ρ^(2)_mi : R⁺ → R⁺ such that

ρ^(2)(x, y) ≡ ρ^(2)_st(x − y) ≡ ρ^(2)_mi(‖x − y‖) = ρ^(2)_mi(r) ,  ∀x, y ∈ Rd .

In these cases, ρ^(2)_st and ρ^(2)_mi are also called second moment density or second-order product density. For example, in the uniform Poisson case, ρ^(2)_st(x) = ρ^(2)_mi(‖x‖) = λ².

Example 4.5 (Binomial point process) The second moment density for a (uniform) binomial point process with n points in W is

ρ^(2)(x, y) = n(n − 1)/|W|² .


Calculation. Let N_A = Φ(A \ B), N_B = Φ(B \ A), N_C = Φ(A ∩ B); p_A = |A \ B|/|W|, p_B = |B \ A|/|W|, p_C = |A ∩ B|/|W|. Note that N_A, N_B, N_C follow a multinomial distribution with probabilities p_A, p_B, and p_C, respectively.

α^(2)(A × B) = E[(N_A + N_C)(N_B + N_C)] − E N_C
 = E(N_C²) + E(N_A N_C) + E(N_B N_C) + E(N_A N_B) − E N_C
 = n(n − 1)(p_C² + p_A p_C + p_B p_C + p_A p_B)
 = n(n − 1)(p_A + p_C)(p_B + p_C)
 = n(n − 1)|A||B|/|W|² .

The mean and variance of N_C are n p_C and n p_C(1 − p_C), respectively, and cov(N_A, N_B) = −n p_A p_B, so that E(N_C²) − E N_C = n(n − 1)p_C² and E(N_A N_B) = cov(N_A, N_B) + E(N_A)E(N_B) = n(n − 1)p_A p_B.
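The calculation above admits a quick simulation check. A sketch (numpy assumed, names illustrative) with n = 5 points on W = [0, 1]² and two vertical strips A and B, so that only the x-coordinates matter:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A, B = (0.0, 0.5), (0.25, 0.75)   # x-ranges of two strips of W = [0,1]^2

def distinct_pairs(rng):
    """Σ≠ 1_A(x)1_B(y) for one BPP realization (x-coordinates suffice for strips)."""
    x = rng.uniform(0.0, 1.0, size=n)
    inA = (x >= A[0]) & (x < A[1])
    inB = (x >= B[0]) & (x < B[1])
    return int(inA.sum()) * int(inB.sum()) - int((inA & inB).sum())

est = np.mean([distinct_pairs(rng) for _ in range(50000)])
# theory: α^(2)(A×B) = n(n−1)|A||B|/|W|² = 20 · (1/2) · (1/2) = 5
print(est)
```

The overlap |A ∩ B| plays no role in α^(2) here, exactly as the multinomial computation shows.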

Definition 4.4 (Pair correlation function) For a PP Φ ⊂ Rd with intensity function λ(x) and second moment density ρ^(2)(x, y), the pair correlation function is defined as

g(x, y) ≔ ρ^(2)(x, y) / (λ(x)λ(y)) .

For motion-invariant processes, it is

g_mi(r) ≔ ρ^(2)_mi(r) / λ² .

The pair correlation function is identically 1 in the uniform Poisson case; it is smaller than 1 (for small r) if there is repulsion and larger than 1 if there is clustering. For a BPP of n points on W, it is (from Example 4.5)

g(x, y) = [n(n − 1)/|W|²] · [|W|²/n²] = 1 − 1/n .

The pair correlation function measures the degree of correlation between the RVs Φ(A) and Φ(B) in a non-centered way. If g(x, y) ≡ 1, then cov(Φ(A), Φ(B)) = 0 for disjoint A, B.

Example 4.6 (PP with fixed finite number of points) Let Φ ⊂ Rd consist of n random points x₁, . . . , x_n, where x_i has marginal probability density f_i(u), u ∈ Rd. Then Φ has intensity

λ(x) = Σ_{i=1}^{n} f_i(x) .

Let f_ij(u, v) be the marginal joint density of (x_i, x_j). Then Φ has second moment density

ρ^(2)(x, y) = Σ≠_{i,j∈[n]} f_ij(x, y)   (4.4.1)

and pair correlation function

g(x, y) = Σ≠_{i,j∈[n]} f_ij(x, y) / ( Σ_{i∈[n]} f_i(x) · Σ_{j∈[n]} f_j(y) ) .


For the simplest case n = 2, the joint probability of having one point in A and the other in B is

α^(2)(A × B) = ∫_A ∫_B (f₁₂(x, y) + f₂₁(x, y)) dx dy ,

consistent with (4.4.1). The BPP is a special case with f_i(x) = |W|⁻¹ 1{x ∈ W} (if it is uniform) and f_ij(x, y) = f_i(x)f_j(y) = |W|⁻² 1{x, y ∈ W}. Its intensity is n/|W|, and the second moment density follows from (4.4.1) to be n(n − 1)/|W|², as expected.

Example 4.7 (Poisson cluster process) Consider a Poisson cluster process Φ, formed from a uniform parent PPP Φp with intensity λp by replacing each x ∈ Φp by a random cluster Zx, which is a finite point process. The clusters Zx for different x ∈ Φp are independent processes. Let Zx have intensity function g(u | x) and second moment density h(u, v | x). Conditioned on Φp, the cluster process has intensity

λ(u | Φp) = Σ_{x∈Φp} g(u | x) .

The (unconditioned) intensity of the cluster process Φ is thus, by Campbell’s formula,

λ(u) = E(λ(u | Φp)) = E[ Σ_{x∈Φp} g(u | x) ] = λp ∫_{Rd} g(u | x) dx .

If all clusters have the same intensity function λc(x), i.e., g(u | x) ≡ λc(u − x), then λ = λpµ, where µ = Λc(Rd) is the mean number of points per cluster. The second moment density is

ρ^(2)(u, v) = λ² + λp ∫_{Rd} h(u, v | x) dx .

There are two contributions to this second moment density: one from the overall process and one from pairs of points in the same cluster. The first contribution is λ², since it arises from pairs of points from different clusters, and clusters are independent. The second one is

E[ Σ_{x∈Φp} h(u, v | x) ] = λp ∫_{Rd} h(u, v | x) dx .

Sanity check: assume each parent has a daughter point at exactly the location of the parent. Then g(u | x) = δ_x(u) and h(u, v | x) = δ_x(u)δ_x(v) = 0 since u ≠ v. So λ(u) = λp ∫ δ_x(u) dx = λp and ρ^(2)(u, v) = λ² = λp², the PPP values — the desired result, since this process is just Φp itself.

In the Matérn cluster process, the points in each cluster are distributed uniformly at random in a ball of radius R around their parent points, and each cluster contains a Poisson number of points with mean µ. Accordingly, its intensity function is

λc(x) = (µ/(c_d R^d)) 1_{b(o,R)}(x) .


In the two-dimensional case, the second moment density of a cluster Zx is h(u, v | x) = µ²/(π²R⁴) if u, v ∈ b(x, R), and 0 otherwise. We have

∫_{Rd} 1{u, v ∈ b(x, R)} dx = ∫_{Rd} 1_{b(x,R)}(u) 1_{b(x,R)}(v) dx = ∫_{Rd} 1_{b(u,R)}(x) 1_{b(v,R)}(x) dx = |b(u, R) ∩ b(v, R)| .

Hence the second moment density of the Matérn cluster process is

ρ^(2)(u, v) = λp²µ² + λp (µ²/(π²R⁴)) |b(u, R) ∩ b(v, R)|

or, since this cluster process is motion-invariant, with r = ‖u − v‖,

ρ^(2)_mi(r) = λp²µ² + λp (µ²/(π²R⁴)) |b(o, R) ∩ b(r, R)| .

The pair correlation function follows as

g_mi(r) = ρ^(2)_mi(r)/(λp²µ²) = 1 + (1/λp) · |b(o, R) ∩ b(r, R)| / |b(o, R)|² .

Interpretation: for r ≥ 2R the two balls no longer overlap, and this is just the pair correlation function of the PPP. Also, as λp → ∞, it approaches the PPP’s pair correlation function. For smaller r, g_mi(r) > 1, so the PP exhibits clustering — of course.
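The intersection |b(o, R) ∩ b(r, R)| is the standard lens area of two overlapping disks, so g_mi can be evaluated in closed form. A small sketch (numpy assumed; names are illustrative):

```python
import numpy as np

def lens_area(r, R):
    """|b(o,R) ∩ b(x,R)| for two disks of radius R with centers distance r apart."""
    if r >= 2 * R:
        return 0.0
    return 2 * R**2 * np.arccos(r / (2 * R)) - 0.5 * r * np.sqrt(4 * R**2 - r**2)

def g_matern(r, lam_p, R):
    """g_mi(r) = 1 + (1/λp)·|b(o,R) ∩ b(r,R)|/|b(o,R)|² for the Matérn cluster process."""
    return 1.0 + lens_area(r, R) / (lam_p * (np.pi * R**2) ** 2)

lam_p, R = 2.0, 0.3
print(g_matern(0.0, lam_p, R))    # maximal clustering at r = 0: 1 + 1/(λp·πR²)
print(g_matern(2 * R, lam_p, R))  # = 1 for r ≥ 2R: PPP-like beyond the cluster diameter
```

Since lens_area(0, R) = πR², the value at r = 0 is 1 + 1/(λp πR²), and g_mi decreases monotonically to 1 at r = 2R, matching the interpretation above.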

In the (two-dimensional) Thomas cluster process, each cluster is a PPP with Gaussian intensity function

λc(x) = (µ/(2πσ²)) exp(−‖x‖²/(2σ²)) ,  x ∈ R² .

In this case, the pair correlation function is

g_mi(r) = 1 + (1/(4πλpσ²)) exp(−r²/(4σ²)) .

Again we note that the pair correlation function approaches 1 from above as λp → ∞.

Generally, for Poisson cluster processes with a Poisson number of points per cluster and symmetric probability density function f (i.e., f(x) = f(−x)) for the points in each cluster,

ρ^(2)_st(x) = λ² (1 + (f ∗ f)(x)/λp) ,   (4.4.2)

where λ = λpµ is the density of the cluster process. To show this, we focus on a single cluster and go back to the case of the finite point process. Given the number of points n, they are independently placed; hence, given n, ρ^(2)(x, y | n) = n(n − 1)f(x)f(y). Averaging over the Poisson number n, for which E[n(n − 1)] = µ², yields

ρ^(2)(x, y) = µ² f(x)f(y) .


Still considering the case where x and y belong to the same cluster, but now averaging over the parent process Φp, yields

ρ^(2)_cl(u, v) = E[ Σ_{x∈Φp} µ² f(u − x)f(v − x) ]
 = λpµ² ∫_{Rd} f(u − x)f(v − x) dx
 = λpµ² ∫_{Rd} f(z)f(v − u + z) dz
 = λpµ² (f ∗ f)(u − v) .

To this we need to add the λ² for the case where x and y belong to different clusters, which gives (4.4.2). The second moment density is invariant under permutation of its arguments.
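As a numerical sanity check of (4.4.2) for the Thomas process (a sketch assuming numpy; names are illustrative), one can verify by Monte Carlo that the self-convolution of the symmetric Gaussian cluster density f equals the N(0, 2σ²I) density, which is exactly the Gaussian kernel appearing in the Thomas pair correlation function above:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.5

def f(p):
    """Symmetric 2-D Gaussian cluster density with covariance σ²I."""
    return np.exp(-np.sum(p**2, axis=-1) / (2 * sigma**2)) / (2 * np.pi * sigma**2)

def conv_ff(x, n=200000):
    """(f ∗ f)(x) = E_{Z~f}[f(x − Z)], estimated by Monte Carlo (f is symmetric)."""
    z = rng.normal(0.0, sigma, size=(n, 2))
    return float(np.mean(f(x - z)))

x = np.array([0.4, 0.3])                                          # ‖x‖ = 0.5
mc = conv_ff(x)
exact = np.exp(-0.25 / (4 * sigma**2)) / (4 * np.pi * sigma**2)   # N(0, 2σ²I) density at ‖x‖ = 0.5
print(mc, exact)
```

Plugging (f ∗ f)(x) = exp(−‖x‖²/(4σ²))/(4πσ²) into (4.4.2) reproduces g_mi(r) = 1 + exp(−r²/(4σ²))/(4πλpσ²), as stated.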

4.5 Second Moments for Stationary Processes

For a PP Φ ⊂ Rd, stationarity implies

E[Φ(A + v)Φ(B + v)] = E[Φ(A)Φ(B)]

for all v ∈ Rd. Thus µ^(2) and α^(2) are invariant under simultaneous shifts of both coordinates. Now apply the transform T(x, y) = (x, y − x) from Rd × Rd onto itself. Under this transformation, the simultaneous shift becomes a shift in the first coordinate only, i.e.,

(s, t) → (s + v, t) .

The image of α^(2) under the transformation is a measure µ which is invariant under translations of the first coordinate. This process is illustrated in Fig. 4.2. Since translation-invariant measures are multiples of the Lebesgue measure, it follows that

µ = λν_d ⊗ K ,

where λ is the intensity of the process, ν_d the d-dimensional Lebesgue measure, and K a measure on Rd called the reduced second moment measure. So we have the product of two measures on Rd rather than one measure on Rd × Rd — there is a disintegration or separation in the second factorial moment measure: it is expressible as the product of a Lebesgue component along the first coordinate and a reduced measure along the difference coordinate u = y − x. Using Campbell,

E[ Σ≠_{x,y∈Φ} f(x, y) ] = ∫∫ f(x, y) α^(2)(dx, dy)
 = ∫∫ f(x, x + u) µ(dx, du)
 = λ ∫∫ f(x, x + u) dx K(du) .



Figure 4.2: Illustration of the coordinate transform (x, y) → (x, y − x). Left: original product point process with translation v = 1. Right: transformed product point process with the same translation.

Theorem 4.3 (Reduced second moment measure) Let Φ be a stationary PP on Rd with intensity λ. Then there is a measure K on Rd such that for general measurable f,

E[ Σ≠_{x,y∈Φ} f(x, y) ] = λ ∫∫ f(x, x + u) dx K(du) .

K is called the reduced second moment measure of Φ.

Choosing f(x, y) = 1_A(x) 1_B(y − x),

E[ Σ≠_{x,y∈Φ} 1_A(x) 1_B(y − x) ] = λ ∫∫ 1_A(x) 1_B(u) dx K(du) = λ|A| K(B) .   (4.5.1)

Since λ|A| = EΦ(A) and 1_B(y − x) = 1_{B+x}(y),

K(B) = E[ Σ_{x∈Φ∩A} Φ((B + x) \ {x}) ] / EΦ(A) .

If the second moment density ρ^(2)(x, y) exists, it depends only on the difference x − y, and we can write

K(B) = (1/λ) ∫_B ρ^(2)_st(u) du .   (4.5.2)


This can be seen by expanding the LHS of (4.5.1) as follows:

E[ Σ≠_{x,y∈Φ} 1_A(x) 1_B(y − x) ] = ∫∫ 1_A(x) 1_B(y − x) ρ^(2)(x, y) dx dy
 = ∫_A ∫_{Rd} 1_B(y − x) ρ^(2)(x, y) dy dx
 = ∫_A ∫_{Rd} 1_B(u) ρ^(2)_st(u) du dx
 = ∫_A ∫_B ρ^(2)_st(u) du dx
 = |A| ∫_B ρ^(2)_st(u) du ,

which, by (4.5.1), is equal to λ|A|K(B). Hence (4.5.2) follows. In particular for motion-invariant processes, a simpler function is often useful, namely Ripley’s K function.

tion.

Definition 4.5 (Ripley’s K function) Ripley’s K function is defined as

K(r) = (1/λ) K(b(o, r)) ,  r ≥ 0 .

So λK(r) is the mean number of points y of the process that satisfy 0 < ‖y − x‖ ≤ r for a given point x of the process.

Example 4.8 For the uniform PPP on Rd,

K(r) = c_d r^d ,  r ≥ 0 .

For a stationary PP on Rd which has a second moment density,

K(r) = (1/λ²) ∫_{b(o,r)} ρ^(2)_st(x) dx = ∫_{b(o,r)} g_st(x) dx ,

where g is the pair correlation function. If the process is on R² and motion-invariant,

K(r) = 2π ∫_0^r g_mi(t) t dt .
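The PPP value K(r) = πr² (in R²) can be recovered from simulated data. The following sketch (assuming numpy; names are illustrative) uses the simplest edge correction — minus-sampling, i.e., counting neighbors only from reference points at distance more than r from the window boundary:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, a, r = 5.0, 10.0, 0.5

def K_hat(rng):
    """One-realization estimate of K(r) for a uniform PPP on [0,a]^2 (minus-sampling)."""
    pts = rng.uniform(0.0, a, size=(rng.poisson(lam * a * a), 2))
    inner = np.all((pts > r) & (pts < a - r), axis=1)   # interior reference points
    ref = pts[inner]
    d = np.linalg.norm(ref[:, None, :] - pts[None, :, :], axis=2)
    n_nbr = np.sum((d > 0) & (d <= r))                  # pairs with 0 < ‖y − x‖ ≤ r
    return n_nbr / (max(len(ref), 1) * lam)             # λK(r) = mean neighbor count

est = np.mean([K_hat(rng) for _ in range(30)])
print(est, np.pi * r**2)   # estimate vs. c₂r² = πr²
```

For non-Poisson data the same estimator would reveal clustering (est above πr²) or repulsion (below), in line with the pair-correlation interpretation above.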

Lemma 4.4 For a motion-invariant PP on R², the pair correlation function is given by

g_mi(r) = (1/(2πr)) dK(r)/dr .

Lemma 4.5 (Invariance of K under thinning) If Φ′ is obtained by random (stationary) thinning from a stationary PP Φ, then the K functions of Φ and Φ′ are identical.

The definition of the reduced second moment measure varies slightly throughout the literature. Here we adopted the definition in [1]. Some authors scale their K(B) by λ⁻¹ [5], such that K(r) = K(b(o, r)), or by λ [6].

A close relative of the K function is the L function.


Definition 4.6 (The L function)

L(r) ≔ (K(r)/c_d)^(1/d)

The L function is sometimes preferred to the K function since it is simply L(r) = r for the uniform PPP.


Chapter 5

Conditioning and Palm Theory

5.1 Introduction and Basic Concepts for Stationary Point Processes

The Palm probability in point process theory is the probability of an event given that the point process contains a point at some location. It formalizes the notion of a “typical” point of the process. Informally, the typical point results from a selection procedure in which every point has the same chance of being selected. This idea, while heuristically clear, needs to be made mathematically precise. For example, points chosen according to some specific sampling procedure, such as the point closest to the origin, are not typical, because they have been sampled in a specific way. Intuitively, the Palm distribution is the conditional point process distribution given that a point (the typical point) is located at a specific location.

In this section we discuss two heuristic approaches to Palm theory, a local and a global approach, applied to stationary point processes.

5.1.1 The local approach

We first introduce some new notation. For a point process property Y ∈ N,

P(Φ has property Y ‖ x) ≔ P(Φ has property Y | x ∈ Φ) = P(Φ ∈ Y | x ∈ Φ) = P(Y | x ∈ Φ) = P(Y ‖ x) .

By stationarity,

P(Φ ∈ Y ‖ x) = P(Φx ∈ Y ‖ o) ,

where Φx ≔ {x₁ + x, x₂ + x, . . .}. Note that P(Φx ∈ Y ‖ o) is to be understood as P(Φx ∈ Y | o ∈ Φ), not P(Φx ∈ Y | o ∈ Φx). So for stationary PPs, conditioning may be restricted to o ∈ Φ without loss of generality.

The nearest-neighbor distance distribution, for example, may be expressed as

D(r) = P(Φ(b(o, r)) > 1 ‖ o) = 1 − P(Φ(b(o, r)) = 1 ‖ o) .

The conditioning event o ∈ Φ has probability zero. For the uniform PPP, the conditional distribution can be calculated by a limit procedure: the conditional probability

D_ε(r) = 1 − P(Φ(b(o, r) \ b(o, ε)) = 0 | Φ(b(o, ε)) = 1)


is well-defined for 0 < ε < r, since

P(Φ(b(o, ε)) = 1) = λc_d ε^d exp(−λc_d ε^d) > 0 .

We have, by the independence of the PPP on disjoint sets,

D_ε(r) = 1 − P(Φ(b(o, r) \ b(o, ε)) = 0) P(Φ(b(o, ε)) = 1) / P(Φ(b(o, ε)) = 1)
 = 1 − P(Φ(b(o, r) \ b(o, ε)) = 0)
 = 1 − exp(−λc_d(r^d − ε^d)) .

It seems reasonable to define D(r) ≔ lim_{ε→0} D_ε(r), so

D(r) = 1 − exp(−λc_d r^d) ,  r ≥ 0 .
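This nearest-neighbor distance distribution is easy to confirm empirically. A sketch (assuming numpy; names are illustrative) simulates a planar PPP, measures each point's nearest-neighbor distance, and compares the empirical fraction within r with 1 − exp(−λπr²), discarding points near the window edge to avoid boundary bias:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, a, r = 2.0, 15.0, 0.4
D_theory = 1.0 - np.exp(-lam * np.pi * r**2)   # d = 2, c₂ = π

hits, total = 0, 0
for _ in range(20):
    pts = rng.uniform(0.0, a, size=(rng.poisson(lam * a * a), 2))
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)                # a point is not its own neighbor
    nn = d.min(axis=1)                         # nearest-neighbor distance of each point
    inner = np.all((pts > r) & (pts < a - r), axis=1)   # drop edge points
    hits += int(np.sum(nn[inner] <= r))
    total += int(np.sum(inner))

D_hat = hits / total
print(D_hat, D_theory)
```

The agreement illustrates that the limit procedure above yields the right conditional distribution.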

So the spherical contact distribution function (or empty space function) and the nearest-neighbor distance distribution for the stationary PPP are identical. And for any compact B with o ∉ B,

P(Φ(B) = n ‖ o) = P(Φ(B) = n)  for n = 0, 1, 2, . . . .

This suggests that the Palm distribution of the stationary PPP is identical to the distribution of the original PPP with a point added at the origin. Slivnyak’s theorem formalizes this statement:

Theorem 5.1 (Slivnyak’s theorem) For the PPP,

P(Φ ∈ Y ‖ o) = P(Φ ∪ {o} ∈ Y) .

So conditioning on o ∈ Φ is the same as adding a point at o. This is not restricted to the uniform case. We will provide a proof later.

If the point process is finite (and thus non-stationary), we may arrive at a definition of the nearest-neighbor distance distribution by considering all points (and averaging, if needed). In particular, for the BPP, we may proceed as follows. Let Φ = {x₁, . . . , x_n} be a uniform BPP on W ⊂ R², and let R_x, x ∈ Φ, be the distance to x’s nearest neighbor in Φ \ {x}. We would like to find P(R_x ≤ r ‖ x). For each i ∈ [n],

P(R_x ≤ r | x_i = x) = 1 − P(R_x > r | x_i = x) = 1 − P(Φ^!(b(x, r)) = 0) ,

where Φ^! = Φ \ {x_i} is a BPP with n − 1 points. Hence

P(R_x ≤ r | x_i = x) = 1 − (1 − |b(x, r) ∩ W| / |W|)^(n−1) .

This is independent of the index i, as expected. Thus it is plausible to interpret it as the conditional probability P(R_x ≤ r ‖ x).
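The BPP formula can be verified directly by fixing x in the interior of W (so that |b(x, r) ∩ W| = πr²) and drawing the other n − 1 points. A sketch (numpy assumed; names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 10, 0.2
x = np.array([0.5, 0.5])                          # interior point: b(x,r) ⊂ W = [0,1]^2
p_theory = 1.0 - (1.0 - np.pi * r**2) ** (n - 1)  # |b(x,r) ∩ W| = πr², |W| = 1

trials = 100000
others = rng.uniform(0.0, 1.0, size=(trials, n - 1, 2))   # the other n−1 BPP points
nn = np.linalg.norm(others - x, axis=2).min(axis=1)       # distance from x to nearest one
p_hat = float(np.mean(nn <= r))
print(p_hat, p_theory)
```

Each of the other n − 1 points independently misses b(x, r) with probability 1 − πr²/|W|, which is exactly where the (n − 1)-st power comes from.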


5.1.2 The global approach

Consider all points falling into a test set B with |B| > 0. Denote by Φ_Y(B) the number of points x in B such that Φ_{−x} ∈ Y:

Φ_Y(B) ≔ #{x ∈ Φ ∩ B : Φ_{−x} ∈ Y} .

Then

P(Φ ∈ Y ‖ o) = E(Φ_Y(B)) / (λ|B|) .

Since λ|B| is the expected number of points in B, this expression defines the Palm probability for the property Y as the fraction of points x expected to fall in B such that Φ_{−x} ∈ Y. By stationarity this does not depend on B if |B| is fixed.
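This global estimator is exactly how Palm probabilities are measured from data: mark each point according to whether the shifted process lies in Y and divide the marked count by λ|B|. A sketch (assuming numpy; names are illustrative), with Y = {ϕ : ϕ(b(o, r)) = 1}, i.e., the typical point has no neighbor within r:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, a, r = 2.0, 15.0, 0.3
reps = 20
num = 0
for _ in range(reps):
    pts = rng.uniform(0.0, a, size=(rng.poisson(lam * a * a), 2))
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    mark = d.min(axis=1) > r                    # Φ_{−x} ∈ Y: no other point within r
    inB = np.all((pts > r) & (pts < a - r), axis=1)   # test set B, away from the edge
    num += int(np.sum(mark & inB))

palm_hat = num / (lam * (a - 2 * r) ** 2 * reps)   # E(Φ_Y(B)) / (λ|B|)
print(palm_hat, np.exp(-lam * np.pi * r**2))
```

For the PPP the estimate should match exp(−λπr²) = 1 − D(r), consistent with Theorem 5.1 (the point at o contributes nothing to the void probability of b(o, r) \ {o}).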

5.2 The Palm Distribution

5.2.1 Heuristic introduction

In the stationary case, we can define a distribution function

D(r) = 1 − P(Φ(b(o, r)) = 1 ‖ o) ,

which, for a “typical point”, would be the distribution function of the distance to its nearest neighbor. Due to the conditioning, Φ(b(o, r)) ≥ 1. So this definition of the nearest-neighbor distance adds a point at the origin but then ignores it in the void probability. This approach leads to the Palm distribution, but it needs to be made mathematically precise.

For Y ∈ N, let

Y_x ≔ {ϕ ∈ N : ϕ_{−x} ∈ Y} .

Then

P(Φ_x ∈ Y ‖ o) = P(Φ ∈ Y_{−x} ‖ o) .

Now let Φ be a general PP with distribution P and locally finite intensity measure Λ. We consider a concrete example for Y: let

Y = {ϕ ∈ N : ϕ(b(o, r)) = 1} ∈ N

and consider the function h : Rd × N → {0, 1} given by

h(x, ϕ) ≔ 1_B(x) 1_Y(ϕ_{−x}) = 1_B(x) 1_{Y_x}(ϕ)

for some bounded Borel set B. When is 1_Y(ϕ_{−x}) = 1? This is the case if the point x does not have any neighbor within distance r. So we may write the mean number of points in B whose neighbors are all at distance at least r as

E[ Σ_{x∈Φ} h(x, Φ) ] = ∫_N Σ_{x∈ϕ} h(x, ϕ) P(dϕ) .

Note: this is not an application of Campbell’s formula; it is simply writing out the expectation.


More generally, we wish to evaluate this expression for arbitrary non-negative measurable functions h : Rd × N → R⁺. If Rd is partitioned into domains D₁, D₂, . . ., each with non-zero volume, then we can write

E[ Σ_{x∈Φ} h(x, Φ) ] = Σ_k E[ Σ_{x∈Φ∩D_k} h(x, Φ) | Φ(D_k) > 0 ] · P(Φ(D_k) > 0) .

Now suppose D_k → dx. Then P(Φ(D_k) > 0) → Λ(dx), and the conditional mean should converge to the mean E(h(x, Φ) ‖ x) of h(x, ϕ) with respect to P(· ‖ x). Hence we obtain

E[ Σ_{x∈Φ} h(x, Φ) ] = ∫_{Rd} E(h(x, Φ) ‖ x) Λ(dx) .

For stationary Φ, Λ = λν_d, so we have

E[ Σ_{x∈Φ} h(x, Φ) ] = λ ∫_{Rd} E(h(x, Φ_x) ‖ o) dx ,

since Φ_x has a point at x if Φ has a point at o.

since Φx has a point at x if Φ has a point at o.Using the notation Po(Y ) P(Φ ∈ Y o) for Y ∈ N , this takes the form

N

x∈ϕ

h(x, ϕ)P(dϕ) = λ

Rd

Nh(x,ϕx)Po(dϕ)dx (5.2.1)

and so, coming back to our example, for h(x,ϕ) = 1B(x)1Y (ϕ−x), we obtain

N

x∈ϕ∩B

1Y (ϕ−x)P(dϕ) = λ|B|Po(Y )

for Borel B and Y ∈ N. Now we need to construct a distribution on [N, N] with the desired behavior of P_o.

5.2.2 A first definition of the Palm distribution (stationary point processes)

Take Φ to be a stationary PP with finite non-zero intensity λ.

Definition 5.1 (Palm distribution for stationary point processes) The Palm distribution (at o) of P is a distribution (probability measure) on [N, N] defined by

P_o(Y) ≔ (1/(λ|B|)) ∫_N Σ_{x∈ϕ∩B} 1_Y(ϕ_{−x}) P(dϕ)  for Y ∈ N .

This definition does not depend on B — it can be an arbitrary Borel set of positive volume. We can give an intuitive explanation for this definition using marked point processes. To each point x ∈ Φ, give a mark 1 or 0 depending on whether the shifted process Φ_{−x} belongs to Y or not. For example, consider (again) Y = {ϕ ∈ N | ϕ(b(o, r)) = 1}. Then x has mark 1 precisely when its nearest neighbor is further than r away. The result is a stationary marked PP with a mark distribution M on {0, 1}, and we can write

P_o(Y) = M({1}) = λ_[1] / λ ,


where λ_[1] is the intensity of the points whose mark is 1. The mean number of points of Φ with mark 1 in B is λ_[1]|B|, so the above definition makes P_o(Y) independent of the test set B.

Alternative notation: frequently, λ(Y) is used instead of λ_[1]. So

P_o(Y) = λ(Y) / λ .

In the derivation above, (5.2.1) suggests that the following result, the so-called refined Campbell theorem (or Campbell–Mecke theorem), holds for stationary PPs:

Theorem 5.2 (Campbell–Mecke) For any non-negative measurable function h on Rd × N,

E[ Σ_{x∈Φ} h(x, Φ) ] = ∫_N Σ_{x∈ϕ} h(x, ϕ) P(dϕ) = λ ∫_{Rd} ∫_N h(x, ϕ_x) P_o(dϕ) dx .

Taking h(x, ϕ) = f(x) reproduces the standard Campbell theorem.

5.2.3 A second definition of the Palm distribution (general point processes)

This approach uses the Campbell measure C and the Radon-Nikodym theorem.

Definition 5.2 (Campbell measure) The Campbell measure C is defined as a measure on [Rd × N, B^d × N] by the relationship

∫_N Σ_{x∈ϕ} f(x, ϕ) P(dϕ) = ∫_{Rd×N} f(x, ϕ) C(d(x, ϕ)) ,

where f is any non-negative measurable function on Rd × N. Equivalently, in integral form,

C(B × Y) = E(Φ(B) 1_Y(Φ)) .

The two definitions are equivalent, since for f(x, ϕ) = 1_B(x) 1_Y(ϕ),

∫_N Σ_{x∈ϕ} f(x, ϕ) P(dϕ) = ∫_Y ϕ(B) P(dϕ) = E(Φ(B) 1_Y(Φ)) = C(B × Y) .

Note that C(B × Y) ≤ EΦ(B) = Λ(B). For fixed Y, let µ_Y(B) ≔ C(B × Y). So µ_Y is a measure with µ_Y ≤ Λ, so certainly µ_Y is absolutely continuous with respect to Λ, i.e., µ_Y ≪ Λ. By the Radon–Nikodym theorem, there exists a density f_Y such that

µ_Y(B) = ∫_B f_Y(x) Λ(dx)

for Borel B, where f_Y : Rd → R⁺ is measurable (and unique up to equality Λ-a.s.). This density f_Y is the Radon–Nikodym derivative dµ_Y/dΛ. As a consequence, we define the Palm distribution as follows:


Definition 5.3 (Palm distribution for general point processes) The Palm distribution P_x(Y) is defined as

P_x(Y) ≔ f_Y(x) ,

where f_Y(x) is the density pertaining to C(· × Y), i.e.,

C(B × Y) = ∫_B f_Y(x) Λ(dx) = ∫_B P_x(Y) Λ(dx) .   (5.2.2)

For fixed x, P_x(·) is indeed a distribution (probability measure) on [N, N]. We may write the Palm distribution compactly as the Radon–Nikodym derivative

P_x(Y) ≔ dC(· × Y)/dΛ .

This is consistent with the intuition we initially developed: as ε → 0, we have

P(Φ ∈ Y | Φ(b(x, ε)) > 0) = P(Φ ∈ Y, Φ(b(x, ε)) > 0) / P(Φ(b(x, ε)) > 0)
 ∼ E(Φ(b(x, ε)) 1_Y(Φ)) / E(Φ(b(x, ε)))
 = C(b(x, ε) × Y) / Λ(b(x, ε)) ∼ P_x(Y) .

In the stationary case we have

P_o(Y) = P_x(Y_x)  for Y ∈ N ,

since

λ ∫_B P_z(Y) dz = C(B × Y) = C(B_x × Y_x) = λ ∫_{B_x} P_y(Y_x) dy = λ ∫_B P_{x+z}(Y_x) dz

for all B, x, and Y. This implies P_z(Y) = P_{x+z}(Y_x). This second definition is applicable to general point processes. In the stationary case, we can retrieve the refined Campbell theorem as follows:

E[ Σ_{x∈Φ} h(x, Φ) ] = ∫_{Rd×N} h(x, ϕ) C(d(x, ϕ))
 = λ ∫_{Rd} ∫_N h(x, ϕ) P_x(dϕ) dx
 = λ ∫_{Rd} ∫_N h(x, ϕ_x) P_o(dϕ) dx .


Example 5.1 (An elementary property of the Palm distribution in the stationary case) Let Y = {ϕ ∈ N : ϕ({o}) > 0} be the set of simple sequences that have a point at o. What is P_o(Y)? We have

P_o(Y) = λ(Y)/λ = E(Φ_Y([0, 1]^d))/λ = E#{x ∈ Φ ∩ [0, 1]^d : Φ_{−x} ∈ Y}/λ = EΦ([0, 1]^d)/λ = 1 ,

since Φ_{−x} has a point at o for every x ∈ Φ.

This is of course to be expected, since we are conditioning on o ∈ Φ.

Remarks.

• If Φ is ergodic, we have for all Y ∈ N

P_o(Y) = lim_{m→∞} Φ_Y([−m, m]^d) / Φ([−m, m]^d) .

• The Campbell measure can also be defined on the original probability space, i.e., as a measure on [Rd × Ω, B × A], where (Ω, A, P) is the original probability space. This is the definition that Baddeley uses.

Example 5.2 (Mixed Poisson point process) Let L be a non-negative random variable defined on Ω. Given L = λ, let Φ be a homogeneous PPP of intensity λ. The intensity of this (non-ergodic) mixed Poisson point process is

EΦ(B) = E(E(Φ(B) | L)) = E(L)|B| .

Let A = {L ≤ γ} for some γ ≥ 0. For B ⊂ Rd, we have

C(B × A) = E(Φ(B) 1_A)
 = E(E(Φ(B) 1_A | L))
 = E(L|B| 1{L ≤ γ})
 = E(L 1{L ≤ γ})|B| ,

and it follows that

P_x(L ≤ γ) = E(L 1{L ≤ γ}) / EL ,

since

C(B × A) = ∫_B P_x(A) Λ(dx)

and ∫_B Λ(dx) = E(L)|B|. Hence the distribution of L under P_x is skewed compared with its original distribution: the event {L ≤ γ} is less likely to have occurred if Φ has a point at x. For P(L ≤ γ) = 1 − exp(−γ), say, we obtain

P_x(L ≤ γ) = 1 − exp(−γ) − γ exp(−γ) .

This is an example where the event A in the Campbell measure C(· × A) is an event in the original probability space.
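The skew is easy to quantify for this exponential example. A small numeric sketch (assuming numpy; names are illustrative) evaluates E(L·1{L ≤ γ}) for L ~ Exp(1) by quadrature and compares the Palm cdf with the ordinary cdf:

```python
import numpy as np

def palm_cdf(gamma, n=200001):
    """P_x(L ≤ γ) = E(L·1{L ≤ γ})/E L for L ~ Exp(1) (E L = 1), trapezoidal rule."""
    t = np.linspace(0.0, gamma, n)
    y = t * np.exp(-t)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * (t[1] - t[0])))

gamma = 1.5
closed_form = 1 - np.exp(-gamma) - gamma * np.exp(-gamma)
print(palm_cdf(gamma), closed_form)   # Palm cdf, matching the formula above
print(1 - np.exp(-gamma))             # ordinary cdf P(L ≤ γ), which is larger
```

The Palm cdf lies below the ordinary cdf for every γ: observing a point at x makes large intensities more plausible.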


5.2.4 Alternative interpretation and conditional intensity

We have interpreted P_x(Φ ∈ Y) as the limit of the probability P(Φ ∈ Y | Φ(b(x, ε)) > 0) as ε ↓ 0. Using Bayes’ theorem,

P(Φ(b(x, ε)) > 0 | Φ ∈ Y) = [P(Φ(b(x, ε)) > 0) / P(Φ ∈ Y)] · P(Φ ∈ Y | Φ(b(x, ε)) > 0)

and, as ε ↓ 0,

P(Φ(b(x, ε)) > 0 | Φ ∈ Y) / P(Φ(b(x, ε)) > 0) → P_x(Φ ∈ Y) / P(Φ ∈ Y) .

If Φ has an intensity function λ(x), then P(Φ(b(x, ε)) > 0) ∼ λ(x)|b(x, ε)|, and

P(Φ(b(x, ε)) > 0 | Φ ∈ Y) / |b(x, ε)| → λ(x) P_x(Φ ∈ Y) / P(Φ ∈ Y) .

So the RHS can be interpreted as the conditional intensity of the point process given Y.

5.3 The Reduced Palm Distribution

5.3.1 Definition

In the reduced Palm distribution P^!_x, the point at x on which we condition is not included in the distribution:

P^!_x(Y) ≔ P(Φ \ {x} ∈ Y ‖ x)  for Y ∈ N .

In particular,

P^!_o(Y) = P(Φ \ {o} ∈ Y ‖ o)  for Y ∈ N .

To make precise the conditioning on the probability-zero event x ∈ Φ, we may write in the stationary case:

P^!_o(Y) = (1/(λ|B|)) ∫_N Σ_{x∈ϕ∩B} 1_Y(ϕ_{−x} \ {o}) P(dϕ) .

In the general case, we first define the so-called reduced version of the Campbell measure.

Definition 5.4 (Reduced Campbell measure) The reduced Campbell measure C^! is defined as

∫_N Σ_{x∈ϕ} f(x, ϕ \ {x}) P(dϕ) = ∫_N Σ_{x∈ϕ} f(x, ϕ − δ_x) P(dϕ) = ∫_{Rd×N} f(x, ϕ) C^!(d(x, ϕ)) .

ϕ \ {x} and ϕ − δ_x are two different notations, a set-theoretic notation and a measure-theoretic notation, for the point pattern ϕ with the point x ∈ ϕ deleted. Replacing C by C^! in (5.2.2), we arrive at the definition of the reduced Palm distribution P^!_x:


Definition 5.5 (Reduced Palm distribution) The reduced Palm distribution P^!_x is defined by the relationship

C^!(B × Y) = ∫_B P^!_x(Y) Λ(dx)

or, equivalently, as the Radon–Nikodym derivative

P^!_x(Y) ≔ dC^!(· × Y)/dΛ

of the reduced Campbell measure with respect to the intensity measure Λ.

The nearest-neighbor distance distribution function can be expressed via Po or P!o as

D(r) = 1 − Po({ϕ ∈ N : ϕ(b(o, r)) = 1}) = 1 − P!o({ϕ ∈ N : ϕ(b(o, r)) = 0}).

D is an important characteristic of point processes. The ratio of the ccdf of the nearest-neighbor distance to the ccdf of the spherical contact distance is the so-called J-function, defined as

J(r) ≜ (1 − D(r))/(1 − F(r)) for r ≥ 0,

where F(r) is the spherical contact distribution function (or empty space function).
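For the stationary PPP of intensity λ, both D(r) and F(r) equal 1 − exp(−λπr²), so J(r) ≡ 1. This can be checked by simulation; the sketch below (window size, sample sizes, and the minus-sampling edge treatment are our own choices, not part of the notes) estimates D and F empirically:

```python
import numpy as np

rng = np.random.default_rng(0)

# Homogeneous PPP of intensity lam on the square [0, side]^2.
lam, side, r = 1.0, 40.0, 0.5
n = rng.poisson(lam * side**2)
pts = rng.uniform(0, side, size=(n, 2))

# F(r): spherical contact (empty space) distances from uniform test
# locations, restricted to an inner square to avoid edge effects.
test = rng.uniform(5, side - 5, size=(1500, 2))
d_F = np.linalg.norm(pts[None, :, :] - test[:, None, :], axis=2).min(axis=1)
F_hat = np.mean(d_F <= r)

# D(r): nearest-neighbor distances, measured from points of the pattern
# that lie in the same inner square.
inner = pts[((pts > 5) & (pts < side - 5)).all(axis=1)]
d = np.linalg.norm(pts[None, :, :] - inner[:, None, :], axis=2)
d[d == 0] = np.inf                      # exclude each point itself
D_hat = np.mean(d.min(axis=1) <= r)

# For the PPP, D(r) = F(r) = 1 - exp(-lam*pi*r^2), hence J(r) = 1.
theory = 1 - np.exp(-lam * np.pi * r**2)
print(D_hat, F_hat, theory)
```

Both estimates should be close to the theoretical value, so the estimated J(r) is close to 1; for a clustered process one would find J(r) < 1, and for a more regular process J(r) > 1.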

Example 5.3 (Reduced Palm distribution for the BPP) Let Φn be a BPP with n nodes. Then the reduced Palm distribution of Φn is the distribution of Φn−1.
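This can be checked empirically by implementing the limiting conditioning directly: condition a BPP on having exactly one point in a small ball b(x, ε), delete that point, and compare the statistics of the rest with those of an (n−1)-point BPP. A sketch under our own choices of ε, test set, and sample sizes (for small ε the residual conditioning bias is of order ε²):

```python
import numpy as np

rng = np.random.default_rng(1)

n, trials = 10, 300000
x, eps = np.array([0.5, 0.5]), 0.05      # conditioning ball b(x, eps)
b = 0.25                                 # test set B = [0, b]^2, disjoint from b(x, eps)

# BPP: n iid uniform points on the unit square, simulated in bulk.
phi = rng.uniform(0, 1, size=(trials, n, 2))
in_ball = np.linalg.norm(phi - x, axis=2) < eps      # (trials, n) indicators
accept = in_ball.sum(axis=1) == 1        # condition on exactly one point near x
in_B = (phi < b).all(axis=2)             # indicator of falling in B

# Remove the conditioned point and count the remaining points in B.
counts = (in_B & ~in_ball)[accept].sum(axis=1)

# Under the reduced Palm distribution the remaining points form a BPP
# with n-1 nodes, so the mean count in B is approximately (n-1)*|B|.
print(counts.mean(), (n - 1) * b**2)
```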

5.3.2 Palm distribution for PPPs and proof of Slivnyak’s theorem

The theorem of Slivnyak gives the Palm distribution Px of a PPP with intensity measure Λ and distribution P:

Px = P ∗ δδx for all x.

Here δδx denotes the distribution of the degenerate point process that consists solely of the (non-random) point x, and "∗" denotes the convolution of distributions, which corresponds to the superposition of point processes. This equation can be interpreted as

Px(Y) = P(Φ ∈ Y | x ∈ Φ) = P(Φ ∪ {x} ∈ Y) for Y ∈ N,

or

∫_N f(ϕ) Px(dϕ) = ∫_N f(ϕ ∪ {x}) P(dϕ)

for all measurable non-negative functions f. In terms of the reduced Palm distribution, the theorem takes on a more elegant form:

P!x ≡ P.

This is a characterization of the PPP.

Proof of Slivnyak's theorem. Since Px and P ∗ δδx are the distributions of simple point processes, their equality is established if the corresponding void probabilities are equal, i.e., if

P ∗ δδx(VK) = Px(VK)


for all compact K ⊂ ℝ^d, where VK = {ϕ ∈ N : ϕ(K) = 0}.

Let A be any bounded Borel set. Since P ∗ δδx(VK) = 0 for x ∈ K and P ∗ δδx(VK) = P(VK) for x ∉ K,

∫_A P ∗ δδx(VK) Λ(dx) = ∫_{A\K} P(VK) Λ(dx)
= P(VK) Λ(A \ K)
= E(1{Φ(K) = 0}) · E(Φ(A \ K))
= E(Φ(A \ K) 1{Φ(K) = 0})
= C((A \ K) × VK),

where the fourth equality uses the independence of the PPP on the disjoint sets K and A \ K. Clearly, C((A ∩ K) × VK) = E(Φ(A ∩ K) 1{Φ(K) = 0}) = 0 (if there were a point in the intersection, the void indicator would be zero). Hence

∫_A P ∗ δδx(VK) Λ(dx) = C(A × VK) = ∫_A Px(VK) Λ(dx),

using C(B × Y) = ∫_B Px(Y) Λ(dx). Since A is an arbitrary bounded Borel set, the void probabilities agree for Λ-almost all x. □
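The characterization P!x ≡ P lends itself to a direct Monte Carlo check: for a PPP, conditioning on a point near o and then removing it should leave, for instance, void probabilities unchanged. A sketch under our own choice of window, intensity, and rejection scheme:

```python
import numpy as np

rng = np.random.default_rng(2)

lam, side, trials = 2.0, 4.0, 50000
o = np.array([side / 2, side / 2])
eps = 0.2                                  # conditioning ball b(o, eps)
K_c = o + np.array([1.0, 0.0])             # K: unit square centred at o + (1, 0)

void = []
for _ in range(trials):
    n = rng.poisson(lam * side**2)         # PPP on the square window
    phi = rng.uniform(0, side, size=(n, 2))
    near_o = np.linalg.norm(phi - o, axis=1) < eps
    if near_o.sum() == 1:                  # condition on a point near o ...
        rest = phi[~near_o]                # ... and remove it (reduced Palm)
        in_K = (np.abs(rest - K_c) < 0.5).all(axis=1)
        void.append(not in_K.any())

# Slivnyak: P!_o = P, so conditioning on (and deleting) the point near o
# leaves the void probability of K unchanged: exp(-lam * |K|).
print(np.mean(void), np.exp(-lam * 1.0))
```

The agreement here is exact for any ε, since for a PPP the configuration on K is independent of the configuration on the disjoint ball b(o, ε), which is exactly the independence property the proof above exploits.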

5.3.3 Isotropy of Palm distributions

Clearly, the Palm distribution Po is never a stationary distribution, since under Po a point process must always contain o. But the example of the PPP shows that P!o can be stationary! However, if the PP Φ is motion-invariant, then its Palm distribution is isotropic, i.e.,

Po(Y) = Po(rY) for Y ∈ N

for every rotation r about the origin.

Example 5.4 (Palm distribution for a motion-invariant lattice) Consider a randomly translated and rotated lattice Φ. Conditioned on o ∈ Φ, the lattice is no longer stationary, but it is still isotropic. One of the four nearest neighbors of the origin is bound to lie at an angle between 0 and π/2, with uniform distribution. If a second point is added to the conditioning, say (0, 1) ∈ Φ (if the underlying lattice is Z2), then the Palm distribution equals the (degenerate) distribution of Z2. In general, such two-fold Palm distributions can be derived from higher-order Campbell measures.
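The claim about the nearest-neighbor angle can be checked by simulating the conditioning directly: reject samples of the translated and rotated lattice until a point falls within ε of the origin, remove it, and record the direction of the nearest remaining point modulo π/2. A sketch (patch size, ε, and sample count are our own choices; by the rotation invariance of the construction, the recorded angle is uniform on [0, π/2)):

```python
import numpy as np

rng = np.random.default_rng(3)

# Base patch of Z^2 around the origin (|i|, |j| <= 3 suffices here).
g = np.arange(-3, 4)
base = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2).astype(float)

eps, angles = 0.1, []
while len(angles) < 2000:
    theta = rng.uniform(0, 2 * np.pi)           # random rotation ...
    u = rng.uniform(0, 1, size=2)               # ... and random translation
    c, s = np.cos(theta), np.sin(theta)
    pts = (base + u) @ np.array([[c, -s], [s, c]]).T
    r = np.linalg.norm(pts, axis=1)
    if r.min() < eps:                           # condition on a point near o (rejection)
        rest = pts[r >= eps]                    # delete the conditioned point
        nn = rest[np.argmin(np.linalg.norm(rest, axis=1))]
        angles.append(np.arctan2(nn[1], nn[0]) % (np.pi / 2))

angles = np.array(angles)
# Uniform on [0, pi/2) has mean pi/4.
print(angles.mean())
```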

5.3.4 Palm expectations

The Palm distribution translates naturally into a probability measure on the original probability space:

Px(Y) ≡ Px(Φ ∈ Y); P!x(Y) ≡ P!x(Φ ∈ Y).

Expectations with respect to the (reduced) Palm distribution are often denoted by Ex and E!x, respectively.

5.4 Second Moment Measures and Palm Distributions for Stationary Processes

In integral form, the reduced Campbell measure can be expressed as

C!(B × Y) = E!o(Φ(B) 1_Y(Φ)).


For a stationary point process Φ, the distribution of Φ given that x ∈ Φ is the same as the distribution of Φ given that o ∈ Φ, translated by x:

(Φ | x ∈ Φ) d= (Φ | o ∈ Φ) + x,

where d= denotes equality in distribution.

Using the refined Campbell theorem, the second factorial moment measure can be written as

α(2)(A × B) = E Σ_{x1,x2∈Φ, x1≠x2} 1_A(x1) 1_B(x2)
= ∫_N Σ_{x∈ϕ} 1_A(x) ϕ(B \ {x}) P(dϕ)
= λ ∫_{ℝ^d} ∫_N 1_A(x) ϕ((B − x) \ {o}) Po(dϕ) dx.

We want to express this as

α(2)(A × B) = λ ∫_{ℝ^d} ∫_{ℝ^d} 1_A(x) 1_B(x + u) K(du) dx   (5.4.1)
= λ ∫_A K(B − x) dx.   (5.4.2)

So we may define the reduced second moment measure K as follows:

Definition 5.6 (Reduced second moment measure based on the Palm distribution)

K(B) ≜ ∫_N ϕ(B \ {o}) Po(dϕ) = ∫_N ϕ(B) P!o(dϕ) = E!o Φ(B).

Hence, K(B) is the mean number of points in (Φ ∩ B) \ {o} under the condition that there is a point of Φ at o. In other words, K is the intensity measure of the reduced Palm distribution.

Accordingly, the K function can be defined as

K(r) ≜ (1/λ) E(Φ(b(o, r)) − 1 | o ∈ Φ) = (1/λ) E!o Φ(b(o, r)).

The K function is an important second-order statistic of motion-invariant processes, but it does not fully characterize them: different point processes may have the same K function, in much the same way as different random variables may share the same mean and variance.
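For the PPP, E!o Φ(b(o, r)) = λπr², so K(r) = πr². A minimal sketch of the corresponding empirical estimator (the minus-sampling edge correction and the use of the true λ rather than an estimated intensity are simplifications of our own):

```python
import numpy as np

rng = np.random.default_rng(4)

lam, side, r = 1.0, 40.0, 1.0
n = rng.poisson(lam * side**2)             # PPP on [0, side]^2
pts = rng.uniform(0, side, size=(n, 2))

# Minus-sampling edge correction: only points at least r from the
# boundary serve as reference points; all points serve as neighbours.
inner = pts[((pts > r) & (pts < side - r)).all(axis=1)]
d = np.linalg.norm(pts[None, :, :] - inner[:, None, :], axis=2)
d[d == 0] = np.inf                         # exclude the reference point itself
K_hat = np.mean((d <= r).sum(axis=1)) / lam

# For the PPP, K(r) = pi * r^2.
print(K_hat, np.pi * r**2)
```

For a clustered process the estimate would exceed πr², and for a regular (e.g. hard-core) process it would fall below it, which is how the K function is used as a diagnostic in practice.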
