
Math 311 Notes

Erin P. J. Pearse

April 18, 2007


Contents

0 Course Overview
  0.1 Logic and inference

1 Real Numbers and Monotone Seqs
  1.1 Introduction. Real Numbers.
  1.2 Increasing sequences.
  1.3 Limit of an increasing sequence.
  1.4 Example: e
  1.5 Harmonic sum and Euler’s gamma
  1.6 Decreasing seqs, Completeness

2 Estimates and approximation
  2.1 Introduction. Inequalities.
  2.2 Estimations
  2.3 Proving boundedness
  2.4 Absolute values. Estimating size.
  2.5 Approximation
  2.6 “for n large”

3 The Limit of a Sequence
  3.1 Definition of limit.
  3.2 The uniqueness of limits. The K − ε principle.
  3.3 Infinite limits.
  3.4 An important limit.
  3.5 Writing limit proofs.
  3.6 Some limits involving integrals.
  3.7 Another limit involving integrals.

4 Error Term Analysis
  4.1 The error term
  4.2 Geometric series error term.
  4.3 Newton’s method
  4.4 The Fibonacci numbers

5 The Limit Theorems
  5.1 Limits of sums, products, quotients
  5.2 Comparison Theorems
  5.3 Location theorems
  5.4 Subsequences
  5.5 Two common mistakes

6 The Completeness Property
  6.1 Introduction. Nested intervals.
  6.2 Cluster points
  6.3 Bolzano-Weierstrass Theorem
  6.4 Cauchy sequences
  6.5 Completeness Property for sets

7 Infinite Series
  7.1 Series and sequences
  7.2 Elementary convergence tests
  7.3 Series with negative terms
  7.4 Ratio and Root tests
  7.5 Integral, p-series, and asymptotic comparison
  7.6 Alternating series test
  7.7 Rearrangements
  7.8 Multiplication of Series

8 Power Series
  8.1 Intro, radius of convergence
  8.2 Convergence at endpoints, Abel summation
  8.3 Linearity of power series
  8.4 Multiplication of power series

9 Functions of One Variable
  9.1 Functions
  9.2 Algebraic operations on functions
  9.3 Properties of functions
  9.4 Elementary functions

10 Local and Global Behavior
  10.1 Intervals. Estimating functions
  10.2 Approximating functions
  10.3 Local behavior
  10.4 Local and global properties

11 Continuity and limits
  11.1 Continuous functions
    11.1.1 Discontinuities
  11.2 Limits of functions
  11.3 Limit theorems for functions
  11.4 Limits and continuity

12 Intermediate Value Theorem
  12.1 Existence of zeros
  12.2 Applications of Bolzano
  12.3 Monotonicity and the IVP
  12.4 Inverse functions

13 Continuity and Compact Intervals
  13.1 Compact intervals
  13.2 Bounded continuous functions
  13.3 Extrema of continuous functions
  13.4 The mapping viewpoint
  13.5 Uniform continuity

14 Differentiation: Local Properties
  14.1 The derivative
  14.2 Differentiation formulas
  14.3 Derivatives and local properties

15 Differentiation: Global Properties
  15.1 The Mean-Value Theorem
  15.4 L’Hopital’s rule for indeterminate forms

16 Linearization and Convexity
  16.1 Linearization
  16.2 Applications to convexity

17 Taylor Approximation
  17.1 Taylor polynomials
  17.2 Taylor’s theorem with Lagrange remainder
  17.3 Estimating error in Taylor’s approximation
  17.4 Taylor series

18 Integrability
  18.1 Introduction. Partitions.
  18.2 Integrability
  18.3 Integrability of monotone or continuous f
  18.4 Basic properties of integrable functions

19 The Riemann Integral
  19.3 Riemann sums
  19.4 Basic properties of the integral
  19.5 Interval addition property
  19.6 Piecewise properties

20 Derivatives and Integrals
  20.1 First fundamental theorem of calculus
  20.2 Second fundamental theorem of calculus
  20.3 Other relations between integrals and derivatives
  20.4 Logarithm and exponential
  20.5 Stirling’s Formula
  20.6 Growth rate of functions

21 Improper Integrals
  21.1 Basic Definitions
  21.2 Comparison theorems
  21.3 The Gamma function
  21.4 Absolute and conditional convergence

22 Sequences and Series of Functions
  22.1 Pointwise and uniform convergence
  22.2 Criteria for uniform convergence
  22.3 Continuity and uniform convergence
  22.4 Term-by-term integration
  22.5 Term-by-term differentiation
  22.6 Power series and analyticity

Chapter 0

Course Overview

The study of the real numbers, R, and of functions of a real variable, f(x) = y, where x, y are real.

Given f : R→ R which describes some system, how to study f?

• Need rigorous vocab for properties of f (definitions)

• Need to see when some properties imply others (theorems)

Result: can make inferences about the system.

Limits: the heart & soul of calculus.

Limits provide a rigorous basis for ideas like sequences, series, continuity, derivatives, integrals. More advanced: model an arbitrary function as a limit of a sequence of “nice” functions (polys, trigs) or as a sum of “nice” functions (Fourier, wavelets). All of this requires understanding limits of numbers.

Outline:

1. Logic: not, and, or, implication; rules of inference

2. Sets: elements, intersection, union, containment; special sets

3. The real numbers: algebraic properties (+,×), order properties (<), completeness

properties



4. Sequences: types of, convergence, basic results (arithmetic, etc.), subsequences, convergence, Cauchy sequences

5. Series: convergence tests, absolute convergence, power series

6. Functions: arith, behavior, continuity & limits, IVT, compact domains

7. Differentiation: MVT, L’Hopital, Taylor & linearization

8. Integrals: integrability and the Riemann integral

9. Special functions: exp, log, gamma

10. Seqs and series of functions

0.1 Logic and inference

Most theorems involve proving a statement of the form “if A is true, then B is true.” This

is written A =⇒ B and called if-then or implication. A is the hypothesis and B is the

conclusion. To say “the hypothesis is satisfied” means that A is true. In this case, one

can make the argument

A =⇒ B

A

B

and infer that B must therefore be true, also.

What does A =⇒ B mean? We use the more familiar connectives “and” and “or”

and “not” (¬) to describe it, via truth tables. Consider:

A  B  A and B
T  T  T
T  F  F
F  T  F
F  F  F

and

A  B  A or B
T  T  T
T  F  T
F  T  T
F  F  F

and

A  ¬A
T  F
F  T

Page 11: Math 311 Notes - Cal Polyepearse/resources/Math311...0.1 Logic and inference 11 A = ) B means that whenever A is true, B must also be true, i.e., it CANNOT be the case that A is true

0.1 Logic and inference 11

A =⇒ B means that whenever A is true, B must also be true, i.e., it CANNOT be the case that A is true and B is false: (A =⇒ B) ≡ ¬(A and ¬B). This means that the truth table for =⇒ can be found:

A  B  ¬B  A and ¬B  ¬(A and ¬B)
T  T  F   F          T
T  F  T   T          F
F  T  F   F          T
F  F  T   F          T

so the truth table for =⇒ is

A  B  A =⇒ B
T  T  T
T  F  F
F  T  T
F  F  T

A  B  ¬A  ¬B  A =⇒ B  ¬(A and ¬B)  ¬A or B  ¬B =⇒ ¬A  B =⇒ A  ¬A =⇒ ¬B
T  T  F   F   T         T              T         T           T         T
T  F  F   T   F         F              F         F           T         T
F  T  T   F   T         T              T         T           F         F
F  F  T   T   T         T              T         T           T         T

In particular, A =⇒ B is equivalent to ¬(A and ¬B), to ¬A or B, and to its contrapositive ¬B =⇒ ¬A, but not to its converse B =⇒ A, nor to ¬A =⇒ ¬B.

If A =⇒ B and B =⇒ A, then the statements are equivalent and we write “A if

and only if B” as A ⇐⇒ B,A ≡ B, or A iff B. This is often used in definitions.

A  B  A =⇒ B  B =⇒ A  (A =⇒ B) and (B =⇒ A)  A ⇐⇒ B
T  T  T         T         T                           T
T  F  F         T         F                           F
F  T  T         F         F                           F
F  F  T         T         T                           T

If you know that A ⇐⇒ B, then you can replace A with B (or v.v.) wherever it

appears. A ≡ B is like “=” for logical statements.

One last rule (DeMorgan):

A  B  ¬A  ¬B  ¬(A and B)  ¬A or ¬B  ¬(A or B)  ¬A and ¬B
T  T  F   F   F             F           F            F
T  F  F   T   T             T           F            F
F  T  T   F   T             T           F            F
F  F  T   T   T             T           T            T


Thus, ¬(A and B) ≡ (¬A or ¬B) and ¬(A or B) ≡ (¬A and ¬B).
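Optional aside: since each of these equivalences is a claim about finitely many truth assignments, it can be checked mechanically. A small Python sketch (purely illustrative, not part of the exercises):

    from itertools import product

    def implies(a, b):
        # A =⇒ B, encoded as (not A) or B
        return (not a) or b

    for A, B in product([True, False], repeat=2):
        assert implies(A, B) == (not (A and not B))        # A =⇒ B  ≡  ¬(A and ¬B)
        assert implies(A, B) == implies(not B, not A)      # contrapositive
        assert (not (A and B)) == ((not A) or (not B))     # DeMorgan
        assert (not (A or B)) == ((not A) and (not B))     # DeMorgan
    print("all equivalences hold for every truth assignment")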

Example 0.1.1. Thm: a bounded increasing sequence converges.

This means: If a sequence {an} is increasing and bounded, then it converges, i.e.,

({an} increasing) and ({an} bounded) =⇒ {an} converges.

Suppose we are considering the sequence where an = 1 − 1/n. We apply the theorem and

see that an must converge (to something?).

Suppose we are considering the sequence an = (−1)^n, which is known to diverge. The

theorem is still helpful; by contrapositive,

¬({an} converges) =⇒ ¬(({an} increasing) and ({an} bounded))

{an} diverges =⇒ ¬({an} increasing) or ¬({an} bounded),

using DeMorgan. So an is either not increasing or unbounded. However, an is bounded,

because every term is contained in the finite interval [−1, 1]. Thus, we can infer that an

must not be increasing. (Note: not increasing does not imply decreasing!)

How to prove A =⇒ B.

Direct proof.

1. Assume the hypothesis, i.e., assume A is true, just for now.

2. Apply this “fact” and other basic knowledge.

3. Show that B is true, based on all this.

Example 0.1.2 (direct pf). n odd =⇒ n^2 odd.

1. Assume n is an odd integer.

2. Then n = 2k + 1, for some integer k, so

n^2 = (2k + 1)^2 = 4k^2 + 4k + 1 = 2(2k^2 + 2k) + 1 = 2m + 1, where m = 2k^2 + 2k ∈ Z.

3. Thus, n^2 is odd.


Indirect proof: Proof by contrapositive.

(A =⇒ B) ≡ (¬B =⇒ ¬A),

so show ¬B =⇒ ¬A directly.

Example 0.1.3 (contrapositive). 3n + 2 odd =⇒ n odd.

The contrapositive is: n even =⇒ 3n + 2 even.

1. Assume n is an even integer.

2. Then n = 2k, for some integer k, so

3n + 2 = 3(2k) + 2 = 6k + 2 = 2(3k + 1) = 2m, for some m ∈ Z.

3. Thus, 3n + 2 is even.

Example 0.1.4 (contrapositive). n^2 even =⇒ n even.

This is just the contrapositive of Example 0.1.2.

Indirect proof: Proof by contradiction.

In order to show that A is true by contradiction,

1. assume that A is false (assume ¬A is true)

2. derive a contradiction (show that ¬A implies something which is clearly false/impossible)

Example 0.1.5 (contradiction). √2 is irrational.

1. Assume the negation of the statement: √2 = m/n, for some m, n ∈ Z.

2. If m, n have a common factor, we can cancel it out to obtain

√2 = a/b, in lowest terms.    (∗)

Then 2 = a^2/b^2, so 2b^2 = a^2.

This shows a^2 is even. But we just showed in the prev ex that

a^2 even =⇒ a even,

so a must be even. This means a = 2c for some integer c, so

2b^2 = (2c)^2 = 4c^2, hence b^2 = 2c^2.

This shows that b^2 is even. But then b must also be even, so a and b are both even, contradicting (∗).

Mathematical (weak) induction: how to prove statements of the form

P (n) is true for every n.

1. Basis step: show that P (0) or P (1) is true.

2. Induction step: show that P (n) =⇒ P (n + 1).

Example 0.1.6 (induction). The sum of the first n odd positive integers is n^2.

1. Basis step: the sum of the first 1 odd positive integer is 1 = 1^2.

2. Induction step: show that

[1 + 3 + 5 + · · · + (2n − 1) = n^2] =⇒ [1 + 3 + 5 + · · · + (2n − 1) + (2n + 1) = (n + 1)^2].

This is a statement A =⇒ B which we show directly, so assume A is true:

1 + 3 + 5 + · · · + (2n − 1) = n^2.

(This is the induction hypothesis.)

1 + 3 + 5 + · · · + (2n − 1) + (2n + 1) = (1 + 3 + 5 + · · · + (2n − 1)) + (2n + 1)
                                        = n^2 + (2n + 1)
                                        = (n + 1)^2.

Thus we have shown that B is true, based on the assumption A. Hence, we have

proven the statement: A =⇒ B.
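Optional: a quick numerical illustration of the identity just proved (checking finitely many cases in Python is not a proof, of course):

    for n in range(1, 101):
        odd_sum = sum(2*k - 1 for k in range(1, n + 1))   # 1 + 3 + ... + (2n - 1)
        assert odd_sum == n**2
    print("1 + 3 + ... + (2n-1) = n^2 checked for n = 1, ..., 100")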

Exercises: A.4.1, A.4.10 Problems: none Due: Jan. 29

1. Use the DeMorgan law to argue that ¬(A and ¬B) ≡ (¬A or B).

2. Use induction to show n! ≤ nn for every n ∈ N.

Chapter 1

Real Numbers and Monotone Seqs

1.1 Introduction. Real Numbers.

R is the set of real numbers.

How to define/consider them? Points on a line, decimal expansions, or ... ?

Want elements of R to satisfy certain properties. Whenever a, b ∈ R, need to know:

1. Arithmetic: a + b, a− b, a× b, a/b

2. Order: a < b, a ≤ b

3. Completeness: a bounded sequence of increasing numbers has a limit.

Other desirable things (which will follow from the above):

4. Archimedean: if a > 0, then for any N (no matter how large), we can find b ∈ R such that ab > N.

5. Distance: d(a, b) = |a− b|.

Points on a line and decimal expansions have problems: not so good for computation,

nonuniqueness, etc.



Alternative: we have 1,2,4,5 already for the rational numbers Q, so start with them.

R = Q ∪ {limits of points of Q}.

Recall: for Q = {p/q : p, q ∈ Z, q ≠ 0} we have

p/q + r/s = (ps + qr)/(qs),    (p/q) × (r/s) = (pr)/(qs).

This will give us (3), and the others.

Example of the main idea: understanding a + b, when a, b ∈ R.

1. By the defn of R, a = lim an, b = lim bn, where an, bn ∈ Q.

2. We know what an + bn is, since we know how + works in Q.

3. If lim an + lim bn = lim(an + bn), then define

a + b := lim(an + bn).

Same technique works for things more complicated than a + b.

(There are some technicalities, e.g., when is (3) true, indep of limit representation.)

1.2 Increasing sequences.

Definition 1.2.1. A sequence of numbers is an infinite ordered list

a1, a2, . . .

an is the nth term.

A sequence can be specified by giving

(i) the first few terms: {1, 1/2, 1/3, . . . },

(ii) an explicit formula for the nth term: {1/n}, or

(iii) a recurrence relation for the nth term: a1 = 1, an+1 = ((n − 1)/n)·an.

Example 1.2.1. The Fibonacci numbers can be described by

Page 17: Math 311 Notes - Cal Polyepearse/resources/Math311...0.1 Logic and inference 11 A = ) B means that whenever A is true, B must also be true, i.e., it CANNOT be the case that A is true

1.3 Limit of an increasing sequence. 17

(i) {1, 1, 2, 3, 5, 8, 13, 21, . . . }

(ii) { (1/√5)·((1 + √5)/2)^n − (1/√5)·((1 − √5)/2)^n }, or

(iii) a0 = 1, a1 = 1, an+2 = an+1 + an.

Definition 1.2.2. {an} is increasing iff an ≤ an+1, ∀n.

{an} is strictly increasing iff an < an+1, ∀n.

{an} is decreasing, (strictly decreasing) iff an ≥ an+1(an > an+1), ∀n.

1.3 Limit of an increasing sequence.

Suppose we are using decimal representations.

Definition 1.3.1. A real number L is the limit of an increasing sequence {an} if, given

any integer k > 0, all the an after some point in the sequence agree with L to k decimal

places:

L = lim_{n→∞} an, or an → L as n → ∞.

We say an converges (to L).

In symbols,

∀k ∈ N, ∃N such that, for n ≥ N, an agrees with L to k decimal places.

or,

∀k ∈ N, ∃N such that n ≥ N =⇒ |an − L| < 10^(−k),

or,

∀ε > 0,∃N such that n ≥ N =⇒ |an − L| < ε.

This last one doesn’t refer to the decimal expansion.

If a sequence has a limit, that limit is unique. A sequence may not have a limit, like {n} = {1, 2, 3, 4, . . . }, or the Fibonacci numbers.

Definition 1.3.2. A sequence {an} is bounded above if there is a number B ∈ R such

that an ≤ B, ∀n. This B is an upper bound for the sequence {an}.


Theorem 1.3.3. An increasing sequence which is bounded above has a limit.

[({an} is bounded above) and ({an} is increasing)] =⇒ {an} has a limit.

We will see why this is true in Ch. 6, using the notion of sup. If β ∈ R satisfies

1. an ≤ β, ∀n (so β is an upper bound of {an}), and

2. (an ≤ b,∀n) =⇒ β ≤ b,

then we call β the least upper bound (or supremum) of {an}:

β = sup an.

We will see that for a pos, incr seq, lim an = sup an.

1.4 Example: e

We use two results from discrete math.

Binom formula:

(1 + x)^k = 1 + kx + · · · + (k choose i)·x^i + · · · + x^k

Geometric sum (finite):

1 + r + r^2 + · · · + r^n = (1 − r^(n+1))/(1 − r).

When r = 1/2, this gives 1 + 1/2 + 1/4 + · · · + 1/2^n = 2(1 − 1/2^(n+1)) < 2.

Theorem 1.4.1. The sequence an = (1 + 1/2^n)^(2^n) has a limit. (The limit is e.)

Proof. By Thm. 1.3.3, NTS (need to show) an bdd & incr. Since n → ∞, it suffices to consider n ≥ 2.

an is incr: need (1 + 1/2^n)^(2^n) < (1 + 1/2^(n+1))^(2^(n+1)).

b ≠ 0 =⇒ b^2 > 0 =⇒ (1 + b)^2 > 1 + 2b    (why? assigned as an exercise below)

        =⇒ ((1 + b)^2)^(2^n) > (1 + 2b)^(2^n)

        =⇒ (1 + 1/2^(n+1))^(2^(n+1)) > (1 + 1/2^n)^(2^n),    taking b = 1/2^(n+1).


an is bounded above. First, note that

k(k − 1)·...·(k − i + 1) ≤ k^i,   and   1/i! = (1/i)·(1/(i − 1))·...·(1/2) ≤ (1/2)^(i−1).

Then

(1 + 1/k)^k = 1 + k·(1/k) + · · · + [k(k − 1)·...·(k − i + 1)/i!]·(1/k)^i + · · · + (k!/k!)·(1/k)^k

            ≤ 1 + k·(1/k) + · · · + k^i·(1/2)^(i−1)·(1/k)^i + · · · + k^k·(1/2)^(k−1)·(1/k)^k

            = 1 + 1 + 1/2 + · · · + 1/2^(i−1) + · · · + 1/2^(k−1)

            < 1 + 2 = 3.

So 3 is an upper bound for an.
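Optional numerical sketch (Python): the terms an = (1 + 1/2^n)^(2^n) do increase, stay below 3, and creep up toward e ≈ 2.71828.

    import math

    for n in range(1, 12):
        k = 2**n
        print(n, (1 + 1/k)**k)    # increasing, bounded above by 3
    print("e =", math.e)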

1.5 Harmonic sum and Euler’s gamma

Definition 1.5.1. The harmonic sum or harmonic series is the infinite sum

1 + 1/2 + 1/3 + 1/4 + · · · = Σ_{n=1}^∞ 1/n.

Proposition 1.5.2. Let an = 1 + 1/2 + 1/3 + 1/4 + · · · + 1/n, so that an is the nth partial sum of the harmonic series. Then an has no upper bound (hence the infinite sum diverges).

Proof. Write the (2^k)th term

a_{2^k} = 1 + 1/2 + (1/3 + 1/4) + (1/5 + 1/6 + 1/7 + 1/8) + · · · + (1/(2^(k−1) + 1) + · · · + 1/2^k)

        > 1 + 1/2 + (1/4 + 1/4) + (1/8 + 1/8 + 1/8 + 1/8) + · · · + (1/2^k + · · · + 1/2^k)    [the last group has 2^(k−1) terms]

        = 1 + 1/2 + (k − 1)·(1/2).

So an becomes arbitrarily large.

Theorem 1.5.3. Let bn = 1 + 1/2 + 1/3 + · · · + 1/n − log(n + 1), n ≥ 1. Then {bn} converges.


Proof. Show bn is increasing and bounded above. (Sketch: the rectangles have height 1/k over [k, k + 1], k = 1, . . . , n, and the curve is y = 1/x; the difference consists of n “trianglets” T1, . . . , Tn.)

bn = area of rectangles − area under curve = T1 + · · · + Tn.

Each trianglet has positive area, so Ti ≥ 0 implies

bn+1 = bn + Tn+1 =⇒ {bn} increasing.

Each trianglet can be horizontally translated into the initial rectangle, so bn is bounded above by 1.

If the curved “hypotenuses” were replaced by straight lines,

γ = T1 + T2 + . . .

would be exactly half the area of the original rectangle, so 1/2. Thus, 1/2 ≤ γ ≤ 1. However, γ ≈ 0.577 . . . .
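Optional numerical sketch (Python) of Theorem 1.5.3; the limit is Euler’s constant γ ≈ 0.5772.

    import math

    def b(n):
        # b_n = 1 + 1/2 + ... + 1/n - log(n + 1)
        return sum(1/k for k in range(1, n + 1)) - math.log(n + 1)

    for n in [10, 100, 1000, 10000]:
        print(n, b(n))    # increasing, bounded above by 1, tending to gamma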

1.6 Decreasing seqs, Completeness

Eventually (Chap. 6), R is complete because every nonempty subset A ⊆ R which is

bounded above has a least upper bound supA ∈ R. Note: sup A is an element of R; it may

not be an element of A.

Until then: we continue to use monotone sequences.

Definition 1.6.1. A real number L is the limit of a decreasing sequence {an} if, given

any integer k > 0, all the an after some point in the sequence agree with L to k decimal

places:

L = limn→∞

an, or ann→∞−−−−−→ L.

We say an converges (to L).

When rephrased in symbols, it is identical to previous:

∀ε > 0,∃N such that n ≥ N =⇒ |an − L| < ε.


Note: for monotonic sequences, sometimes write

an ↗ L, or an ↘ L.

Definition 1.6.2. {an} is bounded below if there is a number B ∈ R such that an ≥ B, ∀n. This B is a lower bound for the sequence {an}.

Theorem 1.6.3. A positive decr seq has a limit.

Note: ({an} positive) ≡ (an ≥ 0, ∀n) ≡ ({an} is bounded below by 0).

Definition 1.6.4. {an} is bounded iff it is bounded above and bounded below.

Definition 1.6.5. {an} is monotone iff it is increasing or decreasing.

Theorem 1.6.6 (Completeness). A bounded monotone sequence in R has a limit.

Exercises: 1.2.1, 1.3.1, 1.4.2, 1.5.1, 1.5.2 Problems: 1-1, 1-2

Due: Jan. 29

1. Prove that 1.0000 · · · = 0.9999 . . . using the geometric series formula:

1 + r + r^2 + r^3 + · · · = Σ_{n=0}^∞ r^n = 1/(1 − r), for |r| < 1.

Hint: use r = 1/10.

2. Briefly explain why (1 + b)^2 > 1 + 2b in the proof of the Thm. in §1.4.1.

3. Briefly explain why the conclusion of the proof of Thm. in §1.5.2 follows from the

Archimedean Property of R.

Chapter 2

Estimates and approximation

2.1 Introduction. Inequalities.

Inequalities: for making comparisons.

Absolute values: for measuring size and distance.

a is (strictly) positive iff a > 0; a is nonnegative iff a ≥ 0.

Properties of <:

1. a ≮ a

2. a < b =⇒ b ≮ a

3. a < b and b < c implies a < c (transitivity)

Properties of ≤:

1. a ≤ a

2. Either a ≤ b or b ≤ a is true. If both are true, write a = b.

3. a ≤ b and b ≤ c implies a ≤ c (transitivity)

Note: the negation of a < b is b ≤ a, not b < a.

Arithmetic with inequalities.

• Addition: for any a, b ∈ R, a < b, c < d =⇒ a + c < b + d.


Page 24: Math 311 Notes - Cal Polyepearse/resources/Math311...0.1 Logic and inference 11 A = ) B means that whenever A is true, B must also be true, i.e., it CANNOT be the case that A is true

24 Math 311 Estimates and approximation

• Multiplication: for any a, b, c, d > 0, a < b, c < d =⇒ ac < bd.

• Negation: for any a, b ∈ R, a < b =⇒ −a > −b.

• Reciprocals: for any a, b > 0, a < b =⇒ 1/a > 1/b.

Note. This is not a theorem (yet) but it is a handy rule:

1. Functions with everywhere positive derivatives are monotonic increasing, and hence

order-preserving. E.g., f(x) = log(x) has derivative f′(x) = 1/x. Then

a < b =⇒ log a < log b.

2. Functions with everywhere negative derivatives are monotonic decreasing, and hence

order-reversing. E.g., f(x) = e^(−x) has derivative f′(x) = −e^(−x). Then

a < b =⇒ e^(−a) > e^(−b).

3. Of course, some functions are neither. E.g., f(x) = cos x. Then if a < b, you know

nothing about cos a or cos b.

The Negation rule is illustrated by f(x) = −x, where f′(x) = −1 < 0, ∀x.

The Reciprocal rule is illustrated by f(x) = 1/x, where f′(x) = −1/x^2 < 0, for x > 0.

2.2 Estimations

Definition 2.2.1. If a < b < c, then a is a lower estimate for b, and c is an upper estimate

for b. The same is said if a ≤ b ≤ c.

Definition 2.2.2. If a < a′ < b < c′ < c, then a′ and c′ are stronger estimates (a′ is

stronger than a, etc.) and a, c are weaker. “stronger” = more info.

Example 2.2.1. If 0 < x < 1, then we have the estimates 0 < x^2 < 1 and 1 < 1/x.

Example 2.2.2. Give upper and lower estimates for 1/(1 + x^2).

x^2 ≥ 0 =⇒ 1 + x^2 ≥ 1 =⇒ 1/(1 + x^2) ≤ 1

x^2 ≥ 0 =⇒ 1 + x^2 ≥ 1 > 0 =⇒ 1/(1 + x^2) > 0


Example 2.2.3. If |1 − 3x| < 2, give upper and lower estimates for x.

|1 − 3x| < 2 ⇐⇒ −2 < 1 − 3x < 2
            ⇐⇒ −3 < −3x < 1
            ⇐⇒ 3 > 3x > −1
            ⇐⇒ 1 > x > −1/3.

Example 2.2.4. Use 2 < √5 < 3 to show 1/4 < (√5 − 1)/(√5 + 1) < 2/3.

(√5 − 1)/(√5 + 1) < (3 − 1)/(√5 + 1) = 2/(√5 + 1) < 2/(2 + 1) = 2/3

(√5 − 1)/(√5 + 1) > (2 − 1)/(√5 + 1) = 1/(√5 + 1) > 1/(3 + 1) = 1/4

2.3 Proving boundedness

To show that {an} is bounded: find one upper estimate an ≤ B, ∀n.

To show that {an} is not bounded: find lower estimate for each term, an ≥ Bn, with

Bn →∞.

Example 2.3.1. Earlier, we showed bk = (1 + 1/k)^k < 3.

Example 2.3.2. Earlier, we showed a_{2^n} = Σ_{k=1}^{2^n} 1/k > 3/2 + (n − 1)/2.

2.4 Absolute values. Estimating size.

Definition 2.4.1. The absolute value of a ∈ R is its size; i.e., its distance from 0:

|a| = { a, if a ≥ 0;  −a, if a < 0 }.

The size of the difference between a and b is the distance from a to b:

|a− b| = dist(a, b).


Note: |a| = |a− 0|.

In higher dimensions (say in the plane, R^2) the defn remains the same. (sketch):

S^1 = {x ∈ R^2 : |x| = 1},    B_1 = {x ∈ R^2 : |x| < 1}

Theorem 2.4.2. Let a ∈ R.

(i) |a| ≥ 0 and |a| = 0 =⇒ a = 0.

(ii) |a| < M is the same thing as −M < a < M .

Note: |a− b| < M means that the distance from a to b is less than M :

a ∈ (b−M, b + M) or b ∈ (a−M, a + M).

If you want to show that a is close to b, show that |a− b| < ε, where ε > 0 is very small.

Theorem 2.4.3 (Absolute Value Laws).

(i) Product law: |ab| = |a||b|. This implies division law.

(ii) Triangle inequality: |a + b| ≤ |a|+ |b|.

Proof 1. Use Thm. 2.4.2 (ii) twice: −|a| ≤ a ≤ |a| and −|b| ≤ b ≤ |b|; adding, −(|a| + |b|) ≤ a + b ≤ |a| + |b|, so |a + b| ≤ |a| + |b|.

Proof 2. |a + b|^2 = (a + b)(a + b) = a^2 + 2ab + b^2 ≤ a^2 + 2|a||b| + b^2 = (|a| + |b|)^2.

(iii) |a− b| ≥ |a| − |b|, |a + b| ≥ |a| − |b|, and ||a| − |b|| ≤ |a− b|.

Example 2.4.1. Fourier analysis:

Sn = c1 cos t + c2 cos 2t + · · · + cn cos nt.

If ci = 1/2^i, give an upper estimate for Sn.

Solution. Since |cos x| ≤ 1, ∀x ∈ R,

|Sn| ≤ |c1||cos t| + |c2||cos 2t| + · · · + |cn||cos nt| ≤ (1/2)·1 + (1/4)·1 + · · · + (1/2^n)·1 < 1

Theorem 2.4.4. {an} is bounded ⇐⇒ there is a B such that |an| ≤ B, ∀n.

Proof. (⇒) The hypothesis means that K ≤ an ≤ L for all n. Then take B = max(|K|, |L|). (⇐) |an| ≤ B implies that −B ≤ an ≤ B, so {an} is bounded.

2.5 Approximation

Theorem 2.5.1 (Density of Q in R). Let a < b be real numbers. Then

(i) ∃r ∈ Q such that a < r < b, and

(ii) ∃s ∈ R \Q such that a < s < b.

Proof. (i) b − a > 0, so we can find n such that n(b − a) > 1 by the Archimedean property. Then 1 + na < nb. Let m be an integer such that m − 1 ≤ na < m. (This is possible, since ⋃_{m∈Z} [m − 1, m) = R.) Then m ≤ 1 + na, so

na < m ≤ 1 + na < nb

a < m/n < b.

(ii) From (i) we have a < r < b, with r ∈ Q. Since √2 is irrational, so is c·√2 for rational c ≠ 0, and so is the sum of a rational with an irrational; hence r + √2/n is irrational. For n large enough,

a < r + √2/n < b.

Definition 2.5.2. a ≈ε b means |a− b| < ε.

Example: 2 ≈2 3 but it is not true that 2 ≈1 3.

Given x ∈ R and ε > 0, Thm. 2.5.1 says one can thus always find r ∈ Q such that

x ≈ε r.

Theorem 2.5.3. (i) a ≈ε b and b ≈ε′ c =⇒ a ≈ε+ε′ c. (transitivity)


(ii) a ≈ε a′ and b ≈ε′ b′ =⇒ a + b ≈ε+ε′ a′ + b′. (addition)

2.6 “for n large”

Definition 2.6.1. The sequence {an} has property P for n large (write “n >> 1”) if there

is a number N ∈ N such that an has property P for all n ≥ N .

Example 2.6.1. 1/n < 0.001 for n >> 1.

Example 2.6.2. n^2/2^n is decreasing for n >> 1.

Example 2.6.3. If {an} is bounded above for n >> 1, then it is bounded above.

Proof. The hypothesis means that there is a B and an N such that an ≤ B for n ≥ N. Let M = max{a1, a2, . . . , a_{N−1}, B}. Then an ≤ M < ∞ for all n.

Example 2.6.4. {an} and {bn} increasing for n >> 1 =⇒ {an + bn} is, too.

Proof. By hypothesis,

an ≤ an+1 for n ≥ N1, and

bn ≤ bn+1 for n ≥ N2.

Choose N ≥ N1, N2. Then

an ≤ an+1 for n ≥ N, and

bn ≤ bn+1 for n ≥ N, so

an + bn ≤ an+1 + bn+1 for n ≥ N.

Question 1. What does it mean if |a − b| < 1/n, ∀n ∈ N?

Exercises: 2.1.2, 2.2.1, 2.4.2, 2.4.7, 2.5.2, 2.6.1 Problems: 2-1 Due:

Feb. 5

1. Prove n! ≤ n^n for every n ∈ N, without using induction.

2. Prove that ||x| − |y|| ≤ |x− y| for x, y ∈ R.

Chapter 3

The Limit of a Sequence

3.1 Definition of limit.

Definition 3.1.1. The number L is the limit of the sequence {an} iff given ε > 0, we

have an ≈ε L for n >> 1:

∀ε > 0, ∃N such that n ≥ N =⇒ |an − L| < ε.

That is, an is a good approximation to L for large n. And it can be made increasingly

better by taking n larger.

To prove a limit: typically, find a rule for N in terms of ε.

Example 3.1.1. Prove lim_{n→∞} (n − 1)/(n + 1) = 1.

Solution. Given ε > 0, we need to show (n − 1)/(n + 1) ≈ε 1 for n >> 1. That is, we need

|(n − 1)/(n + 1) − 1| = |(n − 1)/(n + 1) − (n + 1)/(n + 1)| = 2/(n + 1) < ε.

Observe: 2/(n + 1) < ε ⇐⇒ 2/ε < n + 1. So for N = 2/ε − 1, the estimate will hold for n ≥ N.
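Optional sketch (Python) of the ε–N bookkeeping in this example; the helper name N_for is just for illustration.

    def N_for(eps):
        # from the solution: 2/(n + 1) < eps whenever n >= 2/eps - 1
        return 2/eps - 1

    for eps in [0.1, 0.01, 0.001]:
        n = int(N_for(eps)) + 1                          # any integer n >= N works
        print(eps, n, abs((n - 1)/(n + 1) - 1) < eps)    # True each time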

Example 3.1.2. Prove lim_{n→∞} (√(n + 1) − √n) = 0.


Proof. Use the identity A − B = (A^2 − B^2)/(A + B) to get

|√(n + 1) − √n| = 1/(√(n + 1) + √n) < 1/(2√n).

Then since

1/(2√n) < ε ⇐⇒ 1/(2ε) < √n ⇐⇒ 1/(4ε^2) < n,

we will have the required estimate for n ≥ N = 1/(4ε^2).

Theorem 3.1.2. an → 0 ⇐⇒ |an| → 0.

Proof. HW.

3.2 The uniqueness of limits. The K − ε principle.

Theorem 3.2.1 (Uniqueness of limits). A sequence an has at most one limit.

Proof. We must show that (an → L) and (an → L′) =⇒ L = L′. Assume that both L

and L′ are limits of an and suppose, by way of contradiction, that L ≠ L′. Then we may choose ε = (1/2)|L − L′|, so that ε > 0 and 2ε = |L − L′|. From the assumptions, we have

an ≈ε L and an ≈ε L′, for sufficiently large n. Thus,

L ≈ε an ≈ε L′ =⇒ L ≈2ε L′ ⇐⇒ |L − L′| < 2ε = |L − L′|, contradicting a ≮ a.

Theorem 3.2.2 (Incr Seq Thm). {an} is increasing, lim an = L =⇒ an ≤ L,∀n.

Proof. We show the contrapositive (WHAT IS IT?), so suppose ak > L for some k. Now:

1. If {an} is increasing, then |an − L| ≥ |ak − L| > 0 for n ≥ k, so lim an ≠ L.

2. If lim an = L, then for ε = (ak − L), we can find N such that an ≈ε L for n ≥ N.

But then an < ak for all n ≥ N, so for n ≥ max(N, k) the sequence is not increasing.

Theorem 3.2.3. {an} is decreasing and lim an = L =⇒ an ≥ L for all n.


Theorem 3.2.4. lim an = a and lim bn = b =⇒ lim(an + bn) = a + b.

Page 31: Math 311 Notes - Cal Polyepearse/resources/Math311...0.1 Logic and inference 11 A = ) B means that whenever A is true, B must also be true, i.e., it CANNOT be the case that A is true

3.3 Infinite limits. 31

Proof. Given ε > 0, we can find N1, N2 such that

n ≥ N1 =⇒ |an − a| < ε, and n ≥ N2 =⇒ |bn − b| < ε.

Then let N = max(N1, N2) and

n ≥ N =⇒ |(an + bn) − (a + b)| = |(an − a) + (bn − b)| ≤ |an − a| + |bn − b| < ε + ε = 2ε.

Theorem 3.2.5 (The K-ε principle). Suppose {an} is a sequence such that, for any ε > 0, an ≈_{Kε} L for n >> 1, where K > 0 is a fixed constant. Then lim an = L.

3.3 Infinite limits.

Definition 3.3.1. lim an = ∞ means that for any M > 0, we have an > M for n >> 1.

Then an tends to infinity : an →∞.

Example 3.3.1. {log n} tends to ∞.

Since log x is monotone increasing, it is order-preserving:

a < b =⇒ log a < log b.

Given M > 0, we then have

n > e^M =⇒ log n > log e^M = M.

3.4 An important limit.

Theorem 3.4.1 (The limit of an).

limn→∞

=

∞, a > 1,

1, a = 1,

0, |a| < 1.


Proof. Case a > 1: let M > 0 be given. Since a = 1 + x for some x > 0,

a^n = (1 + x)^n = 1 + nx + · · · + (n choose k)·x^k + · · · + x^n.

All the terms on the right are positive, so

a^n > 1 + nx > M ⇐⇒ n > (M − 1)/x.

So choose N = (M − 1)/x. Case a = 1: obvious.

Case |a| < 1: let ε > 0 be given. If a = 0 the claim is clear; otherwise

|a| < 1 =⇒ 1/|a| > 1 =⇒ (1/|a|)^n > 1/ε for n >> 1 (by the first case) =⇒ |a|^n < ε for n >> 1.

It is also useful to note that for c > 0,

0 < a < 1 =⇒ 0 < ac < c =⇒ 0 < a·(a^n) < a^n =⇒ a^(n+1) < a^n.

So the sequence is monotone decreasing for 0 < a < 1. Also,

a > 1 =⇒ ac > c =⇒ a·(a^n) > a^n =⇒ a^(n+1) > a^n,

so the sequence is monotone increasing for a > 1.

Example 3.4.1. Let f1(x) = x, f2(x) = x^2, . . . , fn(x) = x^n be a sequence of functions, each defined on [0, 1]. Define a new function as the pointwise limit:

f(x) := lim_{n→∞} fn(x) = lim_{n→∞} x^n = 0 for 0 ≤ x < 1, and = 1 for x = 1.

Each of the functions fn(x) = x^n is continuous, but the limit f(x) is not continuous!
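Optional numerical sketch (Python): for a fixed x < 1 the powers x^n eventually die out, but the closer x is to 1, the longer it takes; at x = 1 they stay at 1.

    for x in [0.5, 0.9, 0.99, 1.0]:
        print(x, [round(x**n, 4) for n in (1, 10, 100, 1000)])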


3.5 Writing limit proofs.

Read on your own ... each student will get something different.

(Here the notes compare two side-by-side layouts of the same chain of estimates, e.g. A < n(n + 1) = n^2 + n.)

Grading: note §3.5

3.6 Some limits involving integrals.

Example 3.6.1. Let an := ∫_0^1 (x^2 + 2)^n dx. Show that lim an = ∞.

Solution. Estimate the integrand from below:

x^2 + 2 ≥ 2, ∀x

(x^2 + 2)^n ≥ 2^n, ∀x, n

∫_0^1 (x^2 + 2)^n dx ≥ ∫_0^1 2^n dx = 2^n.    (to be shown ...)

Since 2^n → ∞, an → ∞: given M > 0,

n > log_2 M =⇒ ∫_0^1 (x^2 + 2)^n dx ≥ 2^n ≥ M.

Question 2. Let an := ∫_0^1 (x + 2)^n dx. If you are to show that lim an = ∞, how would the argument differ from the above?

Example 3.6.2. Show ∫_0^1 (x^2 + 1)^n dx → ∞.

Solution. The previous argument gives (x^2 + 1)^n ≥ 1^n = 1, useless.

Since f(x) = x^2 + 1 is increasing with f(0.1) = 1.01,

x^2 + 1 ≥ 1.01 > 1, for 0.1 ≤ x ≤ 1

(x^2 + 1)^n ≥ (1.01)^n, for 0.1 ≤ x ≤ 1

∫_0^1 (x^2 + 1)^n dx ≥ ∫_{0.1}^1 (1.01)^n dx = (9/10)·(1.01)^n → ∞.


3.7 Another limit involving integrals.

Example 3.7.1. Let an := ∫_0^{π/2} sin^n x dx. Determine lim an.

Solution. Observe:

0 ≤ x ≤ π/2 =⇒ 0 ≤ sin x ≤ 1 =⇒ 0 ≤ sin^n x ≤ 1.

Then fn(x) = sin^n x → 0, for every value of x except x = π/2, since fn(π/2) = 1^n → 1. We expect lim ∫ fn = ∫ lim fn = ∫ 0 = 0 (NOT generally true!)

Given ε > 0, we will show the area under sin^n x is less than 2ε on this interval, for n >> 1. Divide the interval at the point a := π/2 − ε.

left-hand area < L = area of flat rectangle = a·sin^n a

right-hand area < R = area of tall rectangle = ε.

Then sin a < 1 =⇒ sin^n a < ε/a =⇒ a·sin^n a < ε, for n >> 1.

∫_0^{π/2} sin^n x dx = total area < L + R < 2ε, n >> 1.
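Optional numerical check (Python sketch, using a crude left-endpoint Riemann sum; the step count 10000 is an arbitrary choice):

    import math

    def integral_sin_pow(n, steps=10000):
        # left-endpoint Riemann sum for the integral of sin^n(x) over [0, pi/2]
        h = (math.pi/2)/steps
        return sum(math.sin(i*h)**n for i in range(steps)) * h

    for n in [1, 5, 20, 100, 1000]:
        print(n, integral_sin_pow(n))    # decreases toward 0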

Exercises: 3.2.3, 3.3.2, 3.4.1, 3.4.3, 3.7.1 Rec: #3.3.1, 3.3.3, 3.4.2,

3.4.4 Problems: 3-1, Rec: 3-4 Due: Feb. 5

1. Show that en → 0 ⇐⇒ |en| → 0.

Chapter 4

Error Term Analysis

4.1 The error term

Previously: an → L.

Now: how fast does an → L?

Example 4.1.1.

an = 1 − 1/2 + 1/3 − 1/4 + · · · + (−1)^(n−1)/n → log 2

bn = 2/(1·3) + 2/(3·3^3) + 2/(5·3^5) + · · · + 2/((2n − 1)·3^(2n−1)) → log 2

But a100 = a99 − 1/100 is still changing the second decimal place. By contrast, b3 is already accurate to three decimal places.
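Optional numerical comparison (Python sketch):

    import math

    def a(n):    # partial sums of the alternating harmonic series
        return sum((-1)**(k - 1)/k for k in range(1, n + 1))

    def b(n):    # partial sums of the second series
        return sum(2/((2*k - 1)*3**(2*k - 1)) for k in range(1, n + 1))

    print(math.log(2))     # 0.693147...
    print(a(100), b(3))    # a(100) is still off in the third decimal; b(3) already agrees to three places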

Example 4.1.2. For a converging sequence an → L, the error term is en = an − L.

Theorem 4.1.1. Let an = L + en. Then an → L ⇐⇒ en → 0.

Note: en → 0 ⇐⇒ |en| → 0, so it’s fine to define the error term as en = L− an.


4.2 Geometric series error term.

Recall: for |r| < 1, we have

1 + r + r^2 + · · · + r^n + · · · = Σ_{k=0}^∞ r^k = 1/(1 − r).

Theorem 4.2.1. Take an := 1 + r + r^2 + · · · + r^n = Σ_{k=0}^n r^k, with |r| < 1. Then lim an = 1/(1 − r).

Proof.

an = 1 + r + r^2 + · · · + r^n

r·an = r + r^2 + r^3 + · · · + r^(n+1)

an − r·an = 1 − r^(n+1)

an·(1 − r) = 1 − r^(n+1)

an = (1 − r^(n+1))/(1 − r) = 1/(1 − r) − r^(n+1)/(1 − r)

The error term is en = r^(n+1)/(1 − r) → 0 by Thm. 3.4.1. So an → L by Thm. 4.1.1.

Example 4.2.1. Show bn = 1 − 1/2 + 1/3 − 1/4 + · · · + (−1)^(n−1)/n → log 2.

Solution. Put r = −u into the geometric series and then integrate:

1 − u + u^2 − u^3 + · · · + (−1)^(n−1)·u^(n−1) = 1/(1 + u) − (−1)^n·u^n/(1 + u),    u ≠ −1

1 − 1/2 + 1/3 − 1/4 + · · · + (−1)^(n−1)/n = log 2 ± ∫_0^1 u^n/(1 + u) du.

(Since ∫_0^1 u^k du = [u^(k+1)/(k + 1)]_0^1 = 1/(k + 1).) The error term is

en = ∫_0^1 u^n/(1 + u) du ≤ ∫_0^1 u^n du = 1/(n + 1) → 0.

4.3 Newton’s method

Let α ∈ R. Then Newton’s method can (often) find a sequence an → α, where an is

defined in terms of an−1.


How to find a zero of a function f(x)? Pick a0 nearby. Then

f′(an) = f(an)/(an − an+1) =⇒ an+1 = an − f(an)/f′(an).

Example 4.3.1. Find a sequence an → √2.

Solution. We need a zero of f(x) = x^2 − 2, so

an+1 = an − (an^2 − 2)/(2an) = (1/2)·(an + 2/an).

Use Thm. 4.1.1: the error term is en = an − √2, so

en+1 = an+1 − √2

     = (1/2)·(an + 2/an) − √2

     = (1/2)·((en + √2) + 2/(en + √2)) − √2

     = en^2/(2(√2 + en))

     ≤ en^2/(2(√2 − |en|))          since |√2 + en| ≥ √2 − |en|.

So if we pick a0 within ε = 1/2 of √2, then |e0| < 1/2, and

|en| < 1/2 =⇒ |en+1| ≤ en^2/(2(√2 − |en|)) ≤ en^2/(2(1 − 1/2)) = en^2 ≤ (1/2)·|en|.

So |e0| < 1/2 =⇒ |en| ≤ (1/2)^n·|e0| → 0, by Thm. 3.4.1.

Why ε = 1/2?
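A Python sketch of the iteration, displaying the error en = an − √2 at each step (start a0 = 1, which is within 1/2 of √2):

    import math

    a = 1.0                              # a_0
    for n in range(6):
        print(n, a, a - math.sqrt(2))    # error roughly squares at each step
        a = 0.5*(a + 2/a)                # a_{n+1} = (a_n + 2/a_n)/2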

4.4 The Fibonacci numbers

The Fibonacci sequence 0, 1, 1, 2, 3, 5, 8, 13, 21, . . . is defined by

F0 = 0, F1 = 1, Fn+2 = Fn+1 + Fn.


Consider the sequence an of rational numbers

1/1, 1/2, 2/3, 3/5, 5/8, . . . .

This sequence satisfies

an+1 = 1/(an + 1).    (∗)

Let us assume that M = lim an. Then M should satisfy

M = 1/(M + 1) =⇒ M^2 + M − 1 = 0    (∗∗)

              =⇒ M = (√5 − 1)/2 ≈ 0.618 . . .    (∗ ∗ ∗)

The error term is en = an − M, so

en+1 = an+1 − M = 1/(an + 1) − M                     by (∗)

     = 1/(en + M + 1) − M

     = (1 − M − M^2 − M·en)/(en + M + 1)

     = − M·en/(en + M + 1)                            by (∗∗)

     = − (√5 − 1)·en/(2en + √5 + 1)                   by (∗ ∗ ∗)

|en+1| ≤ (√5 − 1)·|en|/(√5 + 1 − 2|en|)               since |2en + √5 + 1| ≥ √5 + 1 − 2|en|

       ≤ (3 − 1)·|en|/(1 + 2 − 2|en|)                 using 2 < √5 < 3

       = 2·|en|/(3 − 2|en|)

Now suppose |en| < ε = 1/2. Then

|en+1| < 2·|en|/(3 − 2|en|) < 2·|en|/(3 − 1) = |en|.

Hmm ... this doesn’t help. Try ε = 1/10. Then

|en+1| < 2·|en|/(3 − 2|en|) < 2·|en|/(30/10 − 2/10) = (20/28)·|en| = (5/7)·|en|

(and in particular |en+1| < 1/10 again, so the estimate keeps applying). Then |en+1| ≤ (5/7)·|en| ≤ (5/7)·(5/7)·|en−1| = · · · = (5/7)^(n+1)·|e0| → 0.
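Optional Python sketch of the ratios an = Fn/Fn+1 approaching M = (√5 − 1)/2 ≈ 0.618:

    import math

    M = (math.sqrt(5) - 1)/2
    F = [0, 1]
    for _ in range(15):
        F.append(F[-1] + F[-2])          # build Fibonacci numbers

    for n in range(1, 10):
        a_n = F[n]/F[n + 1]              # 1/1, 1/2, 2/3, 3/5, 5/8, ...
        print(n, a_n, abs(a_n - M))      # the error shrinks roughly geometrically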

Exercises: 4.1.1, 4.3.3 Recommended: #4.2.1, 4.4.1

Problems: 4-1 Recommended: #4-2

Due: Feb. 5

1. Show that en → 0 ⇐⇒ |en| → 0.

Chapter 5

The Limit Theorems

5.1 Limits of sums, products, quotients

Theorem 5.1.1. Let an → L and bn → M , where L,M ∈ R.

(i) ∀r, s ∈ R, ran + sbn → rL + sM . (linearity)

(ii) anbn → LM . (multiplicativity)

(iii) bn/an → M/L if L ≠ 0 and an ≠ 0, ∀n.

(Compare the proof that an + bn is increasing.)

(i). Given k ∈ N, we can find N1, N2 such that

n ≥ N1 =⇒ |an − L| < 1/k,   and   n ≥ N2 =⇒ |bn − M| < 1/k.

Then let N = max(N1, N2). For n ≥ N, we have

|(an + bn) − (L + M)| = |(an − L) + (bn − M)| ≤ |an − L| + |bn − M| < 1/k + 1/k = 2/k.


(ii). (diff from book)

Given k ∈ N, we can again find N1, N2 such that

n ≥ N1 =⇒ |an − L| < 1/k,   and   n ≥ N2 =⇒ |bn − M| < 1/k.

Then let N = max(N1, N2) and

|an·bn − L·M| = |an·bn − an·M + an·M − L·M|
             ≤ |an·bn − an·M| + |an·M − L·M|       ∆ ineq
             ≤ |an|·|bn − M| + |M|·|an − L|        mult law for | · |
             < |an|·(1/k) + |M|·(1/k)              estimates for n ≥ N
             < (J + |M|)·(1/k)                     convergent sequences are bounded, Prob. 3-4

(iii). First, show 1/an → 1/L.

|1/an − 1/L| = |(L − an)/(an·L)| = |L − an|/(|an|·|L|).

So choose N such that

n ≥ N =⇒ |an − L| < (1/2)|L| =⇒ |an| > (1/2)|L| =⇒ |L − an|/(|an|·|L|) < |L − an|/(|L|^2/2) < (2/|L|^2)·ε, n >> 1.

By the K-ε principle, 1/an → 1/L; then bn/an = bn·(1/an) → M·(1/L) = M/L by (ii).

Example 5.1.1.

lim (n^2 − 3n)/(n^3 − 2n − 1) = lim (1/n − 3/n^2)/(1 − 2/n^2 − 1/n^3)

    = lim(1/n − 3/n^2) / lim(1 − 2/n^2 − 1/n^3)                        (iii)

    = (lim 1/n − 3·lim 1/n^2) / (lim 1 − 2·lim 1/n^2 − lim 1/n^3)       (i)

    = (0 − 3·0)/(1 − 2·0 − 0) = 0


Question 3. Prove that lim an^2 = 0 =⇒ lim an = 0.

Solution. The contrapositive is lim an ≠ 0 =⇒ lim an^2 ≠ 0, so suppose lim an = L ≠ 0. Then

lim an^2 = lim an · lim an = L^2 ≠ 0.

LIES! LIES! LIES! Cannot assume lim an exists.

Theorem 5.1.2. 1. If an →∞ and {bn} bounded below, then an + bn →∞.

2. If an →∞ and bn ≥ c > 0 for n >> 1, then anbn →∞.

3. If an →∞, then 1/an → 0.

4. If an > 0, n >> 1, and an → 0, then 1/an →∞.

Example 5.1.2. Examples of (2), above:

(n^2 − 9)/(n + 1) = (n − 3)·(n + 3)/(n + 1) → ∞

(n^2 − 9)/(n^2 + 1) = (n − 3)·(n + 3)/(n^2 + 1) → 1

5.2 Comparison Theorems

Theorem 5.2.1. Let xn → x and yn → y. Then xn ≤ yn =⇒ x ≤ y.

Proof. Use contradiction: suppose not. Then xn ≤ yn but x > y. Then |x − y| > 0, so we can use this for ε. Find N such that

|xn − x| < |x − y|/2   and   |yn − y| < |x − y|/2   for n ≥ N.

For each xn, yn past the Nth, yn < y + |x − y|/2 = x − |x − y|/2 < xn, contradicting xn ≤ yn.

Corollary 5.2.2. (Squeeze Thm) If xn → L, yn → L and xn ≤ zn ≤ yn for some

sequences {xn}, {yn}, {zn}, then lim zn = L.

NOTE: for both, even if xn < yn, can only conclude x ≤ y.

Example 5.2.1. cos n/n^2 → 0, since −1/n^2 ≤ cos n/n^2 ≤ 1/n^2.

DID THE PREV TWO THEOREMS WORK FOR xn →∞?

No: |x− y| would not be defined.


Theorem 5.2.3. Let xn →∞ and xn ≤ yn. Then yn →∞.

Proof. Homework.

Example 5.2.2. a > 1 =⇒ a^n → ∞.

a > 1 =⇒ a = 1 + k, k > 0

a^n > 1 + nk                      binomial thm

1 + nk → ∞, so a^n → ∞            by Thm. 5.2.3

Using the Squeeze Thm with integrals.

Example 5.2.3. lim log n!/(n log n) = 1.

Solution. First, note that

log n! = log(1 · 2 · 3 · · · n) = log 1 + log 2 + log 3 + · · · + log n
       ≤ log n + log n + log n + · · · + log n = n·log n,

and also that

log n! = log 1 + log 2 + log 3 + · · · + log n
       ≥ ∫_1^n log x dx = [x·log x − x]_1^n = n·log n − n + 1.

Therefore, we can Squeeze:

n·log n − n + 1 ≤ log n! ≤ n·log n

1 − 1/log n + 1/(n·log n) ≤ log n!/(n·log n) ≤ 1.

Interpretation: log n! ≈ log(n^n).
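Optional numerical sketch (Python); math.lgamma(n + 1) computes log n! without overflow:

    import math

    for n in [10, 100, 1000, 10**6]:
        print(n, math.lgamma(n + 1)/(n*math.log(n)))    # log(n!)/(n log n) tends to 1 from below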

5.3 Location theorems

Theorem 5.3.1 (Limit location theorem). If {an} is convergent, then

(i) an ≤ M, n >> 1 =⇒ lim an ≤ M , and


(ii) an ≥ M, n >> 1 =⇒ lim an ≥ M .

(i). By hyp, let an → L. We need to show L ≤ M. For any k ∈ N, we have

an ≈_{1/k} L, n >> 1 =⇒ L − 1/k < an < L + 1/k =⇒ L − 1/k ≤ M.

Letting k → ∞, we get L ≤ M by the Comparison Thm.

Corollary 5.3.2. If {an}, {bn} convergent, then an ≤ bn, n >> 1 =⇒ lim an ≤ lim bn.

Proof. Apply prev to an − bn ≤ 0.

Theorem 5.3.3 (Sequence location theorem). If {an} is convergent, then

(i) lim an < M =⇒ an < M,n >> 1, and

(ii) lim an > M =⇒ an > M,n >> 1.

(i). By hyp, let an → L, so L < M.

Choose ε = |L − M|/2. Then for n >> 1, each an is within |L − M|/2 of L, so an < L + |L − M|/2 = (L + M)/2 < M.

IMPORTANT: be careful with < and ≤ here.

Example 5.3.1. Thm. 5.3.1(ii) cannot be written an > M, n >> 1 =⇒ lim an > M.

Counterexample: 1/n > 0 but 1/n → 0 ≯ 0.

Thm. 5.3.3(ii) cannot be written lim an ≥ M =⇒ an ≥ M, n >> 1.

Counterexample: −1/n → 0 ≥ 0, but −1/n < 0 for all n.

5.4 Subsequences

Definition 5.4.1. If {an} is a sequence, then a subsequence is a new sequence obtained

from the original by deleting some (possibly infinitely many) terms, but keeping the order

intact. The subsequence is denoted {ank}.

Example 5.4.1. The sequence {(−1)^n·(1 + 1/n)} has the monotone decreasing subsequence {1 + 1/(2n)} obtained by taking every second term. (SKETCH) Infinitely many deletions.

an = −2, 3/2, −4/3, 5/4, −6/5, 7/6, . . .


n  = 1, 2, 3, 4, 5, 6, . . .

n1 = 2, n2 = 4, n3 = 6, . . .

a_{n1} = a2 = 3/2,  a_{n2} = a4 = 5/4,  a_{n3} = a6 = 7/6.

Note: nk ≥ k and n1 < n2 < . . . .

Example 5.4.2. The sequence {1, 1/2, 1/3, 1/4, . . . } has subsequence {1/2, 1/3, 1/4, . . . } obtained by deleting the first term. A single deletion.

{1/4, 1/3, 1/5, 1/6, . . . } is not a subsequence.

{1/2, 1/2, 1/3, 1/3, 1/4, 1/4, . . . } is not a subsequence.

Theorem 5.4.2 (Subsequence Thm). an → L iff a_{nk} → L, for every subsequence {a_{nk}}.

Proof. (⇒) By hyp, we can find N such that an ≈ε L for n ≥ N. Suppose we take any subsequence {a_{nk}}. Since the indices are strictly increasing, n1 < n2 < . . . , we have nk ≥ N for k >> 1. Then a_{nk} ≈ε L for k >> 1.

(⇐) Choose the subsequence obtained by deleting no terms from the original.

USE: Subsequences are good for showing that a limit does not exist.

If you can find subsequences tending to two different limits, the original sequence doesn’t

converge.

Example 5.4.3. Consider

1 + 1,
1 + 1/2, 2 + 1/3,
1 + 1/4, 2 + 1/5, 3 + 1/6,
1 + 1/7, 2 + 1/8, 3 + 1/9, 4 + 1/10, . . .

For any n ∈ N, this sequence has a subsequence tending to n.

Obviously, the sequence doesn’t converge.

QUESTION: How to make a sequence with Q as the set of limit points?


Example 5.4.4. lim sin n doesn’t exist. (note: no π)

Solution. We will find two subsequences and show that they cannot have the same limit.

Note that sin x > √2/2 for x ∈ (π/4, 3π/4), and hence for any k ∈ N we have

x ∈ (2kπ + π/4, 2kπ + 3π/4) =⇒ sin x > √2/2.

(SKETCH). Each of these intervals has length π/2 > 1, so there is an integer in each one. Let nk be an integer in (2kπ + π/4, 2kπ + 3π/4). Then sin nk > √2/2. Similarly,

x ∈ ((2k + 1)π + π/4, (2k + 1)π + 3π/4) =⇒ sin x < −√2/2.

Let mk be an integer in ((2k + 1)π + π/4, (2k + 1)π + 3π/4). Then sin mk < −√2/2. Since |sin nk − sin mk| > √2, they cannot tend to the same limit.
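Optional Python sketch locating such integers nk and mk for a few values of k (the particular k values are just for illustration):

    import math

    for k in range(1, 6):
        nk = math.ceil(2*k*math.pi + math.pi/4)          # integer in (2k*pi + pi/4, 2k*pi + 3pi/4)
        mk = math.ceil((2*k + 1)*math.pi + math.pi/4)    # integer in ((2k+1)*pi + pi/4, (2k+1)*pi + 3pi/4)
        print(nk, math.sin(nk), mk, math.sin(mk))
    # sin(nk) > 0.707... while sin(mk) < -0.707..., so no single limit is possible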

5.5 Two common mistakes

First mistake: trouble with inequalities.

Example 5.5.1. an → 0, bn bounded =⇒ an·bn → 0.

Note: can’t use the Product Thm, since we don’t know that bn has a limit. Attempt:

L ≤ bn ≤ M, so an·L ≤ an·bn ≤ an·M, and the two outer sequences tend to 0.

But what if an is sometimes negative? Then multiplying by an reverses the inequalities. Solution: use | · | instead:

0 ≤ |bn| ≤ K, so 0 ≤ |an|·|bn| ≤ |an|·K, and the two outer sequences tend to 0, so |an·bn| → 0, hence an·bn → 0.

Second mistake: repeating previous results. Whenever possible, cite a previous theo-

rem to make life easier.

Exercises: #5.1.4, 5.2.4, 5.3.6, 5.4.2 Recommended: #5.1.5, 5.2.3,

5.3.2, 5.3.4, 5.3.5

Problems: 5-1, 5-2, 5-3 Recommended: #


1. If xn →∞ and xn ≤ yn, then yn →∞.

2. xn → x iff every neighbourhood of x of the form (x − ε, x + ε), ε > 0 contains all

but finitely many points xn.

3. Can you have a sequence {an} which, for any given rational number p ∈ Q, has a subsequence a_{n_k} → p? Construct one or prove it is impossible.


Chapter 6

The Completeness Property

6.1 Introduction. Nested intervals.


Definition 6.1.1. Let A1, A2, . . . be a sequence of sets in R. This sequence is nested iff

A1 ⊇ A2 ⊇ . . . .

Example 6.1.1. If this is a sequence of intervals An = [an, bn] = {x : an ≤ x ≤ bn}, then nestedness means

an ≤ an+1 ≤ bn+1 ≤ bn, ∀n.

Note: ⋂_{n=1}^∞ An = {x : x ∈ An, ∀n}.

Theorem 6.1.2 (Nested Intervals Thm). Suppose that An = [an, bn] is a nested sequence of intervals with lim(bn − an) = 0. Then ⋂_{n=1}^∞ An = {L} for a single point L. Also, an → L and bn → L.

Proof. There are 5 steps.

(i) an ≤ bm for any n, m. Suppose instead that an > bm. Then

n > m =⇒ bn ≤ bm < an, contradicting an ≤ bn;
n ≤ m =⇒ bm < an ≤ am, contradicting am ≤ bm.

(ii) {an} is increasing and convergent, so let L = lim an.

{an} is increasing by nestedness, and bounded by (i), so converges by completeness.


(iii) ∀n, an ≤ L ≤ bn.

Part (ii) shows an ≤ L (this is exactly a thm: an ↗ L =⇒ an ≤ L)

an ≤ bm =⇒ L ≤ bm by Limit Location Thm.

(iv) L = lim bn and L is the only number common to all intervals.

We add the two convergent sequences {an} and {bn − an} to get

lim bn = lim(an + (bn − an)) = lim an + lim(bn − an) = L + 0 = L.

This shows that the intersection can contain only L, for suppose it also contained

some number slightly larger than L: call it L + ε, ε > 0. Then bn → L implies that

bn < L + ε for n >> 1, so L + ε cannot be in the intersection. The same is true for

any number slightly less than L.

Example 6.1.2. an = 1 − 1/2 + 1/3 − · · · + (−1)^{n−1}/n converges.

Proof. Let a0 = 0. Due to the alternating sign (which begins positive),

|an+1 − an| = 1/(n + 1) → 0 and a_{2n} ≤ a_{2n+2} ≤ a_{2n+3} ≤ a_{2n+1}.

Then we have a decreasing nested sequence of intervals

[a0, a1] ⊇ [a2, a3] ⊇ . . . ,

so ⋂_{n=0}^∞ [a_{2n}, a_{2n+1}] = {L} and an → L.
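(Added numerical sketch, not in the original notes: the even- and odd-indexed partial sums really do squeeze down on a single limit; numerically it is log 2 ≈ 0.6931.)

    # Sketch: partial sums a_n = 1 - 1/2 + 1/3 - ... + (-1)^(n-1)/n.
    # The intervals [a_{2n}, a_{2n+1}] are nested and shrink to the limit.
    import math
    a = [0.0]
    for n in range(1, 21):
        a.append(a[-1] + (-1) ** (n - 1) / n)
    for n in (2, 4, 8, 10):
        print(f"[a_{n}, a_{n+1}] = [{a[n]:.5f}, {a[n+1]:.5f}]")
    print("log 2 =", round(math.log(2), 5))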

6.2 Cluster points

Consider the sequence 1, 2, 1/3, 4, 1/5, 6, 1/7, . . . . This contains two subsequences,

1, 2, 4, 6, · · · → ∞
1, 1/3, 1/5, 1/7, · · · → 0,

so it clearly doesn’t converge. But it has a subsequence which does converge.


Definition 6.2.1. K ∈ R is a cluster point (or a limit point) of {an} iff

∀ε > 0, an ≈ε K for infinitely many n.

Example 6.2.1. (−1)^n has two cluster points: ±1.

(−1)^n(1 + 1/n) has the same two cluster points.

Neither of these sequences has a limit.

Theorem 6.2.2 (Cluster Point Thm). K is a cluster point of {an} iff K is the limit of

a subsequence of {an}.

Proof. (⇒) Assuming that K is a cluster point, we will construct a subsequence converging to it. Given ε > 0, we can find an ≈ε K, so:

for ε = 1, choose a_{n_1} ≈_1 K
for ε = 1/2, choose a_{n_2} ≈_{1/2} K and n_2 > n_1
...
for ε = 1/j, choose a_{n_j} ≈_{1/j} K and n_j > n_{j−1}

Then a_{n_k} → K.

(⇐) Given ε > 0, find J such that j ≥ J =⇒ |anj −K| < ε.

Then an ≈ε K for n ∈ {nJ , nJ+1, . . . }.

Together, the Subsequence Thm and Cluster Point Thm give an easy way to prove

that a sequence doesn’t converge:

Any sequence with more than one cluster point does not converge.

6.3 Bolzano-Weierstrass Theorem

Theorem 6.3.1 (Bolzano-Weierstrass). A bounded sequence in R has a convergent sub-

sequence.

Proof. Suppose {xn} is bounded, so that

a0 ≤ xn ≤ b0, ∀n.


By the Cluster Pt Thm, it suffices to find a cluster point of the sequence.

Apply the bisection method (aka divide-and-conquer): let c be the midpoint of [a0, b0]. Then at least one of [a0, c] or [c, b0] contains infinitely many points xn; call it [a1, b1]. (Choose the first one if both have infinitely many.) Continuing, we get a nested sequence

[a0, b0] ⊇ [a1, b1] ⊇ . . . ⊇ [am, bm] ⊇ . . .

Since

|bn − an| = |b0 − a0| / 2^n → 0,

the Nested Intervals Thm gives

∃!L ∈ ⋂ [an, bn].

Claim: L is a cluster point of {xn}. Given ε > 0, choose n large enough that |bn − an| < ε. Then [an, bn] ⊆ (L − ε, L + ε) and contains infinitely many of the xn.
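(Added, not in the original notes: a Python sketch of the bisection step. With only finitely many sample terms, "infinitely many" is approximated by "more sample points", so this is an illustration of the idea, not a proof.)

    # Sketch: bisection homes in on a cluster point of the bounded sequence
    # x_n = (-1)^n (1 + 1/n), which clusters at -1 and +1.
    xs = [(-1) ** n * (1 + 1 / n) for n in range(1, 5001)]
    a, b = -2.0, 2.0
    for _ in range(30):
        c = (a + b) / 2
        lower = [x for x in xs if a <= x <= c]
        upper = [x for x in xs if c < x <= b]
        a, b, xs = (a, c, lower) if len(lower) >= len(upper) else (c, b, upper)
    print("cluster point is approximately", (a + b) / 2)   # close to -1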

6.4 Cauchy sequences

Definition 6.4.1. A sequence {an} in R is a Cauchy sequence iff

∀ε > 0, am ≈ε an, for m,n >> 1.

This means |am − an| → 0 as m, n → ∞, or

∀ε > 0, ∃N such that m, n ≥ N =⇒ |am − an| < ε.

Example 6.4.1 (Nonexample). 1, 2, 2 + 1/2, 3, 3 + 1/3, 3 + 2/3, 4, . . . .

Definition 6.4.2 (Alternative). A sequence {an} in R is a Cauchy sequence iff

∀ε > 0, ∃(a, b) such that |b − a| < ε and {aN , aN+1, aN+2, . . . } ⊆ (a, b), for some N.

Example 6.4.2. The tail of the non-Cauchy sequence above has no upper bound, so any interval containing it is of the form [x, ∞).

The definition of Cauchy sequence makes no claim about convergence (to L ∈ R, e.g.)!

How to know when a Cauchy sequence converges?


Theorem 6.4.3 (Cauchy Criterion). A sequence in R has a limit ⇐⇒ it is Cauchy.

Proof. (⇒) Homework (6.4.1)

(⇐) Let {an} be Cauchy. Then

(i) {an} is bounded.

Find N such that

m,n ≥ N =⇒ an ≈ε am

n ≥ N =⇒ an ≈ε aN .

Then for n ≥ N, aN − ε ≤ an ≤ aN + ε, so bound {an} by

max{|a1|, |a2|, . . . , |aN−1|, |aN − ε|, |aN + ε|}.

(ii) {an} has a convergent subsequence {a_{n_i}}, by Bolzano-Weierstrass.

(iii) Define L := lim a_{n_i}. Then

|an − L| = |an − a_{n_i} + a_{n_i} − L| ≤ |an − a_{n_i}| + |a_{n_i} − L|    (∆ ineq).

Since Cauchy, we can ensure |an − a_{n_i}| < ε for n, i >> 1.

Since the subsequence converges, we can ensure |a_{n_i} − L| < ε for i >> 1. So an → L.

Example 6.4.3. Prove the convergence of the sequence of Fibonacci fractions

a1 = 1,   a_{n+1} = 1/(an + 1).

Solution. Want to estimate |am − an|. Wlog, let m > n. Consider

|an − a_{n+1}| = |1/(a_{n−1} + 1) − 1/(an + 1)| = |a_{n−1} − an| / ((an + 1)(a_{n−1} + 1)) = cn|a_{n−1} − an|,

where cn := 1/((an + 1)(a_{n−1} + 1)). If we could show 0 ≤ cn ≤ C, then we’d have a bound

|an − a_{n+1}| ≤ C|a_{n−1} − an| ≤ C²|a_{n−2} − a_{n−1}| ≤ · · · ≤ C^{n−1}|a1 − a2|.

Then we could sum up the terms |an − a_{n+1}| as a geometric series. Looking at the sequence


1, 1/2, 2/3, 3/5, 5/8, . . . , one would guess an ≥ 1/2, implying

1/cn = (an + 1)(a_{n−1} + 1) ≥ (3/2)·(3/2) > 2 =⇒ 0 ≤ cn ≤ 1/2.

Claim: an ≥ 1/2.

Proof of Claim. By induction: the basis step is a1 = 1 ≥ 1/2, and the induction step is

1/2 ≤ an ≤ 1 =⇒ 3/2 ≤ an + 1 ≤ 2              (+1)
            =⇒ 1/2 ≤ 1/(an + 1) ≤ 2/3         (≤ rules)
            =⇒ 1/2 ≤ a_{n+1} ≤ 2/3 ≤ 1        (defn of a_{n+1}).

Thus we have |an − a_{n+1}| < 1/2^n, which implies

|an − am| ≤ |an − a_{n+1}| + |a_{n+1} − a_{n+2}| + · · · + |a_{m−1} − am|
         < 1/2^n + 1/2^{n+1} + · · · + 1/2^{m−1}
         < (1/2^n)(1 + 1/2 + 1/4 + . . . )
         = 1/2^{n−1} < ε, for m > n >> 1.

This shows the sequence is Cauchy, so converges by Cauchy Criterion.
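(Added, not in the original notes: iterating the recursion numerically shows the Cauchy behavior; the limit is (√5 − 1)/2 ≈ 0.618, the positive root of x = 1/(x + 1).)

    # Sketch: a_1 = 1, a_{n+1} = 1/(a_n + 1); successive gaps shrink geometrically.
    a = 1.0
    for n in range(1, 13):
        nxt = 1 / (a + 1)
        print(n, round(a, 6), "  gap to next:", round(abs(nxt - a), 8))
        a = nxt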

6.5 Completeness Property for sets

Definition 6.5.1. Let S ⊆ R. An upper bound for S is a number b such that x ∈ S =⇒x ≤ b. S is said to be bounded above iff S has an upper bound. b is a sharp upper bound

for S if no number less than b is an upper bound, i.e., if b is best possible.

Definition 6.5.2. m ∈ R is the maximum of S iff m is an upper bound of S and m ∈ S.

Example 6.5.1. A bounded set may not have a maximum: (0, 1) is bounded above by

1, and by no number less than 1. But 1 /∈ (0, 1).

NOTE: the max of a set must be contained in the set. What about when the set doesn’t contain the element that “ought” to be the max? There is a workaround:

Definition 6.5.3. The supremum of S is a sharp upper bound for S.


(i) β is an upper bound for S; x ∈ S =⇒ x ≤ β.

(ii) β is the least upper bound; b is an upper bound for S =⇒ β ≤ b.

A (nonempty) bounded set may not have a maximum, but it WILL have a supremum:

this is a characterizing property of R.

Theorem 6.5.4 (Completeness of R). If S ⊆ R is nonempty and bounded above, then

supS exists in R.

Proof. Two steps: use bisection and nested intervals to locate a candidate for sup S, then

prove it using Limit Location.

1. Since S is bounded above, let b0 be an upper bound.

Since S is nonempty, let a0 ∈ S. Then expect sup S to be somewhere in [a0, b0].

Bisect [a0, b0]; let c be its midpoint. Then expect sup S to be in [a0, c] or [c, b0]. Set

[a1, b1] := [a0, c] if c is an upper bound for S, and [a1, b1] := [c, b0] otherwise.

Iterating the bisection procedure gives a sequence of nested intervals [an, bn] such

that

(i) [an, bn] contains a point of S,

(ii) bn is an upper bound of S, and

(iii) |bn − an| → 0.

By the Nested Intervals Thm, ∃!β ∈ ⋂_{n=1}^∞ [an, bn], and

lim an = lim bn = β.

2. β = sup S.

(i) β is an upper bound.

x ∈ S =⇒ x ≤ bn,∀n, S ≤ bn

=⇒ x ≤ lim bn = β, Limit Loc Thm.


(ii) β is the least upper bound. Let b be any other upper bound of S. Then

an ≤ b,∀n =⇒ β = lim an ≤ b.
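(Added, not in the original notes: the bisection construction from Step 1, run in Python on the set S = {x : x² < 2}, whose supremum is √2. The test for “c is an upper bound” is specific to this S.)

    # Sketch: bisection construction of sup S for S = {x : x^2 < 2}.
    # b_n stays an upper bound, each [a_n, b_n] meets S, and both ends converge to sqrt(2).
    def is_upper_bound(c):
        return c >= 0 and c * c >= 2     # c bounds S iff c >= sqrt(2)
    a, b = 1.0, 2.0                      # a_0 in S, b_0 an upper bound
    for _ in range(50):
        c = (a + b) / 2
        if is_upper_bound(c):
            b = c
        else:
            a = c
    print(a, b)                          # both approximately 1.41421356... = sqrt(2)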

Definition 6.5.5. Let S ⊆ R. A lower bound for S is a number b such that x ∈ S =⇒ x ≥ b. S is said to be bounded below iff S has a lower bound. b is a sharp lower bound for S if no number greater than b is a lower bound, i.e., if b is best possible.

Definition 6.5.6. m ∈ R is the minimum of S iff m is a lower bound of S and m ∈ S.

Definition 6.5.7. The infimum of S is a sharp lower bound for S.

(i) β is a lower bound for S; x ∈ S =⇒ x ≥ β.

(ii) β is the greatest lower bound; b is a lower bound for S =⇒ β ≥ b.

Theorem 6.5.8. If −S = {−x : x ∈ S}, then inf S = − sup(−S).

Exercises: #6.4.2, 6.5.3(aceg) Recommended: #6.2.2, 6.4.1

Problems: #6-2 Recommended: #6-3

Due: Feb.

1. Prove the equivalence of the two definitions of Cauchy sequence.

2. If {xn} is Cauchy in R and some subsequence {xnk} converges to x ∈ R, then prove

the full sequence {xn} also converges to x.


Chapter 7

Infinite Series

7.1 Series and sequences


Definition 7.1.1. An (infinite) series is a sum of a sequence {ak}:

∑_{k=0}^∞ ak = a0 + a1 + a2 + . . . .

To make it clear that the terms of the sequence {ak} are added in order, define

∑_{k=0}^∞ ak = lim_{n→∞} ∑_{k=0}^n ak = lim sn,

where sn := ∑_{k=0}^n ak. The series ∑ ak converges or diverges as the sequence sn does.

Thus, a series is any sequence which can be written in a certain simple recursive form:

sn = s_{n−1} + f(n).

Example 7.1.1. Geometric series: 1 + r + r² + · · · = ∑_{k=0}^∞ r^k is the limit of

sn = 1 + r + r² + · · · + r^n = s_{n−1} + r^n.


Example 7.1.2. Harmonic series: 1 + 1/2 + 1/3 + · · · = ∑_{k=1}^∞ 1/k is the limit of

sn = 1 + 1/2 + 1/3 + · · · + 1/n = s_{n−1} + 1/n.

Definition 7.1.2. A telescoping series is one that can be written in the form

∑_{k=0}^∞ (a_{k+1} − ak).

A telescoping series has partial sums

sn = (a1 − a0) + (a2 − a1) + (a3 − a2) + · · · + (a_{n−1} − a_{n−2}) + (an − a_{n−1}) = an − a0.

So the sum can be found as

∑_{k=0}^∞ (a_{k+1} − ak) = lim sn = lim an − a0.

Given any sequence, this provides a way to write a series whose partial sums are the terms of the original sequence:

1. Start with any sequence {xn}.

2. Define a0 = x0 and, for n ≥ 1, let an := xn − x_{n−1}.

3. Then xn is the nth partial sum:

∑_{k=0}^n ak = x0 + (x1 − x0) + (x2 − x1) + · · · + (xn − x_{n−1}) = xn.

Example 7.1.3 (Euler’s gamma). Let sn = 1 + 1/2 + 1/3 + · · · + 1/n − log(n + 1), n ≥ 1.

To make this into a series, let

an = sn − s_{n−1} = 1/n − log(n + 1) + log n = 1/n − log((n + 1)/n), n ≥ 1

=⇒ ∑_{n=1}^∞ (1/n − log((n + 1)/n)) = γ.

Note that

∫_n^{n+1} dx/x = [log x]_n^{n+1} = log(n + 1) − log n,


so we have

γ = lim_{N→∞} ( ∑_{n=1}^N 1/n − ∫_1^{N+1} dx/x )  “=”  ∑_{n=1}^∞ 1/n − ∫_1^∞ dx/x.
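(Added, not in the original notes: computing the partial sums numerically shows them creeping up toward γ ≈ 0.57722.)

    # Sketch: s_n = H_n - log(n+1) increases toward Euler's gamma.
    import math
    H, n = 0.0, 0
    for target in (10, 1000, 100000):
        while n < target:
            n += 1
            H += 1 / n
        print(f"n = {n:6d}:  H_n - log(n+1) = {H - math.log(n + 1):.6f}")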

7.2 Elementary convergence tests

It is almost always impossible to find the actual sum of a series (exceptions: geometric, telescoping, Fourier), so it is more important to know whether it converges.

Theorem 7.2.1. ∑ an converges =⇒ an → 0.

Proof. Let sn be a partial sum of the series and S = lim sn. Then

sn = sn−1 + an

an = sn − sn−1

lim an = lim(sn − sn−1) = lim sn − lim sn−1 = S − S = 0.

NOTE:

1. The converse is FALSE: 1/n → 0 but ∑ 1/n diverges.

2. Most often used as the contrapositive: an ↛ 0 =⇒ ∑ an diverges. Examples:

∑_{n=1}^∞ (−1)^n,    ∑_{n=1}^∞ (n/(n + 1))^n.

Theorem 7.2.2 (Tail-convergence). ∑_{n=N0}^∞ an converges for some N0 ⇐⇒ ∑_{n=0}^∞ an converges ⇐⇒ ∑_{n=N}^∞ an converges for every N.

Proof. Idea: lim sn = lim sn+N .

Theorem 7.2.3 (Cauchy Criterion for series). ∑ an converges iff

∀ε > 0, m ≥ n >> 1 =⇒ |∑_{k=n}^m ak| < ε.


Theorem 7.2.4 (Linearity). ∀p, q ∈ R, if ∑ an and ∑ bn converge, then ∑ (p·an + q·bn) converges and

∑ (p·an + q·bn) = p ∑ an + q ∑ bn.

Proof. lim(psn + qtn) = p lim sn + q lim tn.

So we have ∑ an + ∑ bn = ∑ (an + bn) and ∑ c·an = c ∑ an. However,

∑ an·bn ≠ (∑ an)(∑ bn), because

1 + a1b1 + a2b2 + · · · + anbn ≠ (1 + a1 + · · · + an)(1 + b1 + · · · + bn).

(More cross terms on the right.)

Theorem 7.2.5 (Increasing & bounded). If 0 ≤ an, ∀n, then ∑ an converges iff the partial sums are bounded.

Proof. (⇒) lim sn exists =⇒ {sn} bounded.

(⇐) sn = sn−1+an ≥ sn−1, so monotone. Then {sn} bounded implies {sn} convergent

by completeness.

Theorem 7.2.6 ((Direct) Comparison Thm). If 0 ≤ an ≤ bn, ∀n, then

∑ bn converges =⇒ ∑ an converges.

In this case, ∑ an ≤ ∑ bn.

The contrapositive of this statement is also quite helpful:

0 ≤ an ≤ bn, ∑ an diverges =⇒ ∑ bn diverges.

Proof. Define the partial sums

sn = ∑_{k=1}^n ak and tn = ∑_{k=1}^n bk.

By hyp, T := lim tn exists, and it is an upper bound for {tn} by the Incr Seq Thm, so

0 ≤ an ≤ bn =⇒ sn ≤ tn ≤ T,


which implies ∑ an converges, by the Incr & Bounded Thm. Finally, S = ∑ an ≤ T by the Limit Location Thm.

IMPORTANT:

completeness of R =⇒ increasing & bounded thm =⇒ Comparison Thms.

7.3 Series with negative terms

The above Comparison Thm is only for positive-term series.

Definition 7.3.1. ∑ an is absolutely convergent iff ∑ |an| converges.

∑ an is conditionally convergent iff ∑ |an| diverges but ∑ an converges.

Example 7.3.1. 1. For a positive-term series, convergence ≡ absolute convergence.

2. ∑ (−1)^n/2^n and ∑ (−1)^n/n! are absolutely convergent, since ∑ 1/2^n and ∑ 1/n! are convergent.

3. ∑ (−1)^n/n is conditionally convergent: it converges (by the Alternating Series Test, §7.6), but the harmonic series ∑ 1/n diverges.

Example 7.3.2. If f : R→ R is any function, then define

f+(x) := max{f(x), 0}, and f−(x) := −min{f(x), 0}.

Then f+(x) ≥ 0 and f−(x) ≥ 0, ∀x. Also:

f(x) = f+(x)− f−(x) and |f(x)| = f+(x) + f−(x).

A sequence is just a function a : N→ R where we usually write an for a(n), but it can

be similarly decomposed

a_n^+ := max{an, 0},  a_n^− := −min{an, 0},

so that both {a_n^+} and {a_n^−} are positive sequences and

an = a_n^+ − a_n^− and |an| = a_n^+ + a_n^−.


(SKETCH EXAMPLE:) ∑ (−1)^n/n.

Theorem 7.3.2 (Absolute convergence thm). ∑ |an| converges =⇒ ∑ an converges.

Proof 1. Split the series into positive and negative components as above: {a_n^+} and {a_n^−}. Then

0 ≤ a_n^+ ≤ |an| =⇒ ∑ a_n^+ ≤ ∑ |an|,

by the Comp Thm, and the same for a_n^−. The Linearity Thm then lets us combine the two convergent series: ∑ an = ∑ a_n^+ − ∑ a_n^−.

Proof 2. Apply the Cauchy criterion to |∑_{k=n}^m ak| ≤ ∑_{k=n}^m |ak|.

Completeness is what implies the Comparison Thm, so it also implies this Thm.

Strangely, this property is equivalent to completeness! First, need a couple of definitions.

Definition 7.3.3. A vector space is a set X in which any two elements of X can be added, and any element can be multiplied by a number in R. (There are more details, but this is all we’ll need.)

Definition 7.3.4. A norm on a vector space is a function from X to R that satisfies

(i) ‖x‖ ≥ 0, ‖x‖ = 0 ⇐⇒ x = 0.

(ii) ‖ax‖ = |a| · ‖x‖,∀a ∈ R.

(iii) ‖x− z‖ ≤ ‖x− y‖+ ‖y − z‖, ∀x, y, z ∈ X.

NOTE: the scalars in these two definitions can be replaced by Q, C, or any other field.

Example 7.3.3.

R^n with ‖x‖ = (∑_{i=1}^n x_i²)^{1/2}.

M_n(R) with ‖A‖ = ∑_{i,j=1}^n |a_{ij}|.

The continuous functions on an interval, C(I), with ‖f‖ = sup_{x∈I} |f(x)|.

C(I) with ‖f‖₁ = ∫_I |f(x)| dx.

C(I) with ‖f‖₂ = (∫_I |f(x)|² dx)^{1/2}.

Now we can show that in any normed vector space, completeness (defined as conver-

gence of Cauchy sequences) is equivalent to summability of absolutely convergent series.


Theorem 7.3.5. Suppose we have a vector space (X, ‖ · ‖). Then X is complete iff every

absolutely convergent series in X converges.

Proof. (⇒) Suppose that every Cauchy sequence in X converges and that ∑_{k=1}^∞ ‖xk‖ converges. Must show that ∑_{k=1}^∞ xk converges.

Show that the sequence of partial sums is Cauchy, hence converges.

Let sn = ∑_{k=1}^n xk. Then for n > m, we have

‖sn − sm‖ = ‖∑_{k=1}^n xk − ∑_{k=1}^m xk‖ = ‖∑_{k=m+1}^n xk‖
          ≤ ∑_{k=m+1}^n ‖xk‖        (by ∆ ineq)
          < ε, for m >> 1,

since ∑_{k=1}^∞ ‖xk‖ converges, so ∑_{k=N}^∞ ‖xk‖ → 0 as N → ∞ by the Tail-Conv Thm.

(⇐) Suppose that ∑_{k=1}^∞ ‖xk‖ converges =⇒ ∑_{k=1}^∞ xk converges. Use this to show that any Cauchy sequence converges.

Let {xn} be Cauchy. Then

∀ε > 0, ∃N such that m, n ≥ N =⇒ ‖xn − xm‖ < ε, or
∀j ∈ N, ∃nj such that m, n ≥ nj =⇒ ‖xn − xm‖ < 1/2^j.

So we can find a subsequence {x_{n_j}}, choosing n_1 < n_2 < . . . . Define

y1 = x_{n_1},
yj = x_{n_j} − x_{n_{j−1}}, j > 1.

Then ∑_{j=1}^k yj = x_{n_k} (by telescoping), and

∑_{j=1}^∞ ‖yj‖ ≤ ‖y1‖ + ∑_{j=1}^∞ 1/2^j = ‖y1‖ + 1 < ∞.

So lim x_{n_k} = ∑ yj exists, i.e., x_{n_j} → x ∈ X. Since {xn} is Cauchy, it must also converge to the same limit (REC HW from §6); for m, n ≥ N, we have

‖xn − x‖ = ‖xn − x_{n_k} + x_{n_k} − x‖ ≤ ‖xn − x_{n_k}‖ + ‖x_{n_k} − x‖ < 2ε, N >> 1.


Recap: (⇒) comes by writing a series as a sequence, (⇐) comes by writing a sequence

as a series, using the telescoping trick.

7.4 Ratio and Root tests

Theorem 7.4.1 (Ratio test). Suppose an ≠ 0 for n >> 1, and lim |a_{n+1}/an| = L. Then

L < 1 =⇒ ∑ |an| converges,    L > 1 =⇒ ∑ |an| diverges.

If L = 1 or L doesn’t exist, the test tells nothing.

Proof. Compare∑ |an| to geometric series.

case (1) 0 ≤ L < 1. Then pick M such that L < M < 1.

(SKETCH WHY M is necessary for the following ineq.)

By the Seq Loc Thm,

|a_{n+1}/an| → L =⇒ |a_{n+1}/an| < M,

for all n larger than some N. Then we get a recursion relation

|a_{n+1}/an| < M =⇒ |a_{n+1}| < |an|M.

Applying this to a_{N+k} and iterating,

|a_{N+k}| < |a_{N+k−1}|M < |a_{N+k−2}|M² < · · · < |aN|M^k

∑_{k=0}^∞ |a_{N+k}| ≤ ∑_{k=0}^∞ |aN|M^k = |aN| ∑_{k=0}^∞ M^k.

The RHS converges as a geometric series, so the Comp Thm gives convergence of the LHS. Then the Tail-Conv Thm gives convergence of ∑ |an|.

case (2) L > 1. Exercise 7.4.2.

Theorem 7.4.2 (Root test). Suppose lim |an|^{1/n} = L. Then

L < 1 =⇒ ∑ |an| converges,
L > 1 =⇒ ∑ an diverges.


If L = 1 or doesn’t exist, then test tells nothing.

Proof. By cases, like for Ratio Test.

The Ratio Test is easier and more common, but the Root Test is applicable more

broadly (Prob 7-4).

7.5 Integral, p-series, and asymptotic comparison

Theorem 7.5.1 (Integral Test). Suppose f(x) ≥ 0 and decreasing for x ≥ N ∈ N. Then ∑ f(n) converges iff ∫_N^∞ f(x) dx is finite.

Proof. Define the area An := ∫_n^{n+1} f(x) dx. The area of the rectangle under the graph is f(n + 1) × |(n + 1) − n| = f(n + 1). Thus,

0 ≤ f(n + 1) ≤ An, for n ≥ N.

Shifting one unit to the right, the region under the graph is contained in the rectangle, so

0 ≤ An ≤ f(n), for n ≥ N.

This gives

∑_{k=N+1}^{N+n} f(k) ≤ ∫_N^{N+n} f(x) dx ≤ ∑_{k=N}^{N+n−1} f(k),

and so the partial sums and the integrals converge or diverge together.

Theorem 7.5.2 (p-series). ∑ 1/n^p converges iff p > 1.

Proof. For p ≥ 0, 1/x^p is positive and decreasing, so apply the integral test to

∫_1^∞ dx/x^p = lim_{r→∞} (r^{1−p} − 1)/(1 − p) for p ≠ 1,   and   lim_{r→∞} log r for p = 1.

For p = 1, log r → ∞ and both diverge.
For p > 1, r^{1−p} → 0, so both are finite.
For 0 ≤ p < 1, r^{1−p} → ∞, so both diverge.

Finally, consider p < 0 and put q = −p > 0. Then ∑ n^q diverges by the nth term test.


Theorem 7.5.3 (Asymptotic comparison test). If lim |an|/|bn| = 1, then

∑ |an| converges ⇐⇒ ∑ |bn| converges.

Proof. Homework 7.4.4.

7.6 Alternating series test

Theorem 7.6.1. If {an} is positive, strictly decreasing, and an → 0, then ∑ (−1)^n an converges.

Proof. Since the signs alternate, we get

s_{2k} = s_{2k−1} + a_{2k},    s_{2k+1} = s_{2k} − a_{2k+1},

so s_{2k+1} = s_{2k} − a_{2k+1} < s_{2k}.
Also, s_{2k} = s_{2k−1} + a_{2k} =⇒ s_{2k−1} < s_{2k}.
Also, s_{2k+1} = s_{2k} − a_{2k+1} = s_{2k−1} + (a_{2k} − a_{2k+1}) =⇒ s_{2k−1} < s_{2k+1}.

Thus, we have a nested sequence of intervals [s_{2k−1}, s_{2k}] with |s_{2k−1} − s_{2k}| = |a_{2k}| → 0. Thus, {sn} is a Cauchy sequence.

Corollary 7.6.2. For an alternating series, en = |sn − S| < a_{n+1}, where S = ∑ (−1)^n an.

Proof. HW: since S ∈ [s2k−1, s2k], apply inequalities from prev proof.

Theorem 7.6.3 (Cauchy’s subsequence test). Suppose a1 ≥ a2 ≥ · · · ≥ 0. Then

∑_{n=1}^∞ an converges ⇐⇒ ∑_{k=0}^∞ 2^k a_{2^k} converges.

a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, . . .
a1, 2a2, 4a4, 8a8, . . .

Proof. Since it is a positive-term series, it is enough to show boundedness of the partial sums. Define

sn := a1 + a2 + · · · + an,
tk := a1 + 2a2 + · · · + 2^k a_{2^k}.


For n < 2^k,

sn ≤ a1 + (a2 + a3) + (a4 + a5 + a6 + a7) + · · · + (a_{2^k} + · · · + a_{2^{k+1}−1})
   ≤ a1 + 2a2 + 4a4 + · · · + 2^k a_{2^k} = tk.

This shows sn ≤ tk. Meanwhile, for n > 2^k,

sn ≥ a1 + a2 + (a3 + a4) + (a5 + a6 + a7 + a8) + · · · + (a_{2^{k−1}+1} + · · · + a_{2^k})
   ≥ a1 + a2 + 2a4 + 4a8 + · · · + 2^{k−1} a_{2^k}
   ≥ (1/2)a1 + a2 + 2a4 + 4a8 + · · · + 2^{k−1} a_{2^k} = (1/2) tk.

This shows 2sn ≥ tk, so that

sn ≤ tk ≤ 2sn.

Thus {sn} and {tk} are both bounded or both unbounded.
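(Added, not in the original notes: a quick numerical comparison of s_n and t_k for the two basic p-series; the particular cutoffs n = 1024 and k = 10 are arbitrary.)

    # Sketch: partial sums s_n vs condensed sums t_k for a_n = 1/n and a_n = 1/n^2.
    def s(n, a): return sum(a(i) for i in range(1, n + 1))
    def t(k, a): return sum(2 ** j * a(2 ** j) for j in range(0, k + 1))
    for name, a in (("1/n", lambda n: 1 / n), ("1/n^2", lambda n: 1 / n ** 2)):
        print(name, "  s_1024 =", round(s(1024, a), 3), "  t_10 =", round(t(10, a), 3))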

7.7 Rearrangements

Theorem 7.7.1. If ∑ an is absolutely convergent, then any rearrangement of it is also convergent, and has the same sum.

Proof. First, suppose an ≥ 0. Let ∑ a′n denote a rearrangement of the original series; the partial sums are sn and s′n. Fix ε > 0 and show |sn − s′n| < ε.

The hypothesis means that (by Tail-conv) for some N,

∑_{k=N}^∞ |ak| < ε.

By going far enough in the rearranged series, we can ensure that

{a1, a2, . . . , aN} ⊆ {a′1, a′2, . . . , a′p},

so that

∑_{k=p+1}^∞ |a′k| ≤ ∑_{k=N}^∞ |ak| < ε.


Theorem 7.7.2. If ∑ an is conditionally convergent, then for any x ∈ R, there is a rearrangement which sums to x (or even diverges to ±∞).

Nonproof. To be conditionally convergent, the series must have infinitely many positive and negative terms, so separate it into ∑ a_n^+ and ∑ a_n^−. For x ≥ 0, form a rearrangement as follows:

1. Add positive terms until the sum exceeds x. Stop as soon as ∑_{j=1}^J a_j^+ ≥ x.

2. Add negative terms until x exceeds the sum. Stop as soon as ∑_{j=1}^J a_j^+ − ∑_{k=1}^K a_k^− ≤ x.

3. Repeat.

Since ∑ a_n^+ = ∞ and ∑ a_n^− = ∞, neither step 1 nor step 2 can ever go on for infinitely many terms. Since ∑ an is conditionally convergent, an → 0 and the procedure generates a nested sequence of intervals.
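(Added, not in the original notes: the two-step procedure carried out in Python for the alternating harmonic series; the target x = 2.0 and the number of steps are arbitrary choices.)

    # Sketch: rearrange 1 - 1/2 + 1/3 - 1/4 + ... so its partial sums approach a target x.
    def rearranged_partial_sum(x, steps=100000):
        pos, neg, s = 1, 2, 0.0              # next positive denominator (odd), negative (even)
        for _ in range(steps):
            if s <= x:
                s += 1 / pos                 # step 1: add positive terms 1, 1/3, 1/5, ...
                pos += 2
            else:
                s -= 1 / neg                 # step 2: add negative terms -1/2, -1/4, ...
                neg += 2
        return s
    print(rearranged_partial_sum(2.0))       # close to 2.0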

7.8 Multiplication of Series

Theorem 7.8.1 (Cauchy Product). Suppose that ∑ an = A and ∑ bn = B, and at least one of them converges absolutely. Define cn = ∑_{k=0}^n ak b_{n−k}. Then ∑ cn = AB.

Proof. Wlog, suppose it is ∑ an that converges absolutely, and define

An := ∑_{k=0}^n ak,  Bn := ∑_{k=0}^n bk,  Cn := ∑_{k=0}^n ck,  βn := Bn − B.

cn, βn = Bn −B.

Then we use an error term estimate:

Cn = a0b0 + (a0b1 + a1b0) + · · ·+ (a0bn + · · ·+ anb0)

= a0Bn + a1Bn−1 + · · ·+ anB0

= a0(B + βn) + a1(B + βn−1) + · · ·+ an(B + β0)

= AnB + a0βn + · · ·+ anβ0.

So let en := a0βn + · · · + anβ0; it will suffice to show en → 0.

To use the absolute convergence of ∑ an, let α := ∑ |an|.

Fix ε > 0. Since ∑ bn converges, choose N such that n ≥ N =⇒ |βn| < ε, so that

|en| ≤ |a0βn + · · · + a_{n−N−1}β_{N+1}| + |a_{n−N}βN + · · · + anβ0|


≤ εα + |a_{n−N}βN + · · · + anβ0|.

Now since an → 0, for N fixed and n >> 1, we can make |a_{n−N}βN + · · · + anβ0| < ε. Then

|en| < (α + 1)ε.

Theorem 7.8.2 (Dirichlet’s test). Suppose that the partial sums An = ∑_{i=1}^n ai form a bounded sequence, and suppose there is a sequence {bi} with bi ≥ b_{i+1} and bi → 0. Then ∑ aibi converges.

Exercises: #7.2.2, 7.4.2, 7.4.4 Recommended: #7.2.3, 7.3.1, 7.4.1,

7.4.3

Problems: #7-4 Recommended: #7-5, 7-7

Due: Mar.

1. If ∑ an converges and {bn} is monotonic and bounded, then ∑ anbn converges.

2. Give an example to show that you can have ∑ xn diverge and ∑ yn diverge, but ∑ xnyn converge.

3. (a) Show that if ∑ an converges absolutely, then ∑ an² does, too. Is this true without the hypothesis of absolute convergence?

(b) If ∑ an converges and an ≥ 0, what can be said about ∑ √an?

4. (a) (Dirichlet’s Test) Suppose that the partial sums An = ∑_{i=1}^n ai form a bounded sequence, and suppose there is a sequence {bi} with bi ≥ b_{i+1} and bi → 0. Then show ∑ aibi converges. (Hint: |∑_{i=p}^q aibi| = |∑_{i=p}^{q−1} Ai(bi − b_{i+1}) + Aq bq − A_{p−1} bp|.)

(b) Use Dirichlet’s test to prove the alternating series test.

5. For an alternating series, en = |sn − S| < a_{n+1}, where S = ∑ (−1)^n an.


Chapter 8

Power Series

8.1 Intro, radius of convergence


Definition 8.1.1. A power series is a series of the form ∑ an x^n, where x is a variable. The nth term of a power series is an x^n (rather than just an).

∑ an x^n is a family of series, one for each value of x. We are interested in the subfamily corresponding to

A = {x ∈ R : ∑ |an x^n| converges}.

Then we can define a function

f : A → R,   f(x) = ∑ an x^n.

Example 8.1.1. Where does ∑_{n=1}^∞ x^{2n}/(2^n n) converge?

Solution.

|a_{n+1}/an| = |x^{2n+2}/(2^{n+1}(n + 1)) · (2^n n)/x^{2n}| = (n/(2(n + 1))) |x|² −→ |x|²/2, ∀x ∈ R.

The Ratio test implies convergence if this final quantity is < 1, so find where this is true:

|x|²/2 < 1 ⇐⇒ |x| < √2.


So it converges for −√2 < x < √2 and diverges for |x| > √2.

(What happens for x = ±√2? Evaluate:)

∑_{n=1}^∞ (±√2)^{2n}/(2^n n) = ∑_{n=1}^∞ 2^n/(2^n n) = ∑_{n=1}^∞ 1/n.
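(Added, not in the original notes: the ratio-test computation checked numerically at a few sample points x; the limiting ratio is |x|²/2, so the boundary is at |x| = √2 ≈ 1.414.)

    # Sketch: the ratio |a_{n+1}/a_n| for the series sum x^(2n)/(2^n n), at large n.
    def ratio(x, n):
        return abs(x) ** 2 * n / (2 * (n + 1))
    for x in (1.0, 1.4, 1.5, 2.0):
        r = ratio(x, 10 ** 6)
        print(x, "ratio ->", round(r, 4), "  converges?", r < 1)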

This behavior is typical:

Theorem 8.1.2. For any power series ∑ an x^n, ∃!R ≥ 0 such that

∑ an x^n converges absolutely for |x| < R,
∑ an x^n diverges for |x| > R.

R is the radius of convergence of the power series. (Note: may have R = 0.) By convention, R = ∞ iff the series converges ∀x ∈ R.

Proof. (I) First step: show convergence for x = c implies absolute convergence for |x| < |c|. This shows the domain of convergence contains the interval (−|c|, |c|).

For c = 0, it is trivial, so assume c > 0.

If ∑ an c^n converges, then an c^n → 0, so |an c^n| ≤ M for some fixed M > 0. Then for |x| < c,

|an x^n| = |an c^n| |x/c|^n ≤ M |x/c|^n.

Since ∑ M|x/c|^n is a geometric series with |x/c| < 1, it converges and, by Comparison, ∑ an x^n converges absolutely.

(II) Second step: define R as the sup of the c’s in the first step, and show that it separates convergent from divergent. Let

A := {x ∈ R : ∑ |an x^n| converges}.

If A = R then let R = ∞. Otherwise, ∃b ∉ A, hence also −b ∉ A. Then |b| is an upper bound for A, because c ∈ A =⇒ (−c, c) ⊆ A, by the first part.

So sup A exists; define R = sup A.

(III) |x| < R =⇒ ∑ |an x^n| converges. This is because |x| < R = sup A means |x| < c for some c ∈ A, and then x ∈ (−c, c) ⊆ A.


(IV) |x| > R =⇒ ∑ an x^n diverges. This is because for R < c < |x|, convergence of ∑ an x^n would imply absolute convergence at c, which violates the definition of R as an upper bound of A.

8.2 Convergence at endpoints, Abel summation

Convergence of power series can be difficult to determine at R,−R.

Definition 8.2.1. Abel summation. Suppose that a power series ∑ an x^n converges to a continuous function f on (−1, 1). If f is defined and continuous at x = 1, then say the series is Abel-summable to f(1), even if the series diverges at x = 1:

∑ an x^n = f(x) =⇒ ∑ an = f(1).

This can be used to find the value of uncooperative numerical series.

Example 8.2.1. The divergent series ∑ (−1)^n = 1 − 1 + 1 − 1 + 1 − . . . is Abel-summable to 1/2.

1 − x + x² − x³ + · · · = ∑_{n=0}^∞ (−x)^n = 1/(1 + x), |x| < 1.

This function is continuous at x = 1, so the Abel sum of ∑ (−1)^n is 1/2.
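(Added, not in the original notes: evaluating the power series at x slightly less than 1 shows the Abel sum emerging numerically.)

    # Sketch: sum (-1)^n x^n = 1/(1+x) for |x| < 1; letting x -> 1- gives the Abel sum 1/2.
    for x in (0.9, 0.99, 0.999):
        s = sum((-1) ** n * x ** n for n in range(100000))
        print(x, round(s, 6), "  1/(1+x) =", round(1 / (1 + x), 6))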

8.3 Linearity of power series

Theorem 8.3.1. If f(x) = ∑ an x^n and g(x) = ∑ bn x^n, then for any p, q ∈ R,

p·f(x) + q·g(x) = ∑ (p·an + q·bn) x^n

is valid on the common domain of convergence of f, g.

Proof. Since this is true for every fixed value of x by prev thm, done.

Example 8.3.1.

1 + x + x² + x³ + . . . = 1/(1 − x), |x| < 1


1 − x + x² − x³ + . . . = 1/(1 + x), |x| < 1

2(1 + x² + x⁴ + . . . ) = 1/(1 − x) + 1/(1 + x) = 2/(1 − x²)

1 + x² + x⁴ + . . . = 1/(1 − x²) = ∑ (x²)^n.

8.4 Multiplication of power series

Q: In what order to take the terms of an infinite FOIL-expansion? (No “L”!)

A: Group terms by powers of x (convention).

Theorem 8.4.1 (Cauchy product). If f(x) = ∑ an x^n and g(x) = ∑ bn x^n, then

f(x)g(x) = ∑_{n=0}^∞ ( ∑_{i+j=n} ai bj ) x^n = ∑_{n=0}^∞ ( ∑_{i=0}^n ai b_{n−i} ) x^n

on their common domain of convergence.

Proof. The result holds at any fixed x ∈ (−R, R) by Cauchy Prod thm for numerical

series.
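(Added, not in the original notes: the coefficient convolution carried out in Python. The test case 1/(1 − x) · 1/(1 − x) = ∑ (n + 1)x^n is my own choice.)

    # Sketch: Cauchy product of power series via coefficient convolution, c_n = sum_i a_i b_{n-i}.
    def cauchy_product(a, b):
        n = min(len(a), len(b))
        return [sum(a[i] * b[k - i] for i in range(k + 1)) for k in range(n)]
    ones = [1.0] * 8                         # coefficients of 1/(1-x)
    print(cauchy_product(ones, ones))        # [1.0, 2.0, 3.0, ..., 8.0]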

Exercises: #8.1.1(beh), 8.3.1 Recommended: #8.2.2

Problems: #8-1 Recommended: #

Due: Mar.

1. For how many points x ∈ R can a power series converge conditionally?


Chapter 9

Functions of One Variable

9.1 Functions


Definition 9.1.1. A function from a set D to a set R is a subset f ⊆ D × R for which

each element d ∈ D appears in exactly one element (d, ·) ∈ f . Write f : D → R. If

(x, y) ∈ f , then we usually write f(x) = y.

D is the domain of the function; the subset of R of elements x for which the function is

defined.

R is the range; a subset of R which contains all the points f(x). Generally, assume R = R.

Definition 9.1.2. A function is a rule of assignment x 7→ f(x), where for each x in the

domain, f(x) is a unique and well-defined element of the range.

f(x) = y means “f maps x ∈ D to f(x) ∈ R”.

NOTE: a function whose domain is N is generally called a sequence, and a : N→ R is

denoted by an := a(n).

Definition 9.1.3. The image of f is the subset

Im f := {y ∈ R : ∃x ∈ D, f(x) = y} ⊆ R.


The function f is surjective or onto iff Im f = R, that is,

∀y ∈ R, ∃x ∈ D, f(x) = y.

Definition 9.1.4. For f : D → R, the preimage of B ⊆ R is the subset

f^{−1}(B) := {x : f(x) ∈ B} ⊆ D.

Example 9.1.1. The preimage of [0, 1] under f(x) = x² is [−1, 1].

The preimage of [−1, 1] under f(x) = sin x is R.

The preimage of {1} under f(x) = sin x is {π/2 + 2kπ}_{k∈Z}.

The preimage of [0, 1] under log x is [1, e].

NOTE: one can discuss the preimage of any function but the preimage is not necessarily

a function. In fact, the preimage f−1 is a function iff f is both injective and surjective.

Definition 9.1.5. A function f is injective or one-to-one iff no two distinct points in D

get mapped onto the same point in R, i.e.

f(x) = f(y) =⇒ x = y.

Example 9.1.2. f(x) = x² is injective on (0, ∞) but not on R.

f(x) = 1/x is injective on R \ {0}.

Theorem 9.1.6. TFAE:

1. f is invertible.

2. f is bijective.

3. f−1 is a function.

4. ∃g such that g◦f = idD and f ◦g = idR.

Suppose f−1 exists. Then a point (x, y) is in the graph of f iff (y, x) is in the graph

of f−1.

NOTE: an easy way to see that f is injective is to prove it is strictly increasing (or

decreasing) on its domain.

NOTE: if f is continuous, an easy way to see that f is surjective onto [a, b] is to find

x such that f(x) = a and y such that f(y) = b.


Theorem 9.1.7. A function which is continuous and strictly increasing (or decreasing)

is invertible.

Theorem 9.1.8. If f is injective, then its inverse can be defined on its image.

9.2 Algebraic operations on functions

Definition 9.2.1.

f + g:   (f + g)(x) := f(x) + g(x)
fg:      (fg)(x) := f(x)g(x)
cf:      (cf)(x) := c·f(x)
g◦f:     (g◦f)(x) := g(f(x))

Definition 9.2.2. Translation: for a > 0,

f(x + a)   left-shift by a
f(x − a)   right-shift by a.

Change of scale: for a > 1,

f(x/a)   horizontal expansion by factor a
f(ax)    horizontal contraction by factor a.

Vertical expansion/contraction is c·f.

9.3 Properties of functions

All properties of sequences like increasing, decreasing, monotone, etc., apply directly to functions: replace n with a and n + 1 with b, where b > a, throughout the definitions.

Symmetries of functions:

Definition 9.3.1. f is even iff f(−x) = f(x), ∀x ∈ D. f is odd iff f(−x) = −f(x), ∀x ∈D.


Theorem 9.3.2. An even function times an even function is even. An odd function times

an even function is odd. What about an odd function times an odd function? (Compare

with the rules for integers.)

Proof. HW

Theorem 9.3.3. Suppose the domain of f is symmetric about 0. Then f has a unique

representation as the sum of an even and an odd function:

f(x) = E(x) + O(x), E(x) even and O(x) odd.

Proof. HW 9.3.1

Definition 9.3.4. f is periodic iff ∃T > 0 such that f(x + T ) = f(x), ∀x ∈ D. The

smallest such T is called the period of f , if it exists.

Example 9.3.1. A function which is even and monotone must be constant.

9.4 Elementary functions

1. Rational functions p(x)/q(x), where p, q are polynomials.

2. The six trigonometric functions sin x, cosx, . . . , and their inverses.

3. The exponential ex and its inverse log x.

4. The algebraic functions y = y(x) which satisfy some equation

y^n + a_{n−1}(x)y^{n−1} + · · · + a_1(x)y + a_0(x) = 0,

where the ak(x) are rational functions. (E.g., √x.)

Exercises: #9.2.1,9.2.2,9.3.4, Recommended: #9.3.5

Problems: #9-1,9-2 Recommended: #

Due: Mar.

1. An even function times an even function is even. An odd function times an even

function is odd. What about an odd function times an odd function? (Compare

with the rules for integers.)


Chapter 10

Local and Global Behavior

10.1 Intervals. Estimating functions


Intervals are “building blocks” of sets, especially domains.

(a, b) open (a,∞) open (−∞, b) open

[a, b] closed [a,∞] wrong

[a, b) neither (a, b] neither.

Definition 10.1.1. A δ-neighbourhood of a is (a− δ, a + δ), and TFAE

x ∈ (a− δ, a + δ), a− δ < x < a + δ, |x− a| < δ, x ≈δ a.

Definition 10.1.2. B is an upper bound for f on an interval I iff B is an upper bound of f(I). Similarly,

sup_I f(x) = sup{f(x) : x ∈ I}
max_I f(x) = max{f(x) : x ∈ I}.

Similarly for inf, min.

NOTE: NONE of these need exist!


Theorem 10.1.3 (Completeness for functions). Suppose f is bounded above on I. Then

supI f(x) exists.

NOTE: there may not actually exist c ∈ I for which f(c) = sup_I f(x)!

Boundedness/estimates: |f(x)| ≤ B.

This means |f| is bounded above by the constant function B(x) ≡ B.

More generally, use a bound like |f(x)| ≤ g(x).

NOTE:

|f(x)g(x)| = |f(x)||g(x)|, |f(x) + g(x)| ≤ |f(x)|+ |g(x)|.

USE: apply to integrals.

Theorem 10.1.4. If f < g on I and the integrals exist, then for any a, b ∈ I with a < b,

∫_a^b f(x) dx < ∫_a^b g(x) dx.

Proof. coming ...

Corollary 10.1.5. If f ≤ M on [a, b] and its integral exists, then

∫_a^b f(x) dx ≤ M(b − a).

Proof. Use g(x) ≡ M , where M is a bound for f .

Example 10.1.1. Show that erf x = ∫_0^x e^{−t²/2} dt is bounded above on the interval [0, ∞).

Solution. We have an upper bound for the (positive) integrand given by

t ≤ t² =⇒ e^{−t²/2} ≤ e^{−t/2},

however this is only true for t ≥ 1. But it suffices to consider only this domain!

erf x = ∫_0^x e^{−t²/2} dt
      = ∫_0^1 e^{−t²/2} dt + ∫_1^x e^{−t²/2} dt
      ≤ M + ∫_1^x e^{−t/2} dt


≤ M + [−2 e^{−t/2}]_1^x
≤ M + 2e^{−1/2}, for x ≥ 1.

More generally:

Theorem 10.1.6. f ≈ε g on [a, b] =⇒ ∫_a^b f(x) dx ≈_{ε(b−a)} ∫_a^b g(x) dx.

Proof. HW; apply the definition of ≈ε and the corollary above.

10.2 Approximating functions

Definition 10.2.1. Similar to the previous definitions, TFAE for x ∈ I:

g(x) − δ < f(x) < g(x) + δ,   |f(x) − g(x)| < δ,   f(x) ≈δ g(x),
f(x) = g(x) + e(x), |e(x)| < δ.

Example 10.2.1. To find a δ-neighbourhood of 0 where sin x ≈ε x, ε = .001, apply the corollary to the Alternating Series test to the series expansion

sin x = x − x³/3! + x⁵/5! − . . .

|sin x − x| ≤ x³/3!, 0 < x < 1,

and x³/3! < 0.001 when x³ < .006, i.e., x < 0.18.

Since sin x is symmetric about 0 (odd fn), let δ = 0.18.
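(Added, not in the original notes: a brute-force numerical check of the estimate on a grid of points in (−0.18, 0.18).)

    # Sketch: max of |sin x - x| over a fine grid in [-0.18, 0.18] stays below 0.001.
    import math
    grid = [k * 0.18 / 1000 for k in range(-1000, 1001)]
    worst = max(abs(math.sin(x) - x) for x in grid)
    print(worst, worst < 0.001)              # about 0.00097, True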

10.3 Local behavior

The local behavior of f is what can be seen by studying f in a neighbourhood of x.

Definition 10.3.1. f is locally increasing at x means f is increasing on I = (x−δ, x+δ),

for some δ.

f is locally bounded at x means f is bounded on I = (x− δ, x + δ), for some δ.


Example 10.3.1. sin x is locally increasing at every 2πn, n ∈ Z.

NOTE: f need not actually be defined at x.

Example 10.3.2. 1/x is locally bounded at any x ≠ 0. However, 1/x is not bounded near 0; it’s unbounded in any neighbourhood of 0.

sin(1/x) is not monotonic near 0.

Theorem 10.3.2. f, g locally bounded near x =⇒ so is f + g.

Proof. Straightforward: just use δ = min{δf , δg}.

Definition 10.3.3. A property of f is true for x >> 1 or “for x near ∞” or “at ∞” if it

holds on some interval (a,∞).

Example 10.3.3. 1/x and e^{−x} are functions that “vanish at ∞”, i.e., satisfy |f(x)| < ε on (N, ∞) for sufficiently large N. Every nonconstant polynomial is unbounded near ∞.

10.4 Local and global properties

Definition 10.4.1. f is locally bounded on I = (a, b) iff it is locally bounded at any

point in I.

Example 10.4.1. 1/x is locally bounded on (0, ∞) but not bounded. NOTE: one must use increasingly small δ for points a near 0.

Definition 10.4.2. f is locally increasing on I iff it is locally increasing at every point of I.

Example 10.4.2. tan x is locally increasing but not increasing on R \ {(n + 1/2)π}_{n∈Z}.

Most important examples: continuity and differentiability. These are local but not

pointwise properties: need some neighbourhood of the point in question.

Example 10.4.3. f is positive at c. This is a pointwise property, and it is satisfied iff f(c) > 0. We don’t need a neighbourhood.

Exercises: #10.1.7,10.1.9,10.3.2 Recommended: #10.1.1,10.3.1

Problems: #10-2 Recommended: #10-3

Due: Mar.

1. If f and g are bounded on I, show that f + g and fg are bounded on I.


Chapter 11

Continuity and limits

11.1 Continuous functions


IDEA: if x and y are related by a function f(x) = y, then we want to say that y

varies continuously with respect to x iff small changes in x produce small changes in y;

no jumping!

Example 11.1.1. Define the Heaviside function

h(x) = 0 for x ≤ 0,   h(x) = 1 for x > 0.

Then for x in any small neighbourhood of 0, varying x by δ can produce a sudden jump of distance 1; we cannot make this jump less than, e.g., ε = 1/2 by restricting to smaller δ.

Definition 11.1.1. f is continuous at c iff it is defined at c and

∀ε > 0, x ≈ c =⇒ f(x) ≈ε f(c)

∀ε > 0, ∃δ, x ≈δ c =⇒ f(x) ≈ε f(c)

∀ε > 0, ∃δ, |x− c| < δ =⇒ |f(x)− f(c)| < ε.


Definition 11.1.2. f has a limit from the left at c (or is left-continuous at c) iff

∀ε > 0, x ≈δ c, x < c =⇒ f(x) ≈ε f(c)
∀ε > 0, ∃δ, 0 < c − x < δ =⇒ |f(x) − f(c)| < ε.

Write f(c−) := lim_{x→c−} f(x). Similarly for f(c+) := lim_{x→c+} f(x):

∀ε > 0, ∃δ, 0 < x − c < δ =⇒ |f(x) − f(c)| < ε.

So clearly f is continuous at c iff f(c−) = f(c+) = f(c) (where of course both one-sided limits must exist).

Definition 11.1.3. f is continuous on I iff I is an interval (possibly infinite) and f is

continuous at every point c ∈ I.

Definition 11.1.4. f is Lipschitz (or strongly continuous) iff it satisfies a Lipschitz con-

dition:

∀x, y |f(x)− f(y)| ≤ C|x− y|, C > 0.

In this case, C is the Lipschitz constant of f .

Theorem 11.1.5. f is Lipschitz =⇒ f is continuous.

Proof. Let δ = ε/C.

Example 11.1.2. Show sin x is Lipschitz continuous with constant C = 1.

Solution. Let P0 be the point on the unit circle corresponding to the arc AP0 of length c, so P0 = (cos c, sin c) in coordinates. We need to see that for x near c, P = (cos x, sin x) is near P0. The difference between the heights of the two triangles is

|sin x − sin c| = |PR| ≤ |PP0| = |x − c|,

since the segment PR is the shortest path from P to the horizontal line through P0 and R, and PP0 is the arc along the circle.

Theorem 11.1.6. |∫ f(t) dt| ≤ ∫ |f(t)| dt.

Theorem 11.1.7. ∫ f(t) dt + ∫ g(t) dt = ∫ (f(t) + g(t)) dt.


Example 11.1.3. Show that the integral ∫_0^π (sin xt)/t dt depends continuously on x, and is in fact Lipschitz continuous with constant π.

Solution. We prove f(x) := ∫_0^π (sin xt)/t dt is a Lipschitz continuous function:

|f(x) − f(y)| = |∫_0^π (sin xt)/t dt − ∫_0^π (sin yt)/t dt|
             = |∫_0^π (sin xt − sin yt)/t dt|
             ≤ ∫_0^π |sin xt − sin yt|/t dt
             ≤ ∫_0^π |xt − yt|/t dt
             = |x − y| · ∫_0^π 1 dt
             = π|x − y|.
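(Added, not in the original notes: a numerical spot-check of the Lipschitz bound. The integral is approximated by a midpoint Riemann sum, which avoids the removable issue at t = 0; the sample pairs (x, y) are arbitrary.)

    # Sketch: f(x) = integral from 0 to pi of sin(xt)/t dt satisfies |f(x)-f(y)| <= pi*|x-y|.
    import math
    def f(x, m=20000):
        dt = math.pi / m
        return sum(math.sin(x * (j + 0.5) * dt) / ((j + 0.5) * dt) * dt for j in range(m))
    for x, y in ((0.5, 0.7), (1.0, 3.0)):
        print(abs(f(x) - f(y)), "<=", math.pi * abs(x - y))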

11.1.1 Discontinuities

Definition 11.1.8. f has a simple (or removable) discontinuity at c iff (re)defining f(c)

could make f continuous at c. In this case, f(c−) = f(c+), but f(c) is something else.

Example 11.1.4. f(x) = (x² − 9)/(x − 3).

Definition 11.1.9. f has a jump discontinuity at c iff f(c−) ≠ f(c+) (in which case f(c) is immaterial).

Example 11.1.5. f(x) = x/|x|.

Definition 11.1.10. f has an infinite discontinuity at c iff f(c−) = ±∞ or f(c+) = ±∞.

Example 11.1.6. f(x) = 1/x.

Definition 11.1.11. f has an essential singularity at c iff it is not one of the other kinds.

Example 11.1.7. f(x) = sin(1/x).

11.2 Limits of functions

If f(c) is not defined, we may still be able to talk about what it “ought” to be.

Definition 11.2.1. f(x) has the limit L as x → c iff

∀ε > 0, x ≈δ c, x ≠ c =⇒ f(x) ≈ε L


∀ε > 0,∃δ, 0 < |x− c| < δ =⇒ |f(x)− L| < ε.

Example 11.2.1. lim_{x→0} x sin(1/x) = 0.

Solution. Fix ε > 0. Then for 0 < |x| < ε, |x sin(1/x)| = |x| · |sin(1/x)| ≤ |x| < ε.

Example 11.2.2. lim_{x→1−} √(1 − x²) = 0.

Solution. The function is not defined for x > 1. Fix ε > 0. Then for 0 < 1 − x < ε²/2,

√(1 − x²) = √(1 − x) · √(1 + x) < √(ε²/2) · √2 = ε,

since √(1 + x) < √2 for x < 1.

Definition 11.2.2. f has a limit at ∞ iff given ε > 0, we have f(x) ≈ε L for x >> 1.

Example 11.2.3. lim_{x→∞} 1/(1 + x²) = 0.

Solution. Fix ε > 0. Then for x > ε^{−1/2}, 1 + x² > 1/ε =⇒ 1/(1 + x²) < ε.

Definition 11.2.3. Suppose f is defined in (c − δ, c + δ), except possibly at c, for some δ > 0. Then lim_{x→c} f(x) = ∞ iff given N ∈ N,

∃δ0, x ≈_{δ0} c, x ≠ c =⇒ f(x) > N.

Example 11.2.4. lim_{x→0} 1/x² = ∞.

Solution. Fix N ∈ N. For any x ≠ 0 with |x| < 1/√N, we have x² < 1/N =⇒ 1/x² > N.

11.3 Limit theorems for functions

11.4 Limits and continuity

Continuous functions are useful because they preserve limits:

lim_{x→c} f(x) = f(lim_{x→c} x) = f(c).

The reason is that continuous functions map convergent sequences to convergent se-

quences.


Theorem 11.4.1. f is continuous at c iff lim_{x→c} f(x) = f(c).

Proof. We must show the following two lines are equivalent:

∀ε > 0, x ≈δ c, x ≠ c =⇒ f(x) ≈ε f(c)
∀ε > 0, x ≈δ c =⇒ f(x) ≈ε f(c).

Since x = c =⇒ f(x) = f(c), this is trivial.

Theorem 11.4.2 (Sequential continuity). limt→x f(t) = L iff limn→∞ f(xn) = L for

every {xn} with xn → x.

Proof. (⇒) Choose a sequence xn → x, and fix ε > 0. Since lim_{t→x} f(t) = L, ∃δ > 0 for which

|x − t| < δ =⇒ |f(t) − L| < ε.

Also, there is N such that

n ≥ N =⇒ |xn − x| < δ.

Thus, n ≥ N =⇒ |f(xn)− L| < ε.

(⇐) Contrapositive: suppose it is false that lim_{t→x} f(t) = L. Then:

∃ε > 0, ∀δ > 0, ∃t, |x − t| < δ and |f(t) − L| ≥ ε
∃ε > 0, ∀n ∈ N, ∃tn, |x − tn| < 1/n and |f(tn) − L| ≥ ε

This produces tn → x for which it is false that lim_{n→∞} f(tn) = L.

By this theorem, our entire “limit toolkit” transfers to functions: the comparison

theorems, linearity, products & quotients, error form, K − ε principle, Squeeze Theorem,

Limit Location, Function Location (instead of Sequence Location).

Corollary 11.4.3. f is continuous at x, iff limn→∞ f(xn) = f(x) for every {xn} with

xn → x.

In other words, for a continuous function f ,

xn → x =⇒ f(xn) → f(x),

so that continuous functions map convergent sequences to convergent sequences.


Some of the basic limit theorems for functions (the new “function limit toolkit”) can

be extended for continuous functions.

Theorem 11.4.4 (Positivity). If f is continuous at c and f(c) > 0, then f(x) > 0 for

x ≈ε c.

Proof. Let xn → c. Then f(xn) → f(c). Since f(c) > 0, the Sequence Location Theorem

gives f(xn) > 0 for n >> 1.

This proof amounts to saying:

Since limx→c f(x) = f(c) > 0, the Function Location Theorem gives f(x) > 0 for x ≈ε c.

Theorem 11.4.5. If f, g are continuous, then so are f + g, f · g, and (if g ≠ 0) f/g.

Proof. Continuous functions preserve sequences, and limits are linear & multiplicative for

sequences.

Theorem 11.4.6. Let x = g(t), c = g(b). If g(t) is continuous at b and f(x) is continuous

at c, then f ◦g(t) = f(g(t)) is continuous at b.

Proof. Given ε > 0, ∃δ > 0 such that

f(x) ≈ε f(c) for x ≈δ c continuity of f

g(t) ≈δ g(b) for t ≈α b continuity of g.

Then t ≈α b =⇒ x = g(t) ≈δ g(b) = c =⇒ f(x) ≈ε f(c).

Example 11.4.1. f(x) = cos(1/x) has an essential discontinuity at 0.

Solution. HW: Consider the sequences

xn := 1/(2nπ),         f(xn) = 1
yn := 1/((2n + 1)π),   f(yn) = −1.

By Sequential Continuity, f(0+) cannot exist.


Theorem 11.4.7 (Pasting Lemma). Let f be continuous on [a, b] and g be continuous on [b, c]. If f(b) = g(b), then

h(x) := f(x) for x ∈ [a, b],   h(x) := g(x) for x ∈ [b, c],

is continuous on [a, c].

Proof. HW

Since f, g are continuous on their domains, it only remains to check continuity at b. Fix ε > 0. Choose δ1 so that

0 < b − x < δ1 =⇒ |f(x) − f(b)| < ε,

and choose δ2 so that

0 < x − b < δ2 =⇒ |g(x) − g(b)| < ε.

Define δ := min{δ1, δ2}. Then

x ≈δ b =⇒ h(x) ≈ε h(b) = f(b) = g(b).

Exercises: #11.1.4, 11.2.2, 11.3.3, 11.4.2, 11.5.1, 11.5.4 Recommended:

#11.3.5, 11.3.6, 11.4.4, 11.5.5, 11.5.6

Problems: #11-1, 11-2 Recommended: #11-3

Due: Mar.


Chapter 12

Intermediate Value Theorem

12.1 Existence of zeros


Theorem 12.1.1 (Bolzano’s Thm). Let f be continuous on [a, b] with f(a) < 0 < f(b).

Then there is a point c ∈ (a, b) where f(c) = 0.

Proof. Define

C := {x ∈ [a, b] : f(x) ≤ 0}.

Since a ∈ C and b is an upper bound for C, we may define

c := sup C.

Suppose f(c) > 0. Then by the Positivity Theorem, we could find ε > 0 with f(x) > 0 on (c − ε, c), so that c would not be the smallest upper bound (e.g. c − ε/2 would be smaller).

Suppose f(c) < 0. Then by the Negativity Theorem, we could find ε > 0 with f(x) < 0

on (c, c + ε), so that c would not be an upper bound.

Theorem 12.1.2 (IVT). Let f be continuous on [a, b] and f(a) ≤ f(b). Then for k ∈ R,

f(a) ≤ k ≤ f(b) =⇒ ∃c ∈ [a, b], f(c) = k.


Proof. If k = f(a) or k = f(b), then c = a or c = b works, so assume wlog f(a) < k < f(b). Then f(a) − k < 0 < f(b) − k, so apply Bolzano's Thm to f(x) − k to get f(c) − k = 0.
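(Added, not in the original notes: Bolzano's theorem is exactly what makes bisection root-finding work; the test function x³ − 2 on [1, 2] is my own choice.)

    # Sketch: bisection locates a zero of a continuous function that changes sign.
    def bisect_zero(f, a, b, iters=60):
        assert f(a) < 0 < f(b)
        for _ in range(iters):
            c = (a + b) / 2
            if f(c) < 0:
                a = c
            else:
                b = c
        return (a + b) / 2
    print(bisect_zero(lambda x: x ** 3 - 2, 1.0, 2.0))   # cube root of 2 ≈ 1.259921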

12.2 Applications of Bolzano

Example 12.2.1. A polynomial of odd degree has a real zero.

Proof. Consider x^{2k+1} = x(x²)^k. Then

x < 0 =⇒ x(x²)^k < 0,   x > 0 =⇒ x(x²)^k > 0.

Apply this to the polynomial a0 + a1x + · · · + an x^n, assuming an ≠ 0 (so it has degree n = 2k + 1). The lesser terms will not matter for |x| >> 1:

|a_{n−1}x^{n−1} + · · · + a0| ≤ |a_{n−1}||x^{n−1}| + · · · + |a0|
                             ≤ |an x^n| ( |a_{n−1}/(an x)| + |a_{n−2}/(an x²)| + · · · + |a0/(an x^n)| )
                             < |an x^n|.

Then the polynomial satisfies the conditions for Bolzano’s Thm.

Theorem 12.2.1 (Intersection Principle). (a) The solutions of f(x) = g(x) are the val-

ues of x for which the graphs intersect.

(b) If f, g are continuous on [a, b] and f(a) ≤ g(a) but f(b) ≥ g(b), then the graphs

intersect at some c ∈ [a, b].

Proof of (a). A point in the graph of f looks like (x, f(x)) ∈ R2. So a point lies on each

graph iff (c, f(c)) = (c, g(c)) in R2.

Proof of (b). Apply Bolzano’s Theorem to the continuous function f(x)− g(x).

12.3 Monotonicity and the IVP

Definition 12.3.1. A function f has the Intermediate Value Property on [a, b] iff it is

defined on [a, b] and takes on all values between f(a) and f(b) as x varies between a and

b.


Theorem 12.3.2 (Continuity for monotone fns). If the function f is strictly increasing

and has the IVP on [a, b], then it is continuous on [a, b].

Proof. Wlog, let f be increasing. Let c ∈ (a, b); show f continuous at c. Fix ε > 0 and find x1, x2 such that

f(x1) = f(c) − ε and f(x2) = f(c) + ε.

Since f is strictly increasing, a small enough ε will ensure f(c) − ε > f(a) and f(c) + ε < f(b).

Then IVP ensures these two points exist, and strictly increasing gives

a < x1 < c < x2 < b and f(a) < f(x1) < f(c) < f(x2) < f(b),

so that these points x1, x2 are unique. Define δ := min{|c− x1|, |c− x2|}. Then we have

(c− δ, c + δ) ⊆ (x1, x2) and so

x ≈δ c =⇒ f(x) ≈ε f(c)

|x− c| < δ =⇒ |f(x)− f(c)| < ε.

NOTE: it is clear how the proof goes through for a strictly decreasing function, but

it also extends to a piecewise strictly monotone function, e.g. sin x. So to prove sin x is

continuous, you could apply this theorem and it would only remain to check that sin x is

continuous at the points x = (2k + 1)π/2, which follows immediately from the Pasting

Lemma.

12.4 Inverse functions

Theorem 12.4.1 (Inverse of increasing functions). If y = f(x) is continuous and strictly

increasing on [a, b], then it has an inverse function x = g(y) which is also continuous and

strictly increasing.

Proof. (1) g is defined on [f(a), f(b)].

Fix any point y in [f(a), f(b)]. By IVT, there is an x ∈ [a, b] with f(x) = y, and this

point is unique because f is strictly increasing. Thus, g(y) := x is well-defined.


(2) g is strictly increasing.

Let x1 := g(y1) and x2 := g(y2), so that yj = f(xj). Since f is strictly increasing,

    x1 ≤ x2 =⇒ f(x1) ≤ f(x2), i.e., g(y1) ≤ g(y2) =⇒ y1 ≤ y2.

Taking the contrapositive, y1 > y2 =⇒ g(y1) > g(y2), so g is strictly increasing.

(3) g is continuous.

We've seen g is strictly monotone. To see g has the IVP on [f(a), f(b)], note that every value x with a ≤ x ≤ b is attained: g(y) = x for y := f(x). Then we are done by the previous theorem (Continuity for monotone fns).

NOTE: of course, this theorem is also true with “increasing” replaced by “decreasing”

throughout, mutatis mutandis.

Exercises: #12.1.1, 12.2.1 (make a map), 12.4.1 Recommended: #12

Problems: #12-2, 12-7 Recommended: #12

Due: Mar.


Chapter 13

Continuity and Compact Intervals

13.1 Compact intervals


Definition 13.1.1. A set S ⊆ R is sequentially compact iff every sequence in S has a

subsequence converging to a point of S.

Given an interval (a, b), how could this fail?

    b − 1/n ∈ (a, b) for n ≫ 1, and b − 1/n → b, but b ∉ (a, b).

Fix: use a closed interval instead! That repairs this flaw. But [a, ∞) is also a closed interval, and the sequence {a + n} has no convergent subsequence. Fix: also require boundedness.

Definition 13.1.2. An interval in R is compact iff it is closed and bounded. More generally, a set is compact if it can be obtained by taking finite unions or arbitrary intersections of compact intervals.

Example 13.1.1. The discrete set {0, 1, 2} is compact, since it can be written as a union of finitely many (i.e., three) singleton sets. Each of these is compact, since, e.g.,

    {0} = ⋂_{n=1}^∞ [0, 1/n].



Note: obviously, we cannot allow infinite unions, or else

    [a, ∞) = ⋃_{n=0}^∞ [a + n, a + n + 1]

would be a compact interval which is not bounded — contradiction.

Theorem 13.1.3 (Sequential Compactness Thm). A compact interval [a, b] is sequentially

compact.

Proof. Let {xn} ⊆ [a, b]. Then a ≤ xn ≤ b, so the sequence is bounded and the B-W thm applies to give a convergent subsequence x_{n_i} → c. Then the Limit Location thm gives

    a ≤ x_{n_i} ≤ b =⇒ a ≤ c = lim_{i→∞} x_{n_i} ≤ b,

so c ∈ [a, b].

NOTE: the following converse also holds: if S ⊆ R is sequentially compact, then it is

also compact. However, proving this requires developing the full definition of compactness

in terms of open covers, etc, so we just take it on faith.

Theorem 13.1.4. If S ⊆ R is a sequentially compact set, then S is compact.

13.2 Bounded continuous functions

Theorem 13.2.1 (Boundedness Thm). If f is continuous on a compact interval I, then

f is bounded on I.

Proof. Show f is bounded above via contrapositive:

f has no upper bound on I =⇒ f not continuous on I.

Define a sequence in I by choosing xn such that f(xn) > n, ∀n ∈ N. Since f is unbounded above, this is possible. Since I is compact, sequential compactness gives a subsequence {x_{n_i}} which converges, say x_{n_i} → c ∈ I. Further,

    f(xn) → ∞ =⇒ f(x_{n_i}) → ∞.


So f cannot be continuous at c, or else we'd have

    f(c) = lim_{i→∞} f(x_{n_i}) = ∞ — contradiction.

The proof that f must be bounded below follows, mutatis mutandis.

NOTE: compactness allows us to infer a global property (boundedness) from a local property (continuity).

NOTE: compactness is necessary. Consider x² on [0, ∞) or 1/x on (0, 1).

13.3 Extrema of continuous functions

Theorem 13.3.1 (Maximum Thm). Let f be continuous on the compact interval I. Then f attains its maximum and minimum on I: ∃α, β ∈ I such that

    f(α) = inf_{x∈I} f(x) and f(β) = sup_{x∈I} f(x).

Proof. The Boundedness Thm shows f is bounded above, so completeness gives existence of

    M := sup_{x∈I} f(x).

So f(x) ≤ M for every x ∈ I, and M is the smallest number for which this inequality is true. Thus, for each n we can pick

    xn ∈ I with f(xn) ∈ [M − 1/n, M].

By the Sequential Compactness Thm, {xn} has a convergent subsequence with limit β := lim_{i→∞} x_{n_i} ∈ I. Then the Squeeze Thm gives

    M − 1/n_i ≤ f(x_{n_i}) ≤ M =⇒ lim_{i→∞} f(x_{n_i}) = M,

and continuity of f gives f(β) = lim_{i→∞} f(x_{n_i}) = M. The minimum is attained similarly (apply the above to −f).

NOTE: here, compactness allows us to infer a global property (having a max) from a

local property (continuity).

NOTE: compactness is necessary. Otherwise, e.g., f(x) = x has no max or min on (0, 1) and no max on [0, ∞).


13.4 The mapping viewpoint

Consider f : D → R as a map of D. We are interested in the image f(D) = {f(x) : x ∈ D}.

The continuous image of a compact set is compact:

Theorem 13.4.1 (Continuous mapping thm). If f is defined and continuous on a compact

interval I, then f(I) is a compact interval.

Proof. By the Maximum Thm, there are α, β ∈ I with

    f(α) = m := inf_{x∈I} f(x) and f(β) = M := sup_{x∈I} f(x).

We show f(I) = [m, M] by double inclusion.

(⊆) By the definition of m, M,

    x ∈ I =⇒ m ≤ f(x) ≤ M =⇒ f(x) ∈ [m, M].

(⊇) Let y ∈ [m, M]. Since f is continuous, the IVT gives

    f(α) ≤ y ≤ f(β) =⇒ ∃x between α and β with f(x) = y.

But f(x) = y means y ∈ f(I).

NOTE: the proof of this theorem used Max Thm and IVT. However, the Contin

Mapping Thm also implies these two (HW 13.4.1).

13.5 Uniform continuity

Definition 13.5.1. f is continuous on I iff

∀ε > 0,∀x ∈ I, ∃δ > 0, y ≈δ x =⇒ f(y) ≈ε f(x).

This is a local property: it is verified by considering f on small neighbourhoods of

points in I. Here, δ depends on x and ε, since δ is found after these are fixed.

Definition 13.5.2. f is uniformly continuous on I iff

∀ε > 0,∃δ > 0, ∀x ∈ I, y ≈δ x =⇒ f(y) ≈ε f(x).


This is a global property: the δ is independent of x and therefore must work for all x

in I simultaneously. Here, δ depends on ε only.

NOTE: it is nonsense to ask if f is uniformly continuous at a point.

NOTE: it is nonsense to say “f is uniformly continuous”. You must specify the domain,

e.g., f is uniformly continuous on I.

Example 13.5.1. f(x) = 1/x is uniformly continuous on (1/n, ∞) for each n ∈ N, but not on (0, ∞).
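A quick numerical way to see the difference (the function values and the fixed δ below are my own illustrative choices): with a single δ, the oscillation of 1/x over an interval of width δ stays small away from 0 but blows up as x → 0+, so no one δ can serve every x in (0, ∞).

    # f(x) = 1/x: compare |f(x) - f(y)| for a fixed delta at points approaching 0.
    f = lambda x: 1/x
    delta = 1e-3
    for x in [1.0, 0.1, 0.01, 0.002]:
        y = x - delta/2                # a point within delta of x
        print(x, abs(f(x) - f(y)))
    # near x = 1 the difference is ~5e-4; near x = 0.002 it is ~167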

Theorem 13.5.3. f is uniformly continuous on I implies that f is continuous on I.

Proof. Too easy to assign for rec HW.

Theorem 13.5.4. On a compact interval, continuity implies uniform continuity.

Proof. Assume the domain I is a compact interval, and prove the contrapositive. Suppose f is not uniformly continuous on I. Then

    ∃ε₀ > 0, ∀δ > 0, ∃x, y ∈ I, x ≈_δ y and |f(x) − f(y)| ≥ ε₀.

Applying this to δn = 1/n, we can find xn, yn for each δn. That is, we can construct sequences {xn}, {yn} ⊆ I where for each n,

    xn ≈_{δn} yn and |f(xn) − f(yn)| ≥ ε₀.

By Sequential Compactness, obtain a subsequence x_{n_i} → c ∈ I and let {y_{n_i}} be the corresponding subsequence (same indices as in {x_{n_i}}). By construction, (x_{n_i} − y_{n_i}) → 0, so by Linearity of Limits, y_{n_i} → c too.

Now f cannot be continuous at x = c: since both x_{n_i} → c and y_{n_i} → c, continuity at c would force f(x_{n_i}) − f(y_{n_i}) → f(c) − f(c) = 0, contradicting |f(x_{n_i}) − f(y_{n_i})| ≥ ε₀ for every i.

NOTE: here, compactness allows us to infer a global property (uniform continuity)

from a local property (continuity).

Required: Ex #13.1.1, 13.2.1, 13.3.1, 13.4.1, 13.5.6, Prob 13-1, 13-3, 13-6

Recommended: Ex #13.1.2, 13.3.3, 13.5.2, Prob 13-2, 13-4

1. In addition to #13-6, show that 1/x is uniformly continuous on (1/n, ∞), for n ∈ N.


Chapter 14

Differentiation: Local Properties

14.1 The derivative


Definition 14.1.1. Let f be defined for x ≈ a. The derivative of f at a is the limit

    f′(a) := lim_{x→a} (f(x) − f(a))/(x − a) = lim_{h→0} (f(a + h) − f(a))/h,

provided it exists. In this case, we say f is differentiable at a.

Note: let x = a + h to see the equivalence of these two formulas.

    derivative                                      difference quotient
    f′(a):  slope of tangent line to graph at a     (f(x) − f(a))/(x − a):  slope of secant through a
    dy/dx:  ROC of y w/r/t x when x = a             Δy/Δx:  avg ROC of y over [a, a + Δx]

f ′(a) is a number.
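Numerically, the difference quotient really does settle down to this number as h shrinks; here is a small Python sketch (the choice of f, a, and the values of h are mine, for illustration only):

    import math

    f, a = math.sin, 1.0               # f'(a) = cos(1) ~ 0.5403023
    for h in [1e-1, 1e-2, 1e-4, 1e-6]:
        dq = (f(a + h) - f(a)) / h     # difference quotient
        print(h, dq, abs(dq - math.cos(a)))
    # the error in the last column shrinks roughly like h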

Now think of f ′(a) as the value of a function f ′(x) when evaluated at x = a.

Definition 14.1.2. f is differentiable on an open interval I if the limit f ′(x) exists for

every x ∈ I. The function so defined is f ′, the derivative of f .

Definition 14.1.3. If f ′ is continuous on I then we say f is continuously differentiable

and write f is C1 or f ∈ C1(I). Similarly, if f ′′ := (f ′)′ is continuous, we say f ∈ C2, etc.



Another way to say f is continuous is f ∈ C0.

Definition 14.1.4. The right-hand derivative of f at a is

    f′(a+) := lim_{x→a+} (f(x) − f(a))/(x − a),

and the left-hand derivative of f at a is

    f′(a−) := lim_{x→a−} (f(x) − f(a))/(x − a),

whenever the limits exist.

Example 14.1.1. f(x) = |x| is continuous everywhere, but not differentiable at 0. Rea-

son: f ′(0−) 6= f ′(0+).

Example 14.1.2. f(x) = √x is differentiable. The domain is only [0, ∞), so it is understood that "differentiable" means f′(x) exists on (0, ∞). (At 0, the difference quotient is √x/x = 1/√x → ∞, so f′(0+) does not exist as a finite limit.)

Theorem 14.1.5. f is differentiable at x0 implies f is continuous at x0.

Proof. f(t) − f(x) = [(f(t) − f(x))/(t − x)] · (t − x) → f′(x) · 0 = 0 as t → x.

In fact, f satisfies a Lipschitz-type bound at x₀: |f(x) − f(x₀)| ≤ (|f′(x₀)| + 1)|x − x₀| for x ≈ x₀.

Definition 14.1.6. f is differentiable at x₀ iff

    ∀ε > 0, ∃δ > 0, 0 < |x − x₀| < δ =⇒ |(f(x) − f(x₀))/(x − x₀) − L| < ε,

in which case f′(x₀) = L.

Since x ≠ x₀, multiply the inequality through by |x − x₀| to obtain:

Definition 14.1.7. f is differentiable at x₀ iff

    ∀m > 0, ∃n > 0, 0 < |x − x₀| < 1/n =⇒ |f(x) − (f(x₀) + f′(x₀)(x − x₀))| < |x − x₀|/m.

So f ≈ g for g(x) = f(x₀) + f′(x₀)(x − x₀).

Definition 14.1.8. If f(x)/g(x) → ∞ as x → x₀, then f "blows up" faster than g. If f(x)/g(x) → 0 as x → x₀, then g "blows up" faster than f; write f(x) = o(g(x)).

Write f(x) = O(g(x)) iff |f(x)/g(x)| ≤ b < ∞ for x ≈ x₀.


Then “f is differentiable” means f(x) − g(x) = o(|x − x0|), where g is the affine

approximation to f : g(x) = f(x0) + f ′(x0)(x− x0).

14.2 Differentiation formulas

Notation for derivatives: read it in the book!

Meanwhile, it is useful to think of differentiation as an operator D, i.e., a function on

functions:

D : {functions} → {functions}, D : f 7→ f ′, D(f) = f ′.

Solving a differential equation, like

3f ′′(x)− x2f ′(x)− 4f(x) = g(x)

then amounts to inverting an operator:

(3D2 − x2D − 4)f = g

f = (3D2 − x2D − 4)−1g.

So it will be helpful to study properties of the operator (3D2 − x2D − 4). This amounts

to studying properties of the derivative.

Theorem 14.2.1 (Differentiation algebra). Let f, g be differentiable on an interval I on

which g 6= 0. Then

(i) [Linearity] D(af + bg) = aD(f) + bD(g), ∀a, b ∈ R.

(ii) [Product Rule] D(fg) = D(f)g + fD(g).

(iii) [Quotient Rule] D(f/g) = (D(f)g − fD(g))/g2.

Proof of (i). HW: follows immediately from limit defn of derivatives.

Proof of (ii). Let h = fg so that

    h(t) − h(x) = f(t)g(t) − f(t)g(x) + f(t)g(x) − f(x)g(x),

    (h(t) − h(x))/(t − x) = f(t) · [g(t) − g(x)]/(t − x) + g(x) · [f(t) − f(x)]/(t − x).

Now let t → x, using continuity of f at x.

Proof of (iii). HW: Let h = f/g so that

    (h(t) − h(x))/(t − x) = [1/(g(t)g(x))] · [ g(x) · (f(t) − f(x))/(t − x) − f(x) · (g(t) − g(x))/(t − x) ].

Theorem 14.2.2 (Chain Rule). If f is differentiable at x and g is differentiable at f(x),

then g◦f is differentiable at x with (g◦f)′(x) = g′(f(x))f ′(x).

Proof. Let y = f(x), s = f(t), and define h(t) = g(f(t)). By the defn of derivative,

    f(t) − f(x) = (t − x)[f′(x) + o(1)] as t → x,
    g(s) − g(y) = (s − y)[g′(y) + o(1)] as s → y.

So

    h(t) − h(x) = g(f(t)) − g(f(x))
                = (s − y)[g′(y) + o(1)]
                = [f(t) − f(x)] [g′(f(x)) + o(1)]
                = (t − x)[f′(x) + o(1)] [g′(f(x)) + o(1)],

    (h(t) − h(x))/(t − x) = [f′(x) + o(1)][g′(f(x)) + o(1)].

Note: s → y as t → x, by continuity of f.

Theorem 14.2.3 (Baby Inverse Fn Thm). Suppose f : (a, b) → (c, d) is C¹ and f′(x) > 0 on (a, b). Then f is invertible and f⁻¹ is C¹ with (f⁻¹)′(y) = 1/f′(x) when y = f(x).

Proof. Use the chain rule to differentiate both sides of the identity f−1(f(x)) = x.

14.3 Derivatives and local properties

Already: differentiability implies continuity.

Theorem 14.3.1 (Continuity of derivatives). f is differentiable on (a, b). Then f ′ as-

sumes every value between f ′(s) and f ′(t), for a < s < t < b.


Proof. (SKIP?)

Wlog f′(s) < λ < f′(t). Define g(x) := f(x) − λx, so that

    g′(s) = f′(s) − λ < 0 =⇒ g(t1) < g(s) for some t1 with s < t1 < t, and
    g′(t) = f′(t) − λ > 0 =⇒ g(t2) < g(t) for some t2 with s < t2 < t.

Then g attains its min on [s, t] at some point x with s < x < t. It follows that g′(x) = 0, hence f′(x) = λ.

Corollary 14.3.2. If f is differentiable on [a, b], then f ′ cannot have any simple or jump

discontinuities on [a, b].

Theorem 14.3.3. Suppose f ∈ C1(I), I open.

1. f is locally increasing on I ⇐⇒ f ′(x) ≥ 0.

2. f is locally decreasing on I ⇐⇒ f ′(x) ≤ 0.

Proof. Let a ∈ I. Then f locally increasing at a means that for 0 < x − a < δ,

    f(x) ≥ f(a) ⇐⇒ (f(x) − f(a))/(x − a) ≥ 0,

and letting x → a+, this gives f′(a+) ≥ 0. Since f′ exists by hypothesis, we have f′(a) = f′(a+) ≥ 0.

For part (2), apply part (1) to the locally increasing function −f(x) to obtain −f′(x) ≥ 0.

Definition 14.3.4. Let f be defined on the open interval I. Then

1. c ∈ I is a local maximizer (local maximum point) of f iff f(c) ≥ f(x) for x ≈ε c.

2. c ∈ I is a local minimizer (local minimum point) of f iff f(c) ≤ f(x) for x ≈ε c.

NOTE: a local extremizer must be an interior point; endpoints don’t count!

Theorem 14.3.5. Suppose f ∈ C1(I), I open. If a ∈ I is a local extremizer, then

f ′(a) = 0.

Proof. Suppose a is a local maximizer (the minimizer case is similar). Choose δ > 0 so that (a − δ, a + δ) ⊆ I and f(t) ≤ f(a) on this interval. Then for a − δ < t < a, we have

    (f(t) − f(a))/(t − a) ≥ 0,

and letting t → a− gives f′(a) ≥ 0. For a < t < a + δ, the quotient is ≤ 0, so letting t → a+ gives f′(a) ≤ 0. Hence f′(a) = 0.


Definition 14.3.6. c is a critical point of f iff f′(c) = 0.

Theorem 14.3.7. If c is a local extremizer of f , then c is a critical point of f .

We would like the converse, but it isn’t true: f(x) = x3 has a critical point at x = 0,

which is not a local extremizer. When is a critical point an extremizer?

Theorem 14.3.8 (Isolation principle). If we can find an open I on which f ∈ C1 and

(i) I contains exactly one critical point c of f , and

(ii) the Max Thm indicates that f has an extremizer on I,

then f has a unique extremum on I at c.

Example 14.3.1. Let f(x) = xe−x. Find & classify the extremizers.

Solution. The function is differentiable on I = (−∞,∞). Since

f ′(x) = e−x(1− x),

we have

f ′(x) = 0 ⇐⇒ x = 1.

(SKETCH)

From the sketch, f(0) = 0, f(1) = 1/e, and f(2) = 2/e² < 1/e.

Hence the maximum of f on the compact interval [0, 2] is attained at an interior point, so it occurs at a critical point.

Hence f has a unique local max at x = 1.
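For what it is worth, a quick numerical check of this example (the sample points are my own choice) shows the values falling away from f(1) = 1/e on either side of the critical point:

    import math

    f = lambda x: x * math.exp(-x)     # f'(x) = exp(-x)*(1 - x), so x = 1 is the only critical point
    for x in [0.0, 0.5, 1.0, 1.5, 2.0]:
        print(x, f(x))
    # 0.0, 0.303..., 0.3678..., 0.334..., 0.270... -- largest at x = 1, consistent with a unique local max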

Required: Ex #14.1.2, 14.2.4, 14.3.2, Prob 14-2, 14-3, 14-4

Recommended: Ex #14.1.4, 14.1.5, 14.2.5, Prob 14-5, 14-6

1. Prove from the limit definition that D(x^n) = nx^{n−1} for n = 0, 1, 2, 3, . . . . Hint: multiply out (x − a) ∑_{k=0}^{n−1} x^k a^{n−1−k} and avoid induction.


Chapter 15

Differentiation: Global Properties

15.1 The Mean-Value Theorem


Theorem 15.1.1 (Rolle’s). If f is continuous on [a, b] and differentiable on the interior

and f(a) = f(b), then ∃x ∈ (a, b) such that f ′(x) = 0.

Proof. If f is constant on the interval, we are done, so wlog let f(t) > f(a) for some t (otherwise apply the argument to −f). Then f attains its max at some point x ∈ (a, b) (since [a, b] is compact and the max exceeds the common endpoint value), and a prev thm gives f′(x) = 0.

Theorem 15.1.2 (Cauchy mean value thm). f, g are continuous on [a, b] and differen-

tiable on the interior. Then ∃x ∈ (a, b) for which

[f(b)− f(a)]g′(x) = [g(b)− g(a)]f ′(x).

Proof. For a ≤ t ≤ b, define h(t) := [f(b)− f(a)]g(t)− [g(b)− g(a)]f(t),

so that h is continuous on [a, b] and differentiable on the interior and

h(a) = f(b)g(a)− f(a)g(b) = h(b).

By Rolle’s Thm, get h′(x) = 0 for some x.

Interp: x = f(s), y = g(t) (SKETCH).



Corollary 15.1.3 (“The” Mean Value Thm). If f continuous on [a, b] and differentiable

on the interior, then ∃x ∈ (a, b) for which

f(b)− f(a) = (b− a)f ′(x).

Proof. Let g(x) = x in prev.

Recall: “f is differentiable” means

f(t) ≈ f(s) + f ′(s)(t− s) = g(t),

in the sense that f(t)−g(t) = o(|t− s|). The MVT says that there is some point x nearby

(i.e., x ∈ (s, t)) for which this becomes an actual equality:

f(t) = f(s) + f ′(x)(t− s).

Theorem 15.1.4. f is differentiable on (a, b).

1. If f ′(x) ≥ 0,∀x ∈ (a, b), then f is increasing.

2. If f ′(x) ≤ 0,∀x ∈ (a, b), then f is decreasing.

3. If f ′(x) = 0,∀x ∈ (a, b), then f is constant.

Proof. These can all be read off from the equation

f(t)− f(s) = (t− s)f ′(x),

which is always valid for some x ∈ (s, t).

15.4 L’Hopital’s rule for indeterminate forms

We have seen that the differentiability of f(x) at x = c is equivalent to the fact that f(x) ≈ f(c) + f′(c)(x − c) for x ≈ c, or more precisely,

    f(x) = f(c) + f′(c)(x − c) + o(|x − c|).

We use this to give a conceptually clear proof of L’Hopital’s rule.


Theorem 15.4.1. Let f, g ∈ C¹(A) where A is a neighbourhood of a. If lim_{x→a} f(x) = lim_{x→a} g(x) = 0 and g′(x) ≠ 0 for x ∈ A \ {a}, then if the RHS limit exists,

    lim_{x→a} f(x)/g(x) = lim_{x→a} f′(x)/g′(x).

Proof. We only need to take the limit of

    f(x)/g(x) = [f(a) + f′(a)(x − a) + o(|x − a|)] / [g(a) + g′(a)(x − a) + o(|x − a|)]     defn of diff
              = [f′(a)(x − a) + o(|x − a|)] / [g′(a)(x − a) + o(|x − a|)]                   continuity: f(a) = g(a) = 0
              = [f′(a) + o(1)] / [g′(a) + o(1)]                                             cancellation,

which tends to f′(a)/g′(a) = lim_{x→a} f′(x)/g′(x), since f, g ∈ C¹ (here the computation uses g′(a) ≠ 0).

Theorem 15.4.2. Let f, g ∈ C¹(A) where A is a neighbourhood of a. If lim_{x→a} f(x) = lim_{x→a} g(x) = ∞ and g′(x) ≠ 0 for x ≈ a (x ≠ a), then if the RHS limit exists,

    lim_{x→a} f(x)/g(x) = lim_{x→a} f′(x)/g′(x).

Proof. Fix ε > 0 and, wlog, let g > 0 for x ≈ a. Suppose first that the RHS limit is finite, so f′(x)/g′(x) → L as x → a, and for our ε we can choose δ such that

    x ≈_δ a =⇒ f′(x)/g′(x) ≈_ε L.

In fact, by the Cauchy MVT, for x, t ≈_δ a there is some c between x and t with

    f′(c)/g′(c) = (f(x) − f(t))/(g(x) − g(t)) ≈_ε L,

i.e.,

    L − ε < (f(x) − f(t))/(g(x) − g(t)) < L + ε.            (∗)

Hold t fixed for now. For small enough δ₁ ≤ δ,

    x ≈_{δ₁} a =⇒ g(x) > 0 and g(x) > g(t).

We need to isolate f(x)/g(x) and get a bound for it. To this end, note that (g(x) − g(t))/g(x) > 0, so we can multiply the inequality (∗) through by it to obtain

    (L − ε)(g(x) − g(t))/g(x) < (f(x) − f(t))/g(x) < (L + ε)(g(x) − g(t))/g(x),

    (L − ε)(1 − g(t)/g(x)) + f(t)/g(x) < f(x)/g(x) < (L + ε)(1 − g(t)/g(x)) + f(t)/g(x),

    L − ε − [(L − ε)g(t) − f(t)]/g(x) < f(x)/g(x) < L + ε − [(L + ε)g(t) − f(t)]/g(x).

Since g(x) → ∞ as x → a, choose δ₂ ≤ δ₁ so that

    x ≈_{δ₂} a =⇒ |(L − ε)g(t) − f(t)|/g(x), |(L + ε)g(t) − f(t)|/g(x) < ε.

Then for x ≈_{δ₂} a, we have

    L − 2ε < f(x)/g(x) < L + 2ε.

When the RHS limit is ∞, the proof is similar to the prev.

Required: Ex #15.1.3, 15.2.1, 15.2.2, 15.4.4 Prob 15-1, 15-2

Recommended: Ex #15.1.1(a), 15.1.4, Prob 15-3, 15-5

1. Let f : R→ R satisfy |f(x)− f(y)| ≤ (x− y)2 for all x, y ∈ R. Prove f is constant.


Chapter 16

Linearization and Convexity

16.1 Linearization


Recall: “f is differentiable” means

f(t) ≈ g(t) = f(s) + f ′(s)(t− s),

in the sense that f(t)− g(t) = o(|t− s|). Higher derivatives give better approximation.

Theorem 16.1.1 (Linearization Error Term). Suppose f ∈ C²(I) and a ∈ I. For each x ∈ I, there is a point c between x and a for which

    f(x) = f(a) + f′(a)(x − a) + (f″(c)/2)(x − a)².

This is a special case of Taylor’s Thm (n = 2), which we will prove shortly. We will see

that a differentiable function can be approximated by its derivatives, and that sometimes

it can even be written as a power series. In either case, a polynomial makes a decent local

approximation.

Note: this case is often used for approximation/optimization with multivariable systems, since it makes sense for matrices:

    f(x) ≈ f(c) + ∇f(c)(x − c) + ½ (x − c)ᵀ H(f)(x − c).
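In one variable, the quadratic size of the error is easy to observe numerically. A short Python sketch (my own choice of f = exp and expansion point a = 0, purely illustrative):

    import math

    f, df, a = math.exp, math.exp, 0.0            # f = exp has f' = exp; expand at a = 0
    for x in [0.1, 0.01, 0.001]:
        err = f(x) - (f(a) + df(a) * (x - a))     # linearization error
        print(x, err, err / (x - a)**2)           # ratio stays near f''(0)/2 = 0.5

The last column hovering near ½ f″(a) is exactly what the error term (f″(c)/2)(x − a)² predicts.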



16.2 Applications to convexity

Theorem 16.2.1 (2nd Deriv Test). Suppose f ∈ C²(I), a ∈ I, and f′(a) = 0.

    f″(a) > 0 =⇒ f(x) has a strict local min at a,
    f″(a) < 0 =⇒ f(x) has a strict local max at a.

Proof. Applying the Linearization Error Term, for each x ≈ a we get a c between a and x for which

    f(x) = f(a) + f′(a)(x − a) + (f″(c)/2)(x − a)², i.e., f(x) − f(a) = (f″(c)/2)(x − a)².

If f″(a) > 0, then f″(x) > 0 for x ≈_ε a by continuity, and hence f″(c) > 0 =⇒ f(x) > f(a) for 0 < |x − a| < ε.

At a critical point, the graph of f has a horizontal tangent, and f ′′ tells if the graph

lies above or below the tangent.

At other points, the tangent may not be horizontal, but one can still use f ′′ to see if the

graph is above or below the tangent.

Definition 16.2.2. f is convex on I iff for 0 ≤ t ≤ 1, we have

    f((1 − t)x + ty) ≤ (1 − t)f(x) + t f(y), ∀x, y ∈ I.

Note that any point between x and y can be written (1 − t)x + ty with 0 ≤ t ≤ 1. So this inequality just states that the function value in the interior of (x, y) cannot be greater than the corresponding point on the secant line from (x, f(x)) to (y, f(y)).

Theorem 16.2.3. f is convex on I ⇐⇒ for every [a, b] ⊆ I and every x ∈ (a, b],

    (f(x) − f(a))/(x − a) ≤ (f(b) − f(a))/(b − a).

This theorem just states that for a convex function, secant lines from a fixed starting

point have increasing slopes.

Theorem 16.2.4. Let f ∈ C¹(I). Then f is convex ⇐⇒ f(x) ≥ f(c) + f′(c)(x − c) for all x, c ∈ I.

Proof. HW 16.2.2.

Theorem 16.2.5 (2nd Deriv Test for convexity). If f ∈ C²(I), then f″ ≥ 0 on I =⇒ f is convex on I.

Proof. From the Linearization Error Term, for x, c ∈ I there is a point ξ between them with

    f(x) = f(c) + f′(c)(x − c) + (f″(ξ)/2)(x − c)²,

so f″(ξ) ≥ 0 gives the inequality f(x) ≥ f(c) + f′(c)(x − c), and the previous theorem applies.

Theorem 16.2.6 (1st Deriv Test). Let f ∈ C¹(I). Then f is convex on I ⇐⇒ f′(x) is increasing on I.

Proof. (⇒) Pick a < b in I. By the previous characterization (Thm 16.2.4),

    f(b) ≥ f(a) + f′(a)(b − a) ⇐⇒ f′(a) ≤ (f(b) − f(a))/(b − a),
    f(a) ≥ f(b) + f′(b)(a − b) ⇐⇒ f′(b) ≥ (f(b) − f(a))/(b − a),

so f′(a) ≤ f′(b).

Definition 16.2.7. Let f be a function on (a, b) and pick c ∈ (a, b). The line y(x) = m(x − c) + f(c) is called a supporting line at c iff it always lies below the graph of f:

    f(x) ≥ m(x − c) + f(c), ∀x ∈ (a, b).

Required: Ex #16.1(a), 16.1.2, 16.2.2, 16.2.4 Prob 16-1, 16-2

Recommended: Ex #16.2.1, Prob 16

1. Prove that for f ∈ C1, f is convex iff f(x) ≥ f(c) + f ′(c)(x− c).

2. Prove that f is convex iff every point x in the domain of f has a supporting line.


Chapter 17

Taylor Approximation

17.1 Taylor polynomials


Definition 17.1.1. If f ∈ Cⁿ and f⁽ⁿ⁾ = (f⁽ⁿ⁻¹⁾)′, the nth Taylor polynomial of f at a is

    Tn(a, x) = f(a) + f′(a)(x − a) + (1/2)f″(a)(x − a)² + · · · + (1/n!)f⁽ⁿ⁾(a)(x − a)ⁿ.

Definition 17.1.2. Two functions f, g ∈ Cn(I) have nth-order agreement at a iff f (k)(a) =

g(k)(a) for k = 0, 1, . . . , n.

Theorem 17.1.3. If f (n) exists at a, then Tn(a, x) is the unique polynomial of degree n

in powers of (x− a) having nth-order agreement with f(x) at a.

Proof. Suppose p(x) = c₀ + c₁(x − a) + · · · + cₙ(x − a)ⁿ is a polynomial in (x − a). After k differentiations, the terms c₀, . . . , c_{k−1} vanish:

    p⁽ᵏ⁾(x) = k!cₖ + (terms with (x − a) as a factor).

So p⁽ᵏ⁾(a) = k!cₖ. If f(x) and p(x) have nth-order agreement,

    f⁽ᵏ⁾(a) = k!cₖ =⇒ cₖ = f⁽ᵏ⁾(a)/k!, ∀k = 0, 1, . . . , n.



17.2 Taylor’s theorem with Lagrange remainder

Theorem 17.2.1 (Taylor's Thm). Suppose f ∈ C^{n+1}(I) for some open interval I containing a and x. Then for some c between a and x,

    f(x) = f(a) + f′(a)(x − a) + (1/2)f″(a)(x − a)² + · · · + (1/n!)f⁽ⁿ⁾(a)(x − a)ⁿ + Rn(x),

    Rn(x) = f⁽ⁿ⁺¹⁾(c)/(n + 1)! · (x − a)ⁿ⁺¹.

NOTE: c depends on x, so RHS of f is not a polynomial; f (n+1)(c) is not a constant.

Proof. We show the theorem holds at x = b. Let P be defined by

    P(x) = Tn(a, x) + C(x − a)ⁿ⁺¹,

where C is chosen so that f(b) = P(b), i.e., C = (f(b) − Tn(a, b))/(b − a)ⁿ⁺¹. Let

    g(x) = f(x) − P(x),

so that g(b) = 0. We must show f⁽ⁿ⁺¹⁾(c) = (n + 1)!C for some c between a and b. Since g⁽ⁿ⁺¹⁾(x) = f⁽ⁿ⁺¹⁾(x) − (n + 1)!C, it suffices to find a zero of g⁽ⁿ⁺¹⁾ between a and b.

Since Tn⁽ᵏ⁾(a, a) = f⁽ᵏ⁾(a) for k = 0, . . . , n, we have

    g(a) = g′(a) = · · · = g⁽ⁿ⁾(a) = 0.

Now apply Rolle's Thm repeatedly (using f ∈ Cⁿ⁺¹ =⇒ g ∈ Cⁿ⁺¹): g(a) = g(b) = 0 gives x₁ between a and b with g′(x₁) = 0; then g′(a) = g′(x₁) = 0 gives x₂ between a and x₁ with g″(x₂) = 0; continuing, each time using g⁽ᵏ⁾(a) = 0, we obtain x_{n+1} between a and xₙ with g⁽ⁿ⁺¹⁾(x_{n+1}) = 0. Take c = x_{n+1}.

Corollary 17.2.2. f = Tn + o(|x− a|n) as x → a.


17.3 Estimating error in Taylor’s approximation

The expression for the error term in f(x) ≈ Tn(x) is exact:

    Rn(x) = f⁽ⁿ⁺¹⁾(c)/(n + 1)! · (x − a)ⁿ⁺¹,

but who knows where c between a and x really is? Use Rn to estimate the error instead.

Example 17.3.1. The error in the third-order approximation to e^x at 0 is R₃(x) = (e^c/4!)x⁴ for some c between 0 and x; hence |R₃(x)| ≤ e^{|x|}|x|⁴/24.
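A quick check of this example in Python (the evaluation point x = 0.5 is my own choice): the actual remainder sits below the crude bound obtained from e^c ≤ e^{|x|}.

    import math

    x = 0.5
    T3 = 1 + x + x**2/2 + x**3/6                  # third-order Taylor polynomial of e^x at 0
    actual = math.exp(x) - T3                     # actual remainder R3(x)
    bound = math.exp(abs(x)) * abs(x)**4 / 24     # |R3(x)| <= e^|x| |x|^4 / 4!
    print(actual, bound)                          # ~0.0029 <= ~0.0043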

17.4 Taylor series

Definition 17.4.1. The Taylor series at 0 of a function f ∈ C^∞(I) is

    ∑_{n=0}^∞ (f⁽ⁿ⁾(0)/n!) xⁿ = f(0) + f′(0)x + (f″(0)/2!)x² + · · ·

Definition 17.4.2. f is analytic at 0 if f(x) = ∑_{n=0}^∞ (f⁽ⁿ⁾(0)/n!) xⁿ on (−R, R), where R is the radius of convergence of the Taylor series.

Since f(x) = Tn(x) + f⁽ⁿ⁺¹⁾(c)/(n + 1)! · xⁿ⁺¹ for some c between 0 and x, this is equivalent to requiring

    f⁽ⁿ⁺¹⁾(c)/(n + 1)! · xⁿ⁺¹ → 0 as n → ∞.

It can be very hard to show that the Taylor series actually converges to the function it’s

supposed to converge to. In the complex case, things are easier.

f is analytic ⇐⇒ f ∈ C∞ ⇐⇒ f satisfies C-R eqns.

Elementary functions can also sometimes be checked explicitly.

Example 17.4.1. e^x = ∑_{n=0}^∞ (f⁽ⁿ⁾(0)/n!) xⁿ = ∑_{n=0}^∞ xⁿ/n!.

Solution. Given a (fixed) x ∈ R, we must show the remainder is small for n ≫ 1. We have

    |Rn(x)| = |e^c xⁿ⁺¹/(n + 1)!| ≤ e^{|x|} |x|ⁿ⁺¹/(n + 1)! → e^{|x|} · 0 = 0,

where |x|ⁿ/n! → 0 by Exercise 3.4.2 (e.g., choose N such that |x| < N/2 and compare with a geometric sequence for n ≥ N).

Required: Ex #17.1.1, 17.2.1, 17.3.3, 17.4.1 Prob 17-1

Recommended: Ex #17.3.4 Prob 17

1.


Chapter 18

Integrability

18.1 Introduction. Partitions.


Let f(x) be a function defined on [a, b]. We want to define its integral ∫_a^b f(x) dx.

Definition 18.1.1. A partition P of the interval [a, b] is a finite set of points {a =

x0, x1, x2, . . . , xn = b}, where xi < xi+1.

The mesh of a partition is the size of the largest subinterval:

    |P| := max_i (x_i − x_{i−1}).

An n-partition is one containing n subintervals.

18.2 Integrability

Definition 18.2.1. On each subinterval [x_{i−1}, x_i] of the partition P, define

    M_i = sup{f(x) : x_{i−1} ≤ x ≤ x_i},
    m_i = inf{f(x) : x_{i−1} ≤ x ≤ x_i},

    U(f, P) = ∑_{i=1}^n M_i (x_i − x_{i−1}),
    L(f, P) = ∑_{i=1}^n m_i (x_i − x_{i−1}).

U(f, P) is the upper sum of f on the partition P, and L(f, P) is the lower sum of f on the partition P.

Definition 18.2.2. If sup_P L(f, P) = inf_P U(f, P), then the integral of f on [a, b] is defined to be the common value, denoted ∫_a^b f(x) dx. We say f is (Riemann-)integrable on [a, b] and write f ∈ R[a, b].

Definition 18.2.3. The partition P ′ is a refinement of P iff P ⊆ P ′.

Note that adding points to the partition has two effects:

1. the lengths of the subintervals decrease, and

2. L(f, P) increases and U(f, P) decreases.

Lemma 18.2.4 (Upper and lower sums). For P ⊆ P′, L(f, P′) ≥ L(f, P) and U(f, P′) ≤ U(f, P).

Proof. Do the case where P ′ = P ∪ {y} first; then the sums only change on the one

subinterval. General case by repetition.

Corollary 18.2.5. For f : [a, b] → R, any lower sum is less than any upper sum.

Proof. Let P1, P2 be any two partitions of the interval. Let P ′ be the common refinement.

Then

L(f, P1) ≤ L(f, P ′) ≤ U(f, P ′) ≤ U(f, P2).

Suppose we have a nested sequence of partitions P1 ⊆ P2 ⊆ . . . . Then {L(f, Pi)} and

{U(f, Pi)} are monotonic sequences, each bounded by any element of the other. So both

converge. f is integrable when they converge to the same value.

Definition 18.2.6. The oscillation of f over the partition P is

    Osc(f, P) = U(f, P) − L(f, P) = ∑_{i=1}^n (M_i − m_i)(x_i − x_{i−1}).


We now show that ∫_a^b f(x) dx exists iff Osc(f, P_n) → 0, for some nested sequence of partitions {P_n}.

Theorem 18.2.7 (Oscillation). f ∈ R[a, b] iff ∀ε > 0, ∃P such that Osc(f, P) < ε.

Proof. (⇒) Let f ∈ R and fix ε > 0. Then there are partitions P₁, P₂ such that

    U(f, P₁) − ∫f < ε and ∫f − L(f, P₂) < ε.

Let P = P₁ ∪ P₂ be the common refinement. Then

    U(f, P) ≤ U(f, P₁) < ∫f + ε < L(f, P₂) + 2ε ≤ L(f, P) + 2ε,

so that U(f, P) − L(f, P) < 2ε.

(⇐) Given ε > 0, apply the inequality U(f, P) − L(f, P) < ε to

    L(f, P) ≤ sup_P L(f, P) ≤ inf_P U(f, P) ≤ U(f, P)

to get 0 ≤ inf_P U(f, P) − sup_P L(f, P) < ε. Since this is true for any ε > 0, the sup and inf agree, and we are done.

Corollary 18.2.8. If f ∈ R[a, b], then for any sequence {P_i} of partitions of [a, b] with |P_i| → 0,

    lim_{i→∞} U(f, P_i) = ∫_a^b f(x) dx and lim_{i→∞} L(f, P_i) = ∫_a^b f(x) dx.

Proof. Fix ε > 0. Since f is integrable and |P_i| → 0, the Oscillation theorem gives (after some work which we omit) an N such that

    i ≥ N =⇒ U(f, P_i) − L(f, P_i) < ε.

Since we always have L(f, P_i) ≤ ∫f ≤ U(f, P_i), it follows that U(f, P_i) − ∫f < ε and ∫f − L(f, P_i) < ε.

Previously, we saw that an upper bound x of {a_n} is actually the supremum iff given ε > 0, we can always find an a_n with |a_n − x| < ε. If we know that {a_n} is monotonic, then a_n → x iff we can always find an a_n with |a_n − x| < ε.

In the current situation, the definition of ∫_a^b f(x) dx in terms of sups & infs means that f ∈ R[a, b] iff given ε > 0, we can find a partition P for which Osc(f, P) = U(f, P) − L(f, P) < ε. Since a nested sequence of partitions gives monotonic sequences {U(f, P_n)} and {L(f, P_n)}, this is equivalent to the condition

    lim_{n→∞} U(f, P_n) = lim_{n→∞} L(f, P_n).

18.3 Integrability of monotone or continuous f

Next, two sufficient conditions for integrability: continuity and monotonicity.

Theorem 18.3.1. f monotonic on [a, b] =⇒ f ∈ R[a, b].

Proof. Suppose f is monotonic increasing (the proof is analogous in the other case), so that on any partition,

    M_i = f(x_i), m_i = f(x_{i−1}), i = 1, . . . , n.

For any n, choose a partition P by dividing [a, b] into n equal subintervals; then x_i − x_{i−1} = (b − a)/n for each i = 1, . . . , n. Then we get

    Osc(f, P) = U(f, P) − L(f, P) = ∑_{i=1}^n (f(x_i) − f(x_{i−1})) (b − a)/n
              = (b − a)/n · ∑_{i=1}^n (f(x_i) − f(x_{i−1}))
              = (b − a)/n · [f(b) − f(a)]
              < ε

for large enough n (the sum telescopes). Then apply the Oscillation thm.

Theorem 18.3.2. If f is continuous on [a, b] then f ∈ R[a, b].

Proof. Fix ε > 0 and choose γ > 0 such that 0 < (b − a)γ < ε. Since f is uniformly continuous on [a, b] (continuity on a compact interval), there is δ > 0 such that |f(x) − f(t)| < γ whenever

    |x − t| < δ, x, t ∈ [a, b].

If P is any partition of [a, b] such that x_i − x_{i−1} < δ for all i, then on each subinterval

    |x − t| < δ =⇒ |f(x) − f(t)| < γ =⇒ M_i − m_i ≤ γ.

Therefore,

    Osc(f, P) = ∑_{i=1}^n (M_i − m_i)(x_i − x_{i−1})
              ≤ γ ∑_{i=1}^n (x_i − x_{i−1})
              = γ(b − a)
              < ε.

By the Oscillation thm, f ∈ R.

18.4 Basic properties of integrable functions

Theorem 18.4.1. Let f ∈ R[a, b] and suppose m ≤ f ≤ M. If ϕ is continuous on [m, M] and h(x) := ϕ(f(x)) for x ∈ [a, b], then h ∈ R[a, b].

Proof. Fix ε > 0. Since ϕ is uniformly continuous on [m, M], find 0 < δ < ε such that

    |s − t| < δ =⇒ |ϕ(s) − ϕ(t)| < ε, s, t ∈ [m, M].

Since f ∈ R, find P = {x₀, . . . , xₙ} such that

    Osc(f, P) < δ².

Let M_i, m_i be the extrema of f on [x_{i−1}, x_i], and M′_i, m′_i the corresponding extrema of h. Subdivide the set of indices {1, . . . , n} into two classes:

    i ∈ A ⇐⇒ M_i − m_i < δ,
    i ∈ B ⇐⇒ M_i − m_i ≥ δ.

For i ∈ A, we have M′_i − m′_i ≤ ε by the choice of δ. For i ∈ B, we have M′_i − m′_i ≤ 2 sup_{m≤t≤M} |ϕ(t)|.

By the previous bound Osc(f, P) < δ²,

    δ ∑_{i∈B} (x_i − x_{i−1}) ≤ ∑_{i∈B} (M_i − m_i)(x_i − x_{i−1}) < δ² =⇒ ∑_{i∈B} (x_i − x_{i−1}) < δ.

Then

    Osc(h, P) = ∑_{i∈A} (M′_i − m′_i)(x_i − x_{i−1}) + ∑_{i∈B} (M′_i − m′_i)(x_i − x_{i−1})
              ≤ ε(b − a) + 2δ sup|ϕ(t)|
              < ε(b − a + 2 sup|ϕ(t)|),

using δ < ε. By the Oscillation thm and the K-ε principle, h ∈ R.

We will skip the rest of this section, as it is essentially repeated in the next chapter.

Required: Ex #18.1.2, 18.2.2, 18.3.2, 18.3.4

Recommended: Ex #18.3.1, 18.3.3

1.


Chapter 19

The Riemann Integral

§19.1 Refinement of partitions: we have already covered it.

§19.2 Definition of the Riemann integral: we have already covered it.

19.3 Riemann sums

Definition 19.3.1. For f ∈ R[a, b], a Riemann sum for f(x) over P is any sum of the form

    S_f(P) = ∑_{i=1}^n f(x′_i)(x_i − x_{i−1}), where x′_i ∈ (x_{i−1}, x_i).

So the upper and lower sums are just special Riemann sums, and we always have

    m_i ≤ f(x′_i) ≤ M_i =⇒ L(f, P) ≤ S_f(P) ≤ U(f, P).

Theorem 19.3.2 (Riemann sums). Let f ∈ R[a, b] and suppose {P_k} is a sequence of partitions of [a, b] such that |P_k| → 0. Then

    lim_{k→∞} S_f(P_k) = ∫_a^b f(x) dx.

Proof. This is immediate from the Squeeze Thm (together with Corollary 18.2.8) applied to the inequalities

    L(f, P_k) ≤ S_f(P_k) ≤ U(f, P_k).
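A numerical sketch of this convergence (the function, interval, and left-endpoint tags are my own illustrative choices, not prescribed by the theorem):

    # Left-endpoint Riemann sums for f(x) = x^2 on [0, 1]; the mesh is 1/n.
    def riemann_sum(f, a, b, n):
        dx = (b - a) / n
        return sum(f(a + i*dx) * dx for i in range(n))   # x_i' = left endpoint

    for n in [10, 100, 1000, 10000]:
        print(n, riemann_sum(lambda x: x*x, 0.0, 1.0, n))
    # 0.285, 0.32835, 0.3328..., 0.33328... -> 1/3 as the mesh goes to 0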



19.4 Basic properties of the integral

Theorem 19.4.1. Let f, g ∈ R[a, b].

(i) (Linearity) f + g ∈ R[a, b] and cf ∈ R[a, b], ∀c ∈ R, with ∫(f + g) dx = ∫f dx + ∫g dx and ∫cf dx = c∫f dx.

(ii) max{f, g}, min{f, g} ∈ R[a, b].

(iii) fg ∈ R[a, b].

(iv) f ≤ g on [a, b] =⇒ ∫_a^b f(x) dx ≤ ∫_a^b g(x) dx.

(v) |f| ∈ R[a, b] and |∫_a^b f(x) dx| ≤ ∫_a^b |f(x)| dx.

(vi) If |f(x)| ≤ M on [a, b], then ∫_a^b f(x) dx ≤ M(b − a).

Proof of (i). Fix ε > 0 and suppose f = f₁ + f₂ with f₁, f₂ ∈ R. Find partitions P₁, P₂ such that

    Osc(f₁, P₁) < ε and Osc(f₂, P₂) < ε.

Let P = P₁ ∪ P₂ be the common refinement. Then these inequalities are still true, and

    L(f₁, P) + L(f₂, P) ≤ L(f, P) ≤ U(f, P) ≤ U(f₁, P) + U(f₂, P),

which implies Osc(f, P) < 2ε. Hence f ∈ R. Moreover,

    U(f_j, P) < ∫f_j(x) dx + ε,

which implies (by the long inequality above) that

    ∫f dx ≤ U(f, P) < ∫f₁ dx + ∫f₂ dx + 2ε.

Since ε was arbitrary, this gives ∫f dx ≤ ∫f₁ dx + ∫f₂ dx. Similarly for the other inequality. For cf, use ∑ cM_i(x_i − x_{i−1}) = c ∑ M_i(x_i − x_{i−1}), etc.

Proof of (ii). Let h(x) := max{f(x), g(x)}. Since f(x), g(x) ≤ h(x),

Osc(h, P ) ≤ Osc(f, P ) + Osc(g, P ) < ε.


Proof of (iii). Use ϕ(t) = t² in Theorem 18.4.1 to get f² ∈ R (a continuous function composed with an integrable function is integrable), then observe 4fg = (f + g)² − (f − g)².

Proof of (iv). HW. Use g − f ≥ 0 and part (i).

Proof of (v). Use ϕ(t) = |t| to get |f| ∈ R, then choose c = ±1 to make c∫f ≥ 0 and observe

    cf ≤ |f| =⇒ |∫f| = c∫f = ∫cf ≤ ∫|f|.

Proof of (vi). HW. Use (iv) and show ∫_a^b 1 dx = b − a.

19.5 Interval addition property

Theorem 19.5.1 (Interval addition). For a < c < b,

    ∫_a^c f(x) dx + ∫_c^b f(x) dx = ∫_a^b f(x) dx.

Proof. Note that you can always refine a partition by adding c. Once c is in the partition, taking the sup of f over a subinterval of [a, c] is completely independent of whatever f does on subintervals of [c, b].

Definition 19.5.2. We define

    ∫_a^a f(x) dx = 0, ∀a, and ∫_b^a f(x) dx = −∫_a^b f(x) dx, ∀a, b.

Corollary 19.5.3. For any a, b, c,

    ∫_a^c f(x) dx + ∫_c^b f(x) dx = ∫_a^b f(x) dx.

19.6 Piecewise properties

Definition 19.6.1. A property of f holds piecewise on [a, b] if there is a partition of the

interval such that the property holds for f on each open subinterval (xi−1, xi).

Example 19.6.1. f(x) = tan x is piecewise continuous and piecewise monotone on R.

The Heaviside function is piecewise constant. (Hence also piecewise continuous and piece-

wise monotone.)

Theorem 19.6.2 (Finite discrepancies). Suppose f, g ∈ R[a, b] and suppose f(x) = g(x) for all but finitely many values of x ∈ [a, b]. Then

    ∫_a^b f(x) dx = ∫_a^b g(x) dx.


Proof. First, suppose f differs from g only at x = c. Define a sequence of partitions {P_n} with |P_n| → 0, each containing the subinterval (p_n, q_n) := (c − 1/n, c + 1/n). Since f = g outside this subinterval, it suffices to note that

    lim_{n→∞} f(c)(q_n − p_n) = lim_{n→∞} g(c)(q_n − p_n) = 0,

so the Riemann sums of f and g (tagged at c on that subinterval) have the same limit. By the Riemann Sum Thm, this case is complete. For the general case, just use induction on the number of discrepancies.

Theorem 19.6.3. If f is bounded and either piecewise continuous or piecewise monotone on [a, b] (with respect to a partition P = {x₀, x₁, . . . , xₙ}), then f ∈ R[a, b] and

    ∫_a^b f(x) dx = ∑_{i=1}^n ∫_{x_{i−1}}^{x_i} f(x) dx.

Proof. If f is piecewise monotone, it suffices to apply the Interval Addition property n times.

Otherwise, we build a clever partition by working around the "bad points" of f. Away from the bad points, we use continuity to get integrability. At the bad points, we use the bound on f (and a "horizontal squeeze") to estimate the oscillation.

Let α₁ < α₂ < · · · < αₙ be the discontinuities of f on [a, b]. Let I_k = (α_k − ε/(2n), α_k + ε/(2n)) be an open interval about α_k. Then the total length of these intervals is

    |⋃_{k=1}^n I_k| ≤ ∑_{k=1}^n |I_k| = ∑_{k=1}^n ε/n = ε.

If we remove the intervals I_k from [a, b], the remaining set K = [a, b] \ ⋃ I_k is compact, and so f is uniformly continuous on K. Choose δ > 0 such that

    |s − t| < δ, s, t ∈ K =⇒ |f(s) − f(t)| < ε.

Now let P = {a = x₀, x₁, . . . , x_m = b} be any partition of [a, b] such that

(a) P contains the endpoints {u₁, . . . , uₙ, v₁, . . . , vₙ} of the "removed" intervals I_k,

(b) P contains no point x which lies inside an interval I_k, and

(c) whenever [x_{i−1}, x_i] is not one of the removed intervals [u_k, v_k], then x_i − x_{i−1} < δ.

Put M = sup|f(x)|. Then M_k − m_k ≤ 2M on the removed subintervals [u_k, v_k]. In fact, we have M_i − m_i ≤ ε unless x_{i−1} is one of the u_k.

[Figure 19.1: the graph of t = s^{p−1}, with regions (1) and (2); referred to in the proof of Young's Inequality below.]

    Osc(f, P) = U(f, P) − L(f, P) = ∑_{i=1}^m (M_i − m_i)(x_i − x_{i−1})
              = ∑_{contin} (M_i − m_i)(x_i − x_{i−1}) + ∑_k (M_k − m_k)(v_k − u_k)
              ≤ ε ∑_{contin} (x_i − x_{i−1}) + 2M ∑_k (v_k − u_k)
              ≤ ε(b − a) + 2Mε.

By Oscillation thm and K-ε (with K = (b− a) + 2M), f ∈ R.

Theorem 19.6.4 (Young's Inequality). For conjugate exponents p, q (i.e., 1/p + 1/q = 1 with p, q > 1) and a, b ≥ 0,

    ab ≤ a^p/p + b^q/q.

Proof. Consider the graph of t = s^{p−1} (Figure 19.1). Since

    1/p + 1/q = 1 =⇒ 1/p = 1 − 1/q = (q − 1)/q =⇒ p − 1 = 1/(q − 1),

this is also the graph of s = t^{q−1}.

Now region (1) has area ∫_0^a s^{p−1} ds = [s^p/p]_0^a = a^p/p, and region (2) has area ∫_0^b t^{q−1} dt = [t^q/q]_0^b = b^q/q.

Thus the area of the entire shaded region is (1) + (2) = a^p/p + b^q/q, which is always at least as large as the a × b box of area ab.


Theorem 19.6.5 (Hölder's Inequality). For f, g ∈ R[a, b] and 1/p + 1/q = 1, p, q > 1,

    |∫_a^b f(x)g(x) dx| ≤ (∫_a^b |f(x)|^p dx)^{1/p} (∫_a^b |g(x)|^q dx)^{1/q}.

Proof. Put A = (∫_a^b |f(x)|^p dx)^{1/p} and B = (∫_a^b |g(x)|^q dx)^{1/q}. Note: we may assume A, B ≠ 0, or else the inequality is trivial. For each x, let a = |f(x)|/A and b = |g(x)|/B and apply Young's:

    ab = |f(x)g(x)|/(AB) ≤ |f(x)|^p/(pA^p) + |g(x)|^q/(qB^q) = a^p/p + b^q/q.

Integrating,

    (1/(AB)) ∫|f(x)g(x)| dx ≤ (1/(pA^p)) ∫|f|^p dx + (1/(qB^q)) ∫|g|^q dx,

but A^p = ∫|f|^p dx and B^q = ∫|g|^q dx, so this is

    (1/(AB)) ∫_a^b |f(x)g(x)| dx ≤ A^p/(pA^p) + B^q/(qB^q) = 1/p + 1/q = 1,

    ∫_a^b |f(x)g(x)| dx ≤ (∫_a^b |f(x)|^p dx)^{1/p} (∫_a^b |g(x)|^q dx)^{1/q}.

For p = q = 2, this is called the Cauchy–Schwarz Inequality.
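A numerical spot-check of Hölder's inequality (the functions, interval, exponents, and the crude midpoint quadrature below are all my own illustrative choices):

    import math

    def midpoint(h, a, b, n=10000):
        # crude midpoint-rule approximation of the integral of h over [a, b]
        dx = (b - a) / n
        return sum(h(a + (i + 0.5)*dx) for i in range(n)) * dx

    f = lambda x: math.sin(x)
    g = lambda x: x
    a, b, p, q = 0.0, 1.0, 3.0, 1.5                # 1/3 + 2/3 = 1

    lhs = abs(midpoint(lambda x: f(x)*g(x), a, b))
    rhs = (midpoint(lambda x: abs(f(x))**p, a, b)**(1/p)
           * midpoint(lambda x: abs(g(x))**q, a, b)**(1/q))
    print(lhs, rhs, lhs <= rhs)                    # the inequality holds (and is fairly tight here)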

Recall from Chap 16:

The line y = m(x − c) + g(c) is a supporting line to g at c iff it always lies below the graph of g,

    g(x) ≥ m(x − c) + g(c),

and g is convex iff every point x in the domain of g has a supporting line.

Theorem 19.6.6 (Jensen's Inequality). Let g be convex on R and let f ∈ R[a, b], where b − a = 1 (for general [a, b], replace each integral by the average (1/(b − a))∫_a^b). Then

    ∫ g(f(t)) dt ≥ g(∫ f(t) dt).

In Chapter 20, we will see that e^x is convex.

Theorem 19.6.7. For f ∈ R[a, b] (with b − a = 1 as above),

    ∫ e^{f(t)} dt ≥ e^{∫ f(t) dt}.

Required: Ex #18.4.1, 18.4.2, 19.2.1, 19.3.1, 19.4.2, 19.6.3 Prob

Recommended: Ex #19.2.3, 19.3.2, 19.4.3, 19.4.4, 19.5.1 Prob 19-2, 19-3

1. Prove Jensen's Inequality. Hint: Let α = ∫ f(t) dt and pick a supporting line y(x) = m(x − α) + ϕ(α) at α. Deduce that ∏ b_n^{a_n} ≤ ∑ a_n b_n for ∑ a_n = 1; a_n, b_n ≥ 0.


Chapter 20

Derivatives and Integrals

20.1 First fundamental theorem of calculus


“The fundamental theorem(s) of calculus” (next two thms) shows that integration and

differentiation are almost inverse operations.

Definition 20.1.1. A primitive of f (or antiderivative) is a function F such that f = F ′.

Theorem 20.1.2. Any two primitives of f differ only by a constant.

Proof. Let F, G both be primitives of f . Then

(F −G)′ = F ′ −G′ = f − f = 0 =⇒ F −G = c, for some c ∈ R.

Putting this minor result together with the next two shows that

D(I(f)) = f, but I(D(f)) = f + c,

so integration and differentiation are almost inverse operations.

Theorem 20.1.3 (Integration of derivative, FToC1). If f ∈ R[a, b] and f has a primitive F which is differentiable on [a, b], then

    ∫_a^b f(x) dx = F(b) − F(a).



Proof. Fix ε > 0 and choose a partition P = {x₀, . . . , xₙ} of [a, b] such that Osc(f, P) < ε. By the MVT applied to F on each subinterval, there are t_i ∈ [x_{i−1}, x_i] such that

    f(t_i)(x_i − x_{i−1}) = F(x_i) − F(x_{i−1}), i = 1, . . . , n,

    ∑_{i=1}^n f(t_i)(x_i − x_{i−1}) = ∑_{i=1}^n [F(x_i) − F(x_{i−1})] = F(b) − F(a).

Since L(f, P) ≤ ∑ f(t_i)(x_i − x_{i−1}) ≤ U(f, P),

    Osc(f, P) < ε =⇒ |∑_{i=1}^n f(t_i)(x_i − x_{i−1}) − ∫_a^b f(x) dx| < ε,

i.e., |F(b) − F(a) − ∫_a^b f(x) dx| < ε. Since ε was arbitrary, the result follows.

20.2 Second fundamental theorem of calculus

Theorem 20.2.1 (Differentiation of integral, FToC2). If f ∈ R[a, b], then F(x) = ∫_a^x f(t) dt is in C⁰[a, b], and F′(c) = f(c) if f is continuous at c.

Proof. Since f ∈ R, |f(t)| ≤ M for a ≤ t ≤ b. For a ≤ x < y ≤ b,

    |F(y) − F(x)| = |∫_x^y f(t) dt| ≤ ∫_x^y |f(t)| dt ≤ M(y − x).

This Lipschitz condition gives uniform continuity of F on [a, b].

If f is continuous at c, then given ε > 0, there is δ > 0 such that

    |t − c| < δ =⇒ |f(t) − f(c)| < ε, ∀t ∈ [a, b].

If we choose s < t in [a, b] such that c − δ < s ≤ c ≤ t < c + δ, then

    |(F(t) − F(s))/(t − s) − f(c)| = |(1/(t − s)) ∫_s^t [f(u) − f(c)] du| < ε

shows that F′(c) = f(c).

NOTE: f ∈ C0[a, b] =⇒ F ∈ C1[a, b].

Theorem 20.2.2 (Existence and uniqueness for ODE). Let f ∈ C⁰(I) and a ∈ I. Then the initial value problem

    y′ = f(x), y(a) = b,

has the unique solution y = F(x), where F(x) = b + ∫_a^x f(t) dt.

Proof. For existence, note that the given function satisfies the given IVP, by FToC2. For uniqueness, suppose G(x) also satisfies the IVP. Then F′ = G′, so F(x) = G(x) + c. Then the initial condition gives G(a) = b = F(a), so c = 0.

Corollary 20.2.3 (FToC2 implies FToC1). Let F(x) have the continuous derivative f(x) on [a, b]. Then ∫_a^b f(t) dt = F(b) − F(a).

Proof. Let G(x) := ∫_a^x f(t) dt, so that

    G′(x) = f(x) = F′(x), x ∈ [a, b],

by FToC2. Then since any two primitives differ by a constant (Thm 20.1.2), we get G(x) = F(x) + c, so that

    ∫_a^x f(t) dt = F(x) + c

for some c ∈ R. Setting x = a gives c = −F(a), and setting x = b gives the result.

20.3 Other relations between integrals and derivatives

Theorem 20.3.1 (Integration by parts). If f, g ∈ C¹[a, b], then

    ∫_a^b f(x)g′(x) dx = f(b)g(b) − f(a)g(a) − ∫_a^b f′(x)g(x) dx.

Proof. Put h(x) = f(x)g(x), so h, h′ ∈ R by the Integral properties thm. Use the Integration of derivative thm (FToC1) on h′ = f′g + fg′.

NOTE: IBP is product rule in reverse, just like CoV is chain rule in reverse.


Theorem 20.3.2 (Change of variable). If g ∈ C¹[a, b], g is increasing, and f ∈ R[g(a), g(b)], then f∘g ∈ R[a, b] and

    ∫_a^b f(g(x))g′(x) dx = ∫_{g(a)}^{g(b)} f(y) dy.

Proof. Let F be a primitive of f, so F′ = f. Then the chain rule gives

    f(g(x))g′(x) = (F(g(x)))′,

    ∫_a^b f(g(x))g′(x) dx = ∫_a^b (F(g(x)))′ dx = F(g(b)) − F(g(a)) = ∫_{g(a)}^{g(b)} f(y) dy,

where the last two equalities come from FToC1.

20.4 Logarithm and exponential

Definition 20.4.1. The natural logarithm function is log x := ∫_1^x dt/t, for x > 0.
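Since the definition is itself an integral, it can be sanity-checked with a crude quadrature. A Python sketch (the midpoint rule and test values are my own choices):

    import math

    def log_by_integral(x, n=100000):
        # midpoint-rule approximation of the integral of 1/t from 1 to x
        dt = (x - 1) / n
        return sum(1.0 / (1 + (i + 0.5)*dt) for i in range(n)) * dt

    for x in [2.0, 10.0, 0.5]:                     # works for 0 < x < 1 too (dt < 0)
        print(x, log_by_integral(x), math.log(x))  # agrees with math.log to several digits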

Theorem 20.4.2. (i) log x is differentiable on R⁺ and (log x)′ = 1/x.

(ii) log x is strictly increasing and concave.

(iii) log xy = log x + log y and log(1/x) = − log x.

(iv) log xr = r log x.

(v) limx→∞ log x = ∞ and limx→0+ log x = −∞.

(vi) log x : (0,∞) → R is a bijection, i.e., it is invertible.

(vii) There is a unique number e such that log e = 1.

Proof of (i). Immediate from FToC.

Proof of (ii). It is clear that log x is strictly increasing because (log x)′ = 1/x > 0 for x > 0. It is concave because the second derivative is −1/x² < 0.

Proof of (iii). log(xy) = ∫_1^{xy} dt/t = ∫_1^x dt/t + ∫_x^{xy} dt/t = log x + ∫_1^y du/u = log x + log y, by the change of variable t = xu in the last integral. Then 0 = log 1 = log(x · 1/x) = log x + log(1/x) shows log(1/x) = −log x.


Proof of (iv). If r ∈ Z, apply the previous part repeatedly. If r = 1/n,

    log x = log (x^{1/n})^n = n log x^{1/n} =⇒ (1/n) log x = log x^{1/n}.

For r ∈ Q, combine these two cases. Finally, let r ∈ R. Then for any sequence of rationals r_n → r,

    log x^r = log x^{lim_{n→∞} r_n} = lim_{n→∞} log x^{r_n}        continuity of log x and of x^r
            = lim_{n→∞} r_n log x                                   first part; r_n ∈ Q
            = r log x.

Proof of (v). For x > 1, the previous part gives log(x^n) = n log x → ∞, and since log x is strictly increasing, this suffices to give the first limit. The second limit comes the same way, using log(1/x) = −log x.

Proof of (vi). Injective is immediate from strictly increasing; surjective comes from the previous part together with the IVT (log x is continuous).

Proof of (vii). Immediate from (vi).

Definition 20.4.3. The exponential function exp x is defined as the inverse of log x, as

proven to exist in the previous thm.

Theorem 20.4.4. (i) exp x : R → R+ is convex and differentiable with (exp x)′ =

exp x.

(ii) exp(x + y) = exp x · exp y, exp(−x) = 1/exp x, and exp(rx) = (exp x)^r.

(iii) exp 0 = 1, exp 1 = e, exp r = e^r.

(iv) a^x = e^{x log a}, a > 0.

Proof of (i)–(ii). Immediate from the Inverse function thm (for the first two) and the rules for log x, e.g.,

    log exp(x + y) = x + y = log exp x + log exp y = log(exp x · exp y)
    =⇒ exp(x + y) = exp x · exp y.


Proof of (iii). The first two are immediate from the results for log x. To see exp r = e^r, proceed in the same fashion as for log x: true for integers by the previous part, then for rationals, then use continuity to extend to the reals.

Proof of (iv). e^{x log a} = exp(x log a) = exp(log a^x) = a^x.

20.5 Stirling’s Formula

Theorem 20.5.1 (Stirling's Formula). lim_{n→∞} n! / ((n/e)^n √(2πn)) = 1.

Proof. Later.

This is usually written n! ∼ (n/e)^n √(2πn), and indicates that the RHS provides an approximation to n!.
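A numerical sketch of how good the approximation is (the values of n are my own choice; logarithms are used to avoid overflow):

    import math

    for n in [5, 20, 100, 500]:
        # log of n! minus log of the Stirling approximation (n/e)^n * sqrt(2*pi*n)
        log_ratio = math.lgamma(n + 1) - (n*math.log(n) - n + 0.5*math.log(2*math.pi*n))
        print(n, math.exp(log_ratio))
    # ratios ~1.017, ~1.0042, ~1.00083, ~1.00017 -- tending to 1, roughly like 1 + 1/(12n)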

20.6 Growth rate of functions

Already done.

Required: Ex #20.2.4, 20.3.2, 20.3.3, 20.3.4, 20.6.2 Prob 20-1, 20-3

Recommended: Ex #20.3.1, 20.4.2, 20.4.3, 20.6.1 Prob 20-2


Chapter 21

Improper Integrals

21.1 Basic Definitions


Definition 21.1.1. Define the improper integral

    ∫_a^∞ f(t) dt := lim_{N→∞} ∫_a^N f(t) dt.

If lim_{t→b−} f(t) = ±∞, then we define the improper integral

    ∫_a^b f(t) dt := lim_{u→b−} ∫_a^u f(t) dt,

or similarly if lim_{t→a+} f(t) = ±∞.

The improper integral is said to converge or exist iff the limit exists; otherwise, diverges.

NOTE: if f is integrable on [a,∞), it means that f ∈ R[a, b] for any b > a. To say

f ∈ R[a,∞) means both that f is integrable on [a,∞) AND that the improper integral

converges.

Example 21.1.1. The standard example:

    ∫_1^∞ dx/x^p converges iff p > 1,
    ∫_0^1 dx/x^p converges iff p < 1.

We know that both diverge for p = 1 by the integral test applied to ∑ 1/n (using the CoV u = 1/x for ∫_0^1). Then the other divergences follow by Comparison. For convergence of the first integral when p > 1,

    ∫_1^∞ dx/x^p = lim_{N→∞} ∫_1^N x^{−p} dx = lim_{N→∞} [x^{1−p}/(1 − p)]_1^N = lim_{N→∞} (N^{1−p} − 1)/(1 − p) = 1/(p − 1).

Definition 21.1.2. The Cauchy Principal Value is a doubly improper integral, where the

limits are taken simultaneously.

Example 21.1.2. Consider ∫_R t/(1 + t²) dt.

Solution. Using the CPV, this can be evaluated:

    ∫_{−∞}^∞ t/(1 + t²) dt = lim_{R→∞} ∫_{−R}^R t/(1 + t²) dt
                           = lim_{R→∞} [½ log(1 + t²)]_{−R}^R
                           = ½ lim_{R→∞} (log(1 + R²) − log(1 + R²)) = 0.

If this were broken into separate integrals,

    ∫_{−∞}^∞ t/(1 + t²) dt = lim_{P→∞} ∫_0^P t/(1 + t²) dt + lim_{Q→∞} ∫_{−Q}^0 t/(1 + t²) dt = ∞ − ∞,

and we could not evaluate it.

21.2 Comparison theorems

Analogues of the results for series, and proved similarly.

Theorem 21.2.1 (Tail-convergence). If f is integrable on (any compact subinterval of)

[a,∞), then

f ∈ R[a,∞) ⇐⇒ f ∈ R[b,∞), ∀b > a.

Theorem 21.2.2. 1. If f is increasing for x >> 1 and limx→∞ f(x) = L, then f(x) ≤L for x >> 1.

2. If f is increasing and if f(x) ≤ B for x >> 1, then limx→∞ f(x) exists and

limx→∞ f(x) ≤ B.


Theorem 21.2.3 (Comparison for Improper Integrals). Suppose 0 ≤ f(x) ≤ g(x) for f, g integrable on [a, b) (where b may be ∞). Then g ∈ R[a, b) implies f ∈ R[a, b), with

    ∫_a^b f(x) dx ≤ ∫_a^b g(x) dx.

Proof. On any finite interval, we have ∫_a^R f(t) dt ≤ ∫_a^R g(t) dt, and since f ≥ 0, the left-hand integral can only increase as R → b. Now apply the previous theorem to the increasing, bounded function

    F(R) = ∫_a^R f(t) dt ≤ ∫_a^b g(t) dt.

We already saw this example:

Example 21.2.1. Show that erf x = ∫_0^x e^{-t^2/2} dt is bounded above on the interval [0,∞). (So now we know this means the improper integral lim_{x→∞} erf x exists.)

Solution. We have an upper bound for the (positive) integrand:
t ≥ 1 =⇒ t ≤ t^2 =⇒ e^{-t^2/2} ≤ e^{-t/2}.
This is only true for t ≥ 1, but it suffices to consider that part of the domain:
∫_0^∞ e^{-t^2/2} dt = ∫_0^1 e^{-t^2/2} dt + ∫_1^∞ e^{-t^2/2} dt ≤ M + lim_{x→∞} ∫_1^x e^{-t/2} dt ≤ M + 2e^{-1/2}.
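A crude midpoint-rule check (an illustrative sketch only; here M = 1 is the trivial bound ∫_0^1 e^{-t^2/2} dt ≤ 1, and the step counts are arbitrary choices):

# The truncated integrals of exp(-t^2/2) stay below M + 2 e^{-1/2} ~ 2.213,
# and in fact increase toward sqrt(pi/2) ~ 1.2533.
from math import exp

def midpoint(f, a, b, n=20000):
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

bound = 1 + 2 * exp(-0.5)
for x in (1, 2, 5, 10):
    print(x, round(midpoint(lambda t: exp(-t * t / 2), 0, x), 6), "<=", round(bound, 3))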

Theorem 21.2.4 (Asymptotic Comparison). Suppose f, g are integrable on [a,∞) and f ∼ g as x → ∞. Then
∫_a^∞ f(t) dt converges ⇐⇒ ∫_a^∞ g(t) dt converges.

Example 21.2.2. Does ∫_0^∞ dx/√(x(1 + x^3)) converge?

Solution. Both endpoints are improper. For f(x) = 1/√(x(1 + x^3)),
f(x) ∼ 1/√x, x ≈ 0+, and ∫_0^1 dx/x^{1/2} converges;
f(x) ∼ 1/x^2, x >> 1, and ∫_1^∞ dx/x^2 converges;
so it is convergent.


21.3 The Gamma function

Definition 21.3.1. The Gamma function is
Γ(x) = ∫_0^∞ t^{x-1} e^{-t} dt, x > 0.
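For intuition, the improper integral can be truncated and approximated numerically; the sketch below (plain Python; the truncation point T = 50 and step count are arbitrary illustrative choices) compares a midpoint rule against the standard library's math.gamma:

# Approximate Gamma(x) = int_0^infty t^(x-1) e^(-t) dt by truncating at T
# and applying a midpoint rule; compare with math.gamma.
import math

def gamma_approx(x, T=50.0, n=100000):
    h = T / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        total += t ** (x - 1) * math.exp(-t)
    return total * h

for x in (1.0, 2.0, 3.5, 5.0):
    print(x, round(gamma_approx(x), 4), round(math.gamma(x), 4))
# e.g. Gamma(5) = 4! = 24 and Gamma(3.5) = (5/2)(3/2)(1/2) sqrt(pi) ~ 3.3234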

Theorem 21.3.2. The Gamma function has the following properties.

1. Γ(n + 1) = n!.

Proof. Induct on n. Basis: ∫_0^∞ e^{-t} dt = 1 = 0!. Then IBP:
∫_0^R t^n e^{-t} dt = −R^n e^{-R} + n ∫_0^R t^{n-1} e^{-t} dt → 0 + nΓ(n) as R → ∞,
and nΓ(n) = n·(n−1)! = n! by the inductive hypothesis.

2. Γ(x) is defined for any x > 0; limx→0+ Γ(x) = limx→∞ Γ(x) = ∞.

Proof. Each of ∫_{0+}^1 t^{x-1} e^{-t} dt and ∫_1^∞ t^{x-1} e^{-t} dt converges by comparison. Then Γ(x) → ∞ as x → ∞ because n! → ∞ as n → ∞. The other limit is HW 21.3.1.

3. Γ(x + 1) = xΓ(x).

Proof. Note that in the proof of (i) we didn’t actually use n ∈ N.

4. Γ(1/2) = √π.

Proof. For 0 < a < b < ∞, we have
∫_a^b (e^{-t}/√t) dt = ∫_{√a}^{√b} (e^{-s^2}/s) · 2s ds      (t = s^2, dt = 2s ds)
→ 2 ∫_0^∞ e^{-s^2} ds = √π      as a → 0, b → ∞.

5. Γ ∈ C∞(0,∞). Also, Γ′(1) = −γ.

Proof. Later; we’d need to differentiate under the integral.

6. Γ(x) is convex and so is log Γ(x).


Proof. To see Γ(x) is convex, use the positivity of
Γ″(x) = ∫_0^∞ t^{x-1} (log t)^2 e^{-t} dt.
For log-convexity, Hölder's Inequality (with 1/p + 1/q = 1) gives
Γ(x/p + y/q) ≤ Γ(x)^{1/p} Γ(y)^{1/q}.

Theorem 21.3.3. If f > 0 on (0,∞) with (i) f(x + 1) = xf(x), (ii) f(1) = 1, and (iii) log f(x) convex, then f(x) = Γ(x).

21.4 Absolute and conditional convergence

Definition 21.4.1. ∫_a^∞ f(x) dx converges absolutely iff ∫_a^∞ |f(x)| dx converges; it converges conditionally iff it converges but not absolutely.

Theorem 21.4.2. If f is integrable on [a,∞) and |f | ∈ R[a,∞), then f ∈ R[a,∞).

Proof. Write f as the difference of two nonnegative functions, f = f^+ − f^−, where
f^+(x) = (1/2)(|f(x)| + f(x)) = max{0, f(x)},
f^−(x) = (1/2)(|f(x)| − f(x)) = max{0, −f(x)}.
Then 0 ≤ f^+(x), f^−(x) ≤ |f(x)|, so the convergence of ∫|f| gives the convergence of
∫ f(x) dx = ∫ f^+(x) dx − ∫ f^−(x) dx
by the Comparison Test.

Example 21.4.1. ∫_0^∞ (sin x)/x dx converges conditionally.

Solution. Since f(x) = (sin x)/x → 1 as x → 0, the integrand is bounded and continuous on (0, 1) and thus has a finite integral there. So we restrict attention to ∫_1^∞ f(x) dx. Integrating by parts,
∫_1^R (sin x)/x dx = [−(cos x)/x]_1^R − ∫_1^R (cos x)/x^2 dx
≤ M + ∫_1^R |(cos x)/x^2| dx
≤ M + ∫_1^R (1/x^2) dx,
which is finite. To see that the integral does not converge absolutely, do HW 21.4.2.
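Numerically (a rough midpoint-rule sketch, not a proof; the step size and sample endpoints are arbitrary choices):

# int_1^R sin(x)/x dx settles near a limit as R grows, while int_1^R |sin(x)|/x dx
# keeps growing roughly like (2/pi) log R -- conditional but not absolute convergence.
from math import sin

def midpoint(f, a, b, n):
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

for R in (10, 100, 1000):
    n = 200 * R                                   # fixed step size of about 0.005
    signed = midpoint(lambda x: sin(x) / x, 1, R, n)
    absolute = midpoint(lambda x: abs(sin(x)) / x, 1, R, n)
    print(R, round(signed, 4), round(absolute, 4))
# the signed values stabilize (near 0.62); the absolute values keep increasing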


Required: Ex #21.2.1(begh), 21.2.4, 21.3.1, 21.4.2 Prob 21-4, 21-5

Recommended: Ex #21.1.3, 21.1.2, 21.2.2, 21.2.3 Prob 21-1


Chapter 22

Sequences and Series of Functions

22.1 Pointwise and uniform convergence


What does it mean to say a sequence of functions {fn} converges? I.e., how to define

when lim fn(x) = f(x)? There are different (nonequivalent) ways to define such a limit.

What does it mean to say a sum of functions {fn} converges? I.e., how to define ∑ fn(x) = f(x)? We have seen power series, where fn(x) = a_n x^n, but what about other kinds of functions?

We want to know when operations like the following are valid, for f(x) = ∑ fn(x):
f′(x) ?= ∑ f′n(x),
∫ f(x) dx ?= ∑ ∫ fn(x) dx,

or even whether it is valid to compute things like
Γ′(x) ?= ∫_0^∞ ∂/∂x (t^{x-1} e^{-t}) dt = ∫_0^∞ t^{x-1} (log t) e^{-t} dt.

These operations all involve interchanging the order of limits; series, integrals and

derivatives are all defined in terms of limits.

Definition 22.1.1. Let {fn} be a sequence of functions all defined on some common


domain I. Then fn converges pointwise iff limn fn(x) exists for every x ∈ I. In this case,

we can define the limit function by
f(x) := lim_n fn(x),
and write fn →pw f. The defn is equivalent to:
∀ε > 0, ∀x ∈ I, ∃N, n ≥ N =⇒ fn(x) ≈ε f(x).

Example 22.1.1. Let fn(x) = x/(x + n) on R. Then
lim_{x→∞} lim_{n→∞} fn(x) = lim_{x→∞} 0 = 0,
lim_{n→∞} lim_{x→∞} fn(x) = lim_{n→∞} 1 = 1.

Example 22.1.2. Let fn(x) = x^n on I = [0, 1]. Then {fn} converges pointwise and
f(x) = 0 for 0 ≤ x < 1,  f(x) = 1 for x = 1.
So a sequence of continuous functions can converge pointwise to something which is not continuous!

Even worse:

Example 22.1.3. Let fk(x) = lim_{n→∞} (cos k!xπ)^{2n}. Then whenever k!x is an integer, fk(x) = 1. If x = p/q is rational, then for k ≥ q, fk(x) = 1. If k!x is not an integer (for example, if x is irrational), then fk(x) = 0. We obtain an everywhere discontinuous limit function
f(x) = lim_{k→∞} lim_{n→∞} (cos k!xπ)^{2n} = 0 for x ∈ R \ Q,  1 for x ∈ Q.

Example 22.1.4. Let fn(x) = x^2/(1 + x^2)^n on R and consider
f(x) = ∑_{n=0}^∞ fn(x) = ∑_{n=0}^∞ x^2/(1 + x^2)^n = 0 for x = 0,  1 + x^2 for x ≠ 0,
since the series is geometric for x ≠ 0. So a series of continuous functions can converge pointwise to something which is not continuous! (Not even integrable!)

Example 22.1.5. Let fn(x) = sin(nx)/√n on R. Then
f(x) = lim_{n→∞} fn(x) = 0, ∀x ∈ R,
so f′(x) = 0. On the other hand, f′n(x) = √n cos(nx), so that lim_{n→∞} f′n(x) ≠ f′(x).

Example 22.1.6. Let fn(x) = n^2 x(1 − x^2)^n on [0, 1]. Then lim_{n→∞} fn(x) = 0 for any x ∈ [0, 1]. Thus, trivially, ∫_0^1 lim_{n→∞} fn(x) dx = 0. However,
lim_{n→∞} ∫_0^1 fn(x) dx = lim_{n→∞} n^2/(2n + 2) = ∞.
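A numerical sketch of this example (plain Python; the grid size is an arbitrary choice) shows the integrals running off to infinity even though every pointwise value tends to 0:

# f_n(x) = n^2 x (1 - x^2)^n -> 0 for each fixed x in [0, 1], yet
# int_0^1 f_n(x) dx = n^2/(2n + 2) -> infinity.  Midpoint-rule check:
def integral_fn(n, m=100000):
    h = 1.0 / m
    total = 0.0
    for i in range(m):
        x = (i + 0.5) * h
        total += n * n * x * (1 - x * x) ** n
    return total * h

for n in (5, 20, 100):
    print(n, round(integral_fn(n), 3), round(n * n / (2 * n + 2), 3))
# the two columns agree, and both grow roughly like n/2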

CONCLUSION: pointwise convergence sucks.

Definition 22.1.2. Let {fn} be a sequence of functions with a common domain I. The sequence converges uniformly to f on I iff there exists some function f for which
∀ε > 0, ∃N, n ≥ N =⇒ fn(x) ≈ε f(x), ∀x ∈ I.
We write fn →unif f.

NOTE: the "∀x" appears at the end: N does not depend on x. This is the "uniform" nature of the convergence; N works globally for all of I.

NOTE: uniform convergence implies pointwise convergence.

Theorem 22.1.3. Suppose f(x) = lim_n fn(x) pointwise. Then
fn →unif f ⇐⇒ sup_{x∈I} |fn(x) − f(x)| → 0 as n → ∞.

Proof. The condition |fn(x) − f(x)| < ε, ∀x, is equivalent (up to K-ε) to the condition sup_{x∈I} |fn(x) − f(x)| < ε.

Example 22.1.7. x^n does not converge uniformly on [0, 1).
For f(x) ≡ 0, sup |fn(x) − f(x)| = 1 ↛ 0.
More directly, choose ε = 1/2. For any fixed n, one can find x ≈ 1− such that x^n > 1/2 = ε.
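The sup criterion of Theorem 22.1.3 is easy to test numerically (a sketch only; the grids are arbitrary choices, and a finite grid can only approximate the sup over [0, 1)):

# sup |x^n - 0| over [0, b]: with b = 0.9 it is 0.9^n -> 0 (uniform convergence
# on [0, 0.9]), but with b creeping up to 1 it stays near 1, so the convergence
# is not uniform on [0, 1).
def sup_on_grid(n, b, m=10000):
    return max(((i / m) * b) ** n for i in range(m + 1))

for n in (10, 50, 200):
    print(n, round(sup_on_grid(n, 0.9), 6), round(sup_on_grid(n, 0.999999), 6))
# second column stays close to 1 -- it does not go to 0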

Definition 22.1.4. ∑ fn converges pointwise or uniformly iff the corresponding sequence of partial sums converges pointwise or uniformly.


Example 22.1.8. ∑ x^n/n! converges uniformly to e^x on any compact interval [−R, R], but not on R.
Since |c| ≤ R =⇒ 0 < e^c ≤ e^R, the Lagrange form of the Taylor remainder gives
|e^x − (1 + x + x^2/2 + ⋯ + x^n/n!)| ≤ e^R R^{n+1}/(n + 1)! → 0 as n → ∞.

To see that the convergence is not uniform on R, note that for any given (fixed) n,
n = 2k =⇒ lim_{x→−∞} s_n(x) = ∞,
n = 2k + 1 =⇒ lim_{x→−∞} s_n(x) = −∞,
whereas lim_{x→−∞} e^x = 0. Hence the sup is unbounded for any n and cannot go to 0.

22.2 Criteria for uniform convergence

Theorem 22.2.1 (Cauchy Criterion). {fn} converges uniformly on I iff
∀ε > 0, ∃N, m, n ≥ N =⇒ |fn(x) − fm(x)| < ε, ∀x.

Proof. HW. (⇒) use the ∆-ineq. (⇐) use the pointwise Cauchy Criterion to obtain the limit f.

Theorem 22.2.2 (Weierstrass M-test). Let {fn} be defined on I and satisfy |fn(x)| ≤ Mn. If ∑ Mn converges, then ∑ fn(x) converges uniformly on I.

Proof. Fix ε > 0. Then
|∑_{i=n}^m fi(x)| ≤ ∑_{i=n}^m |fi(x)| ≤ ∑_{i=n}^m Mi < ε,
for n, m >> 1, because ∑ Mn converges. The result follows from the previous thm.

Example 22.2.1. ∑ cos(nx)/n^2 converges uniformly on R.
Note that |cos(nx)/n^2| ≤ 1/n^2 and ∑ 1/n^2 converges.
In fact, ∑ cos(fn(x))/n^2 converges uniformly on R for any arbitrary fn(x).
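The M-test also gives a uniform error bound: truncating after N terms costs at most the tail ∑_{n>N} 1/n^2 < 1/N, independent of x. A small numerical sketch (illustrative choices of N, grid, and "reference" partial sum; not part of the notes):

# The gap between the N-term partial sum of sum cos(n x)/n^2 and a much longer
# partial sum is below the tail bound 1/N, simultaneously for all sampled x.
from math import cos

def partial(x, N):
    return sum(cos(n * x) / n ** 2 for n in range(1, N + 1))

N = 50
worst = max(abs(partial(x, 10000) - partial(x, N))
            for x in [0.1 * k for k in range(63)])     # x from 0 to 6.2
print(worst, "<", 1 / N)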

Theorem 22.2.3 (Unif convergence of power series). If ∑ a_n x^n has radius of convergence R, then the series converges uniformly on [−L, L] whenever 0 ≤ L < R.


Proof. We know ∑ a_n x^n converges absolutely for |x| ≤ L < R, so apply the Weierstrass M-test with |a_n x^n| ≤ |a_n| L^n = Mn.

22.3 Continuity and uniform convergence

Theorem 22.3.1. A uniform limit of continuous functions is continuous.

Proof. Suppose we have fn →unif f, where each fn ∈ C^0(I). Then NTS f is continuous at an arbitrary point c ∈ I. Given ε > 0, use unif convergence to pick N such that
n ≥ N =⇒ fn(x) ≈ε f(x).
Then since fn is continuous at c,
x ≈δ c =⇒ fn(x) ≈ε fn(c).
Combine the two to obtain
f(x) ≈ε fn(x) ≈ε fn(c) ≈ε f(c) =⇒ f(x) ≈3ε f(c).
In ∆-ineq form, we used estimates on the RHS of
|f(x) − f(c)| ≤ |f(x) − fn(x)| + |fn(x) − fn(c)| + |fn(c) − f(c)|.

Corollary 22.3.2. If ∑ fn(x) is a series of continuous functions which converges uniformly on I, then it converges to a continuous function. In particular, a power series is continuous inside its interval of convergence.

Proof. We’ll prove that it’s differentiable in a moment, so wait until then.

22.4 Term-by-term integration

Theorem 22.4.1 (Integration of a uniform limit). Let fn →unif f, where each fn ∈ R[a, b]. Then f ∈ R[a, b] and lim_{n→∞} ∫_a^b fn(x) dx = ∫_a^b f(x) dx.


Proof. Put εn := sup_{a≤x≤b} |fn(x) − f(x)|, so that fn − εn ≤ f ≤ fn + εn. Then the upper and lower sums satisfy
∫_a^b (fn − εn) dx ≤ L(f, P) ≤ U(f, P) ≤ ∫_a^b (fn + εn) dx      (∗)
and thus 0 ≤ Osc(f, P) ≤ 2εn(b − a) → 0 as n → ∞. Thus f ∈ R[a, b]. Now (∗) becomes
∫_a^b (fn − εn) dx ≤ ∫_a^b f dx ≤ ∫_a^b (fn + εn) dx,
which gives
|∫_a^b fn dx − ∫_a^b f dx| ≤ 2εn(b − a) → 0 as n → ∞.

Theorem 22.4.2 (Term-by-term integration of a series). If f(x) = ∑ fk(x) converges uniformly on [a, b] and each fk ∈ R[a, b], then ∫_a^b f dx = ∑ ∫_a^b fk(x) dx.

Proof.
∫_a^b f dx = ∫_a^b (∑_{k=0}^∞ fk(x)) dx = lim_{n→∞} ∫_a^b ∑_{k=0}^n fk(x) dx      (prev thm)
= lim_{n→∞} ∑_{k=0}^n ∫_a^b fk(x) dx      (linearity).

Example 22.4.1 (Sawtooth function). It can be shown (using Fourier series) that
f(x) = π/2 − (4/π) ∑_{k=0}^∞ cos((2k+1)x)/(2k+1)^2
converges to the function g(x) = x, for 0 ≤ x ≤ π. By the Weierstrass M-test, it converges uniformly (prev example). Integrating term-by-term,
x^2/2 = πx/2 − (4/π)(sin x + sin 3x/3^3 + sin 5x/5^3 + ⋯).
Since the sum converges uniformly, f(x) ∈ C^0(R). In fact, f is 2π-periodic and an even function. Thus, f is the sawtooth: /\/\/\/\/\
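As a sanity check of the term-by-term integration (a numerical sketch; truncating at K = 2000 terms is an arbitrary choice), the integrated series should reproduce x^2/2 on [0, π]:

# Integrating the sawtooth series termwise from 0 to x gives
# pi*x/2 - (4/pi) * sum_k sin((2k+1)x)/(2k+1)^3, which should equal x^2/2.
from math import sin, pi

def termwise_integral(x, K=2000):
    s = sum(sin((2 * k + 1) * x) / (2 * k + 1) ** 3 for k in range(K))
    return pi * x / 2 - (4 / pi) * s

for x in (0.5, 1.5, 3.0):
    print(x, round(termwise_integral(x), 6), round(x * x / 2, 6))
# the two columns agree to the displayed precision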

Theorem 22.4.3 (Dominated convergence thm). Suppose fn ∈ R[a, b] for every 0 < a < b < ∞, and suppose fn →unif f on every compact subset of (0,∞). If g ∈ R[0,∞), then
|fn| ≤ g =⇒ lim_{n→∞} ∫_0^∞ fn(x) dx = ∫_0^∞ f(x) dx.

Proof. HW.

Theorem 22.4.4 (Stirling's Formula). lim_{x→∞} Γ(x + 1) / ((x/e)^x √(2πx)) = 1.

Proof. HW.

22.5 Term-by-term differentiation

Example 22.5.1. Recall fn(x) = sin(nx)/√n. This converges uniformly to f ≡ 0, but f′n(x) ↛ f′(x)! Not even uniform convergence can save us now! We need a stronger hypothesis.

Theorem 22.5.1. Let fn ∈ C^1(I), fn →pw f, and f′n →unif g. Then f ∈ C^1(I) and f′(x) = g(x).

Proof. Fix a point a ∈ I. Then FToC1 gives
fn(x) − fn(a) = ∫_a^x f′n(t) dt → ∫_a^x g(t) dt as n → ∞.
However, we also have fn(x) − fn(a) → f(x) − f(a), so apply FToC2 to
f(x) − f(a) = ∫_a^x g(t) dt
to see that f ∈ C^1(I) with f′(x) = g(x).

This can be strengthened:

Theorem 22.5.2. Suppose fn ∈ C^1(I) and {f′n} converges uniformly. If {fn(c)} converges for some c ∈ I, then fn →unif f ∈ C^1(I) and lim_{n→∞} f′n(x) = f′(x).

Proof. Not for the faint of heart.

Corollary 22.5.3. Let fk ∈ C^1(I). If ∑ fk converges pointwise and ∑ f′k converges uniformly, then f(x) := ∑ fk(x) ∈ C^1(I) and f′(x) = ∑ f′k(x).


Proof. Let sn(x) := ∑_{k=0}^n fk(x). Then s′n(x) = ∑_{k=0}^n f′k(x) converges uniformly, sn ∈ C^1(I), and sn →pw f, so apply the previous thm.

Example 22.5.2 (Sawtooth function). Recall the uniformly convergent series
f(x) = π/2 − (4/π) ∑_{k=0}^∞ cos((2k+1)x)/(2k+1)^2 = x, 0 ≤ x ≤ π.

Differentiating term-by-term would give
f′(x) ?= 1 = (4/π)(sin x + sin 3x/3 + sin 5x/5 + ⋯), 0 < x < π.

To establish this equality, we'd need uniform convergence of the series on the right. Unfortunately, this doesn't converge uniformly on R. If it did, the prev thm would give f′ ∈ C^0(R), but the sawtooth is clearly nondifferentiable at x = kπ.
It turns out that it does converge uniformly on compact subintervals of (kπ, (k + 1)π). Moral: convergence of Fourier series can be subtle.

22.6 Power series and analyticity

Theorem 22.6.1. Suppose f(x) = ∑_{n=0}^∞ a_n x^n converges on some open interval I. Then f ∈ C∞(I) and the derivative can be found term-by-term:
f′(x) = ∑_{n=1}^∞ n a_n x^{n−1}, x ∈ I.

Before proving this, we need a few results.

Lemma 22.6.2 (Abel's Lemma). Let bn ≥ 0 be a decreasing sequence and let ∑ an be a series whose partial sums are bounded: |a_1 + a_2 + ⋯ + a_n| ≤ A. Then for all n ∈ N,
|a_1 b_1 + a_2 b_2 + ⋯ + a_n b_n| ≤ A b_1.

Proof. Use the summation by parts identity
∑_{n=p}^q a_n b_n = ∑_{n=p}^{q−1} A_n(b_n − b_{n+1}) + A_q b_q − A_{p−1} b_p,
with the partial sums A_N := ∑_{n=1}^N a_n satisfying |A_N| ≤ A and A_0 = 0. Then
|∑_{n=1}^N a_n b_n| = |∑_{n=1}^{N−1} A_n(b_n − b_{n+1}) + A_N b_N − A_0 b_1|
≤ A ∑_{n=1}^{N−1} (b_n − b_{n+1}) + A b_N
= A(b_1 − b_N) + A b_N = A b_1.

If A were an upper bound on the partial sums of∑ |an| then we could just use ∆-ineq.

Abel’s lemma is a workaround for dealing with conditional convergence in this situation.

Theorem 22.6.3 (Abel's Theorem). Let f(x) = ∑_{k=0}^∞ c_k x^k converge at the point x = R. Then the series converges uniformly on [0, R].

Proof. Fix ε > 0. Since
f(x) = ∑_{k=0}^∞ c_k x^k = ∑_{k=0}^∞ (c_k R^k)(x/R)^k,
apply Abel's Lemma with a_k = c_k R^k and b_k = (x/R)^k. Note that ∑_{k=0}^∞ c_k R^k converges by hypothesis, so we can pick N for which
n, m ≥ N =⇒ |∑_{k=n}^m c_k R^k| < ε.
Using ε as a bound on the partial sums of ∑_{j=0}^∞ c_{k+j} R^{k+j}, and noting that 0 ≤ x < R implies (x/R)^{k+j} is decreasing in j, the Lemma gives
|∑_{j=1}^n (c_{k+j} R^{k+j})(x/R)^{k+j}| < 2ε(x/R)^{k+1}.
Thus the Cauchy Criterion for uniform convergence of a series is satisfied, by K-ε.

Of course, a similar result holds for x = −R. Consequently, we have an easy corollary:

Corollary 22.6.4. If a power series converges pointwise on (−R, R), then it converges uniformly on any compact interval K ⊆ (−R, R).


Theorem 22.6.5. If a power series ∑_{n=0}^∞ a_n x^n converges on I = (−R, R), then the differentiated power series ∑_{n=1}^∞ n a_n x^{n−1} also converges on I.

Proof. First, from the Ratio Test, we know that lim |b_{n+1}/b_n| = r < 1 implies ∑ b_n converges. Then if 0 < s < 1,
|(n + 1)s^n / (n s^{n−1})| = ((n + 1)/n) · s → s ∈ (0, 1) =⇒ ∑ n s^{n−1} converges.
Thus n s^{n−1} is bounded. Now choose t to satisfy |x| < t < R and observe that
|n a_n x^{n−1}| = (1/t) · n |x/t|^{n−1} · |a_n t^n|.
Using s = |x/t| in the first part, we obtain a bound |n s^{n−1}| ≤ B, so that
|∑_{n=1}^∞ n a_n x^{n−1}| ≤ ∑_{n=1}^∞ |n a_n x^{n−1}| = ∑_{n=1}^∞ (1/t) · n |x/t|^{n−1} · |a_n t^n| ≤ (B/t) ∑_{n=1}^∞ |a_n t^n|,
which converges, since t ∈ I.

Consequently, the convergence of the differentiated series is uniform on any compact

K ⊆ I. Returning to the first theorem of the section, this gives

Theorem 22.6.6. Suppose f(x) = ∑_{n=0}^∞ a_n x^n converges on some open interval I. Then f ∈ C∞(I) and the derivative can be found term-by-term:
f′(x) = ∑_{n=1}^∞ n a_n x^{n−1}, x ∈ I.

Example 22.6.1. Find a closed-form expression for
f(x) = x^2/(1·2) + x^3/(2·3) + ⋯ = ∑_{n=2}^∞ x^n/((n − 1)·n).

Solution. The Ratio Test gives convergence on I = (−1, 1). Differentiating twice,
f″(x) = 1 + x + x^2 + ⋯ = ∑_{n=0}^∞ x^n = 1/(1 − x).


Then integrating this twice, we get
f(x) = x + (1 − x) log(1 − x), |x| < 1,
using the constants of integration found from f(0) = 0, f′(0) = 0.
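A quick check of the closed form against partial sums (an illustrative Python sketch; truncating at N = 200 terms is an arbitrary choice):

# Compare partial sums of sum_{n>=2} x^n/((n-1) n) with x + (1-x) log(1-x).
from math import log

def partial_sum(x, N=200):
    return sum(x ** n / ((n - 1) * n) for n in range(2, N + 1))

for x in (-0.9, -0.5, 0.3, 0.9):
    print(x, round(partial_sum(x), 6), round(x + (1 - x) * log(1 - x), 6))
# the columns agree to the displayed precision on (-1, 1)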

Corollary 22.6.7. On its interval of convergence, the Taylor series of f(x) = ∑_{n=0}^∞ a_n x^n is the series itself.

Proof. Applying the previous theorem repeatedly, after n differentiations we get
f^{(n)}(x) = n! a_n + c_1 x + c_2 x^2 + ⋯, c_i ∈ R.
Thus evaluating at x = 0 gives a_n = f^{(n)}(0)/n!.

Corollary 22.6.8. If ∑_{n=0}^∞ a_n x^n has radius of convergence R > 0 and ∑_{n=0}^∞ a_n x^n = 0 for |x| < R, then a_n = 0, ∀n.

Proof. Consider the Taylor series of f(x) ≡ 0.

Corollary 22.6.9. If f(x) = ∑_{n=0}^∞ a_n x^n has radius of convergence R > 0 and f(x) = ∑_{n=0}^∞ b_n x^n for x ∈ (−R, R), then a_n = b_n, ∀n.

Proof. Apply the prev corollary to ∑_{n=0}^∞ a_n x^n − ∑_{n=0}^∞ b_n x^n = ∑_{n=0}^∞ (a_n − b_n) x^n.

Example 22.6.2 (Series solutions to ODEs). Find a solution to the IVP
y′ + xy = 0, y(0) = 1.

Solution. Assume that y has a series representation: y = ∑ a_n x^n, 0 ≤ |x| < R. We "just" need to find the a_n. Differentiating term-by-term,
y′ + xy = ∑_{n=1}^∞ n a_n x^{n−1} + ∑_{n=0}^∞ a_n x^{n+1}
= ∑_{n=0}^∞ (n + 1)a_{n+1} x^n + ∑_{n=1}^∞ a_{n−1} x^n
= a_1 + ∑_{n=1}^∞ ((n + 1)a_{n+1} + a_{n−1}) x^n = 0.


This shows a_1 = 0. Now by the prev corollary, we just need to solve
(n + 1)a_{n+1} + a_{n−1} = 0 =⇒ a_{n+2} = −a_n/(n + 2), n ≥ 0.

We start with y(0) = a_0 = 1. Then
a_1 = a_3 = a_5 = ⋯ = a_{2k+1} = 0,
a_2 = −a_0/(0 + 2) = −1/2,
a_4 = −a_2/(2 + 2) = (−1/2)(−1/4),
a_6 = −a_4/(4 + 2) = (−1/2)(−1/4)(−1/6),
⋮
a_{2n} = (−1)^n/(2^n n!).

The resulting series is
y(x) = ∑_{n=0}^∞ ((−1)^n/(2^n n!)) x^{2n} = ∑_{n=0}^∞ (−x^2/2)^n / n! = e^{−x^2/2},
and the Ratio Test shows it converges for all x ∈ R.
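A numerical sketch of the recursion (plain Python; keeping N = 60 coefficients is an arbitrary truncation) confirms that the series reproduces e^{−x^2/2}:

# Build the coefficients from a_0 = 1, a_1 = 0 and (n+2) a_{n+2} + a_n = 0,
# then compare the partial sum with exp(-x^2/2).
from math import exp

def series_solution(x, N=60):
    a = [0.0] * (N + 2)
    a[0] = 1.0                      # from y(0) = 1
    for n in range(N):
        a[n + 2] = -a[n] / (n + 2)  # the recursion derived above
    return sum(a[n] * x ** n for n in range(N + 2))

for x in (0.5, 1.0, 2.0, 3.0):
    print(x, round(series_solution(x), 8), round(exp(-x * x / 2), 8))
# the columns agree to the displayed precision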

Definition 22.6.10. A function f is said to be analytic (at 0) iff
lim_{n→∞} ∑_{k=0}^n a_k x^k = f(x), where a_k = f^{(k)}(0)/k!,
that is, iff the Taylor series of f converges to f.

Example 22.6.3. We spent a lot of time proving that an analytic function is infinitely differentiable, but the converse is false. Define
f(x) = e^{−1/x^2} for x > 0,  0 for x ≤ 0.
Then f ∈ C∞(R) with f^{(n)}(0) = 0, so the Taylor series of f is ∑ 0·x^n = 0 ≠ f.

Required: Ex #22.1.1(ac), 22.2.2(d), 22.3.3, 22.4.1, 22.4.4, 22.6.2 Prob 22-1, 22-2, 22-4

Recommended: Ex #22.1.2, 22.2.5, 22.3.1, 22.4.3, 22.6.3, 22.6.4