Signals and Systems

Collection Editor: Marco F. Duarte

Authors: Thanos Antoulas, Richard Baraniuk, Dan Calderon, Marco F. Duarte, Catherine Elder, Natesh Ganesh, Michael Haag, Don Johnson, Stephen Kruzick, Matthew Moravec, Justin Romberg, Louis Scharf, Melissa Selik, JP Slavinsky, Dante Soares

Online: <http://legacy.cnx.org/content/col11557/1.10/>

OpenStax-CNX

This selection and arrangement of content as a collection is copyrighted by Marco F. Duarte. It is licensed under the Creative Commons Attribution License 4.0 (http://creativecommons.org/licenses/by/4.0/).

Collection structure revised: September 13, 2014

PDF generated: December 6, 2014

For copyright and attribution information for the modules contained in this collection, see the Attributions section.

Table of Contents

1 Review of Prerequisites: Complex Numbers
1.1 Geometry of Complex Numbers
1.2 Complex Numbers: Algebra of Complex Numbers
1.3 Representing Complex Numbers in a Vector Space
2 Continuous-Time Signals
2.1 Signal Classifications and Properties
2.2 Common Continuous Time Signals
2.3 Signal Operations
2.4 Energy and Power of Continuous-Time Signals
2.5 Continuous Time Impulse Function
2.6 Continuous-Time Complex Exponential
3 Introduction to Systems
3.1 Introduction to Systems
3.2 System Classifications and Properties
3.3 Linear Time Invariant Systems
4 Time Domain Analysis of Continuous Time Systems
4.1 Continuous Time Systems
4.2 Continuous Time Impulse Response
4.3 Continuous-Time Convolution
4.4 Properties of Continuous Time Convolution
4.5 Causality and Stability of Continuous-Time Linear Time-Invariant Systems
5 Introduction to Fourier Analysis
5.1 Introduction to Fourier Analysis
5.2 Continuous Time Periodic Signals
5.3 Eigenfunctions of Continuous-Time LTI Systems
5.4 Continuous Time Fourier Series (CTFS)
6 Continuous Time Fourier Transform (CTFT)
6.1 Continuous Time Aperiodic Signals
6.2 Continuous Time Fourier Transform (CTFT)
6.3 Properties of the CTFT
6.4 Common Fourier Transforms
6.5 Continuous Time Convolution and the CTFT
6.6 Frequency-Domain Analysis of Linear Time-Invariant Systems
Solutions
7 Discrete-Time Signals
7.1 Common Discrete Time Signals
7.2 Energy and Power of Discrete-Time Signals
7.3 Discrete-Time Signal Operations
7.4 Discrete Time Impulse Function
7.5 Discrete Time Complex Exponential
8 Time Domain Analysis of Discrete Time Systems
8.1 Discrete Time Systems
8.2 Discrete Time Impulse Response
8.3 Discrete-Time Convolution
8.4 Properties of Discrete Time Convolution
8.5 Causality and Stability of Discrete-Time Linear Time-Invariant Systems
9 Discrete Time Fourier Transform (DTFT)
9.1 Discrete Time Aperiodic Signals
9.2 Eigenfunctions of Discrete Time LTI Systems
9.3 Discrete Time Fourier Transform (DTFT)
9.4 Properties of the DTFT
9.5 Common Discrete Time Fourier Transforms
9.6 Discrete Time Convolution and the DTFT
10 Computing Fourier Transforms
10.1 Discrete Fourier Transform (DFT)
10.2 DFT: Fast Fourier Transform
10.3 The Fast Fourier Transform (FFT)
Solutions
11 Sampling and Reconstruction
11.1 Signal Sampling
11.2 Sampling Theorem
11.3 Signal Reconstruction
11.4 Perfect Reconstruction
11.5 Aliasing Phenomena
11.6 Anti-Aliasing Filters
11.7 Changing Sampling Rates in Discrete Time
11.8 Discrete Time Processing of Continuous Time Signals
12 Appendix: Mathematical Pot-Pourri
12.1 Basic Linear Algebra
12.2 Linear Constant Coefficient Difference Equations
12.3 Solving Linear Constant Coefficient Difference Equations
Solutions
13 Appendix: Viewing Interactive Content
13.1 Viewing Embedded LabVIEW Content in Connexions
13.2 Getting Started With Mathematica
Glossary
Index
Attributions

Chapter 1
Review of Prerequisites: Complex Numbers

1.1 Geometry of Complex Numbers

note: This module is part of the collection, A First Course in Electrical and Computer Engineering. The LaTeX source files for this collection were created using an optical character recognition technology, and because of this process there may be more errors than usual. Please contact us if you discover any errors.

The most fundamental new idea in the study of complex numbers is the imaginary number $j$. This imaginary number is defined to be the square root of $-1$:

$$j = \sqrt{-1} \quad (1.1)$$

$$j^2 = -1. \quad (1.2)$$

The imaginary number $j$ is used to build a complex number $z$ from two real numbers $x$ and $y$ in the following way:

$$z = x + jy. \quad (1.3)$$

We say that the complex number $z$ has real part $x$ and imaginary part $y$:

$$z = \mathrm{Re}[z] + j\,\mathrm{Im}[z] \quad (1.4)$$

$$\mathrm{Re}[z] = x; \quad \mathrm{Im}[z] = y. \quad (1.5)$$

In MATLAB, the variable $x$ is denoted by real(z), and the variable $y$ is denoted by imag(z). In communication theory, $x$ is called the in-phase component of $z$, and $y$ is called the quadrature component. We call $z = x + jy$ the Cartesian representation of $z$, with real component $x$ and imaginary component $y$. We say that the Cartesian pair $(x, y)$ codes the complex number $z$.

We may plot the complex number $z$ on the plane as in Figure 1.1. We call the horizontal axis the real axis and the vertical axis the imaginary axis. The plane is called the complex plane. The radius and angle of the line to the point $z = x + jy$ are

$$r = \sqrt{x^2 + y^2} \quad (1.6)$$


$$\theta = \tan^{-1}\left(\frac{y}{x}\right). \quad (1.7)$$

See Figure 1.1. In MATLAB, $r$ is denoted by abs(z), and $\theta$ is denoted by angle(z).

    Figure 1.1: Cartesian and Polar Representations of the Complex Number z
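The MATLAB functions just mentioned are enough to pass between the two codes. The fragment below is a quick sketch in the spirit of the demos in this chapter (the sample number is an arbitrary choice of ours):

j=sqrt(-1);
z=3+j*4;               % a sample complex number x + jy
x=real(z), y=imag(z)   % the Cartesian pair (x, y)
r=abs(z)               % radius r = sqrt(x^2 + y^2) = 5
theta=angle(z)         % angle theta = tan^{-1}(y/x), in radians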

The original Cartesian representation is obtained from the radius $r$ and angle $\theta$ as follows:

$$x = r\cos\theta \quad (1.8)$$

$$y = r\sin\theta. \quad (1.9)$$

The complex number $z$ may therefore be written as

$$z = x + jy = r\cos\theta + jr\sin\theta = r(\cos\theta + j\sin\theta). \quad (1.10)$$

The complex number $\cos\theta + j\sin\theta$ is, itself, a number that may be represented on the complex plane and coded with the Cartesian pair $(\cos\theta, \sin\theta)$. This is illustrated in Figure 1.2. The radius and angle to the point $z = \cos\theta + j\sin\theta$ are 1 and $\theta$. Can you see why?

Figure 1.2: The Complex Number $\cos\theta + j\sin\theta$


The complex number $\cos\theta + j\sin\theta$ is of such fundamental importance to our study of complex numbers that we give it the special symbol $e^{j\theta}$:

$$e^{j\theta} = \cos\theta + j\sin\theta. \quad (1.11)$$

As illustrated in Figure 1.2, the complex number $e^{j\theta}$ has radius 1 and angle $\theta$. With the symbol $e^{j\theta}$, we may write the complex number $z$ as

$$z = re^{j\theta}. \quad (1.12)$$

We call $z = re^{j\theta}$ a polar representation for the complex number $z$. We say that the polar pair $r\angle\theta$ codes the complex number $z$. In this polar representation, we define $|z| = r$ to be the magnitude of $z$ and $\arg(z) = \theta$ to be the angle, or phase, of $z$:

$$|z| = r \quad (1.13)$$

$$\arg(z) = \theta. \quad (1.14)$$

With these definitions of magnitude and phase, we can write the complex number $z$ as

$$z = |z|e^{j\arg(z)}. \quad (1.15)$$

Let's summarize our ways of writing the complex number $z$ and record the corresponding geometric codes:

$$z = x + jy = re^{j\theta} = |z|e^{j\arg(z)}; \qquad (x, y) \leftrightarrow r\angle\theta. \quad (1.16)$$

    In "Roots of Quadratic Equations"

    2

    we show that the denition ej = cos + jsin is more than symbolic.We show, in fact, that ej is just the familiar function ex evaluated at the imaginary argument x = j. Wecall ej a complex exponential, meaning that it is an exponential with an imaginary argument.

    Exercise 1.1.1

    Prove (j)2n = (1)n and (j)2n+1 = (1)nj. Evaluate j3, j4, j5.Exercise 1.1.2

    Prove ej[(pi/2)+m2pi] = j, ej[(3pi/2)+m2pi] = j, ej(0+m2pi) = 1, and ej(pi+m2pi) = 1. Plot theseidentities on the complex plane. (Assume m is an integer.)

Exercise 1.1.3
Find the polar representation $z = re^{j\theta}$ for each of the following complex numbers:

a. $z = 1 + j0$;
b. $z = 0 + j1$;
c. $z = 1 + j1$;
d. $z = 1 - j1$.

Plot the points on the complex plane.

Exercise 1.1.4
Find the Cartesian representation $z = x + jy$ for each of the following complex numbers:

a. $z = \sqrt{2}e^{j\pi/2}$;
b. $z = \sqrt{2}e^{j\pi/4}$;
c. $z = e^{j3\pi/4}$;
d. $z = \sqrt{2}e^{j3\pi/2}$.

Plot the points on the complex plane.

Exercise 1.1.5
The following geometric codes represent complex numbers. Decode each by writing down the corresponding complex number $z$:

a. $(0.7, 0.1)$, $z = ?$
b. $(1.0, 0.5)$, $z = ?$
c. $1.6\angle\pi/8$, $z = ?$
d. $0.4\angle 7\pi/8$, $z = ?$

Exercise 1.1.6
Show that $\mathrm{Im}[jz] = \mathrm{Re}[z]$ and $\mathrm{Re}[jz] = -\mathrm{Im}[z]$.

Demo 1.1 (MATLAB). Run the following MATLAB program in order to compute and plot the complex number $e^{j\theta}$ for $\theta = i2\pi/360$, $i = 1, 2, \ldots, 360$:

j=sqrt(-1)                                % define the imaginary unit
n=360                                     % number of points on the circle
for i=1:n,circle(i)=exp(j*2*pi*i/n);end;  % the points e^{j theta}
axis('square')                            % equal scaling on both axes
plot(circle)                              % plots imag(circle) against real(circle)

Replace the explicit for loop of line 3 by the implicit loop

circle=exp(j*2*pi*[1:n]/n);

to speed up the calculation. You can see from Figure 1.3 that the complex number $e^{j\theta}$, evaluated at angles $\theta = 2\pi/360, 2(2\pi/360), \ldots$, turns out complex numbers that lie at angle $\theta$ and radius 1. We say that $e^{j\theta}$ is a complex number that lies on the unit circle. We will have much more to say about the unit circle in Chapter 2.

Figure 1.3: The Complex Numbers $e^{j\theta}$ for $0 \le \theta \le 2\pi$ (Demo 1.1)

1.2 Complex Numbers: Algebra of Complex Numbers

note: This module is part of the collection, A First Course in Electrical and Computer Engineering. The LaTeX source files for this collection were created using an optical character recognition technology, and because of this process there may be more errors than usual. Please contact us if you discover any errors.

The complex numbers form a mathematical field on which the usual operations of addition and multiplication are defined. Each of these operations has a simple geometric interpretation.

1.2.1 Addition and Multiplication

The complex numbers $z_1$ and $z_2$ are added according to the rule

$$z_1 + z_2 = (x_1 + jy_1) + (x_2 + jy_2) = (x_1 + x_2) + j(y_1 + y_2). \quad (1.17)$$


We say that the real parts add and the imaginary parts add. As illustrated in Figure 1.4, the complex number $z_1 + z_2$ is computed from a parallelogram rule, wherein $z_1 + z_2$ lies on the node of a parallelogram formed from $z_1$ and $z_2$.

Exercise 1.2.1
Let $z_1 = r_1 e^{j\theta_1}$ and $z_2 = r_2 e^{j\theta_2}$. Find a polar formula $z_3 = r_3 e^{j\theta_3}$ for $z_3 = z_1 + z_2$ that involves only the variables $r_1$, $r_2$, $\theta_1$, and $\theta_2$. The formula for $r_3$ is the law of cosines.

The product of $z_1$ and $z_2$ is

$$z_1 z_2 = (x_1 + jy_1)(x_2 + jy_2) = (x_1 x_2 - y_1 y_2) + j(y_1 x_2 + x_1 y_2). \quad (1.18)$$

    Figure 1.4: Adding Complex Numbers

If the polar representations for $z_1$ and $z_2$ are used, then the product may be written as

$$\begin{aligned}
z_1 z_2 &= r_1 e^{j\theta_1} r_2 e^{j\theta_2} \\
&= (r_1\cos\theta_1 + jr_1\sin\theta_1)(r_2\cos\theta_2 + jr_2\sin\theta_2) \\
&= (r_1\cos\theta_1\, r_2\cos\theta_2 - r_1\sin\theta_1\, r_2\sin\theta_2) + j(r_1\sin\theta_1\, r_2\cos\theta_2 + r_1\cos\theta_1\, r_2\sin\theta_2) \\
&= r_1 r_2\cos(\theta_1 + \theta_2) + jr_1 r_2\sin(\theta_1 + \theta_2) \\
&= r_1 r_2 e^{j(\theta_1 + \theta_2)}.
\end{aligned} \quad (1.19)$$

(We have used the trigonometric identities $\cos(\theta_1 + \theta_2) = \cos\theta_1\cos\theta_2 - \sin\theta_1\sin\theta_2$ and $\sin(\theta_1 + \theta_2) = \sin\theta_1\cos\theta_2 + \cos\theta_1\sin\theta_2$ to derive this result.)

We say that the magnitudes multiply and the angles add. As illustrated in Figure 1.5, the product $z_1 z_2$ lies at the angle $(\theta_1 + \theta_2)$.

Figure 1.5: Multiplying Complex Numbers

Rotation. There is a special case of complex multiplication that will become very important in our study of phasors in the chapter "Phasors: Introduction". When $z_1$ is the complex number $z_1 = r_1 e^{j\theta_1}$ and $z_2$ is the complex number $z_2 = e^{j\theta_2}$, then the product of $z_1$ and $z_2$ is

$$z_1 z_2 = z_1 e^{j\theta_2} = r_1 e^{j(\theta_1 + \theta_2)}. \quad (1.20)$$

As illustrated in Figure 1.6, $z_1 z_2$ is just a rotation of $z_1$ through the angle $\theta_2$.

    Figure 1.6: Rotation of Complex Numbers
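Rotation is one line of MATLAB. The fragment below is a sketch of our own (the radius and angles are arbitrary choices) showing that multiplying by $e^{j\theta_2}$ leaves the radius fixed and adds $\theta_2$ to the angle:

j=sqrt(-1);
z1=2*exp(j*pi/6);        % z1 = r1 e^{j theta1} with r1 = 2, theta1 = pi/6
theta2=pi/3;             % the rotation angle
z2=z1*exp(j*theta2);     % rotate z1 through theta2
abs(z2)                  % still 2: rotation preserves the radius
angle(z2)-angle(z1)      % pi/3: the angles add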

Exercise 1.2.2
Begin with the complex number $z_1 = x + jy = re^{j\theta}$. Compute the complex number $z_2 = jz_1$ in its Cartesian and polar forms. The complex number $z_2$ is sometimes called perp($z_1$). Explain why by writing perp($z_1$) as $z_1 e^{j\theta_2}$. What is $\theta_2$? Repeat this problem for $z_3 = -jz_1$.


Powers. If the complex number $z_1$ multiplies itself $N$ times, then the result is

$$(z_1)^N = r_1^N e^{jN\theta_1}. \quad (1.21)$$

This result may be proved with a simple induction argument. Assume $z_1^k = r_1^k e^{jk\theta_1}$. (The assumption is true for $k = 1$.) Then use the recursion $z_1^{k+1} = z_1^k z_1 = r_1^{k+1} e^{j(k+1)\theta_1}$. Iterate this recursion (or induction) until $k + 1 = N$. Can you see that, as $n$ ranges from $n = 1, \ldots, N$, the angle of $z_1^n$ ranges from $\theta_1$ to $2\theta_1, \ldots,$ to $N\theta_1$ and the radius ranges from $r_1$ to $r_1^2, \ldots,$ to $r_1^N$? This result is explored more fully in Problem 1.19.

Complex Conjugate. Corresponding to every complex number $z = x + jy = re^{j\theta}$ is the complex conjugate

$$z^* = x - jy = re^{-j\theta}. \quad (1.22)$$

The complex number $z$ and its complex conjugate are illustrated in Figure 1.7. The recipe for finding complex conjugates is to change $j$ to $-j$. This changes the sign of the imaginary part of the complex number.

    Figure 1.7: A Complex Variable and Its Complex Conjugate

Magnitude Squared. The product of $z$ and its complex conjugate is called the magnitude squared of $z$ and is denoted by $|z|^2$:

$$|z|^2 = z^* z = (x - jy)(x + jy) = x^2 + y^2 = re^{-j\theta} re^{j\theta} = r^2. \quad (1.23)$$

Note that $|z| = r$ is the radius, or magnitude, that we defined in "Geometry of Complex Numbers" (Section 1.1).

Exercise 1.2.3
Write $z^*$ as $z^* = zw$. Find $w$ in its Cartesian and polar forms.

Exercise 1.2.4
Prove that angle$(z_2 z_1^*) = \theta_2 - \theta_1$.

Exercise 1.2.5
Show that the real and imaginary parts of $z = x + jy$ may be written as

$$\mathrm{Re}[z] = \frac{1}{2}(z + z^*) \quad (1.24)$$

$$\mathrm{Im}[z] = \frac{1}{2j}(z - z^*). \quad (1.25)$$

Commutativity, Associativity, and Distributivity. The complex numbers commute, associate, and distribute under addition and multiplication as follows:

$$z_1 + z_2 = z_2 + z_1; \qquad z_1 z_2 = z_2 z_1 \quad (1.26)$$

$$(z_1 + z_2) + z_3 = z_1 + (z_2 + z_3); \qquad z_1(z_2 z_3) = (z_1 z_2)z_3; \qquad z_1(z_2 + z_3) = z_1 z_2 + z_1 z_3. \quad (1.27)$$

Identities and Inverses. In the field of complex numbers, the complex number $0 + j0$ (denoted by 0) plays the role of an additive identity, and the complex number $1 + j0$ (denoted by 1) plays the role of a multiplicative identity:

$$z + 0 = z = 0 + z; \qquad z1 = z = 1z. \quad (1.28)$$

In this field, the complex number $-z = -x + j(-y)$ is the additive inverse of $z$, and the complex number $z^{-1} = \frac{x}{x^2 + y^2} + j\left(\frac{-y}{x^2 + y^2}\right)$ is the multiplicative inverse:

$$z + (-z) = 0; \qquad zz^{-1} = 1. \quad (1.29)$$

Exercise 1.2.6
Show that the additive inverse of $z = re^{j\theta}$ may be written as $re^{j(\theta + \pi)}$.

Exercise 1.2.7
Show that the multiplicative inverse of $z$ may be written as

$$z^{-1} = \frac{1}{z^* z} z^* = \frac{1}{x^2 + y^2}(x - jy). \quad (1.30)$$

Show that $z^* z$ is real. Show that $z^{-1}$ may also be written as

$$z^{-1} = r^{-1} e^{-j\theta}. \quad (1.31)$$

Plot $z$ and $z^{-1}$ for a representative $z$.

Exercise 1.2.8
Prove $(j)^{-1} = -j$.

Exercise 1.2.9
Find $z^{-1}$ when $z = 1 + j1$.

Exercise 1.2.10
Prove $(z^{-1})^* = (z^*)^{-1} = r^{-1} e^{j\theta} = \frac{1}{z^* z} z$. Plot $z$ and $(z^{-1})^*$ for a representative $z$.

Exercise 1.2.11
Find all of the complex numbers $z$ with the property that $jz = z^*$. Illustrate these complex numbers on the complex plane.

Demo 1.2 (MATLAB). Create and run the following script file (name it Complex Numbers). If you are using PC-MATLAB, you will need to name your file cmplxnos.m.

clear, clg                     % clear variables and graphics (clg is clf in modern MATLAB)
j=sqrt(-1)                     % define the imaginary unit
z1=1+j*.5,z2=2+j*1.5           % two complex numbers
z3=z1+z2,z4=z1*z2              % their sum and product
z5=conj(z1),z6=j*z2            % conjugate and perp
axis([-4 4 -4 4]),axis('square'),plot([0 z1],'-o')
hold on
plot([0 z2],'-o'),plot([0 z3],'-+'),plot([0 z4],'-*'),
plot([0 z5],'x'),plot([0 z6],'-x')   % each number drawn as a vector from the origin

    Figure 1.8: Complex Numbers (Demo 1.2)

With the help of Appendix 1, you should be able to annotate each line of this program. View your graphics display to verify the rules for add, multiply, conjugate, and perp. See Figure 1.8.

Exercise 1.2.12
Prove that $z^0 = 1$.

Exercise 1.2.13
(MATLAB) Choose $z_1 = 1.05e^{j2\pi/16}$ and $z_2 = 0.95e^{j2\pi/16}$. Write a MATLAB program to compute and plot $z_1^n$ and $z_2^n$ for $n = 1, 2, \ldots, 32$. You should observe a figure like Figure 1.9.


    Figure 1.9: Powers of z

1.3 Representing Complex Numbers in a Vector Space

note: This module is part of the collection, A First Course in Electrical and Computer Engineering. The LaTeX source files for this collection were created using an optical character recognition technology, and because of this process there may be more errors than usual. Please contact us if you discover any errors.

So far we have coded the complex number $z = x + jy$ with the Cartesian pair $(x, y)$ and with the polar pair $r\angle\theta$. We now show how the complex number $z$ may be coded with a two-dimensional vector $\mathbf{z}$ and show how this new code may be used to gain insight about complex numbers.

Coding a Complex Number as a Vector. We code the complex number $z = x + jy$ with the two-dimensional vector $\mathbf{z} = \begin{bmatrix} x \\ y \end{bmatrix}$:

$$x + jy = z \leftrightarrow \mathbf{z} = \begin{bmatrix} x \\ y \end{bmatrix}. \quad (1.32)$$

We plot this vector as in Figure 1.10. We say that the vector $\mathbf{z}$ belongs to a vector space. This means that vectors may be added and scaled according to the rules

$$\mathbf{z}_1 + \mathbf{z}_2 = \begin{bmatrix} x_1 + x_2 \\ y_1 + y_2 \end{bmatrix} \quad (1.33)$$

$$a\mathbf{z} = \begin{bmatrix} ax \\ ay \end{bmatrix}. \quad (1.34)$$

    Figure 1.10: The Vector z Coding the Complex Number z

Furthermore, it means that an additive inverse $-\mathbf{z}$, an additive identity $\mathbf{0}$, and a multiplicative identity 1 all exist:

$$\mathbf{z} + (-\mathbf{z}) = \mathbf{0} \quad (1.35)$$

$$1\mathbf{z} = \mathbf{z}. \quad (1.36)$$

The vector $\mathbf{0}$ is $\mathbf{0} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$.

Prove that vector addition and scalar multiplication satisfy these properties of commutation, association, and distribution:

$$\mathbf{z}_1 + \mathbf{z}_2 = \mathbf{z}_2 + \mathbf{z}_1 \quad (1.37)$$

$$(\mathbf{z}_1 + \mathbf{z}_2) + \mathbf{z}_3 = \mathbf{z}_1 + (\mathbf{z}_2 + \mathbf{z}_3) \quad (1.38)$$

$$a(b\mathbf{z}) = (ab)\mathbf{z} \quad (1.39)$$

$$a(\mathbf{z}_1 + \mathbf{z}_2) = a\mathbf{z}_1 + a\mathbf{z}_2. \quad (1.40)$$

Inner Product and Norm. The inner product between two vectors $\mathbf{z}_1$ and $\mathbf{z}_2$ is defined to be the real number

$$(\mathbf{z}_1, \mathbf{z}_2) = x_1 x_2 + y_1 y_2. \quad (1.41)$$

We sometimes write this inner product as the vector product (more on this in "Linear Algebra: Introduction")

$$(\mathbf{z}_1, \mathbf{z}_2) = \mathbf{z}_1^T \mathbf{z}_2 = [x_1\ \ y_1] \begin{bmatrix} x_2 \\ y_2 \end{bmatrix} = x_1 x_2 + y_1 y_2. \quad (1.42)$$

Exercise 1.3.1
Prove $(\mathbf{z}_1, \mathbf{z}_2) = (\mathbf{z}_2, \mathbf{z}_1)$.


When $\mathbf{z}_1 = \mathbf{z}_2 = \mathbf{z}$, then the inner product between $\mathbf{z}$ and itself is the norm squared of $\mathbf{z}$:

$$||\mathbf{z}||^2 = (\mathbf{z}, \mathbf{z}) = x^2 + y^2. \quad (1.43)$$

These properties of vectors seem abstract. However, as we now show, they may be used to develop a vector calculus for doing complex arithmetic.

A Vector Calculus for Complex Arithmetic. The addition of two complex numbers $z_1$ and $z_2$ corresponds to the addition of the vectors $\mathbf{z}_1$ and $\mathbf{z}_2$:

$$z_1 + z_2 \leftrightarrow \mathbf{z}_1 + \mathbf{z}_2 = \begin{bmatrix} x_1 + x_2 \\ y_1 + y_2 \end{bmatrix} \quad (1.44)$$

The scalar multiplication of the complex number $z_2$ by the real number $x_1$ corresponds to scalar multiplication of the vector $\mathbf{z}_2$ by $x_1$:

$$x_1 z_2 \leftrightarrow x_1 \begin{bmatrix} x_2 \\ y_2 \end{bmatrix} = \begin{bmatrix} x_1 x_2 \\ x_1 y_2 \end{bmatrix}. \quad (1.45)$$

Similarly, the multiplication of the complex number $z_2$ by the real number $y_1$ is

$$y_1 z_2 \leftrightarrow y_1 \begin{bmatrix} x_2 \\ y_2 \end{bmatrix} = \begin{bmatrix} y_1 x_2 \\ y_1 y_2 \end{bmatrix}. \quad (1.46)$$

The complex product $z_1 z_2 = (x_1 + jy_1)z_2$ is therefore represented as

$$z_1 z_2 \leftrightarrow \begin{bmatrix} x_1 x_2 - y_1 y_2 \\ x_1 y_2 + y_1 x_2 \end{bmatrix}. \quad (1.47)$$

This representation may be written as the inner product

$$z_1 z_2 = z_2 z_1 \leftrightarrow \begin{bmatrix} (\mathbf{v}, \mathbf{z}_1) \\ (\mathbf{w}, \mathbf{z}_1) \end{bmatrix} \quad (1.48)$$

where $\mathbf{v}$ and $\mathbf{w}$ are the vectors $\mathbf{v} = \begin{bmatrix} x_2 \\ -y_2 \end{bmatrix}$ and $\mathbf{w} = \begin{bmatrix} y_2 \\ x_2 \end{bmatrix}$. By defining the matrix

$$\begin{bmatrix} x_2 & -y_2 \\ y_2 & x_2 \end{bmatrix}, \quad (1.49)$$

we can represent the complex product $z_1 z_2$ as a matrix-vector multiply (more on this in "Linear Algebra: Introduction"):

$$z_1 z_2 = z_2 z_1 \leftrightarrow \begin{bmatrix} x_2 & -y_2 \\ y_2 & x_2 \end{bmatrix} \begin{bmatrix} x_1 \\ y_1 \end{bmatrix}. \quad (1.50)$$

With this representation, we can represent rotation as

$$ze^{j\theta} = e^{j\theta}z \leftrightarrow \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}. \quad (1.51)$$

We call the matrix $\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$ a rotation matrix.
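The correspondence between complex multiplication and the matrix-vector multiply of equation (1.50) is easy to verify numerically. Here is a short MATLAB sketch of our own (the two sample numbers are arbitrary):

j=sqrt(-1);
z1=1+j*2; z2=3-j*1;           % two sample complex numbers
x1=real(z1); y1=imag(z1);
x2=real(z2); y2=imag(z2);
M=[x2 -y2; y2 x2];            % the matrix of (1.49) coding multiplication by z2
v=M*[x1; y1]                  % vector code of the product
z1*z2                         % agrees: the product is v(1) + j*v(2)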

Exercise 1.3.2
Call $R(\theta)$ the rotation matrix:

$$R(\theta) = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}. \quad (1.52)$$

Show that $R(-\theta)$ rotates by $(-\theta)$. What can you say about $R(-\theta)\mathbf{w}$ when $\mathbf{w} = R(\theta)\mathbf{z}$?

Exercise 1.3.3
Represent the complex conjugate of $z$ as

$$z^* \leftrightarrow \begin{bmatrix} a & b \\ c & d \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} \quad (1.53)$$

and find the elements $a$, $b$, $c$, and $d$ of the matrix.

Inner Product and Polar Representation. From the norm of a vector, we derive a formula for the magnitude of $z$ in the polar representation $z = re^{j\theta}$:

$$r = (x^2 + y^2)^{1/2} = ||\mathbf{z}|| = (\mathbf{z}, \mathbf{z})^{1/2}. \quad (1.54)$$

If we define the coordinate vectors $\mathbf{e}_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$ and $\mathbf{e}_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$, then we can represent the vector $\mathbf{z}$ as

$$\mathbf{z} = (\mathbf{z}, \mathbf{e}_1)\mathbf{e}_1 + (\mathbf{z}, \mathbf{e}_2)\mathbf{e}_2. \quad (1.55)$$

See Figure 1.11. From the figure it is clear that the cosine and sine of the angle $\theta$ are

$$\cos\theta = \frac{(\mathbf{z}, \mathbf{e}_1)}{||\mathbf{z}||}; \qquad \sin\theta = \frac{(\mathbf{z}, \mathbf{e}_2)}{||\mathbf{z}||} \quad (1.56)$$

    Figure 1.11: Representation of z in its Natural Basis


This gives us another representation for any vector $\mathbf{z}$:

$$\mathbf{z} = ||\mathbf{z}||\cos\theta\,\mathbf{e}_1 + ||\mathbf{z}||\sin\theta\,\mathbf{e}_2. \quad (1.57)$$

The inner product between two vectors $\mathbf{z}_1$ and $\mathbf{z}_2$ is now

$$\begin{aligned}
(\mathbf{z}_1, \mathbf{z}_2) &= \left[(\mathbf{z}_1, \mathbf{e}_1)\mathbf{e}_1^T + (\mathbf{z}_1, \mathbf{e}_2)\mathbf{e}_2^T\right]\left[(\mathbf{z}_2, \mathbf{e}_1)\mathbf{e}_1 + (\mathbf{z}_2, \mathbf{e}_2)\mathbf{e}_2\right] \\
&= (\mathbf{z}_1, \mathbf{e}_1)(\mathbf{z}_2, \mathbf{e}_1) + (\mathbf{z}_1, \mathbf{e}_2)(\mathbf{z}_2, \mathbf{e}_2) \\
&= ||\mathbf{z}_1||\cos\theta_1\,||\mathbf{z}_2||\cos\theta_2 + ||\mathbf{z}_1||\sin\theta_1\,||\mathbf{z}_2||\sin\theta_2.
\end{aligned} \quad (1.58)$$

It follows that $\cos(\theta_2 - \theta_1) = \cos\theta_2\cos\theta_1 + \sin\theta_1\sin\theta_2$ may be written as

$$\cos(\theta_2 - \theta_1) = \frac{(\mathbf{z}_1, \mathbf{z}_2)}{||\mathbf{z}_1||\,||\mathbf{z}_2||} \quad (1.59)$$

This formula shows that the cosine of the angle between two vectors $\mathbf{z}_1$ and $\mathbf{z}_2$, which is, of course, the cosine of the angle of $z_2 z_1^*$, is the ratio of the inner product to the product of the norms.

Exercise 1.3.4
Prove the Schwarz and triangle inequalities and interpret them:

(Schwarz) $\quad (\mathbf{z}_1, \mathbf{z}_2)^2 \le ||\mathbf{z}_1||^2\,||\mathbf{z}_2||^2 \quad (1.60)$

(triangle) $\quad ||\mathbf{z}_1 - \mathbf{z}_2|| \le ||\mathbf{z}_1 - \mathbf{z}_3|| + ||\mathbf{z}_2 - \mathbf{z}_3||. \quad (1.61)$

Chapter 2
Continuous-Time Signals

2.1 Signal Classifications and Properties

2.1.1 Introduction

This module will begin our study of signals and systems by laying out some of the fundamentals of signal classification. It is essentially an introduction to the important definitions and properties that are fundamental to the discussion of signals and systems, with a brief discussion of each.

2.1.2 Classifications of Signals

2.1.2.1 Continuous-Time vs. Discrete-Time

As the names suggest, this classification is determined by whether or not the time axis is discrete (countable) or continuous (Figure 2.1). A continuous-time signal will contain a value for all real numbers along the time axis. In contrast to this, a discrete-time signal, often created by sampling a continuous signal, will only have values at equally spaced intervals along the time axis.

Figure 2.1


2.1.2.2 Analog vs. Digital

The difference between analog and digital is similar to the difference between continuous-time and discrete-time. However, in this case the difference involves the values of the function. Analog corresponds to a continuous set of possible function values, while digital corresponds to a discrete set of possible function values. A common example of a digital signal is a binary sequence, where the values of the function can only be one or zero.

    Figure 2.2

2.1.2.3 Periodic vs. Aperiodic

Periodic signals repeat with some period $T$, while aperiodic, or nonperiodic, signals do not (Figure 2.3). We can define a periodic function through the following mathematical expression, where $t$ can be any number and $T$ is a positive constant:

$$f(t) = f(T + t) \quad (2.1)$$

The fundamental period of our function, $f(t)$, is the smallest value of $T$ that still allows (2.1) to be true.

Figure 2.3: (a) A periodic signal with period $T_0$ (b) An aperiodic signal

2.1.2.4 Finite vs. Infinite Length

As the name implies, signals can be characterized as to whether they have a finite or infinite length set of values. Most finite length signals are used when dealing with discrete-time signals or a given sequence of values. Mathematically speaking, $f(t)$ is a finite-length signal if it is nonzero over a finite interval $t_1 < t < t_2$, where $t_1 > -\infty$ and $t_2 < \infty$. An example can be seen in Figure 2.4. Similarly, an infinite-length signal, $f(t)$, is defined as nonzero over all real numbers, $-\infty < t < \infty$.

Figure 2.4: Finite-Length Signal. Note that it only has nonzero values on a set, finite interval.


2.1.2.5 Even vs. Odd

An even signal is any signal $f$ such that $f(t) = f(-t)$. Even signals can be easily spotted as they are symmetric around the vertical axis. An odd signal, on the other hand, is a signal $f$ such that $f(t) = -f(-t)$ (Figure 2.5).

Figure 2.5: (a) An even signal (b) An odd signal

Using the definitions of even and odd signals, we can show that any signal can be written as a combination of an even and odd signal. That is, every signal has an odd-even decomposition. To demonstrate this, we have to look no further than a single equation:

$$f(t) = \frac{1}{2}\left(f(t) + f(-t)\right) + \frac{1}{2}\left(f(t) - f(-t)\right) \quad (2.2)$$

By multiplying and adding this expression out, it can be shown to be true. Also, it can be shown that $f(t) + f(-t)$ fulfills the requirement of an even function, while $f(t) - f(-t)$ fulfills the requirement of an odd function (Figure 2.6).

    Example 2.1


  • 21

    (a)

    (b)

    (c)

    (d)

    Figure 2.6: (a) The signal we will decompose using odd-even decomposition (b) Even part: e (t) =12(f (t) + f (t)) (c) Odd part: o (t) = 1

    2(f (t) f (t)) (d) Check: e (t) + o (t) = f (t)
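The decomposition of Figure 2.6 is easy to compute numerically. The MATLAB sketch below is our own illustration; it assumes the signal is stored on a time grid symmetric about $t = 0$, so that $f(-t)$ is just the samples in reverse order:

t=-5:0.01:5;                  % symmetric time grid
f=exp(-t).*(t>=0);            % a sample one-sided signal
fr=fliplr(f);                 % the samples of f(-t) on this grid
e=(f+fr)/2;                   % even part e(t)
o=(f-fr)/2;                   % odd part o(t)
plot(t,e,t,o,t,e+o)           % check: e(t) + o(t) recovers f(t)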


Example 2.2
Consider the signal defined for all real $t$ described by

$$f(t) = \begin{cases} \sin(2\pi t)/t & t \ge 1 \\ 0 & t < 1 \end{cases} \quad (2.3)$$

This signal is continuous time, analog, aperiodic, infinite length, causal, and neither even nor odd.

2.1.3 Signal Classifications Summary

This module describes just some of the many ways in which signals can be classified. They can be continuous time or discrete time, analog or digital, periodic or aperiodic, finite or infinite, and deterministic or random. We can also divide them based on their causality and symmetry properties. There are other ways to classify signals, such as boundedness, handedness, and continuity, that are not discussed here but will be described in subsequent modules.

2.2 Common Continuous Time Signals

2.2.1 Introduction

Before looking at this module, hopefully you have an idea of what a signal is and what basic classifications and properties a signal can have. In review, a signal is a function defined with respect to an independent variable. This variable is often time but could represent any number of things. Mathematically, continuous time analog signals have continuous independent and dependent variables. This module will describe some useful continuous time analog signals.

2.2.2 Important Continuous Time Signals

2.2.2.1 Sinusoids

One of the most important elemental signals that you will deal with is the real-valued sinusoid. In its continuous-time form, we write the general expression as

$$A\cos(\omega t + \phi) \quad (2.4)$$

where $A$ is the amplitude, $\omega$ is the frequency, and $\phi$ is the phase. Thus, the period of the sinusoid is

$$T = \frac{2\pi}{\omega} \quad (2.5)$$


Figure 2.7: Sinusoid with $A = 2$, $\omega = 2$, and $\phi = 0$.
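A sinusoid like the one in Figure 2.7 takes only a few MATLAB lines to generate. The sketch below is our own illustration, using the parameter values of the figure:

A=2; w=2; phi=0;              % amplitude, frequency, and phase
t=0:0.01:10;
x=A*cos(w*t+phi);             % the general sinusoid of (2.4)
plot(t,x)
T=2*pi/w                      % the period from (2.5): pi seconds here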

2.2.2.2 Unit Step

Another very basic signal is the unit-step function, defined as

$$u(t) = \begin{cases} 0 & \text{if } t < 0 \\ 1 & \text{if } t \ge 0 \end{cases} \quad (2.6)$$

Figure 2.8: Continuous-Time Unit-Step Function

The step function is a useful tool for testing and for defining other signals. For example, when different shifted versions of the step function are multiplied by other signals, one can select a certain portion of the signal and zero out the rest.

2.2.2.3 Unit Pulse

Many engineers interpret the unit step function as the representation of turning on a switch and leaving it on. The unit-pulse function can be thought of as turning a switch on and off after a unit of time. It is defined as

$$p(t) = \begin{cases} 1 & \text{if } -0.5 \le t \le 0.5 \\ 0 & \text{if } t < -0.5 \text{ or } t > 0.5 \end{cases} \quad (2.7)$$

so that it is an even function.

Figure 2.9: Continuous-Time Unit-Pulse Function

Note that the pulse can be easily written in terms of unit step functions as $p(t) = u(t + 0.5) - u(t - 0.5)$.

2.2.2.4 Triangle Function

The last function we will introduce is the triangle function, which represents an input that increases and then decreases linearly with time. It is defined as

$$\Delta(t) = \begin{cases} t + 1 & \text{if } -1 \le t \le 0 \\ 1 - t & \text{if } 0 \le t \le 1 \\ 0 & \text{if } t < -1 \text{ or } t > 1 \end{cases} \quad (2.8)$$

Figure 2.10: Continuous-Time Triangle Function
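All three of these elementary signals are easy to realize numerically. The MATLAB sketch below is our illustration (the function names u, p, and tri are our own choices), defining them as anonymous functions:

u=@(t) double(t>=0);                  % unit step (2.6)
p=@(t) double(abs(t)<=0.5);           % unit pulse (2.7)
tri=@(t) (1-abs(t)).*(abs(t)<=1);     % triangle function (2.8)
t=-2:0.001:2;
plot(t,u(t),t,p(t),t,tri(t))
% with these conventions p(t) equals u(t+0.5)-u(t-0.5)
% everywhere except the single point t = 0.5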

2.2.3 Common Continuous Time Signals Summary

Some of the most important and most frequently encountered signals have been discussed in this module. There are, of course, many other signals of significant consequence not discussed here. As you will see later, many of the other more complicated signals will be studied in terms of those listed here. Especially take note of the complex exponentials and unit impulse functions, which will be the key focus of several topics included in this course.

2.3 Signal Operations

2.3.1 Introduction

This module will look at two signal operations affecting the time parameter of the signal, time shifting and time scaling. These operations are very common components of real-world systems and, as such, should be understood thoroughly when learning about signals and systems.

2.3.2 Manipulating the Time Parameter

2.3.2.1 Time Shifting

Time shifting is, as the name suggests, the shifting of a signal in time. This is done by adding or subtracting a quantity of the shift to the time variable in the function. Subtracting a fixed positive quantity from the time variable will shift the signal to the right (delay) by the subtracted quantity, while adding a fixed positive amount to the time variable will shift the signal to the left (advance) by the added quantity.


Figure 2.11: $f(t - T)$ moves (delays) $f$ to the right by $T$.

2.3.2.2 Time Scaling

Time scaling compresses or dilates a signal by multiplying the time variable by some quantity. If that quantity's absolute value is greater than one, the signal becomes narrower and the operation is called compression, while if the quantity's absolute value is less than one, the signal becomes wider and is called dilation. Note that if the quantity is negative, then one must also account for time reversal (described below).

Figure 2.12: $f(at)$ compresses $f$ by $a$.

Example 2.3
Given $f(t)$ we would like to plot $f(at - b)$, with both $a > 0$ and $b > 0$. The figure below describes a method to accomplish this.

  • 27

    (a) (b)

    (c)

    Figure 2.13: (a) Begin with f (t) (b) Then replace t with at to get f (at) (c) Finally, replace t witht b

    ato get f

    `a`t b

    a

    = f (at b)
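The recipe of Figure 2.13 can be checked numerically. The following MATLAB sketch is our own; it uses the triangle pulse of Section 2.2 as the test signal and our own sample values $a = 2$, $b = 1$:

f=@(t) (1-abs(t)).*(abs(t)<=1);   % test signal: the triangle pulse
a=2; b=1;                         % scale and shift, both positive
t=-3:0.01:3;
plot(t,f(t),t,f(a*t),t,f(a*t-b))  % f(at) is narrower than f(t)
% the peak of f(at-b) sits where at-b = 0, i.e., at t = b/a = 0.5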

2.3.2.3 Time Reversal

A natural question to consider when learning about time scaling is: What happens when the time variable is multiplied by a negative number? The answer to this is time reversal, also known as time inversion. This operation is the reversal of the time axis, or flipping the signal over the y-axis.

Figure 2.14: Reverse the time axis


2.3.3 Time Scaling and Shifting Demonstration

Figure 2.15: Download or Interact (when online) with a Mathematica CDF demonstrating time shifting and scaling.

2.3.4 Signal Operations Summary

Some common operations on signals affect the time parameter of the signal. One of these is time shifting, in which a quantity is added to the time parameter in order to advance or delay the signal. Another is time scaling, in which the time parameter is multiplied by a quantity in order to dilate or compress the signal in time. In the event that the quantity involved in the latter operation is negative, time reversal occurs.

2.4 Energy and Power of Continuous-Time Signals

From physics we've learned that energy is work and power is work per time unit. Energy is measured in joules (J) and power in watts (W). In signal processing, energy and power are defined more loosely, without any necessary physical units, because the signals may represent very different physical entities. We can say that energy and power are a measure of the signal's "size".


2.4.1 Signal Energy

2.4.1.1 Analog signals

Since we often think of a signal as a function of varying amplitude through time, it stands to reason that a good measurement of the strength of a signal would be the area under the curve. However, this area may have a negative part. This negative part does not have less strength than a positive signal of the same size. This suggests either squaring the signal or taking its absolute value, then finding the area under that curve. It turns out that what we call the energy of a signal is the area under the squared signal; see Figure 2.16.

Energy - analog signal:
$$E_a = \int_{-\infty}^{\infty} |x(t)|^2\, dt$$

Note that we have used the squared magnitude (absolute value) in case the signal is complex valued. If the signal is real, we can leave out the magnitude operation.

Figure 2.16: Sketch of energy calculation (a) Signal $x(t)$ (b) The energy of $x(t)$ is the shaded region

2.4.2 Signal Power

Our definition of energy seems reasonable, and it is. However, what if the signal does not decay fast enough? In this case we have infinite energy for any such signal. Does this mean that a fifty hertz sine wave feeding into your headphones is as strong as the fifty hertz sine wave coming out of your outlet? Obviously not. This is what leads us to the idea of signal power, which in such cases is a more adequate description.

Figure 2.17: Signal with infinite energy

2.4.2.1 Analog signals

For analog signals we define power as energy per time interval.

Power - analog signal:
$$P_a = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} |x(t)|^2\, dt$$

For periodic analog signals, the power only needs to be measured across a single period.

Power - periodic analog signal with period $T_0$:
$$P_a = \frac{1}{T_0} \int_{-T_0/2}^{T_0/2} |x(t)|^2\, dt$$

Example 2.4
Given the signal $x(t) = \sin(2\pi t)$, shown in Figure 2.18, calculate the power for one period. For the analog sine we have

$$P_a = \frac{1}{1} \int_0^1 \sin^2(2\pi t)\, dt = \frac{1}{2}.$$

    Figure 2.18: Analog sine.
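Example 2.4 can be reproduced numerically by approximating the integral on a fine grid. The MATLAB sketch below is our illustration (trapz forms the trapezoidal approximation to the integral):

t=0:1e-4:1;                   % one period of the sine, T0 = 1
x=sin(2*pi*t);
P=trapz(t,abs(x).^2)/1        % power over one period: approximately 0.5
% the energy integral over (-inf, inf) diverges for this signal,
% which is why power, not energy, is the right measure of its size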


2.5 Continuous Time Impulse Function

2.5.1 Introduction

In engineering, we often deal with the idea of an action occurring at a point. Whether it be a force at a point in space or some other signal at a point in time, it becomes worthwhile to develop some way of quantitatively defining this. This leads us to the idea of a unit impulse, probably the second most important function, next to the complex exponential, in this signals and systems course.

2.5.2 Dirac Delta Function

The Dirac delta function, often referred to as the unit impulse or delta function, is the function that defines the idea of a unit impulse in continuous-time. Informally, this function is one that is infinitesimally narrow, infinitely tall, yet integrates to one. Perhaps the simplest way to visualize this is as a rectangular pulse from $a - \frac{\epsilon}{2}$ to $a + \frac{\epsilon}{2}$ with a height of $\frac{1}{\epsilon}$. As we take the limit of this setup as $\epsilon$ approaches 0, we see that the width tends to zero and the height tends to infinity as the total area remains constant at one. The impulse function is often written as $\delta(t)$.

$$\int_{-\infty}^{\infty} \delta(t)\, dt = 1 \quad (2.9)$$

    Figure 2.19: This is one way to visualize the Dirac Delta Function.


Figure 2.20: Since it is quite difficult to draw something that is infinitely tall, we represent the Dirac with an arrow centered at the point it is applied. If we wish to scale it, we may write the value it is scaled by next to the point of the arrow. This is a unit impulse (no scaling).

Below is a brief list of a few important properties of the unit impulse without going into detail of their proofs.

Unit Impulse Properties
$\delta(\alpha t) = \frac{1}{|\alpha|}\delta(t)$
$\delta(t) = \delta(-t)$
$\delta(t) = \frac{d}{dt}u(t)$, where $u(t)$ is the unit step.
$f(t)\delta(t) = f(0)\delta(t)$

The last of these is especially important as it gives rise to the sifting property of the Dirac delta function, which selects the value of a function at a specific time and is especially important in studying the relationship of an operation called convolution to time domain analysis of linear time invariant systems. The sifting property is shown and derived below.

$$\int_{-\infty}^{\infty} f(t)\delta(t)\, dt = \int_{-\infty}^{\infty} f(0)\delta(t)\, dt = f(0)\int_{-\infty}^{\infty} \delta(t)\, dt = f(0) \quad (2.10)$$
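The sifting property can be seen numerically by standing in for $\delta(t)$ with a narrow unit-area pulse, as in the limiting picture above. The MATLAB sketch below is ours; the width eps0 is an arbitrary small choice:

eps0=1e-3;                    % pulse width (our choice)
t=-1:1e-5:1;
d=(abs(t)<=eps0/2)/eps0;      % rectangle of width eps0, height 1/eps0, unit area
f=cos(2*pi*t);                % any smooth test function
trapz(t,f.*d)                 % approximately f(0) = 1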

2.5.3 Unit Impulse Limiting Demonstration

Figure 2.21: Click on the above thumbnail image (when online) to download an interactive Mathematica Player demonstrating the Continuous Time Impulse Function.

2.5.4 Continuous Time Unit Impulse Summary

The continuous time unit impulse function, also known as the Dirac delta function, is of great importance to the study of signals and systems. Informally, it is a function with infinite height and infinitesimal width that integrates to one, which can be viewed as the limiting behavior of a unit area rectangle as it narrows while preserving area. It has several important properties that will appear again when studying systems.

2.6 Continuous-Time Complex Exponential

2.6.1 Introduction

Complex exponentials are some of the most important functions in our study of signals and systems. Their importance stems from their status as eigenfunctions of linear time invariant systems. Before proceeding, you should be familiar with complex numbers.

  • 34

    CHAPTER 2. CONTINUOUS-TIME SIGNALS

    2.6.2 The Continuous Time Complex Exponential

    2.6.2.1 Complex Exponentials

    The complex exponential function will become a critical part of your study of signals and systems. Its general

    continuous form is written as

    Aest (2.11)

    where s = +j is a complex number in terms of , the attenuation constant, and the angular frequency.

2.6.2.2 Euler's Formula

The mathematician Euler proved an important identity relating complex exponentials to trigonometric functions. Specifically, he discovered the eponymously named identity, Euler's formula, which states that

$$e^{jx} = \cos(x) + j\sin(x) \quad (2.12)$$

which can be proven as follows.

In order to prove Euler's formula, we start by evaluating the Taylor series for $e^z$ about $z = 0$, which converges for all complex $z$, at $z = jx$. The result is

$$e^{jx} = \sum_{k=0}^{\infty} \frac{(jx)^k}{k!} = \sum_{k=0}^{\infty} (-1)^k \frac{x^{2k}}{(2k)!} + j\sum_{k=0}^{\infty} (-1)^k \frac{x^{2k+1}}{(2k+1)!} = \cos(x) + j\sin(x) \quad (2.13)$$

because the second expression contains the Taylor series for $\cos(x)$ and $\sin(x)$ about $x = 0$, which converge for all real $x$. Thus, the desired result is proven.

Choosing $x = \omega t$ this gives the result

$$e^{j\omega t} = \cos(\omega t) + j\sin(\omega t) \quad (2.14)$$

which breaks a continuous time complex exponential into its real part and imaginary part. Using this formula, we can also derive the following relationships:

$$\cos(\omega t) = \frac{1}{2}e^{j\omega t} + \frac{1}{2}e^{-j\omega t} \quad (2.15)$$

$$\sin(\omega t) = \frac{1}{2j}e^{j\omega t} - \frac{1}{2j}e^{-j\omega t} \quad (2.16)$$

2.6.2.3 Continuous Time Phasors

It has been shown how the complex exponential with purely imaginary frequency can be broken up into its real part and its imaginary part. Now consider a general complex frequency $s = \sigma + j\omega$ where $\sigma$ is the attenuation factor and $\omega$ is the frequency. Also consider a phase difference $\phi$. It follows that

$$e^{(\sigma + j\omega)t + j\phi} = e^{\sigma t}\left(\cos(\omega t + \phi) + j\sin(\omega t + \phi)\right). \quad (2.17)$$

Thus, the real and imaginary parts of $e^{st}$ appear below.

$$\mathrm{Re}\{e^{(\sigma + j\omega)t + j\phi}\} = e^{\sigma t}\cos(\omega t + \phi) \quad (2.18)$$

$$\mathrm{Im}\{e^{(\sigma + j\omega)t + j\phi}\} = e^{\sigma t}\sin(\omega t + \phi) \quad (2.19)$$

Using the real or imaginary parts of a complex exponential to represent sinusoids with a phase delay multiplied by a real exponential is often useful, and is called attenuated phasor notation.

We can see that both the real part and the imaginary part have a sinusoid times a real exponential. We also know that sinusoids oscillate between one and negative one. From this it becomes apparent that the real and imaginary parts of the complex exponential will each oscillate within an envelope defined by the real exponential part.

Figure 2.22: The shapes possible for the real part of a complex exponential. Notice that the oscillations are the result of a cosine, as there is a local maximum at $t = 0$. (a) If $\sigma$ is negative, we have the case of a decaying exponential window. (b) If $\sigma$ is positive, we have the case of a growing exponential window. (c) If $\sigma$ is zero, we have the case of a constant window.
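The windows of Figure 2.22 can be reproduced with a few MATLAB lines. The sketch below is our illustration of the decaying case; the parameter values are our own choices:

j=sqrt(-1);
sigma=-0.5; w=10; phi=0;                  % sigma < 0: decaying window
t=0:0.01:5;
x=real(exp((sigma+j*w)*t+j*phi));         % e^{sigma t} cos(w t + phi), as in (2.18)
plot(t,x,t,exp(sigma*t),t,-exp(sigma*t))  % the signal inside its envelope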

2.6.3 Complex Exponential Demonstration

Figure 2.23: Interact (when online) with a Mathematica CDF demonstrating the Continuous Time Complex Exponential. To download, right-click and save target as .cdf.

2.6.4 Continuous Time Complex Exponential Summary

Continuous time complex exponentials are signals of great importance to the study of signals and systems. They can be related to sinusoids through Euler's formula, which identifies the real and imaginary parts of purely imaginary complex exponentials. Euler's formula reveals that, in general, the real and imaginary parts of complex exponentials are sinusoids multiplied by real exponentials. Thus, attenuated phasor notation is often useful in studying these signals.

Chapter 3
Introduction to Systems

3.1 Introduction to Systems

Signals are manipulated by systems. Mathematically, we represent what a system does by the notation $y(t) = S(x(t))$, with $x$ representing the input signal and $y$ the output signal.

Definition of a system

Figure 3.1: The system depicted has input $x(t)$ and output $y(t)$. Mathematically, systems operate on function(s) to produce other function(s). In many ways, systems are like functions, rules that yield a value for the dependent variable (our output signal) for each value of its independent variable (its input signal). The notation $y(t) = S(x(t))$ corresponds to this block diagram. We term $S(\cdot)$ the input-output relation for the system.

This notation mimics the mathematical symbology of a function: a system's input is analogous to an independent variable and its output the dependent variable. For the mathematically inclined, a system is a functional: a function of a function (signals are functions).

Simple systems can be connected together (one system's output becomes another's input) to accomplish some overall design. Interconnection topologies can be quite complicated, but usually consist of weaves of three basic interconnection forms.


3.1.1 Cascade Interconnection

Figure 3.2: The most rudimentary ways of interconnecting systems are shown in the figures in this section. This is the cascade configuration.

The simplest form is when one system's output is connected only to another's input. Mathematically, $w(t) = S_1(x(t))$ and $y(t) = S_2(w(t))$, with the information contained in $x(t)$ processed by the first, then the second system. In some cases the ordering of the systems matters, in others it does not. For example, in the fundamental model of communication ("Structure of Communication Systems", Figure 1) the ordering most certainly matters.

3.1.2 Parallel Interconnection

Figure 3.3: The parallel configuration.

A signal $x(t)$ is routed to two (or more) systems, with this signal appearing as the input to all systems simultaneously and with equal strength. Block diagrams have the convention that signals going to more than one system are not split into pieces along the way. Two or more systems operate on $x(t)$ and their outputs are added together to create the output $y(t)$. Thus, $y(t) = S_1(x(t)) + S_2(x(t))$, and the information in $x(t)$ is processed separately by both systems.


3.1.3 Feedback Interconnection

Figure 3.4: The feedback configuration.

The subtlest interconnection configuration has a system's output also contributing to its input. Engineers would say the output is "fed back" to the input through system 2, hence the terminology. The mathematical statement of the feedback interconnection (Figure 3.4) is that the feed-forward system produces the output $y(t) = S_1(e(t))$. The input $e(t)$ equals the input signal minus the output of a second system applied to $y(t)$: $e(t) = x(t) - S_2(y(t))$. Feedback systems are omnipresent in control problems, with the error signal used to adjust the output to achieve some condition defined by the input (controlling) signal. For example, in a car's cruise control system, $x(t)$ is a constant representing what speed you want, and $y(t)$ is the car's speed as measured by a speedometer. In this application, system 2 is the identity system (output equals input).
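With systems represented as MATLAB function handles, the first two interconnections are one-liners, and the feedback equation can be solved by hand for simple systems. The sketch below is our own illustration with toy memoryless systems S1 and S2:

S1=@(x) 2*x;                  % a toy system: gain of 2
S2=@(x) x+1;                  % another toy system
x=3;                          % a sample input value
y_cascade=S2(S1(x))           % cascade: S2 after S1, gives 7
y_parallel=S1(x)+S2(x)        % parallel: outputs added, gives 10
y_feedback=(2*x-2)/3          % solves y = S1(x - S2(y)) for these toys, gives 4/3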

3.2 System Classifications and Properties

3.2.1 Introduction

In this module some of the basic classifications of systems will be briefly introduced and the most important properties of these systems are explained. As can be seen, the properties of a system provide an easy way to distinguish one system from another. Understanding these basic differences between systems, and their properties, will be a fundamental concept used in all signal and system courses. Once a set of systems can be identified as sharing particular properties, one no longer has to reprove a certain characteristic of a system each time, but it can simply be known due to the system classification.

3.2.2 Classification of Systems

3.2.2.1 Continuous vs. Discrete

One of the most important distinctions to understand is the difference between discrete time and continuous time systems. A system in which the input signal and output signal both have continuous domains is said to be a continuous system. One in which the input signal and output signal both have discrete domains is said to be a discrete system. Of course, it is possible to conceive of systems that belong to neither category, such as systems in which sampling of a continuous time signal or reconstruction from a discrete time signal take place.



3.2.2.2 Linear vs. Nonlinear

A linear system is any system that obeys the properties of scaling (first order homogeneity) and superposition (additivity), further described below. A nonlinear system is any system that lacks at least one of these properties.

To show that a system H obeys the scaling property is to show that

H(kf(t)) = kH(f(t))   (3.1)

Figure 3.5: A block diagram demonstrating the scaling property of linearity

To demonstrate that a system H obeys the superposition property of linearity is to show that

H(f1(t) + f2(t)) = H(f1(t)) + H(f2(t))   (3.2)

Figure 3.6: A block diagram demonstrating the superposition property of linearity

It is possible to check a system for linearity in a single (though larger) step. To do this, simply combine the two conditions above into

H(k1 f1(t) + k2 f2(t)) = k1 H(f1(t)) + k2 H(f2(t))   (3.3)
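As a quick sanity check, condition (3.3) can be tested numerically on sampled signals. A minimal sketch follows; the test systems are hypothetical examples, and note that a failure is a definitive counterexample while a pass is only evidence of linearity, not a proof.

    import numpy as np
    t = np.linspace(-1.0, 1.0, 201)
    f1, f2 = np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)
    k1, k2 = 2.0, -3.0
    def satisfies_linearity(H):
        # Tests condition (3.3) on one pair of signals and scalars.
        lhs = H(k1 * f1 + k2 * f2)
        rhs = k1 * H(f1) + k2 * H(f2)
        return np.allclose(lhs, rhs)
    print(satisfies_linearity(lambda f: t * f))    # True:  H(f)(t) = t f(t)
    print(satisfies_linearity(lambda f: f ** 2))   # False: H(f)(t) = f(t)^2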


3.2.2.3 Time Invariant vs. Time Varying

A system is said to be time invariant if it commutes with the parameter shift operator defined by S_T(f(t)) = f(t − T) for all T, which is to say

H S_T = S_T H   (3.4)

for all real T. Intuitively, this means that for any input function that produces some output function, any time shift of that input function will produce an output function identical in every way except that it is shifted by the same amount. Any system that does not have this property is said to be time varying.

Figure 3.7: This block diagram shows the condition for time invariance. The output is the same whether the delay is put on the input or the output.
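The commutation condition (3.4) can likewise be spot-checked numerically. A minimal sketch, assuming a periodic sampled signal so that a circular shift (np.roll) faithfully plays the role of S_T; the test systems are again illustrative examples.

    import numpy as np
    n = 256
    t = np.arange(n) / n
    f = np.sin(2 * np.pi * t)                # exactly one period, so roll == shift
    shift = 16                                # S_T with T = 16 samples
    def commutes_with_shift(H):
        # Checks H(S_T f) == S_T(H f) on this one signal and shift.
        return np.allclose(H(np.roll(f, shift)), np.roll(H(f), shift))
    print(commutes_with_shift(lambda g: 2 * g))   # True:  H(f) = 2f
    print(commutes_with_shift(lambda g: t * g))   # False: H(f)(t) = t f(t)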

3.2.2.4 Causal vs. Noncausal

A causal system is one in which the output depends only on current or past inputs, but not future inputs. Similarly, an anticausal system is one in which the output depends only on current or future inputs, but not past inputs. Finally, a noncausal system is one in which the output depends on both past and future inputs. All "real-time" systems must be causal, since they cannot have future inputs available to them.

One may think the idea of future inputs does not make much physical sense; however, we have only been dealing with time as our independent variable so far, which is not always the case. Imagine instead that we wanted to do image processing. Then the independent variable might represent pixel positions to the left and right (the "future") of the current position on the image, and we would not necessarily have a causal system.


Figure 3.8: (a) For a typical system to be causal, (b) the output at time t0, y(t0), can only depend on the portion of the input signal before t0.

3.2.2.5 Stable vs. Unstable

There are several definitions of stability, but the one that will be used most frequently in this course is bounded input, bounded output (BIBO) stability. In this context, a stable system is one in which the output is bounded whenever the input is bounded. Similarly, an unstable system is one in which at least one bounded input produces an unbounded output.

In order to understand this concept, we must first look more closely at exactly what we mean by bounded. A bounded signal is any signal for which there exists a value A such that the absolute value of the signal never exceeds A. Since this value is otherwise arbitrary, what we mean is that at no point can the signal tend to infinity, including in its end behavior.


Figure 3.9: A bounded signal is a signal for which there exists a constant A such that |f(t)| < A.

Representing this mathematically, a stable system must have the following property, where x(t) is the input and y(t) is the output. The output must satisfy the condition

|y(t)| ≤ M_y < ∞   (3.5)

whenever we have an input to the system that can be bounded as

|x(t)| ≤ M_x < ∞   (3.6)

where M_x and M_y are finite positive numbers and these relationships hold for all t. Otherwise, the system is unstable.

3.3 Linear Time Invariant Systems

    3.3.2 Linear Time Invariant Systems

    3.3.2.1 Linear Systems

    If a system is linear, this means that when an input to a given system is scaled by a value, the output of the

    system is scaled by the same amount.

Linear Scaling

Figure 3.10

In Figure 3.10(a) above, an input x to the linear system L gives the output y. If x is scaled by a value α and passed through this same system, as in Figure 3.10(b), the output will also be scaled by α.

A linear system also obeys the principle of superposition. This means that if two inputs are added together and passed through a linear system, the output will be the sum of the individual inputs' outputs.

Figure 3.11


Superposition Principle

Figure 3.12: If Figure 3.11 is true, then the principle of superposition says that Figure 3.12 (Superposition Principle) is true as well. This holds for linear systems.

That is, if Figure 3.11 is true, then Figure 3.12 (Superposition Principle) is also true for a linear system.

The scaling property mentioned above still holds in conjunction with the superposition principle. Therefore, if the inputs x and y are scaled by factors α and β, respectively, then the sum of these scaled inputs will give the sum of the individual scaled outputs:

Figure 3.13

Superposition Principle with Linear Scaling

Figure 3.14: Given Figure 3.13 for a linear system, Figure 3.14 (Superposition Principle with Linear Scaling) holds as well.


Example 3.1
Consider the system H1 in which

H1(f(t)) = t f(t)   (3.7)

for all signals f. Given any two signals f, g and scalars a, b,

H1(af(t) + bg(t)) = t(af(t) + bg(t)) = a t f(t) + b t g(t) = a H1(f(t)) + b H1(g(t))   (3.8)

for all real t. Thus, H1 is a linear system.

Example 3.2
Consider the system H2 in which

H2(f(t)) = (f(t))^2   (3.9)

for all signals f. Because

H2(2t) = 4t^2 ≠ 2t^2 = 2 H2(t)   (3.10)

for nonzero t, H2 is not a linear system.

3.3.2.2 Time Invariant Systems

A time-invariant system has the property that a certain input will always give the same output (up to timing), without regard to when the input was applied to the system.

Time-Invariant Systems

Figure 3.15: Figure 3.15(a) shows an input at time t while Figure 3.15(b) shows the same input t0 seconds later. In a time-invariant system both outputs would be identical except that the one in Figure 3.15(b) would be delayed by t0.

In this figure, x(t) and x(t − t0) are passed through the system TI. Because the system TI is time-invariant, the inputs x(t) and x(t − t0) produce the same output. The only difference is that the output due to x(t − t0) is shifted by a time t0.

Whether a system is time-invariant or time-varying can be seen in the differential equation (or difference equation) describing it. Time-invariant systems are modeled with constant coefficient equations. A constant coefficient differential (or difference) equation means that the parameters of the system are not changing over time, and an input now will give the same result as the same input later.


Example 3.3
Consider the system H1 in which

H1(f(t)) = t f(t)   (3.11)

for all signals f. Because

S_T(H1(f(t))) = S_T(t f(t)) = (t − T) f(t − T) ≠ t f(t − T) = H1(f(t − T)) = H1(S_T(f(t)))   (3.12)

for nonzero T, H1 is not a time invariant system.

Example 3.4
Consider the system H2 in which

H2(f(t)) = (f(t))^2   (3.13)

for all signals f. For all real T and signals f,

S_T(H2(f(t))) = S_T((f(t))^2) = (f(t − T))^2 = H2(f(t − T)) = H2(S_T(f(t)))   (3.14)

for all real t. Thus, H2 is a time invariant system.

3.3.2.3 Linear Time Invariant Systems

Certain systems are both linear and time-invariant, and are thus referred to as LTI systems.

Linear Time-Invariant Systems

Figure 3.16: This is a combination of the two cases above. Since the input to Figure 3.16(b) is a scaled, time-shifted version of the input in Figure 3.16(a), so is the output.

As LTI systems are a subset of linear systems, they obey the principle of superposition. In the figure below, we see the effect of applying time-invariance to the superposition definition in the linear systems section above.


Figure 3.17

Superposition in Linear Time-Invariant Systems

Figure 3.18: The principle of superposition applied to LTI systems

3.3.2.3.1 LTI Systems in Series

If two or more LTI systems are in series with each other, their order can be interchanged without affecting the overall output of the system. Systems in series are also called cascaded systems. A numerical illustration follows Figure 3.19.


Cascaded LTI Systems

Figure 3.19: The order of cascaded LTI systems can be interchanged without changing the overall effect.
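The interchangeability of cascaded LTI systems can be demonstrated numerically by convolving with two impulse responses in either order; the filters and test input below are invented examples, and this is an illustration rather than a proof.

    import numpy as np
    rng = np.random.default_rng(0)
    x = rng.standard_normal(50)          # arbitrary test input
    h1 = np.array([1.0, 0.5, 0.25])      # hypothetical impulse response of system 1
    h2 = np.array([0.3, -0.2, 0.1])      # hypothetical impulse response of system 2
    y12 = np.convolve(np.convolve(x, h1), h2)   # x -> system 1 -> system 2
    y21 = np.convolve(np.convolve(x, h2), h1)   # x -> system 2 -> system 1
    print(np.allclose(y12, y21))         # True: the cascade order is interchangeable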

3.3.2.3.2 LTI Systems in Parallel

If two or more LTI systems are in parallel with one another, an equivalent system is one that is defined as the sum of these individual systems.

Parallel LTI Systems

Figure 3.20: Parallel systems can be condensed into the sum of systems.


Example 3.5
Consider the system H3 in which

H3(f(t)) = 2 f(t)   (3.15)

for all signals f. Given any two signals f, g and scalars a, b,

H3(af(t) + bg(t)) = 2(af(t) + bg(t)) = a(2f(t)) + b(2g(t)) = a H3(f(t)) + b H3(g(t))   (3.16)

for all real t. Thus, H3 is a linear system. For all real T and signals f,

S_T(H3(f(t))) = S_T(2f(t)) = 2f(t − T) = H3(f(t − T)) = H3(S_T(f(t)))   (3.17)

for all real t. Thus, H3 is a time invariant system. Therefore, H3 is a linear time invariant system.

Example 3.6
As has been previously shown, each of the following systems is either not linear or not time invariant.

H1(f(t)) = t f(t)   (3.18)

H2(f(t)) = (f(t))^2   (3.19)

Thus, neither is a linear time invariant system.

3.3.3 Linear Time Invariant Demonstration

Figure 3.21: Interact (when online) with the Mathematica CDF above demonstrating Linear Time Invariant systems. To download, right click and save file as .cdf.

3.3.4 LTI Systems Summary

Two very important and useful properties of systems have just been described in detail. The first of these, linearity, tells us that a sum of input signals produces an output signal that is the sum of the original output signals, and that a scaled input signal produces an output signal scaled from the original output signal. The second of these, time invariance, ensures that time shifts commute with application of the system. In other words, the output signal for a time shifted input is the same as the output signal for the original input signal, except for an identical shift in time. Systems that demonstrate both linearity and time invariance, which are given the acronym LTI systems, are particularly simple to study as these properties allow us to leverage some of the most powerful tools in signal processing.


Chapter 4: Time Domain Analysis of Continuous Time Systems

4.1 Continuous Time Systems

4.1.1 Introduction

As you already know, a continuous time system operates on a continuous time signal input and produces a continuous time signal output. There are numerous examples of useful continuous time systems in signal processing, as they essentially describe the world around us. The class of continuous time systems that are both linear and time invariant, known as continuous time LTI systems, is of particular interest, as the properties of linearity and time invariance together allow the use of some of the most important and powerful tools in signal processing.

4.1.2 Continuous Time Systems

4.1.2.1 Linearity and Time Invariance

A system H is said to be linear if it satisfies two important conditions. The first, additivity, states for every pair of signals x, y that H(x + y) = H(x) + H(y). The second, homogeneity of degree one, states for every signal x and scalar a that H(ax) = aH(x). It is clear that these conditions can be combined into a single condition for linearity. Thus, a system is said to be linear if for all signals x, y and scalars a, b we have that

H(ax + by) = aH(x) + bH(y).   (4.1)

Linearity is a particularly important property of systems as it allows us to leverage the powerful tools of linear algebra, such as bases, eigenvectors, and eigenvalues, in their study.

A system H is said to be time invariant if a time shift of an input produces the corresponding shifted output. In other, more precise words, the system H commutes with the time shift operator S_T for every T ∈ R. That is,

S_T H = H S_T.   (4.2)

Time invariance is desirable because it eases computation while mirroring our intuition that, all else being equal, physical systems should react the same to identical inputs at different times.

When a system exhibits both of these important properties it allows for a more straightforward analysis than would otherwise be possible. As will be explained and proven in subsequent modules, computation of the system output for a given input becomes a simple matter of convolving the input with the system's impulse response signal. Also proven later, the fact that complex exponentials are eigenvectors of linear time invariant systems will enable the use of frequency domain tools, such as the various Fourier transforms and associated transfer functions, to describe the behavior of linear time invariant systems.

Example 4.1
Consider the system H in which

H(f(t)) = 2 f(t)   (4.3)

for all signals f. Given any two signals f, g and scalars a, b,

H(af(t) + bg(t)) = 2(af(t) + bg(t)) = a(2f(t)) + b(2g(t)) = a H(f(t)) + b H(g(t))   (4.4)

for all real t. Thus, H is a linear system. For all real T and signals f,

S_T(H(f(t))) = S_T(2f(t)) = 2f(t − T) = H(f(t − T)) = H(S_T(f(t)))   (4.5)

for all real t. Thus, H is a time invariant system. Therefore, H is a linear time invariant system.

    4.1.3 Continuous Time Systems Summary

Many useful continuous time systems will be encountered in a study of signals and systems. This course is most interested in those that demonstrate both the linearity property and the time invariance property, which together enable the use of some of the most powerful tools of signal processing. It is often useful to describe such systems in terms of rates of change through linear constant coefficient ordinary differential equations.

4.2 Continuous Time Impulse Response

    4.2.1 Introduction

    The output of an LTI system is completely determined by the input and the system's response to a unit

    impulse.

    System Output

    Figure 4.1: We can determine the system's output, y(t), if we know the system's impulse response,

    h(t), and the input, f(t).

    The output for a unit impulse input is called the impulse response.


    Figure 4.2

    4.2.1.1 Example Approximate Impulses

    1. Hammer blow to a structure

    2. Hand clap or gun blast in a room

    3. Air gun blast underwater

    4.2.2 LTI Systems and Impulse Responses

    4.2.2.1 Finding System Outputs

By the sifting property of impulses, any signal can be decomposed in terms of an integral of shifted, scaled impulses.

f(t) = ∫_{−∞}^{∞} f(τ) δ(t − τ) dτ   (4.6)

The impulse δ(t − τ) peaks up where t = τ.


    Figure 4.3

Since we know the response of the system to an impulse, and any signal can be decomposed into impulses, all we need to do to find the response of the system to any signal is to decompose the signal into impulses, calculate the system's output for every impulse, and add the outputs back together. This is the process known as Convolution. Since we are in Continuous Time, this is the Continuous Time Convolution Integral.

4.2.2.2 Finding Impulse Responses

Theory:

a. Solve the system's differential equation for y(t) with f(t) = δ(t)
b. Use the Laplace transform

Practice:

a. Apply an impulse-like input signal to the system and measure the output
b. Use Fourier methods

We will assume that h(t) is given for now. The goal now is to compute the output y(t) given the impulse response h(t) and the input f(t). A small computational sketch of the theory route is given below.
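For instance, the impulse response of the series RC circuit used in Example 4.2 below, whose transfer function is H(s) = 1/(RCs + 1), can be computed numerically. This is a minimal sketch assuming SciPy is available and that RC = 1 is an arbitrary illustrative value.

    import numpy as np
    from scipy import signal
    RC = 1.0                                   # arbitrary illustrative time constant
    sys = signal.lti([1.0], [RC, 1.0])         # H(s) = 1 / (RC s + 1)
    t, h = signal.impulse(sys)                 # numerical impulse response
    h_exact = (1.0 / RC) * np.exp(-t / RC)     # h(t) = (1/RC) e^{-t/RC} u(t), t >= 0
    print(np.allclose(h, h_exact, atol=1e-6))  # the two should agree closely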


    Figure 4.4

    4.2.3 Impulse Response Summary

    When a system is "shocked" by a delta function, it produces an output known as its impulse response. For

    an LTI system, the impulse response completely determines the output of the system given any arbitrary

    input. The output can be found using continuous time convolution.

4.3 Continuous-Time Convolution

4.3.1 Introduction

Convolution, one of the most important concepts in electrical engineering, can be used to determine the output a system produces for a given input signal. It can be shown that a linear time invariant system is completely characterized by its impulse response. The sifting property of the continuous time impulse function tells us that the input signal to a system can be represented as an integral of scaled and shifted impulses and, therefore, as the limit of a sum of scaled and shifted approximate unit impulses. Thus, by linearity, it would seem reasonable to compute the output signal as the limit of a sum of scaled and shifted unit impulse responses and, therefore, as the integral of a scaled and shifted impulse response. That is exactly what the operation of convolution accomplishes. Hence, convolution can be used to determine a linear time invariant system's output from knowledge of the input and the impulse response.

4.3.2 Convolution and Circular Convolution

4.3.2.1 Convolution

4.3.2.1.1 Operation Definition

Continuous time convolution is an operation on two continuous time signals defined by the integral

(f ∗ g)(t) = ∫_{−∞}^{∞} f(τ) g(t − τ) dτ   (4.7)

for all signals f, g defined on R. It is important to note that the operation of convolution is commutative, meaning that

f ∗ g = g ∗ f   (4.8)

for all signals f, g defined on R. Thus, the convolution operation could have been just as easily stated using the equivalent definition

(f ∗ g)(t) = ∫_{−∞}^{∞} f(t − τ) g(τ) dτ   (4.9)

for all signals f, g defined on R. Convolution has several other important properties not listed here but explained and derived in a later module.
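Numerically, the integral (4.7) can be approximated on a uniform grid by a Riemann sum, which is a discrete convolution scaled by the step size. A minimal sketch, assuming simple exponential and box signals chosen only for illustration:

    import numpy as np
    dt = 0.001
    t = np.arange(0.0, 5.0, dt)
    f = np.exp(-t)                        # example signal f(t) = e^{-t} u(t)
    g = (t < 1.0).astype(float)           # example signal g(t): box of width 1
    y = np.convolve(f, g)[: len(t)] * dt  # Riemann-sum approximation of (f*g)(t)
    # Closed form on 0 <= t < 1: (f*g)(t) = 1 - e^{-t}
    print(np.max(np.abs(y[t < 1.0] - (1 - np.exp(-t[t < 1.0])))))  # small error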

4.3.2.1.2 Definition Motivation

The above operation definition has been chosen to be particularly useful in the study of linear time invariant systems. In order to see this, consider a linear time invariant system H with unit impulse response h. Given a system input signal x we would like to compute the system output signal H(x). First, we note that the input can be expressed as the convolution

x(t) = ∫_{−∞}^{∞} x(τ) δ(t − τ) dτ   (4.10)

by the sifting property of the unit impulse function. Writing this integral as the limit of a summation,

x(t) = lim_{Δ→0} Σ_n x(nΔ) δ_Δ(t − nΔ) Δ   (4.11)

where

δ_Δ(t) = { 1/Δ   0 ≤ t < Δ
           0     otherwise }   (4.12)

approximates the properties of δ(t). By linearity,

Hx(t) = lim_{Δ→0} Σ_n x(nΔ) H(δ_Δ(t − nΔ)) Δ   (4.13)

which, evaluated as an integral, gives

Hx(t) = ∫_{−∞}^{∞} x(τ) H(δ(t − τ)) dτ.   (4.14)

Since H(δ(t − τ)) is the shifted unit impulse response h(t − τ), this gives the result

Hx(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ = (x ∗ h)(t).   (4.15)

Hence, convolution has been defined such that the output of a linear time invariant system is given by the convolution of the system input with the system unit impulse response.

4.3.2.1.3 Graphical Intuition

It is often helpful to be able to visualize the computation of a convolution in terms of graphical processes. Consider the convolution of two functions f, g given by

(f ∗ g)(t) = ∫_{−∞}^{∞} f(τ) g(t − τ) dτ = ∫_{−∞}^{∞} f(t − τ) g(τ) dτ.   (4.16)

The first step in graphically understanding the operation of convolution is to plot each of the functions. Next, one of the functions must be selected, and its plot reflected across the τ = 0 axis. For each real t, that same function must be shifted by t (to the right for positive t). The product of the two resulting plots is then constructed. Finally, the area under the resulting curve is computed.

Example 4.2
Recall that the impulse response for the capacitor voltage in a series RC circuit is given by

h(t) = (1/(RC)) e^{−t/(RC)} u(t),   (4.17)

and consider the response to the input voltage

x(t) = u(t).   (4.18)

We know that the output for this input voltage is given by the convolution of the impulse response with the input signal

y(t) = x(t) ∗ h(t).   (4.19)

We would like to compute this operation by beginning in a way that minimizes the algebraic complexity of the expression. Thus, since x(t) = u(t) is the simpler of the two signals, it is desirable to select it for time reversal and shifting. Thus, we would like to compute

y(t) = ∫_{−∞}^{∞} (1/(RC)) e^{−τ/(RC)} u(τ) u(t − τ) dτ.   (4.20)

The step functions can be used to further simplify this integral by narrowing the region of integration to the nonzero region of the integrand. Therefore,

y(t) = ∫_{0}^{max{0,t}} (1/(RC)) e^{−τ/(RC)} dτ.   (4.21)

Hence, the output is

y(t) = { 0                  t ≤ 0
         1 − e^{−t/(RC)}    t > 0 }   (4.22)

which can also be written as

y(t) = (1 − e^{−t/(RC)}) u(t).   (4.23)
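The closed form (4.23) can be checked against the same Riemann-sum approach used earlier. A minimal sketch, again assuming the illustrative value RC = 1:

    import numpy as np
    dt, RC = 0.001, 1.0
    t = np.arange(0.0, 8.0, dt)
    h = (1.0 / RC) * np.exp(-t / RC)       # impulse response (4.17), sampled for t >= 0
    x = np.ones_like(t)                     # x(t) = u(t) on the sampled grid
    y = np.convolve(x, h)[: len(t)] * dt    # numerical convolution (4.19)
    y_exact = 1.0 - np.exp(-t / RC)         # closed-form output (4.23)
    print(np.max(np.abs(y - y_exact)))      # small discretization error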

4.3.3 Online Resources

The following pages have interactive Java applets that demonstrate several aspects of continuous-time convolution.

Joy of Convolution (Johns Hopkins University): http://www.jhu.edu/signals/convolve/index.html

Step-by-Step Convolution (Rice University): http://www.ece.rice.edu/dsp/courses/elec301/demos/applets/Convo1/


4.3.4 Convolution Demonstration

Figure 4.5: Interact (when online) with a Mathematica CDF demonstrating Convolution. To download, right-click and save target as .cdf.

4.3.5 Convolution Summary

Convolution, one of the most important concepts in electrical engineering, can be used to determine the output signal of a linear time invariant system for a given input signal with knowledge of the system's unit impulse response. The operation of continuous time convolution is defined such that it performs this function for infinite length continuous time signals and systems. The operation of continuous time circular convolution is defined such that it performs this function for finite length and periodic continuous time signals. In each case, the output of the system is the convolution or circular convolution of the input signal with the unit impulse response.

4.4 Properties of Continuous Time Convolution

4.4.1 Introduction

We have already shown the important role that continuous time convolution plays in signal processing. This section provides discussion and proof of some of the important properties of continuous time convolution. Analogous properties can be shown for continuous time circular convolution with trivial modification of the proofs provided, except where explicitly noted otherwise.

4.4.2 Continuous Time Convolution Properties

4.4.2.1 Associativity

The operation of convolution is associative. That is, for all continuous time signals f1, f2, f3 the following relationship holds.

f1 ∗ (f2 ∗ f3) = (f1 ∗ f2) ∗ f3   (4.24)

In order to show this, note that

(f1 ∗ (f2 ∗ f3))(t) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} f1(τ1) f2(τ2) f3((t − τ1) − τ2) dτ2 dτ1
                    = ∫_{−∞}^{∞} ∫_{−∞}^{∞} f1(τ1) f2((τ1 + τ2) − τ1) f3(t − (τ1 + τ2)) dτ2 dτ1
                    = ∫_{−∞}^{∞} ∫_{−∞}^{∞} f1(τ1) f2(τ3 − τ1) f3(t − τ3) dτ1 dτ3
                    = ((f1 ∗ f2) ∗ f3)(t)   (4.25)

proving the relationship as desired through the substitution τ3 = τ1 + τ2.

4.4.2.2 Commutativity

The operation of convolution is commutative. That is, for all continuous time signals f1, f2 the following relationship holds.

f1 ∗ f2 = f2 ∗ f1   (4.26)

In order to show this, note that

(f1 ∗ f2)(t) = ∫_{−∞}^{∞} f1(τ1) f2(t − τ1) dτ1
             = ∫_{−∞}^{∞} f1(t − τ2) f2(τ2) dτ2
             = (f2 ∗ f1)(t)   (4.27)

proving the relationship as desired through the substitution τ2 = t − τ1.

4.4.2.3 Distributivity

The operation of convolution is distributive over the operation of addition. That is, for all continuous time signals f1, f2, f3 the following relationship holds.

f1 ∗ (f2 + f3) = f1 ∗ f2 + f1 ∗ f3   (4.28)

In order to show this, note that

(f1 ∗ (f2 + f3))(t) = ∫_{−∞}^{∞} f1(τ) (f2(t − τ) + f3(t − τ)) dτ
                    = ∫_{−∞}^{∞} f1(τ) f2(t − τ) dτ + ∫_{−∞}^{∞} f1(τ) f3(t − τ) dτ
                    = (f1 ∗ f2 + f1 ∗ f3)(t)   (4.29)

proving the relationship as desired.
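These algebraic properties are easy to spot-check numerically on sampled signals, since discrete convolution inherits them. A minimal sketch using arbitrary test sequences (an illustration, not a proof):

    import numpy as np
    rng = np.random.default_rng(1)
    f1, f2, f3 = (rng.standard_normal(40) for _ in range(3))
    conv = np.convolve
    print(np.allclose(conv(f1, conv(f2, f3)), conv(conv(f1, f2), f3)))  # associativity
    print(np.allclose(conv(f1, f2), conv(f2, f1)))                      # commutativity
    print(np.allclose(conv(f1, f2 + f3), conv(f1, f2) + conv(f1, f3)))  # distributivity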


4.4.2.4 Multilinearity

The operation of convolution is linear in each of the two function variables. Additivity in each variable results from distributivity of convolution over addition. Homogeneity of order one in each variable results from the fact that for all continuous time signals f1, f2 and scalars a the following relationship holds.

a(f1 ∗ f2) = (af1) ∗ f2 = f1 ∗ (af2)   (4.30)

In order to show this, note that

(a(f1 ∗ f2))(t) = a ∫_{−∞}^{∞} f1(τ) f2(t − τ) dτ
                = ∫_{−∞}^{∞} (a f1(τ)) f2(t − τ) dτ
                = ((af1) ∗ f2)(t)
                = ∫_{−∞}^{∞} f1(τ) (a f2(t − τ)) dτ
                = (f1 ∗ (af2))(t)   (4.31)

proving the relationship as desired.

4.4.2.5 Conjugation

The operation of convolution has the following property for all continuous time signals f1, f2.

\overline{f1 ∗ f2} = \overline{f1} ∗ \overline{f2}   (4.32)

In order to show this, note that

(\overline{f1 ∗ f2})(t) = \overline{∫_{−∞}^{∞} f1(τ) f2(t − τ) dτ}
                        = ∫_{−∞}^{∞} \overline{f1(τ) f2(t − τ)} dτ
                        = ∫_{−∞}^{∞} \overline{f1}(τ) \overline{f2}(t − τ) dτ
                        = (\overline{f1} ∗ \overline{f2})(t)   (4.33)

proving the relationship as desired.

4.4.2.6 Time Shift

The operation of convolution has the following property for all continuous time signals f1, f2, where S_T is the time shift operator.

S_T(f1 ∗ f2) = (S_T f1) ∗ f2 = f1 ∗ (S_T f2)   (4.34)

In order to show this, note that

S_T(f1 ∗ f2)(t) = ∫_{−∞}^{∞} f2(τ) f1((t − T) − τ) dτ
                = ∫_{−∞}^{∞} f2(τ) (S_T f1)(t − τ) dτ
                = ((S_T f1) ∗ f2)(t)
                = ∫_{−∞}^{∞} f1(τ) f2((t − T) − τ) dτ
                = ∫_{−∞}^{∞} f1(τ) (S_T f2)(t − τ) dτ
                = (f1 ∗ (S_T f2))(t)   (4.35)

proving the relationship as desired.


4.4.2.7 Differentiation

The operation of convolution has the following property for all continuous time signals f1, f2.

d/dt (f1 ∗ f2)(t) = (df1/dt ∗ f2)(t) = (f1 ∗ df2/dt)(t)   (4.36)

In order to show this, note that

d/dt (f1 ∗ f2)(t) = ∫_{−∞}^{∞} f2(τ) d/dt f1(t − τ) dτ
                  = (df1/dt ∗ f2)(t)
                  = ∫_{−∞}^{∞} f1(τ) d/dt f2(t − τ) dτ
                  = (f1 ∗ df2/dt)(t)   (4.37)

proving the relationship as desired.

4.4.2.8 Impulse Convolution

The operation of convolution has the following property for all continuous time signals f, where δ is the Dirac delta function.

f ∗ δ = f   (4.38)

In order to show this, note that

(f ∗ δ)(t) = ∫_{−∞}^{∞} f(τ) δ(t − τ) dτ
           = f(t) ∫_{−∞}^{∞} δ(t − τ) dτ
           = f(t)   (4.39)

proving the relationship as desired.

4.4.2.9 Width

The operation of convolution has the following property for all continuous time signals f1, f2, where Duration(f) gives the duration of a signal f.

Duration(f1 ∗ f2) = Duration(f1) + Duration(f2)   (4.40)

In order to show this informally, note that (f1 ∗ f2)(t) is nonzero for all t for which there is a τ such that f1(τ) f2(t − τ) is nonzero. When viewing one function as reversed and sliding past the other, it is easy to see that such a τ exists for all t on an interval of length Duration(f1) + Duration(f2). Note that this is not always true of circular convolution of finite length and periodic signals, as there is then a maximum possible duration within a period. A numerical illustration follows.
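As a quick illustration of the width property, convolving two sampled box signals of durations 1 and 2 yields a signal supported on an interval of duration (approximately, up to the grid resolution) 3; a minimal sketch:

    import numpy as np
    dt = 0.001
    box1 = np.ones(int(1.0 / dt))             # box of duration 1
    box2 = np.ones(int(2.0 / dt))             # box of duration 2
    y = np.convolve(box1, box2) * dt           # trapezoid of duration 1 + 2 = 3
    print(np.count_nonzero(y) * dt)            # approximately 3.0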

4.4.3 Convolution Properties Summary

As can be seen, the operation of continuous time convolution has several important properties that have been listed and proven in this module. With slight modifications to the proofs, most of these also extend to continuous time circular convolution, and the cases in which exceptions occur have been noted above. These identities will be useful to keep in mind as the reader continues to study signals and systems.


4.5 Causality and Stability of Continuous-Time Linear Time-Invariant Systems

4.5.1 Introduction

We have previously defined the system properties of causality and bounded-input bounded-output (BIBO) stability. We have also determined that a linear time-invariant (LTI) system is completely determined by its impulse response h(t): the output y(t) for an arbitrary input x(t) is obtained via convolution as y(t) = x(t) ∗ h(t). It should not be surprising then that one can determine whether an LTI system is causal or BIBO stable simply by inspecting its impulse response h(t).

    4.5.2 Causality

    Recall that a system is causal if its output y (t0)