
Projects - University of California, San Diego
noiselab.ucsd.edu/ECE228_2018/slides/lecture8.pdf

Page 1

Projects
• 3-4 person groups
• Deliverables: Poster, Report & main code (plus proposal, midterm slide)
• Topics: your own, or choose from suggested topics / Kaggle
• Week 3: groups due to TA Nima. Rearrangement might be needed.
• May 2: proposal due. TAs and Peter can approve.
• Proposal: one page: title, a large paragraph, data, weblinks, references.
• Something physical and data oriented.
• May ~16: midterm slides. Likely presented in 4 subgroups (3 TAs + Peter).
• 5 pm, 6 June, Jacobs Hall lobby: final poster session. Snacks.
• Poster, Report & main code. Report due Saturday 16 June.

Page 2

Logistic regression (page 205)

When there are only two classes we can model the conditional probability of the positive class as

p(C_1 | x) = σ(w^T x + w_0),   where   σ(z) = 1 / (1 + exp(-z))

If we use the right error function, something nice happens: the gradient of the logistic and the gradient of the error function cancel each other:

E(w) = -ln p(t | w, x),   ∇E(w) = Σ_{n=1}^{N} (y_n - t_n) x_n
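A minimal NumPy sketch of this model (not part of the original slides; the names sigmoid, predict_positive_class, w, w0, and the example numbers are mine):

import numpy as np

def sigmoid(z):
    # logistic function: sigma(z) = 1 / (1 + exp(-z))
    return 1.0 / (1.0 + np.exp(-z))

def predict_positive_class(w, w0, x):
    # p(C1 | x) = sigma(w^T x + w0)
    return sigmoid(np.dot(w, x) + w0)

# example with made-up weights and a 2-D input
w, w0 = np.array([1.5, -0.7]), 0.2
print(predict_positive_class(w, w0, np.array([0.3, 1.0])))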

Page 3

The natural error function for the logistic

Fitting a logistic model by maximum likelihood requires minimizing the negative log probability of the correct answer, summed over the training set.

E = -ln p(t | y) = -Σ_{n=1}^{N} [ t_n ln y_n + (1 - t_n) ln(1 - y_n) ]

(the first term in the bracket is active if t_n = 1, the second if t_n = 0)

∂E/∂y_n = -t_n / y_n + (1 - t_n) / (1 - y_n) = (y_n - t_n) / ( y_n (1 - y_n) )

(error derivative on training case n)
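As an illustration (mine, not from the slides), the error and its per-case derivative can be checked against a finite difference; the values of y, t, and eps below are hypothetical:

import numpy as np

def cross_entropy(y, t):
    # E = -sum_n [ t_n ln y_n + (1 - t_n) ln(1 - y_n) ]
    return -np.sum(t * np.log(y) + (1 - t) * np.log(1 - y))

def dE_dy(y, t):
    # analytic derivative: (y_n - t_n) / (y_n (1 - y_n))
    return (y - t) / (y * (1 - y))

# finite-difference check on a single training case
y, t, eps = np.array([0.7]), np.array([1.0]), 1e-6
numeric = (cross_entropy(y + eps, t) - cross_entropy(y - eps, t)) / (2 * eps)
print(numeric, dE_dy(y, t)[0])   # the two values should agree closely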

Page 4

Using the chain rule to get the error derivatives

z_n = w^T x_n + w_0

∂z_n/∂w = x_n,   dy_n/dz_n = y_n (1 - y_n),   ∂E/∂y_n = (y_n - t_n) / ( y_n (1 - y_n) )

∂E/∂w = (∂E/∂y_n)(dy_n/dz_n)(∂z_n/∂w) = (y_n - t_n) x_n   (contribution of training case n)
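A sketch of the same chain rule in NumPy (my own illustration; the data X, t and the function names are assumptions). It shows that the y_n(1 - y_n) factors cancel, leaving (y_n - t_n) x_n:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_logistic(w, w0, X, t):
    y = sigmoid(X @ w + w0)               # y_n = sigma(z_n), z_n = w^T x_n + w0
    dE_dy = (y - t) / (y * (1 - y))       # error derivative from the previous page
    dy_dz = y * (1 - y)                   # derivative of the logistic
    dE_dz = dE_dy * dy_dz                 # cancels to y_n - t_n
    return X.T @ dE_dz, np.sum(dE_dz)     # dE/dw and dE/dw0, summed over cases

rng = np.random.default_rng(0)
X, t = rng.normal(size=(5, 3)), np.array([0.0, 1.0, 1.0, 0.0, 1.0])
g_w, g_w0 = grad_logistic(np.zeros(3), 0.0, X, t)
print(g_w, g_w0)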

Page 5

Softmax function

198 4. LINEAR MODELS FOR CLASSIFICATION

Note that in (4.57) we have simply rewritten the posterior probabilities in an equivalent form, and so the appearance of the logistic sigmoid may seem rather vacuous. However, it will have significance provided a(x) takes a simple functional form. We shall shortly consider situations in which a(x) is a linear function of x, in which case the posterior probability is governed by a generalized linear model.

For the case of K > 2 classes, we have

p(C_k|x) = p(x|C_k) p(C_k) / Σ_j p(x|C_j) p(C_j) = exp(a_k) / Σ_j exp(a_j)   (4.62)

which is known as the normalized exponential and can be regarded as a multiclass generalization of the logistic sigmoid. Here the quantities a_k are defined by

a_k = ln p(x|C_k) p(C_k).   (4.63)

The normalized exponential is also known as the softmax function, as it represents a smoothed version of the ‘max’ function because, if a_k ≫ a_j for all j ≠ k, then p(C_k|x) ≃ 1, and p(C_j|x) ≃ 0.

We now investigate the consequences of choosing specific forms for the class-conditional densities, looking first at continuous input variables x and then discussing briefly the case of discrete inputs.

4.2.1 Continuous inputs
Let us assume that the class-conditional densities are Gaussian and then explore the resulting form for the posterior probabilities. To start with, we shall assume that all classes share the same covariance matrix. Thus the density for class C_k is given by

p(x|C_k) = (1 / (2π)^{D/2}) (1 / |Σ|^{1/2}) exp{ -(1/2) (x - μ_k)^T Σ^{-1} (x - μ_k) }.   (4.64)

Consider first the case of two classes. From (4.57) and (4.58), we have

p(C_1|x) = σ(w^T x + w_0)   (4.65)

where we have defined

w = Σ^{-1} (μ_1 - μ_2)   (4.66)

w_0 = -(1/2) μ_1^T Σ^{-1} μ_1 + (1/2) μ_2^T Σ^{-1} μ_2 + ln( p(C_1) / p(C_2) ).   (4.67)

We see that the quadratic terms in x from the exponents of the Gaussian densities have cancelled (due to the assumption of common covariance matrices) leading to a linear function of x in the argument of the logistic sigmoid. This result is illustrated for the case of a two-dimensional input space x in Figure 4.10. The resulting

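A small NumPy sketch of equations (4.65)-(4.67) (my illustration; the means, covariance, and priors below are made up):

import numpy as np

def gaussian_posterior_params(mu1, mu2, Sigma, p1, p2):
    Sigma_inv = np.linalg.inv(Sigma)
    w = Sigma_inv @ (mu1 - mu2)                      # Eq. (4.66)
    w0 = (-0.5 * mu1 @ Sigma_inv @ mu1               # Eq. (4.67)
          + 0.5 * mu2 @ Sigma_inv @ mu2
          + np.log(p1 / p2))
    return w, w0

def posterior_class1(x, w, w0):
    # Eq. (4.65): p(C1 | x) = sigma(w^T x + w0)
    return 1.0 / (1.0 + np.exp(-(w @ x + w0)))

mu1, mu2 = np.array([1.0, 0.0]), np.array([-1.0, 0.0])
Sigma = np.array([[1.0, 0.3], [0.3, 1.0]])           # shared covariance
w, w0 = gaussian_posterior_params(mu1, mu2, Sigma, p1=0.5, p2=0.5)
print(posterior_class1(np.array([0.5, -0.2]), w, w0))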

Page 6

Cross-entropy or “softmax” function for multi-class classification

The output units use a non-local non-linearity:

y_i = e^{z_i} / Σ_j e^{z_j},   ∂y_i/∂z_i = y_i (1 - y_i)

The natural cost function is the negative log probability of the right answer:

E = -Σ_j t_j ln y_j

∂E/∂z_i = Σ_j (∂E/∂y_j)(∂y_j/∂z_i) = y_i - t_i

[Figure: three output units with logits z_1, z_2, z_3, outputs y_1, y_2, y_3, and a target value for each]
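A minimal sketch of the softmax output layer and the resulting gradient y_i - t_i (mine, not from the slides; softmax and cross_entropy_grad are assumed names):

import numpy as np

def softmax(z):
    # y_i = exp(z_i) / sum_j exp(z_j); shifting by max(z) is for numerical stability
    e = np.exp(z - np.max(z))
    return e / np.sum(e)

def cross_entropy_grad(z, t):
    # E = -sum_j t_j ln y_j   and   dE/dz_i = y_i - t_i
    y = softmax(z)
    return -np.sum(t * np.log(y)), y - t

z = np.array([2.0, 1.0, 0.1])
t = np.array([0.0, 1.0, 0.0])        # one-hot target value
E, dE_dz = cross_entropy_grad(z, t)
print(E, dE_dz)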

Page 7

A special case of softmax for two classes

y_1 = e^{z_1} / (e^{z_1} + e^{z_0}) = 1 / (1 + e^{-(z_1 - z_0)})

So the logistic is just a special case of softmax without redundant parameters: adding the same constant to both z_1 and z_0 has no effect. The over-parameterization of the softmax arises because the probabilities must add to 1.
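A quick numerical check of this identity (my own example values for z1 and z0):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z1, z0 = 1.3, -0.4
y1_softmax = np.exp(z1) / (np.exp(z1) + np.exp(z0))
y1_logistic = sigmoid(z1 - z0)
print(y1_softmax, y1_logistic)       # the two agree

c = 5.0                              # adding the same constant to both logits
print(np.exp(z1 + c) / (np.exp(z1 + c) + np.exp(z0 + c)))   # unchanged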

Page 8

Page 9

Lecture 8: Backpropagation

Page 10

Number of parameters
• y = Xw, with N measurements and M parameters
  – How large a w can we determine?
• y = f(X, w)
  – How large a w can we determine?
• Consider a neural network with one hidden layer, each layer having N = M = 100 nodes
  – How large is W?
  – How many observations are needed to estimate W? (A counting sketch follows this list.)
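A counting sketch for the last bullet (my own assumptions: fully connected layers with biases, and a 100-unit output layer following the 100-unit hidden layer):

def dense_layer_params(n_in, n_out, bias=True):
    # a fully connected layer has n_in * n_out weights plus n_out biases
    return n_in * n_out + (n_out if bias else 0)

per_layer = dense_layer_params(100, 100)
print(per_layer)        # 10100 parameters in the input-to-hidden weight matrix W
print(2 * per_layer)    # roughly 20200 if a 100-unit output layer follows

With on the order of 10^4 parameters per weight matrix, the number of observations needed to pin down W is correspondingly large.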

Page 11

Why we need backpropagation
• Networks without hidden units are very limited in the input-output mappings they can model.
  – More layers of linear units do not help: it is still linear. (A quick numerical check is sketched after this list.)
  – Fixed output non-linearities are not enough.
• We need multiple layers of adaptive non-linear hidden units, giving a universal approximator. But how do we train such nets?
  – We need an efficient way of adapting all the weights, not just the last layer. Learning the weights going into hidden units is equivalent to learning features.
  – Nobody is telling us directly what the hidden units should do.
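A quick numerical check of the "more layers of linear units do not help" point (my own toy example):

import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))               # first layer of linear units
W2 = rng.normal(size=(2, 4))               # second layer of linear units
x = rng.normal(size=3)

two_layers = W2 @ (W1 @ x)                 # composing the two linear layers
one_layer = (W2 @ W1) @ x                  # equals a single linear layer W2 @ W1
print(np.allclose(two_layers, one_layer))  # True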

Page 12

Learning by perturbing weights

Randomly perturb one weight. If it improves performance, save the change. (A toy version of this procedure is sketched at the end of this page.)

– Very inefficient. We need to do multiple forward passes on a representative set of training data to change one weight.

– Towards the end of learning, large weight perturbations will nearly always make things worse.

Randomly perturb all weights in parallel and correlate the performance gain with the weight changes.

Not any better. We need lots of trials to “see” the effect of changing a weight through the noise created by all the others.

Learning the hidden to output weights is easy. Learning the input to hidden weights is hard.

[Figure: network diagram with input units, hidden units, and output units]
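A toy version of the single-weight perturbation procedure, on a linear model with squared error (my own sketch; the data, step size, and number of trials are arbitrary):

import numpy as np

def perturbation_learning(X, t, steps=5000, delta=0.01, seed=0):
    # randomly perturb one weight at a time; keep the change only if the error drops
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    error = lambda w: np.mean((X @ w - t) ** 2)
    for _ in range(steps):
        i = rng.integers(len(w))                  # pick one weight
        trial = w.copy()
        trial[i] += rng.choice([-delta, delta])   # perturb it
        if error(trial) < error(w):               # a full pass over the data per trial
            w = trial
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
t = X @ np.array([1.0, -2.0, 0.5])
print(perturbation_learning(X, t))    # creeps toward the true weights, slowly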

Page 13

The idea behind backpropagation

We don't know what the hidden units should do, but we can compute how fast the error changes as we change a hidden activity.

– Instead of using desired activities to train the hidden units, use error derivatives w.r.t. hidden activities.

– Each hidden activity affects many output units and has many separate effects on the error.

– Error derivatives for all the hidden units can be computed efficiently.

– Once we have the error derivatives for the hidden activities, it's easy to get the error derivatives for the weights going into a hidden unit.

Page 14

A difference in notation
• For networks with multiple hidden layers, Bishop uses an explicit extra index to denote the layer.
• The lecture notes use a simpler notation in which the index denotes the layer implicitly:
  – y is the output of a unit in any layer
  – x is the summed input to a unit in any layer
  – the index indicates which layer a unit is in

[Figure: unit i feeding unit j, with summed input x_j and outputs y_i, y_j]

Page 15

Non-linear neurons with smooth derivatives

• For backpropagation, we need neurons that have well-behaved derivatives.
  – Typically they use the logistic function.
  – The output is a smooth function of the inputs and weights.

x_j = b_j + Σ_i y_i w_ij,   y_j = 1 / (1 + e^{-x_j})

∂x_j/∂w_ij = y_i,   ∂x_j/∂y_i = w_ij,   dy_j/dx_j = y_j (1 - y_j)

[Figure: the logistic function y_j plotted against x_j, rising from 0 to 1 and passing through 0.5 at x_j = 0]

Page 16

Backpropagation
• J nodes
• Observations t_n
• Predictions y_n
• Energy function E =
• ∂E/∂y_j =
• ∂E/∂x_j =
• ∂E/∂w_ij =
• ∂E/∂y_i = Σ_j …
• ∂E/∂x_i =

(The blanks are the quantities worked out on the following pages.)

[Figure: unit i feeding unit j, with summed input x_j and outputs y_i, y_j]

Page 17

NOT USED YET

Page 18

Sketch of backpropagation on a single training case

1. Convert the discrepancy between each output and its target value into an error derivative.

2. Compute error derivatives in each hidden layer from error derivatives in the layer above.

3. Use error derivatives w.r.t. activities to get error derivatives w.r.t. the weights.

E = (1/2) Σ_j (t_j - y_j)^2,   ∂E/∂y_j = -(t_j - y_j)

[Figure: the derivatives ∂E/∂y_j at the output layer are converted into derivatives ∂E/∂y_i for the hidden layer below]

Page 19

The derivatives

∂E/∂x_j = (dy_j/dx_j) ∂E/∂y_j = y_j (1 - y_j) ∂E/∂y_j

∂E/∂w_ij = (∂x_j/∂w_ij) ∂E/∂x_j = y_i ∂E/∂x_j

∂E/∂y_i = Σ_j (∂x_j/∂y_i) ∂E/∂x_j = Σ_j w_ij ∂E/∂x_j

[Figure: unit i feeding unit j, with summed input x_j and outputs y_i, y_j]
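As a closing illustration, here is a minimal NumPy implementation of these three formulas for one training case in a network with one hidden layer of logistic units (my own sketch; the layer sizes, the weight layout W[j, i] = w_ij, and the squared error from page 18 are assumptions):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def backprop_single_case(x_in, target, W1, b1, W2, b2):
    # forward pass: x_j = b_j + sum_i y_i w_ij,  y_j = logistic(x_j)
    x_hid = b1 + W1 @ x_in
    y_hid = sigmoid(x_hid)
    x_out = b2 + W2 @ y_hid
    y_out = sigmoid(x_out)

    # E = 1/2 sum_j (t_j - y_j)^2  =>  dE/dy_j = -(t_j - y_j)
    dE_dy_out = -(target - y_out)
    dE_dx_out = y_out * (1 - y_out) * dE_dy_out    # dE/dx_j = y_j (1 - y_j) dE/dy_j
    dE_dW2 = np.outer(dE_dx_out, y_hid)            # dE/dw_ij = y_i dE/dx_j
    dE_db2 = dE_dx_out

    dE_dy_hid = W2.T @ dE_dx_out                   # dE/dy_i = sum_j w_ij dE/dx_j
    dE_dx_hid = y_hid * (1 - y_hid) * dE_dy_hid
    dE_dW1 = np.outer(dE_dx_hid, x_in)
    dE_db1 = dE_dx_hid
    return dE_dW1, dE_db1, dE_dW2, dE_db2

# tiny example with made-up sizes and data
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)
grads = backprop_single_case(rng.normal(size=3), np.array([1.0, 0.0]), W1, b1, W2, b2)
print([g.shape for g in grads])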