Autoencoders
Mostafa Heidarpour

In the name of god

Mar 14, 2016

Transcript
Page 1: In the name of god

Autoencoders
Mostafa Heidarpour

Page 2

Autoencoders

• An auto-encoder is an artificial neural network used for learning efficient codings.

• The aim of an auto-encoder is to learn a compressed representation (encoding) for a set of data.

• This means it can be used for dimensionality reduction.

Page 3

Autoencoders

• Auto-encoders use three or more layers:

– An input layer. For example, in a face recognition task, the neurons in the input layer could map to pixels in the photograph.

– A number of considerably smaller hidden layers, which form the encoding.

– An output layer, where each neuron has the same meaning as in the input layer.

Page 4

Autoencoders

Page 5

Autoencoders

• Encoder: computes h = f(x), where h is the feature vector (the representation, or code) computed from the input x.

• Decoder: maps from feature space back into input space, producing a reconstruction r = g(h).

• Training attempts to incur the lowest possible reconstruction error. Good generalization means low reconstruction error on test examples, while having high reconstruction error for most other configurations of x.
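As a concrete illustration, the encoder/decoder pair can be sketched in numpy (a minimal sketch; the layer sizes, sigmoid nonlinearity, and squared-error measure are illustrative assumptions, not taken from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_code = 8, 3                          # hypothetical sizes: 8 inputs, 3-unit code

W_enc = rng.normal(0, 0.1, (n_code, n_in)); b_enc = np.zeros(n_code)
W_dec = rng.normal(0, 0.1, (n_in, n_code)); b_dec = np.zeros(n_in)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def encode(x):
    # h = f(x): the feature vector (code) computed from the input
    return sigmoid(W_enc @ x + b_enc)

def decode(h):
    # r = g(h): maps the code back into input space, producing a reconstruction
    return sigmoid(W_dec @ h + b_dec)

x = np.eye(n_in)[0]                          # one example input
r = decode(encode(x))
err = np.sum((x - r) ** 2)                   # squared reconstruction error
```

With untrained random weights the reconstruction error is of course large; training adjusts the weights to drive it down.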

Page 6

Autoencoders

Page 7

Autoencoders

Page 8

Autoencoders

• In summary, basic autoencoder training consists in finding a value of the parameter vector θ that minimizes the reconstruction error:

J(θ) = Σ_i L(x^(i), g(f(x^(i))))

• This minimization is usually carried out by stochastic gradient descent.
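The training procedure can be sketched as a stochastic-gradient-descent loop on the classic 8-3-8 identity task (a minimal numpy sketch; the learning rate, epoch count, and squared-error loss are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.eye(8)                       # the 8-3-8 task: target output equals input
n_in, n_code = 8, 3

W1 = rng.normal(0, 0.5, (n_code, n_in)); b1 = np.zeros(n_code)
W2 = rng.normal(0, 0.5, (n_in, n_code)); b2 = np.zeros(n_in)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0                            # hypothetical learning rate
for epoch in range(5000):
    for x in X:                     # stochastic: one example per update
        h = sigmoid(W1 @ x + b1)    # encode
        r = sigmoid(W2 @ h + b2)    # decode
        # gradients of the squared reconstruction error 0.5 * ||x - r||^2
        d_out = (r - x) * r * (1 - r)
        d_hid = (W2.T @ d_out) * h * (1 - h)
        W2 -= lr * np.outer(d_out, h); b2 -= lr * d_out
        W1 -= lr * np.outer(d_hid, x); b1 -= lr * d_hid

# reconstruction of all 8 patterns after training
R = sigmoid(W2 @ sigmoid(W1 @ X.T + b1[:, None]) + b2[:, None]).T
err = np.mean((X - R) ** 2)         # mean squared reconstruction error
```

With 3 hidden units the network is forced to find a compressed code for the 8 one-hot patterns, so a low final error indicates it has learned more than the identity map over raw pixels.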

Page 9

Regularized autoencoders

To capture the structure of the data-generating distribution, it is therefore important that something in the training criterion or the parameterization prevents the autoencoder from learning the identity function, which has zero reconstruction error everywhere. This is achieved through various means in the different forms of autoencoders; we call these regularized autoencoders.

Page 10

Autoencoders

• Denoising auto-encoders (DAE)
– Learn to reconstruct the clean input from a corrupted version.

• Contractive auto-encoders (CAE)
– Robustness to small perturbations around the training points.
– Reduce the number of effective degrees of freedom of the representation (around each point).
– Make the derivative of the encoder small (saturate hidden units).

• Sparse autoencoders
– Sparsity in the representation can be achieved by penalizing the hidden unit biases or by directly penalizing the output of the hidden unit activations.
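The denoising idea can be illustrated with a masking-corruption function (a sketch; the corruption type and probability p are assumptions, as several corruption schemes are used in practice):

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt(x, p=0.3):
    # masking noise (one common corruption, assumed here):
    # each input component is set to 0 with probability p
    mask = rng.random(x.shape) >= p
    return x * mask

x_clean = np.eye(8)[2]      # a clean training example
x_noisy = corrupt(x_clean)  # the corrupted version fed to the encoder
# A denoising autoencoder is trained so that decode(encode(x_noisy))
# reconstructs x_clean, not x_noisy.
```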

Page 11

Example

Input = output: the flattened 8 × 8 identity matrix (eight one-hot patterns)

1000000001000000001000000001000000001000000001000000001000000001

Hidden nodes

Page 12

Example

• net = fitnet([3]); % MATLAB: one hidden layer with 3 neurons

Page 13

Example

• net = fitnet([8 3 8]); % MATLAB: three hidden layers with 8, 3, and 8 neurons

Page 14

Example

Page 15

Page 16

Introduction

• The auto-encoder network has not been utilized for clustering tasks.

• To make it suitable for clustering, a new objective function is proposed and embedded into the auto-encoder model.

Page 17

Proposed Model

Page 18

Proposed Model

• Suppose a one-layer auto-encoder network as an example (minimizing the reconstruction error).

• Embed the objective function:
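A plausible form for such an embedded objective, assumed here for illustration and not necessarily the authors' exact formulation, adds to the reconstruction error a penalty on the distance between each code and its cluster center:

```python
import numpy as np

def clustering_autoencoder_loss(X, H, R, centers, assign, lam=0.1):
    """Reconstruction error plus distance of each code to its cluster center.

    X: inputs, R: reconstructions g(f(x)), H: codes f(x),
    centers: cluster centers in code space, assign: cluster index per example.
    lam is a hypothetical trade-off weight (an assumption).
    """
    recon = np.sum((X - R) ** 2)               # standard autoencoder term
    cluster = np.sum((H - centers[assign]) ** 2)  # pulls codes toward their centers
    return recon + lam * cluster

# toy check with hypothetical values
X = np.eye(4); R = np.zeros((4, 4)); H = np.zeros((4, 2))
centers = np.zeros((2, 2)); assign = np.array([0, 0, 1, 1])
loss = clustering_autoencoder_loss(X, H, R, centers, assign)
```

Minimizing such a combined loss shapes the code space so that examples from the same cluster are embedded close together, which plain reconstruction-only training does not enforce.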

Page 19

Proposed Algorithm

Page 20

Experiments

• All algorithms are tested on 3 databases:

– MNIST contains 60,000 handwritten digit images (0∼9) with a resolution of 28 × 28.

– USPS consists of 4,649 handwritten digit images (0∼9) with a resolution of 16 × 16.

– YaleB is composed of 5,850 face images over ten categories, and each image has 1200 pixels.

• Model: a four-layer auto-encoder network with the structure of 1000-250-50-10.

Page 21

Experiments

• Baseline algorithms: compared with three classic and widely used clustering algorithms:
– K-means
– Spectral clustering
– N-cut

• Evaluation criteria:
– Accuracy (ACC)
– Normalized mutual information (NMI)
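NMI can be computed from the contingency table of true classes versus predicted clusters; the sketch below uses the geometric-mean normalization, one common variant (whether it matches the paper's variant is an assumption):

```python
import numpy as np

def nmi(labels_true, labels_pred):
    """Normalized mutual information between two label assignments (numpy-only sketch)."""
    labels_true = np.asarray(labels_true); labels_pred = np.asarray(labels_pred)
    n = labels_true.size
    classes = np.unique(labels_true); clusters = np.unique(labels_pred)
    # contingency table: counts of (class, cluster) co-occurrences
    C = np.array([[np.sum((labels_true == c) & (labels_pred == k))
                   for k in clusters] for c in classes], dtype=float)
    pij = C / n                                   # joint distribution
    pi = pij.sum(axis=1, keepdims=True)           # class marginals
    pj = pij.sum(axis=0, keepdims=True)           # cluster marginals
    nz = pij > 0
    mi = np.sum(pij[nz] * np.log(pij[nz] / (pi @ pj)[nz]))   # mutual information
    h_true = -np.sum(pi[pi > 0] * np.log(pi[pi > 0]))        # entropy of classes
    h_pred = -np.sum(pj[pj > 0] * np.log(pj[pj > 0]))        # entropy of clusters
    return mi / np.sqrt(h_true * h_pred)          # geometric-mean normalization

print(nmi([0, 0, 1, 1], [1, 1, 0, 0]))  # identical partitions give NMI of 1.0
```

Note that NMI is invariant to a relabeling of the clusters, which is why the permuted labeling above still scores 1.0.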

Page 22

Quantitative Results

Page 23

Visualization

Page 24

Difference of Spaces

Page 25

Thanks for your attention.

Any questions?