Exercise questions for Machine vision - miun.se

Nov 28, 2021

Exercise questions for Machine vision

This is a collection of exercise questions. The questions are exam-like, which means that similar questions may appear at the written exam. I have divided the questions and related them to the topics covered in Lectures 1 to 7.

Lecture 1:

1. Consider a machine vision system for Optical Character Recognition (OCR). List and briefly explain the fundamental image processing steps for this system.

2. Briefly motivate why a machine vision system should be used for optical quality inspection of items produced at a factory. Also discuss the pros and cons.


Lecture 2 and 3:

3. Compare the following three architectures for CCD sensors: full-frame CCD, frame-transfer CCD and interline-transfer CCD. Draw a simple architectural layout for each sensor, and explain the differences, advantages and drawbacks of the different architectures.

4. How is the Signal to Noise Ratio (SNR) related to the architectural design of an area scan image sensor? What can be done to improve the SNR for a given architecture?

5. Explain the two most common readout modes for area scan sensors, progressive and interlaced scan. Explain the advantages and disadvantages of the two modes.

6. What is a telecentric lens, what is its most important feature, and for what purpose can it be used? Draw a sketch of the basic light rays for a telecentric lens and explain its function in comparison with standard Gaussian optics.

7. Briefly explain the following illumination characteristics:

• Diffuse light

• Directed light

• Telecentric light

• Front light

• Back light

• Bright field

• Dark field


The Gaussian lens equation and the magnification β are given by

1/f = 1/s' − 1/s,    β = s'/s = h'/h

8. Combine the pictures A to C above with the following choices: F=2, F=16 and telecentric lens. Each picture matches only one choice. Briefly motivate why you think a picture matches your choice.

9. First, a number of properties related to the illumination for a machine vision system are listed:

• Diffuse light

• Directed light

• Telecentric light

• Front light

• Back light

• Bright field

• Dark field

The following four pictures show different types of illumination systems with cameras. Assign the right properties to each illumination system. There can be more than one relevant property for an illumination.

[Four pictures of illumination setups, labeled A) to D)]

10. A Gaussian lens is used to project an image of a car at a far distance. It must be possible to resolve the car's two headlights, with a spacing of 1.5 meters, at a distance of 600 meters from the camera. What will be the absolute maximum pixel size of an area scan sensor if the focal length of the lens is 8 mm?
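As a sketch of the geometry behind this question: under a simple pinhole model, an object spacing X at distance Z projects to x = f·X/Z on the sensor. Whether the maximum pixel size equals this separation or half of it (a Nyquist-style criterion) depends on the resolution criterion assumed; the numbers below only compute the projected separation.

```python
# Pinhole-model sketch for question 10 (resolution criterion not decided here).
def projected_separation(spacing_m, distance_m, focal_length_mm):
    """Image-plane separation in micrometres for a pinhole camera model."""
    x_mm = focal_length_mm * spacing_m / distance_m
    return x_mm * 1000.0  # mm -> micrometres

sep_um = projected_separation(1.5, 600.0, 8.0)
print(sep_um)  # 20.0: the two headlight images are 20 um apart on the sensor
```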

11. Briefly explain the effects of reducing the aperture of a lens system in front of a pixel area sensor in terms of depth of field, signal to noise ratio and image resolution.

[Three pictures, labeled A) to C)]


12. a) Five different sources of noise that can appear in images made with, for example, CMOS or CCD sensors are listed below.

Noise sources:

- Shot noise (in amplifiers)

- Sensitivity variations between pixels

- Photon noise

- Dark signal variations

- Thermal noise

Assign the above five sources of noise to the two following classes of noise. Each class will have more than one noise source, and the two classes together will have a total of five noise sources.

Classes of noise:

- Temporal noise

- Spatial noise

b) Suggest a method to suppress the temporal noise.
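One standard way to suppress temporal noise is frame averaging: temporal noise is uncorrelated between frames, so averaging N frames of a static scene reduces its standard deviation by roughly a factor √N. A minimal sketch with synthetic frames (the noise level 10 is illustrative, not from the text):

```python
import numpy as np

# Average N frames of a static scene; temporal noise drops by about sqrt(N).
rng = np.random.default_rng(0)
scene = np.full((64, 64), 100.0)              # noiseless "true" scene
frames = [scene + rng.normal(0.0, 10.0, scene.shape) for _ in range(16)]

single_noise = np.std(frames[0] - scene)       # about 10
averaged = np.mean(frames, axis=0)
averaged_noise = np.std(averaged - scene)      # about 10 / sqrt(16) = 2.5
print(single_noise, averaged_noise)
```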


Lecture 4:

13. Figure 1 depicts a diagram of the amplitude characteristic of a 2D linear filter. What kind of filter is this: high pass, low pass or band pass? What do you expect the visual effect on an image to be if this filter were applied to it?

Figure 1. Amplitude characteristic for a 2D filter.

14. The three pictures above show the amplitude transfer function for a 2D Butterworth filter. The amplitude characteristic is illustrated as a mesh plot, an intensity image and a radial plot for different orders n of the filter.

a) What class of filter is this: low pass, high pass, band pass or band stop?

b) Which one of the pictures labeled A to E is filtered using the smallest value for r, as defined in the amplitude characteristics above?
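For reference, the amplitude transfer function of a Butterworth low-pass filter has the standard form H(u,v) = 1 / (1 + (D(u,v)/D0)^(2n)), where D is the distance from the frequency-domain origin and D0 is the cutoff radius (the r in the figure). A small sketch (the shape, cutoff and order values are illustrative):

```python
import numpy as np

# 2D Butterworth low-pass amplitude characteristic, centred in the spectrum.
def butterworth_lowpass(shape, cutoff, order):
    rows, cols = shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)   # distance from origin
    return 1.0 / (1.0 + (D / cutoff) ** (2 * order))

H = butterworth_lowpass((256, 256), cutoff=30.0, order=2)
print(H[128, 128], H[128, 158])  # 1.0 at the centre, 0.5 at D = cutoff
```

A larger order n makes the transition around the cutoff radius steeper, which is what the radial plots for different n illustrate.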



15. Briefly explain what kind of image processing operations are necessary for high quality downscaling of an image.

16. Figure 2 shows a picture of the silhouette of a screw taken with back lighting. The silhouette is highlighted at subpixel precision by image processing. Suggest a method for how this image processing can be done.

Figure 2. Screw thread.

17. Assume a gray-level image f(r,c) and its smoothed counterpart g(r,c). The region of interest is R. Then the dynamic thresholding of brighter objects on a dark background can be defined as

S = { (r,c) ∈ R | f(r,c) − g(r,c) ≥ g_diff }

where g_diff is a fixed constant. Pictures A and B both have bright spots on a darker background. Compared with simple global thresholding, which one of the pictures A or B will require the use of dynamic thresholding in order to successfully segment the bright spots from the background? Motivate your answer briefly. If you answer with a long and imprecise story, your credits will be reduced.
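The definition above can be sketched directly in code. The smoothing filter is not specified in the question; a box mean filter stands in for it here, and the window size and g_diff values are illustrative:

```python
import numpy as np

# Dynamic thresholding: pixel in S when f(r,c) - g(r,c) >= g_diff,
# with g a smoothed copy of f (box mean used here as one possible choice).
def dynamic_threshold(f, g_diff, k=5):
    pad = k // 2
    fp = np.pad(f.astype(float), pad, mode="edge")
    g = np.zeros_like(f, dtype=float)
    for r in range(f.shape[0]):           # sliding-window mean (small images)
        for c in range(f.shape[1]):
            g[r, c] = fp[r:r + k, c:c + k].mean()
    return (f - g) >= g_diff

img = np.zeros((15, 15))
img[7, 7] = 100.0                          # bright spot on a dark background
S = dynamic_threshold(img, g_diff=20.0)
print(S[7, 7], S[0, 0])                    # True False
```

Because the threshold follows the local background level g, this survives slow illumination gradients that defeat a single global threshold.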

[Pic. A and Pic. B; original image with filtered versions labeled A to E]


18. Image (2) was processed by histogram equalisation to create image (1).

a) Which of the histograms A and B corresponds to image (1) and which to image (2)? Explain and motivate.

b) How is the gray-level transformation function computed for histogram equalisation?

[Image (1) and Image (2), with Histogram A and Histogram B: gray levels 0 to 250 on the horizontal axis, pixel counts on the vertical axis]
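For part b), the transformation is the scaled cumulative distribution function (CDF) of the input histogram: T(k) = round((L−1)·cdf(k)). A minimal sketch on a tiny illustrative image:

```python
import numpy as np

# Histogram equalisation look-up table: T(k) = round((L-1) * cdf(k)).
def equalisation_lut(image, levels=256):
    hist = np.bincount(image.ravel(), minlength=levels)
    cdf = np.cumsum(hist) / image.size
    return np.round((levels - 1) * cdf).astype(np.uint8)

img = np.array([[50, 50], [100, 200]], dtype=np.uint8)
lut = equalisation_lut(img)
equalised = lut[img]                # maps 50 -> 128, 100 -> 191, 200 -> 255
print(equalised)
```

Applying T spreads the occupied gray levels over the full range, which is why the equalised image's histogram looks flatter and wider.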


Lecture 5:

19. The following equation defines a morphological operation:

OP = { x | (B̂)_x ∩ A ≠ ∅ }

a) What is the name of this operation?

b) Which one of the pictures below is the correct graphical illustration of the effect of that operation for a binary image A and a structural element B?

Illustration A)

Illustration B)

Illustration C)

Illustration D)

20. Two image points (x1, y1) and (x2, y2), lying on a single line, are shown. The corresponding lines in a parameter space are also shown. This transformation can be utilized in the Hough transform to find lines in an image. Briefly explain how this detection of lines works in the Hough transform and how the parameters of that line can be measured.

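The voting mechanism can be sketched compactly: every edge pixel votes for all (ρ, θ) parameter pairs of lines through it, using the normal form ρ = x·cos θ + y·sin θ, and peaks in the accumulator give the line parameters. The image size and quantisation below are illustrative:

```python
import numpy as np

# Minimal Hough transform: accumulate votes in (rho, theta) parameter space.
def hough_lines(points, img_diag, n_theta=180):
    thetas = np.deg2rad(np.arange(n_theta))            # 0..179 degrees
    acc = np.zeros((2 * img_diag + 1, n_theta), dtype=int)
    for x, y in points:
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + img_diag, np.arange(n_theta)] += 1  # one vote per theta
    return acc, thetas

# 20 collinear points on the horizontal line y = 5 (rho = 5, theta = 90 deg).
pts = [(x, 5) for x in range(20)]
acc, thetas = hough_lines(pts, img_diag=30)
print(acc.max(), acc[5 + 30, 90])   # 20 20: all points vote for rho=5, theta=90
```

Reading off the accumulator peak recovers the (ρ, θ) parameters of the detected line.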


21. Figure 3 depicts an image object A and a structural element B used for the morphological operation dilation. Dilation is defined as

A ⊕ B = { x | (B̂)_x ∩ A ≠ ∅ }

where M̂ is the reflection of a region M and M_x is the translation of region M by a vector x. Draw a picture and show what A ⊕ B will look like.

Figure 3. Image object A and the structural element B used for morphological operations.
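The definition above can be checked numerically. For a symmetric structuring element the reflection B̂ equals B, and the dilation is the union of A translated by every element of B. A brute-force sketch (the 3×3 element and single-pixel A are illustrative):

```python
import numpy as np

# Dilation A (+) B as the union of translates of A by the elements of B
# (equivalent to {x | (B^)_x intersects A}; B here is symmetric, so B^ = B).
def dilate(A, B):
    out = np.zeros_like(A)
    br, bc = B.shape[0] // 2, B.shape[1] // 2          # origin at B's centre
    for dr in range(B.shape[0]):
        for dc in range(B.shape[1]):
            if B[dr, dc]:
                shifted = np.roll(np.roll(A, dr - br, axis=0), dc - bc, axis=1)
                out |= shifted
    return out

A = np.zeros((7, 7), dtype=bool)
A[3, 3] = True                      # single foreground pixel
B = np.ones((3, 3), dtype=bool)     # 3x3 square structuring element
print(dilate(A, B).sum())           # 9: the pixel grows into a 3x3 square
```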

22. Explain how template matching works, and also suggest a method to cope with the increasing execution times for template matching as the resolution of the image is increased.
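One common remedy for the execution-time problem is a coarse-to-fine image pyramid: search exhaustively only at a downsampled level, then refine locally at full resolution. A sketch using a plain sum-of-squared-differences search (image size, template position and the two-level pyramid are illustrative):

```python
import numpy as np

# Exhaustive SSD template search (the slow full-resolution baseline).
def best_match(img, tpl):
    th, tw = tpl.shape
    best, pos = np.inf, (0, 0)
    for r in range(img.shape[0] - th + 1):
        for c in range(img.shape[1] - tw + 1):
            d = np.sum((img[r:r + th, c:c + tw] - tpl) ** 2)
            if d < best:
                best, pos = d, (r, c)
    return pos

rng = np.random.default_rng(1)
img = rng.random((40, 40))
tpl = img[12:20, 24:32].copy()          # template cut out at (12, 24)

# Pyramid level: average 2x2 blocks, search there, then the coarse hit
# (r0, c0) would be refined in a small full-resolution window around (2r0, 2c0).
small = img.reshape(20, 2, 20, 2).mean(axis=(1, 3))
tsmall = tpl.reshape(4, 2, 4, 2).mean(axis=(1, 3))
r0, c0 = best_match(small, tsmall)
print(best_match(img, tpl), (2 * r0, 2 * c0))   # both locate (12, 24)
```

The coarse search visits a quarter as many positions with a quarter-size template, which is where the speedup comes from; more pyramid levels compound the effect.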

23. The following drawing shows a square-shaped region A of pixels belonging to one single image component in a binary image. Region B is a circular-shaped structural element with diameter 2 and with its origin at the center, indicated with a dark spot.

A morphological operation OP is defined as OP = { x | (B)_x ⊆ A }.

a) What is the name of this operation, as known in all reference literature?

b) Make a simple sketch with the right proportions showing the visual effect on region A after applying this morphological operation OP using structural element B. Also indicate in your drawing what the size of the processed region will be. I want just the sketch as an answer, nothing else.

[Drawing: square region A (side 3) and circular structural element B (diameter 2)]


24. The following binary picture to the left shows vertical and horizontal lines with a width of 5 pixels. The distances between lines are at least 30 pixels. Consider the lines as belonging to region A. The drawing to the right shows a structuring element B.

A morphological operation OP1 is defined as OP1 = { x | (B)_x ⊆ A }.

Another morphological operation OP2 is defined as OP2 = { x | (B̂)_x ∩ A ≠ ∅ }.

a) What are the names of operations OP1 and OP2?

b) First apply OP1 on region A and then apply OP2, such that C = OP2(OP1(A)). Make a drawing and show what region C will look like.
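The composition OP2(OP1(A)) with the same element B, erosion followed by dilation, is a morphological opening: structures narrower than B disappear, wider ones survive largely unchanged. A sketch with illustrative geometry (a thin horizontal and a thin vertical line, and a 1×7 horizontal element):

```python
import numpy as np

def shift(A, dr, dc):
    # Cyclic shift; safe here because the foreground stays clear of the border.
    return np.roll(np.roll(A, dr, axis=0), dc, axis=1)

def erode(A, offsets):
    # OP1 = {x | (B)_x subset of A}: AND over all translates.
    out = np.ones_like(A)
    for dr, dc in offsets:
        out &= shift(A, -dr, -dc)
    return out

def dilate(A, offsets):
    # OP2: union of translates of A by each b in B (B symmetric, so B^ = B).
    out = np.zeros_like(A)
    for dr, dc in offsets:
        out |= shift(A, dr, dc)
    return out

B = [(0, dc) for dc in range(-3, 4)]   # 1x7 horizontal structuring element
A = np.zeros((9, 20), dtype=bool)
A[4, 2:18] = True                      # long horizontal line
A[1:8, 10] = True                      # short vertical line, 1 pixel wide

C = dilate(erode(A, B), B)             # opening: C = OP2(OP1(A))
print(C[4, 5], C[1, 10])               # True False: the vertical line is gone
```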

25. An Edge Histogram Descriptor (EHD) is computed on the two pictures shown below. Estimate and illustrate the EHD for the two pictures A and B. Explain what the diagrams show and why they look the way they do.

Picture A Picture B

[Drawing of structuring element B: 10 pixels by 1 pixel]


Lecture 6:

26. Briefly explain how a minimum distance classifier works. What kind of prior statistics are computed for the training sets?
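As a sketch: a minimum distance classifier stores only the mean vector of each training class (the statistic computed from the training sets) and assigns a sample to the class whose mean is nearest, here in Euclidean distance. The 2D training points are illustrative:

```python
import numpy as np

# Training: compute the mean feature vector of each class.
def train(samples_per_class):
    return [np.mean(s, axis=0) for s in samples_per_class]

# Classification: pick the class whose mean is closest to the sample.
def classify(x, means):
    dists = [np.linalg.norm(np.asarray(x) - m) for m in means]
    return int(np.argmin(dists))

means = train([
    np.array([[1.0, 1.0], [2.0, 1.0], [1.0, 2.0]]),   # class 0
    np.array([[8.0, 8.0], [9.0, 8.0], [8.0, 9.0]]),   # class 1
])
print(classify([1.5, 1.5], means), classify([7.5, 9.0], means))  # 0 1
```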

27. Assume that you are going to apply a segmentation algorithm in a machine vision system built to inspect the colours, sizes and shapes of cookies on a conveyor belt before packaging. Damaged cookies, burnt cookies, cookies with non-circular shapes or cookies out of the size range must be disposed of by the system. The intensity of the illumination for this machine vision system is inhomogeneously distributed (not of constant intensity) over the observation area. What kind of segmentation algorithm would you select for this task? Discuss and motivate your selection of algorithm, and also discuss properties of other system components that might be important to consider for this selection.


Lecture 7:

28. Draw a picture and explain how a sheet-of-light laser can be used together with an area scan sensor for the acquisition of a 3D surface based on triangulation techniques. Just explain the measurement principle and how it works.

29. Figure 4 depicts a schematic setup for stereo imaging based on two image sensors and an object W at distance Z given by

Z = λ − λB / (x2 − x1)

The object W is projected onto image sensors 1 and 2 at positions (x1, y1) and (x2, y2) respectively. Explain what kind of image processing is necessary in order to measure the distance Z from the two sensors to the object W. Relate your explanation to the given expression for Z.

Figure 4. Stereo imaging.
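Once the image processing has matched corresponding points in the two images (for example by correlation), the disparity x2 − x1 plugs straight into the expression for Z. A sketch with purely illustrative numbers (not from the question):

```python
# Depth from the stereo expression Z = lambda - lambda*B / (x2 - x1):
# lambda is the focal length, B the baseline, x1 and x2 the image coordinates
# of the matched point in sensors 1 and 2.
def depth(focal_mm, baseline_mm, x1_mm, x2_mm):
    return focal_mm - focal_mm * baseline_mm / (x2_mm - x1_mm)

Z = depth(focal_mm=8.0, baseline_mm=120.0, x1_mm=0.5, x2_mm=-0.1)
print(Z)  # 1608.0 mm for these illustrative values
```

The key image-processing step is the correspondence search itself: Z depends only on the disparity, so matching the same point of W in both images is what makes the formula usable.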

30. The position of a laser line projected onto an image detector versus the height of the object is shown in Figure 5. One curve represents measured values used for calibration and the second curve shows a computed transfer function. These curves come from a setup for laser scanning used to capture a 3D surface. They show an almost perfectly linear relation between pixels and height. From measurements it was shown that the standard deviation of the computed position of the laser line was 0.2 pixels.

a) Briefly explain what property of the captured images limits the precision of the laser line position to 0.2 pixels.

b) What is the precision of the height measurement that this scanner can achieve?

[Plot: comparison of computed and measured levels; height (mm, 3.4 to 4.2) versus position (pixels, 0 to 60). Measured slope = 0.014406, computed slope = 0.014209, calibration reference level = 3.3139 mm, deviation in slopes = 0.0089986 mm. Legend: Measured Heights, Computed Heights.]

Figure 5. Height versus position on image detector.
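For part b), one way to reason is that the linear calibration propagates the pixel uncertainty directly into height: height precision ≈ slope [mm/pixel] × line-position precision [pixels]. Using the measured slope from Figure 5:

```python
# Propagate the 0.2-pixel line-position uncertainty through the calibration
# slope (mm per pixel) to get the height precision of the scanner.
measured_slope = 0.014406   # mm per pixel, from Figure 5
sigma_pixels = 0.2          # std. dev. of the laser-line position, in pixels
sigma_height = measured_slope * sigma_pixels
print(sigma_height)         # about 0.0029 mm
```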


31. The intensity profile of an imaged laser line is shown in Figure 6. When the Center Of Gravity (COG) is computed to find the position of the laser line in one of the spatial dimensions, a threshold can be used.

a) Explain and motivate why this threshold is used for a laser scanner.

Figure 6. Gray level versus pixels for an imaged laser line.
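The role of the threshold can be seen in a small sketch: gray levels below it are discarded so that background noise far from the peak cannot bias the centre-of-gravity estimate. The profile and threshold below are illustrative:

```python
import numpy as np

# Centre of gravity of a 1D intensity profile, ignoring values below threshold.
def cog(profile, threshold):
    p = np.asarray(profile, dtype=float)
    w = np.where(p >= threshold, p, 0.0)   # suppress background pixels
    idx = np.arange(len(w))
    return np.sum(idx * w) / np.sum(w)

profile = [0, 1, 2, 10, 50, 100, 50, 10, 2, 1, 0]
print(cog(profile, threshold=40))   # 5.0: symmetric peak centred at index 5
```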

32. A laser scanner uses a step size of 0.5 mm. What is the highest frequency along the scanning dimension that can be resolved?

33. A laser scanner uses a telecentric lens with an optical magnification of 0.25. The pixel size of the image detector is 10 µm. What is the highest frequency along the laser line that can be resolved?
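Both questions reduce to the Nyquist limit, 1 / (2 × sample spacing). For question 33, the spacing on the object is the pixel pitch divided by the magnification: 10 µm / 0.25 = 40 µm per pixel. A quick check:

```python
# Nyquist limit: highest resolvable frequency = 1 / (2 * sample spacing).
def nyquist_per_mm(spacing_mm):
    return 1.0 / (2.0 * spacing_mm)

print(nyquist_per_mm(0.5))           # 1.0 cycle/mm along the scan direction
print(nyquist_per_mm(0.010 / 0.25))  # 12.5 cycles/mm along the laser line
```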

[Plot for Figure 6: gray level (0 to 250) versus pixels (0 to 250), with the threshold level indicated.]