Tagging of digital historical images

Authors: A. N. Talbonen (antal@sampo.ru)

A. A. Rogov (rogov@psu.karelia.ru)

Petrozavodsk State University

General tagging model

[Diagram: the image collection passes through three stages - object selection, tag attribution and indexing - backed by an object DB, a tag DB and a full-text index (a table of File -> Tags rows: I1 ……, I2 ……). A minimal data-model sketch follows.]
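Below is a minimal sketch of the data model implied by the diagram; the class and member names are illustrative assumptions, not taken from the authors' implementation.

```csharp
using System.Collections.Generic;

// A tagged object: a region selected in a source image (an Object DB entry).
public class TaggedObject
{
    public string ImageFile;                         // source image in the collection, e.g. "I1"
    public int X, Y, Width, Height;                  // bounding box of the selected object
    public List<string> Tags = new List<string>();   // tags attributed by the expert (Tag DB)
}

// Full-text index: maps a file of the collection to the set of tags attributed to it.
public class FullTextIndex
{
    private readonly Dictionary<string, HashSet<string>> index =
        new Dictionary<string, HashSet<string>>();

    public void Add(TaggedObject obj)
    {
        if (!index.TryGetValue(obj.ImageFile, out var tags))
            index[obj.ImageFile] = tags = new HashSet<string>();
        foreach (var tag in obj.Tags)
            tags.Add(tag);
    }

    // All files of the collection carrying the given tag.
    public IEnumerable<string> Search(string tag)
    {
        foreach (var entry in index)
            if (entry.Value.Contains(tag))
                yield return entry.Key;
    }
}
```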

General research features

- The research is based on the analysis of an image collection of the White Sea-Baltic Canal provided by the National Museum of Karelia
- The collection consists of about 8,000 images with a resolution of 75 dpi

1. Face tagging: General features

- Predominance of small objects (width less than 40 pixels)
- No existing database
- An expert is available

[Chart: distribution of object sizes]

1. Face tagging: General algorithm

1. Object (face) detection.
2. Computing of pairwise distances between objects.
3. Tagging (for each object; a sketch of this step follows the list):
   - The system displays a list of the most similar objects.
   - The expert determines the relationship between the objects.
   - The object tags are specified.
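A minimal sketch of the expert-in-the-loop tagging step, assuming the pairwise distances are already available as a matrix; the ShowToExpert callback and the top-K shortlist size are illustrative assumptions.

```csharp
using System;
using System.Linq;

public static class TaggingLoop
{
    // dist[i, j] is the precomputed distance between objects i and j.
    public static void Run(double[,] dist, int topK, Action<int, int[]> showToExpert)
    {
        int n = dist.GetLength(0);
        for (int i = 0; i < n; i++)
        {
            int current = i;
            int[] mostSimilar = Enumerable.Range(0, n)
                .Where(j => j != current)
                .OrderBy(j => dist[current, j])   // smaller distance = more similar object
                .Take(topK)
                .ToArray();

            // The expert reviews the shortlist, confirms relationships and sets the tags.
            showToExpert(current, mostSimilar);
        }
    }
}
```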

1. Face tagging: Face detection features

- The OpenCV library (OpenCvSharp in C#) and its method cv::CascadeClassifier::detectMultiScale (haarDetectObject in C#), an implementation of the Viola-Jones method, are used for face detection (a call sketch follows this list)
- The Viola-Jones method parameters are used to trade precision against recall in the face detection results
- A face recognition method based on Local Binary Patterns is used to improve the quality of the Viola-Jones results
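A minimal detection sketch using OpenCvSharp. It assumes a recent binding where the wrapper is CascadeClassifier.DetectMultiScale (the slide also names the older haarDetectObject wrapper); the cascade file name and the parameter values are illustrative.

```csharp
using OpenCvSharp;

public static class FaceDetector
{
    public static Rect[] Detect(string imagePath)
    {
        using (var gray = Cv2.ImRead(imagePath, ImreadModes.Grayscale))
        using (var cascade = new CascadeClassifier("haarcascade_frontalface_default.xml"))
        {
            // scaleFactor (1.05) and minNeighbors (3) are the Viola-Jones parameters the
            // slide refers to: a finer scale step and fewer required neighbours raise recall
            // (important for faces narrower than 40 px) at the cost of precision.
            return cascade.DetectMultiScale(gray, 1.05, 3);
        }
    }
}
```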

1. Face tagging: Face detection diagram

[Flowchart: source image -> face detection -> candidate object -> recognition against the training set -> if the object is a face, it is inserted into the result collection (face objects); otherwise it is discarded as a fake object]

1. Face tagging: Local binary patterns (LBP)

[Figure: the original LBP filter and advanced LBP filters]
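A sketch of the original 3x3 LBP operator in plain C# (no library calls): each of the 8 neighbours contributes one bit, set when the neighbour is greater than or equal to the centre pixel. The advanced filters follow the same idea with more sampling points and larger radii, which is what the LBP(8,1), LBP(16,3) and similar notations later in the deck refer to.

```csharp
public static class Lbp
{
    // Offsets of the 8 neighbours of a pixel, enumerated clockwise from the top-left.
    private static readonly int[] Dy = { -1, -1, -1, 0, 1, 1, 1, 0 };
    private static readonly int[] Dx = { -1, 0, 1, 1, 1, 0, -1, -1 };

    // image is a grayscale matrix; border pixels are left with code 0.
    public static byte[,] Compute(byte[,] image)
    {
        int h = image.GetLength(0), w = image.GetLength(1);
        var codes = new byte[h, w];
        for (int y = 1; y < h - 1; y++)
            for (int x = 1; x < w - 1; x++)
            {
                int code = 0;
                for (int k = 0; k < 8; k++)
                    if (image[y + Dy[k], x + Dx[k]] >= image[y, x])
                        code |= 1 << k;                 // one bit per neighbour
                codes[y, x] = (byte)code;
            }
        return codes;
    }
}
```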

1. Face tagging: Local binary patterns

- Uniform codes (patterns)
- Rotation-invariant codes

(Both mappings are sketched below.)
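A sketch of the two code mappings, assuming 8-bit LBP codes: a code is uniform when its circular bit string contains at most two 0/1 transitions, and the rotation-invariant code is the minimum value over all circular rotations of the bit string.

```csharp
public static class LbpCodes
{
    // Uniform pattern test: at most two 0/1 transitions in the circular bit string.
    public static bool IsUniform(int code, int bits = 8)
    {
        int transitions = 0;
        for (int k = 0; k < bits; k++)
        {
            int a = (code >> k) & 1;
            int b = (code >> ((k + 1) % bits)) & 1;   // circular neighbour bit
            if (a != b) transitions++;
        }
        return transitions <= 2;
    }

    // Rotation-invariant code: the minimum over all circular rotations of the bit string.
    public static int RotationInvariant(int code, int bits = 8)
    {
        int mask = (1 << bits) - 1;
        int best = code;
        for (int r = 1; r < bits; r++)
        {
            int rotated = ((code >> r) | (code << (bits - r))) & mask;
            if (rotated < best) best = rotated;
        }
        return best;
    }
}
```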

1. Face tagging: Local binary patterns

- Weight matrix
- Computing of the face object histogram (a sketch follows)
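A sketch of the face-object histogram, assuming the usual regional scheme: the face is split into a grid of regions, an LBP-code histogram is computed per region, and the regional histograms are concatenated. The grid size and the 256-bin coding are assumptions; the weight matrix from the slide comes into play when histograms are compared (see the face comparison slide below).

```csharp
public static class LbpHistogram
{
    // lbpCodes is the LBP code image of one face object.
    public static double[] Compute(byte[,] lbpCodes, int gridRows = 4, int gridCols = 4)
    {
        int h = lbpCodes.GetLength(0), w = lbpCodes.GetLength(1);
        var hist = new double[gridRows * gridCols * 256];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
            {
                // Which grid region the pixel falls into.
                int region = (y * gridRows / h) * gridCols + (x * gridCols / w);
                hist[region * 256 + lbpCodes[y, x]] += 1.0;
            }
        return hist;   // each 256-bin block can additionally be normalised to sum to 1
    }
}
```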

1. Face tagging: Face detection experiment

The purpose is to find the LBP modification with the best detection rates.

Experiment features:
- Sample of 1070 images
- 12 different LBP modifications were used

Assessment criteria:
- An object counts as fake when it is not a face, when the face is only weakly recognizable, or when the face is turned at an angle greater than 90 degrees
- An object counts as a face when it is a face or an image of people: portraits, paintings, sculptures

1. Face tagging: Face detection experiment results

1. Face tagging: Face recognition experiment

The purpose is to find the LBP modification with the best face recognition rates.

Experiment features:
- The training set contains 19 objects, including 3 relevant pairs of face objects and 1 relevant pair of fake objects
- 10 LBP modifications were used

1. Face tagging: Face recognition experiment

[Figure: the 19 training-set objects, numbered 1-19]

Relevant pairs: {1, 15}, {3, 14}, {4, 13}, {7, 9}

1. Face tagging: Face recognition experiment results

Modification                Precision
LBP(8,1)                    0.38
LBP(16,1)                   0.25
LBP(8,2)                    0.50
LBP(16,2)                   0.50
LBP(8,3)                    0.50
LBP(16,3)                   0.75
LBP(16,3) ri, weighted      0.50
LBP(16,3) riu, weighted     0.38
LBP(16,3) u, weighted       0.63
LBP(16,3) weighted          1.00

1. Face tagging: Face comparison

[Figure: histograms of the training-set objects]

The objects at positions (row, col) = (1, 1) and (3, 4) correspond to fake objects; their histograms are similar to each other and very different from the rest (a distance sketch follows).
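A sketch of the pairwise distance between two face-object histograms. A weighted chi-square distance is a common choice for regional LBP histograms and is used here as an assumption; the per-region weights play the role of the weight matrix mentioned earlier.

```csharp
public static class HistogramDistance
{
    // h1 and h2 are concatenated regional histograms; regionWeights has one entry per region.
    public static double WeightedChiSquare(double[] h1, double[] h2,
                                           double[] regionWeights, int binsPerRegion = 256)
    {
        double distance = 0.0;
        for (int i = 0; i < h1.Length; i++)
        {
            double sum = h1[i] + h2[i];
            if (sum <= 0) continue;                       // both bins empty, no contribution
            double diff = h1[i] - h2[i];
            distance += regionWeights[i / binsPerRegion] * diff * diff / sum;
        }
        return distance;   // smaller distance = more similar objects
    }
}
```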

2. Texture tagging: General features

- A classifier with tags, based on moments, is built
- Texture search is based on the built classifier
- The search finds the segments corresponding to the different textures
- The minimal segment size to be included in the result is 100 pixels

2. Texture tagging: Moment-based segmentation

Moment calculation function: (given as a formula image on the slide)

[Figure: source image I and the moment images M00, M10, M01]
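The formula itself did not survive the transcript. A standard definition of window geometric moments, consistent with the M00, M10 and M01 images above and offered here as an assumption, is

$M_{pq}(x, y) = \sum_{(a,\,b) \in W} a^{p}\, b^{q}\, I(x + a,\, y + b)$

where $W$ is the set of pixel offsets of the moment window centred at $(x, y)$.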

2. Texture tagging: Moment-based segmentation

[Figure: moment-feature images F00, F10, F01 and a binary segmentation example]

Precision: 96.7%

Moment feature calculation function: (given as a formula image on the slide)
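This formula is also missing from the transcript. One common choice in moment-based texture segmentation, which would explain the sigma and feature-window-size parameters in the next slide, is a hyperbolic-tangent transform of the moment image averaged over an L x L feature window; it is offered here purely as an assumption:

$F_{pq}(x, y) = \frac{1}{L^{2}} \sum_{(a,\,b) \in L(x,\,y)} \left| \tanh\!\big( \sigma \, ( M_{pq}(a, b) - \overline{M}_{pq} ) \big) \right|$

where $\overline{M}_{pq}$ is the mean value of the moment image $M_{pq}$.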

2. Texture segmentation: Implementation features

- Each moment is an image
- Moment computation is based on the OpenCV library and its method cv::filter2D (a sketch follows)
- The parameter search is based on a purpose-built experiment
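A sketch of the moment computation with filter2D, as the slide describes: the convolution kernel holds the $a^{p} b^{q}$ weights of the moment window, so a single filtering pass produces $M_{pq}$ for every pixel. The OpenCvSharp names assume a recent binding; the window size and the coordinate convention are illustrative.

```csharp
using OpenCvSharp;

public static class MomentImages
{
    // Returns the moment image M_pq of a grayscale image for the given window size.
    public static Mat Compute(Mat gray, int p, int q, int windowSize)
    {
        var src = new Mat();
        gray.ConvertTo(src, MatType.CV_32F);

        // Kernel entry (r, c) holds a^p * b^q, with (a, b) relative to the window centre.
        int half = windowSize / 2;
        var kernel = new Mat(windowSize, windowSize, MatType.CV_32FC1);
        for (int r = 0; r < windowSize; r++)
            for (int c = 0; c < windowSize; c++)
            {
                double a = c - half, b = r - half;
                kernel.Set<float>(r, c, (float)(System.Math.Pow(a, p) * System.Math.Pow(b, q)));
            }

        var moment = new Mat();
        Cv2.Filter2D(src, moment, MatType.CV_32F, kernel);
        return moment;   // (p, q) = (0,0), (1,0), (0,1) give M00, M10, M01
    }
}
```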

2. Texture tagging: Parameter search example

Moment window size   Feature window size   Sigma   Precision, %
9                    49                    0.01    95.285
9                    39                    0.005   95.1782
9                    39                    0.02    95.1752
9                    44                    0.005   95.1355
9                    49                    0.015   95.1324
14                   14                    0.02    93.8416
14                   14                    0.005   93.7103
14                   19                    0.005   92.7826
14                   19                    0.015   92.7826
14                   34                    0.015   92.5293
14                   29                    0.015   92.395
14                   34                    0.02    92.3248
24                   24                    0.02    87.9639
39                   19                    0.01    87.9639

2. Texture tagging: Classifier features

- A set of textures of several classes is given
- Each class is assigned a set of tags
- Each image is subjected to a separate texture search
- Each texture found adds the corresponding set of tags to the source image (see the sketch below)
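A minimal sketch of the classifier structure described above; the type names and the way segment sizes are obtained are illustrative assumptions.

```csharp
using System;
using System.Collections.Generic;

// One texture class of the classifier, carrying the tags it contributes.
public class TextureClass
{
    public string Name;                                  // e.g. "house roof", "house wall"
    public HashSet<string> Tags = new HashSet<string>();
}

public static class TextureTagger
{
    public const int MinSegmentSize = 100;   // smallest segment (in pixels) kept in the result

    // segmentSizeFor returns the size of the largest segment of the given texture
    // found in the image by the moment-based search.
    public static HashSet<string> TagImage(IEnumerable<TextureClass> classifier,
                                           Func<TextureClass, int> segmentSizeFor)
    {
        var tags = new HashSet<string>();
        foreach (var texture in classifier)
            if (segmentSizeFor(texture) >= MinSegmentSize)   // texture is present in the image
                tags.UnionWith(texture.Tags);
        return tags;
    }
}
```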

2. Texture tagging: Example

[Figure: source image]

2. Texture tagging: Example

[Figure: classifier example and classifier textures example]

2. Texture tagging: Experiment

The purpose is to evaluate the search quality.

Experiment features:
- Sample of 100 images
- The classifier contains 2 textures: house roof and house wall

2. Texture tagging: Search quality evaluation method

For texture $i$ and image $j$:

$E_{ij}$ - flag of belonging to the assessed collection
$F_{ij}$ - flag of belonging to the search result
$R_{ij} = E_{ij} \cdot F_{ij}$ - flag of relevance

Single-texture estimations:

$\mathrm{Pr}_i = \frac{\sum_j R_{ij}}{\sum_j F_{ij}}, \qquad \mathrm{Re}_i = \frac{\sum_j R_{ij}}{\sum_j E_{ij}}$

General estimations:

$\mathrm{Pr} = \frac{\sum_{i,j} R_{ij}}{\sum_{i,j} F_{ij}}, \qquad \mathrm{Re} = \frac{\sum_{i,j} R_{ij}}{\sum_{i,j} E_{ij}}$
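Computed from the flag matrices, the general estimations look as follows; the matrix layout (textures in rows, images in columns) is an assumption.

```csharp
public static class SearchQuality
{
    // e[i, j]: texture i belongs to the assessed collection for image j.
    // f[i, j]: texture i was returned by the search for image j.
    public static (double precision, double recall) Overall(bool[,] e, bool[,] f)
    {
        int sumR = 0, sumE = 0, sumF = 0;
        for (int i = 0; i < e.GetLength(0); i++)
            for (int j = 0; j < e.GetLength(1); j++)
            {
                if (e[i, j]) sumE++;
                if (f[i, j]) sumF++;
                if (e[i, j] && f[i, j]) sumR++;   // relevance flag R = E AND F
            }
        return (sumF > 0 ? (double)sumR / sumF : 0.0,    // Pr = sum R / sum F
                sumE > 0 ? (double)sumR / sumE : 0.0);   // Re = sum R / sum E
    }
}
```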

2. Texture tagging: Experiment results

Thanks for your attention!
