
Video Montage - Video Abstraction

I-Ting Fang

1 Introduction

The amount of video has rapidly increased in recent years, and therefore how to extract useful video content and retrieve videos from a huge database is a crucial issue. Video sharing websites now use metadata to describe videos, but since a single video may contain many scenes, the information that tags reveal can be limited. Furthermore, there are chances that users are interested in only a small portion of a video, while some useful and relevant information is spread across videos. Considering those situations, it is important to describe the content directly from the videos and to be able to collect the shots users are interested in. Before collecting shots we need to first know the users' preferences, and thus an overview of video contents becomes necessary, which is the main topic of this project, Video Abstraction; see Figure 1(a). In fact, the above description is just one application of video abstraction. The technique can also be applied to facilitate browsing a video database back and forth between joined interesting videos and the original videos, and to save bandwidth as well.

The relevant research [2] in video abstraction is mostly done for a single video: how to shorten the video while retaining as much information as possible, and how to extract the highlights of a video. In this project, in contrast, we try to give a content overview of several videos, aiming to provide a novel way to browse, search and create personal videos.

Our system, Figure 1(b), first describes frames by their color and spatial structure features, and extracts shots from videos if they remain consistent for a certain amount of time. After representing each shot by three of its frames and rescaling the feature coordinates using a trained weighting vector, we separate the shots into clusters, where similar shots are supposed to fall in the same cluster. Since we have no idea what scenes would look like in feature space, and how they would differ from each other, we apply the clustering algorithm K-means to the data to find appropriate boundaries. Also, relying on the strong relation among frames from the same shot, we can automatically decide the best number of clusters. Finally, users will be able to browse the clustered shots and select interesting ones. The resulting video is joined by the user's preference, along with our smooth shot transition scheme.

(a) (b)

Figure 1. Figure (a) illustrates the purpose of video abstraction: by showing clustered shots, users can get a big picture of the video contents. Figure (b) shows the flow chart of our system. Dashed arrows are our future work.


2 Feature Selection and Distance Computation

To automatically describe video content, we need to employ low level features. Different types of features, such as text, visual, audio and motion, are reported to perform well on the video abstraction task, and they are chosen according to the application. In this project, we focus our effort on global visual information: the HSV color histogram and the spatial envelope.

1. HSV Color Histogram

The HSV color histogram and its modified versions have been widely used in this area, and can discriminate between different scenes. To compute the histogram, the HSV space is uniformly quantized into a total of 256 bins: 16 levels in H, four levels in S, and four levels in V. In order to make the histogram values work with the other feature and be more efficient, they are rescaled and quantized nonlinearly, giving higher significance to the small values, which occur with higher probability. Figure 2 shows that if we only scale the histogram values, the distance between two frames is still dominated mostly by the color difference. Thus, we further apply nonlinear quantization to eliminate the high variance of the color histogram.

(a) (b)

Figure 2. Combined feature vector: the first part is the spatial feature and the second part is the color feature. (a) Scaled color histogram. (b) Scaled and nonlinearly quantized color histogram.
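For concreteness, a minimal sketch of this feature with OpenCV follows. The 16x4x4 binning matches the description above; since the paper does not specify the exact rescaling and nonlinear quantization, the per-frame normalization and square-root compression here are assumptions standing in for those steps.

```python
import cv2
import numpy as np

def hsv_histogram(bgr_frame):
    """256-bin HSV histogram: 16 levels in H, 4 in S, 4 in V."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, [16, 4, 4],
                        [0, 180, 0, 256, 0, 256]).flatten()
    hist /= hist.sum()      # rescale so values are comparable across frames
    # Assumed nonlinearity: sqrt boosts the frequent small values and
    # compresses the few large bins that would otherwise dominate distances.
    return np.sqrt(hist)
```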

2. Spatial Envelope

The spatial envelope was originally developed for scene recognition. In [1], the authors use it to distinguish different scene categories for both natural and man-made scenes. It reveals information about the openness, expansion, naturalness and roughness of a space, which is very suitable for shot clustering. By applying Gabor-like filters of different scales and orientations to images, the spatial envelope represents the relationship between the outlines of the surfaces and their properties, including the inner textured pattern generated by windows, trees, cars, people, etc. Here we use three scales, 10 filters in total, and partition the filtered results into 8x8 blocks. The dimension of the spatial envelope is then 640; along with the color histogram, each frame is represented by an 896-dimensional vector.
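A rough sketch of this descriptor is below, assuming scikit-image's Gabor filters. The filter frequencies and the 4+4+2 orientation split across the three scales are assumptions chosen only to reach the 10 filters and 640 dimensions stated above, and downsampling is used as an approximation of 8x8 block averaging.

```python
import numpy as np
from skimage.filters import gabor
from skimage.transform import resize

def spatial_envelope(gray):
    """GIST-style descriptor for a 2-D float image:
    10 Gabor-like filters x 8x8 blocks = 640 dimensions."""
    feats = []
    # Three scales with 4, 4 and 2 orientations: 10 filters total (assumed split).
    for freq, n_orient in [(0.1, 4), (0.2, 4), (0.3, 2)]:
        for k in range(n_orient):
            real, imag = gabor(gray, frequency=freq, theta=np.pi * k / n_orient)
            energy = np.hypot(real, imag)
            # Downsample the filter energy to an 8x8 grid (approximate block means).
            feats.append(resize(energy, (8, 8), anti_aliasing=True).ravel())
    return np.concatenate(feats)   # 10 * 64 = 640 dimensions
```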

Distance Computation

Several metrics exist for computing distance. In order to find the best one for our compound feature vectors (spatial envelope and color histogram), we measure the correlation between the ground truth and the distances computed by different methods, mainly the Euclidean distance and the Cityblock distance. The ground truth here is generated by manually labeling pairs 0 or 1 to indicate similarity. For example, given distance sequences A = [2.1, 3, 2] and B = [0.5, 2.3, 1.7] with GroundTruth = [0, 1, 1], we would conclude that sequence B correlates better with the ground truth than A does. Based on this scenario, we found the correlations to be 0.3 and 0.5 respectively,


where the Cityblock metric has the stronger correlation with the ground truth. This is understandable, since distances between histograms computed this way make more sense than Euclidean distances, while the two metrics do not differ much for the spatial envelope.
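This selection step can be sketched in a few lines. The helper below is hypothetical, using SciPy's cityblock and euclidean distances and Pearson correlation via numpy, with pair labels 0 (similar) / 1 (dissimilar) as in the text.

```python
import numpy as np
from scipy.spatial.distance import cityblock, euclidean

def metric_correlation(pairs, labels, metric):
    """pairs: (x1, x2) feature-vector tuples; labels: 0 = similar, 1 = dissimilar."""
    dists = np.array([metric(x1, x2) for x1, x2 in pairs])
    return np.corrcoef(dists, np.asarray(labels, dtype=float))[0, 1]

# Pick the metric whose distances best track the similarity labels; the paper
# reports roughly 0.3 for Euclidean and 0.5 for Cityblock on its labeled pairs.
# best = max([euclidean, cityblock], key=lambda m: metric_correlation(pairs, labels, m))
```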

3 Shots Generation and Representation

Videos are often composed of several chunks, and viewers can distinguish one chunk from another by detecting scene changes. Based on this observation, we first split videos into shots before clustering. Intuitively, we record frames with significant scene changes as salient frames, and examine the duration between two salient frames to see how long the shot remains consistent. If the duration exceeds the minimum length we set, the shot is kept; otherwise it is ignored.

The frames within one shot might differ slightly, but we still expect all of them to fall in the same cluster. Since using all frames to represent one shot would cost too much computation, while using the average of one shot would, on the other hand, lose information, we extract three frames from each shot. Because of the strong relation among those three frames, they can later help in clustering.
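A minimal sketch of this segmentation follows. The change threshold, the minimum shot length, and the first/middle/last choice of representative frames are assumptions, since the paper does not state the exact values.

```python
import numpy as np

def split_into_shots(features, change_thresh, min_len):
    """features: sequence of per-frame numpy feature vectors;
    returns (start, end) index spans of the kept shots."""
    boundaries = [0]
    for t in range(1, len(features)):
        # A large Cityblock distance between consecutive frames marks a salient frame.
        if np.abs(features[t] - features[t - 1]).sum() > change_thresh:
            boundaries.append(t)
    boundaries.append(len(features))
    # Keep a span only if it stays consistent longer than the minimum shot length.
    return [(s, e) for s, e in zip(boundaries, boundaries[1:]) if e - s >= min_len]

def representative_frames(start, end):
    # Three frames per shot: first, middle, last (one reasonable choice;
    # the paper does not say which three frames are picked).
    return [start, (start + end) // 2, end - 1]
```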

4 Clustering

The original distance d is computed as the sum of absolute differences, d = sum(|x1 - x2|), and its correlation with the ground truth is around 0.5, which is not high enough; our experiments also show that it achieves only 65% average accuracy when clustering. Thus we use some labeled data to train a weighting vector, which assigns weights to features to improve accuracy.

Weighting Vector Learning

Instead of directly using the ground truth (0, 1) as our objective distance, where 0 indicates similar shots (and thus zero distance) and 1 indicates dissimilar ones, we modify the distance by setting d' = 0.1*d if x1 and x2 are similar, and d' = d otherwise. This preserves the original distance in some sense while increasing the correlation with the ground truth to 0.9. Using this modified distance, we try to learn w such that |x1 - x2|·w is close to d' for m frame pairs, with the restriction that each wi must be greater than or equal to zero. That is, we solve Aw = d' s.t. w ≥ 0, or equivalently minimize

    (1/2) w^T A^T A w - (A^T d')^T w,   s.t. w ≥ 0,

where each row of A is the absolute feature difference of one labeled frame pair:

    A = [ |x1 - x2|^T ]
        [ |x3 - x4|^T ]
        [     ...     ],    A ∈ R^(m×n), w ∈ R^(n×1), d' ∈ R^(m×1).

Applying the weighting vector is just like rescaling the coordinates of the feature space so that the data can be clustered better. Given the learned weighting vector, and since wi ≥ 0 implies |x1i*wi - x2i*wi| = |x1i - x2i|*wi, we can directly multiply it into the original feature space and then do the clustering.
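This is a standard non-negative least squares problem, so a minimal sketch can lean on SciPy's nnls solver. The helper below is an assumption-laden illustration, with pair labels 0 (similar) / 1 (dissimilar) as defined above.

```python
import numpy as np
from scipy.optimize import nnls

def learn_weighting_vector(pairs, labels):
    """pairs: m (x1, x2) feature-vector tuples; labels: 0 = similar, 1 = dissimilar."""
    A = np.array([np.abs(x1 - x2) for x1, x2 in pairs])   # A is m x n
    d = A.sum(axis=1)                                     # original L1 distances
    # Modified target distance: shrink similar pairs, keep dissimilar ones.
    d_prime = np.where(np.asarray(labels) == 0, 0.1 * d, d)
    w, _ = nnls(A, d_prime)                               # min ||Aw - d'||, w >= 0
    return w

# Rescaling the feature space with the learned weights turns the weighted L1
# distance into a plain L1 distance, so K-means-style clustering runs on X * w.
```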

Deciding the number of clusters K

In most applications, users do not know how many types of shots are in the target videos, so we develop a way to automatically choose the best K. At the beginning of clustering, we set K to a large number and separate the shots into K clusters. Then, using the observation from the shot representation that the three frames of one shot should fall in the same cluster, we record in a Clusters Relation Table R(i, j) the ratio between the number of shots whose frames fall in both cluster i and cluster j and the number of shots in cluster i. If R(i, j) is high, we know there is a strong relation between the two clusters, so we combine them into one and set K = K - 1. The same procedure continues until no high R(i, j) remains or the minimal K we set is reached.


After finishing the iterative clustering, for those shots whose three frames still fall in different clusters, we can keep the portion where most frames fall in the same cluster and ignore the rest, or simply ignore the entire shot.
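The sketch below illustrates this procedure, assuming scikit-learn's KMeans. The initial K, minimal K, and merging threshold are assumptions, since the paper does not give their values; clusters are combined by relabeling, as the text describes.

```python
import numpy as np
from sklearn.cluster import KMeans

def relation_table(labels, frame_shot_ids):
    """R(i, j): of the shots with a frame in cluster i, the fraction that
    also have a frame in cluster j."""
    ids = sorted(set(labels))
    index = {c: n for n, c in enumerate(ids)}
    shot_clusters = {}
    for frame, shot in enumerate(frame_shot_ids):
        shot_clusters.setdefault(shot, set()).add(labels[frame])
    R = np.zeros((len(ids), len(ids)))
    counts = np.zeros(len(ids))
    for clusters in shot_clusters.values():
        for i in clusters:
            counts[index[i]] += 1
            for j in clusters:
                if j != i:
                    R[index[i], index[j]] += 1
    return ids, R / np.maximum(counts[:, None], 1)

def cluster_with_auto_k(X, frame_shot_ids, k_init=20, k_min=4, thresh=0.5):
    """X: weighted frame features; frame_shot_ids[i] = shot of frame i."""
    labels = KMeans(n_clusters=k_init, n_init=10).fit_predict(X)
    while len(set(labels)) > k_min:
        ids, R = relation_table(labels, frame_shot_ids)
        i, j = np.unravel_index(np.argmax(R), R.shape)
        if R[i, j] < thresh:                  # no strongly related pair left
            break
        labels[labels == ids[j]] = ids[i]     # combine clusters: K = K - 1
    return labels
```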

5 Experiment and Results

Database

In order to have videos that are close to reality, we collected our videos from YouTube by keying in the category names and selecting randomly. There are five categories of video in our database: tennis, basketball, surfing, snowboarding and news. Each category contains ten or more videos and is around 30 minutes long in total. To train the weighting vector, we need to label frames. However, the videos we downloaded are polluted, meaning that although some frames appear in a certain type of video, they might not visually belong to that category. For example, there are people talking in snowboarding videos, tennis players walking toward their seats in tennis videos, and so on. Therefore, we manually join around 10 minutes of clean shots per category and use the first 500 frames of each for training and the remaining part, which contains around 1600 frames, for testing, as shown in Figure 3.

Figure 3. Black bars represent polluted shots. The bottom five manually extracted videos are used for training and testing, while eventually it is the videos with polluted shots that need to be clustered.

Results and Discussion

Within the 2500 training frames, 500 from each category, we randomly select two frames and record their feature difference, modified distance and ground truth, repeating this 1000 to 20000 times, and learn the weighting vector from those training pairs. The training and test error rates are shown in Figure 4, where the dashed lines represent the clustering error of the training and test sets without using the weighting vector. The error rate of clustering the 2500 training frames slowly decreases to nearly 0.04, while the test error does not change much as we increase the training data. This might be explained by the fact that there are obvious differences even within one clean video. For instance, people ski both on mountains and at artificial skiing resorts, and tennis courts look very different at the U.S. Open and at Wimbledon. If such variations are not contained in the training set, we can hardly put them into the same cluster even if we increase the training data.

Using the scenario described in Section 4 and applying the learned weighting vector, we classify the original polluted videos and obtain 6 final clustered videos. For each cluster, although misclassified shots exist, we can easily tell what the theme is, as shown in Table 1. For the news cluster, most of the 35% of shots that do not belong to the news category are essentially close-ups of people, see Figure 5(a). So even if only 65% are exact news shots, the whole clustered video is visually consistent. For the surfing cluster, most of the misclassified shots are snowboarding shots, which share common low level features with surfing, as we can see in Figure 5(b). And cluster 2 is a category we did not train with, but it automatically extracts all the shots displayed in widescreen.


Figure 4. Red for training and blue for test error rate, plotted against training data size (in thousands of sampled pairs). The dashed lines are the error rates of the training and test sets without using the weighting vector.

Table 1. Clustering results (durations in min:sec).

#  Theme                                        Correct / Total   Accuracy
1  Snowboarding                                  8:25 /  8:45       96%
2  Widescreen (black bars at top and bottom)       -                 -
3  Basketball                                   10:45 / 11:58       90%
4  News                                         14:22 / 22:18       65%
5  Tennis                                       15:48 / 18:20       86%
6  Surfing                                      19:18 / 28:15       68%

(a) (b)

Figure 5. (a) The left shot, a close-up of people from a snowboarding video, falls in the news cluster (shown on the right). (b) Snowboarding and surfing images share common low level features; thus, in the surfing cluster, up to 20% of the shots are snowboarding.

6 Conclusion and Future Work

So far, the two kinds of features work well on the current database, especially the spatial envelope, which can distinguish the major scene structure. But there are still problems that the current features cannot solve. To apply the method to more general videos, we will need more features such as motion, audio information and even object detectors. Video abstraction is only the first step of many applications; we will use the clustered results to further develop tools for video searching, browsing and creating. Figure 1(b) shows one of them.

7 Acknowledgements

Special thanks to the media group led by Ashutosh Saxena for providing precious opinions. This project was done in cooperation with the Video Montage project by Abhishek Gupta and Shakti Sinha.

References

[1] Oliva, A., & Torralba, A. (2001). Modeling the Shape of the Scene: A Holistic Representation of the Spatial Envelope. International Journal of Computer Vision, 42(3), 145-175.

[2] Truong, B. T., & Venkatesh, S. (2007). Video Abstraction: A Systematic Review and Classification. ACM Transactions on Multimedia Computing, Communications, and Applications, 3(1), Article 3.
