
Documenting Motion Sequences with Personalized Annotation System

Kanav Kahol, Priyamvada Tripathi, and Sethuraman Panchanathan
Arizona State University

Yu-song Syu

2006-08-22

IEEE Multimedia 2006

Outline

Gestures & complex motion sequences
Steps of annotation:
  Modeling gestures anatomically
  Motion capture
  Gesture segmentation
  Gesture recognition
  Movement annotation
Results
Future work

Gestures

A gesture is a sequence of poses
Modeled by state transitions: each state corresponds to a pose in the sequence, from the start pose to the end pose

When it becomes complex…

In dance, a large vocabulary of gestures is used

A scalable gesture segmentation / recognition methodology is needed

HMMs are needed here

HMM – Hidden Markov Model

We have:
  a set of possible symbols
  a set of possible states
  transition probabilities between states
  emission probabilities of symbols in every state
The symbol sequence is observed; the state sequence is hidden
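For illustration only (not the paper's implementation), here is a minimal HMM sketch in Python: toy pose states, toy observation symbols, and Viterbi decoding of the hidden state sequence. All state names, symbols, and probabilities below are invented for the example.

    import numpy as np

    # Toy HMM: hidden states are poses, observed symbols are quantized features.
    states = ["start_pose", "mid_pose", "end_pose"]
    symbols = ["low", "medium", "high"]

    pi = np.array([0.8, 0.15, 0.05])          # initial state probabilities
    A = np.array([[0.6, 0.35, 0.05],          # state transition probabilities
                  [0.05, 0.6, 0.35],
                  [0.05, 0.15, 0.8]])
    B = np.array([[0.7, 0.2, 0.1],            # symbol emission probabilities per state
                  [0.2, 0.6, 0.2],
                  [0.1, 0.2, 0.7]])

    def viterbi(obs):
        """Most likely hidden state sequence for a list of symbol indices."""
        T, N = len(obs), len(states)
        delta = np.zeros((T, N))              # best log-probability ending in each state
        psi = np.zeros((T, N), dtype=int)     # back-pointers
        delta[0] = np.log(pi) + np.log(B[:, obs[0]])
        for t in range(1, T):
            for j in range(N):
                scores = delta[t - 1] + np.log(A[:, j])
                psi[t, j] = np.argmax(scores)
                delta[t, j] = scores[psi[t, j]] + np.log(B[j, obs[t]])
        path = [int(np.argmax(delta[-1]))]
        for t in range(T - 1, 0, -1):
            path.insert(0, int(psi[t, path[0]]))
        return [states[i] for i in path]

    obs = [symbols.index(s) for s in ["low", "medium", "medium", "high"]]
    print(viterbi(obs))   # -> ['start_pose', 'mid_pose', 'mid_pose', 'end_pose']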

Modeling gestures anatomically

The body is modeled with 23 segments and 14 joints
A parent segment inherits the characteristics of its children
Two adjacent segments can be perceived as one unit when:
  they have similar motion vectors
  the angle of the joint between them doesn't change over a time period
This yields a dynamic body hierarchy (a minimal sketch of the coalescing test follows)
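A minimal sketch of the coalescing test just described, assuming each segment provides per-frame motion vectors and the connecting joint provides per-frame angles; the function name and thresholds are hypothetical, not values from the paper.

    import numpy as np

    def can_coalesce(motion_a, motion_b, joint_angles,
                     cos_threshold=0.95, angle_change_threshold=5.0):
        """Decide whether two adjacent segments behave as one unit over a time window.

        motion_a, motion_b : (T, 3) arrays of per-frame motion vectors of the two segments
        joint_angles       : (T,) array of the connecting joint's angle in degrees
        Thresholds are illustrative, not values from the paper.
        """
        # Similar motion vectors: mean cosine similarity close to 1.
        dot = np.sum(motion_a * motion_b, axis=1)
        norms = np.linalg.norm(motion_a, axis=1) * np.linalg.norm(motion_b, axis=1) + 1e-9
        similar_motion = np.mean(dot / norms) > cos_threshold

        # Stable joint: the angle barely changes during the window.
        stable_joint = (joint_angles.max() - joint_angles.min()) < angle_change_threshold

        return similar_motion and stable_joint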

Modeling gestures anatomically

Dynamic body hierarchy

(Figure: segments behaving as one unit have the same color)

Motion capture

7 choreographers
Each creates 3 short dance sequences
Every sequence is repeated 3 times
Choreographers write down:
  an original score for every dance sequence
  a detailed score for every gesture
  (a score is a verbal description; a sketch of one way to organize these recordings follows)
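One possible way to organize the recorded data (a sketch with assumed field names, not the authors' schema): 7 choreographers × 3 sequences × 3 repetitions, each with its verbal scores.

    from dataclasses import dataclass, field

    @dataclass
    class GestureScore:
        gesture_id: int
        description: str              # detailed verbal score for one gesture

    @dataclass
    class DanceSequence:
        choreographer: str
        sequence_id: int
        repetition: int               # 1..3; every sequence is repeated 3 times
        overall_score: str            # original verbal score for the whole sequence
        gesture_scores: list[GestureScore] = field(default_factory=list)

    # 7 choreographers x 3 sequences x 3 repetitions = 63 recorded sequences in total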


Gesture segmentation

For every body segment:
  derive the spatial orientation, velocity, and acceleration
  using the dynamic body hierarchy
Compute the activity:
  SegmentForce = SegmentMass × SegmentAcceleration
  SegmentMomentum = SegmentMass × SegmentVelocity
  SegmentKE = SegmentMass × SegmentVelocity²
Derive parent activities by vector addition of their children's activities
Gesture boundary determination (next slide; a sketch of the activity computation follows)
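A sketch of the activity computation above, assuming per-segment mass, velocity, and acceleration series are already available; the kinetic-energy term is written exactly as on the slide (mass × velocity²).

    import numpy as np

    def segment_activity(mass, velocity, acceleration):
        """Per-frame activity measures for one body segment (names follow the slide).

        mass         : scalar segment mass
        velocity     : (T, 3) array of velocity vectors
        acceleration : (T, 3) array of acceleration vectors
        """
        force = mass * acceleration                              # SegmentForce (vector per frame)
        momentum = mass * velocity                               # SegmentMomentum (vector per frame)
        kinetic = mass * np.linalg.norm(velocity, axis=1) ** 2   # SegmentKE (scalar per frame)
        return force, momentum, kinetic

    def parent_activity(child_activities):
        """Derive a parent segment's activity by vector addition of its children's activities."""
        return np.sum(np.stack(child_activities, axis=0), axis=0)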

Gesture segmentation: gesture boundary determination

Encode local minima as binary triples:
  when force reaches a local minimum, mark "1"; likewise for momentum and KE
  e.g. (1,0,0), (0,1,1), ...
Not every local minimum is a gesture boundary
There are 22 real-world physical configurations in which adjacent body segments could coalesce
The 23 triples and the 22-element configuration vector are used to train a classifier that decides whether a local minimum is a gesture boundary (a sketch of the triple encoding follows)
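A sketch of the binary-triple encoding described above; the strict-inequality minimum test is an assumption, and the downstream classifier is left abstract because the slide does not name one.

    import numpy as np

    def local_minima(series):
        """Boolean mask marking frames where the series has a local minimum."""
        s = np.asarray(series)
        mask = np.zeros(len(s), dtype=bool)
        mask[1:-1] = (s[1:-1] < s[:-2]) & (s[1:-1] < s[2:])
        return mask

    def boundary_triples(force, momentum, kinetic):
        """Binary triple per frame: (force min, momentum min, KE min), e.g. (1, 0, 0)."""
        return np.stack([local_minima(force),
                         local_minima(momentum),
                         local_minima(kinetic)], axis=1).astype(int)

    # For each candidate frame, the 23 per-segment triples plus the 22-element
    # coalescence-configuration vector form the feature vector fed to a binary
    # classifier that decides whether the frame is a gesture boundary.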

Gesture recognition

Find minima of the total force over all segments
Find stabilization of joints:
  the change in joint angle doesn't exceed a threshold during a time period
A segmentHMM with 23 states and a jointHMM with 14 states
A coupled HMM (cHMM) couples the above-mentioned HMMs (a sketch of the recognition cues follows)
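A minimal sketch of the two recognition cues named above (total-force minima and joint stabilization), assuming per-frame force magnitudes and joint angles are already computed; the window length and angle threshold are illustrative, not the paper's values.

    import numpy as np

    def total_force_minima(segment_forces):
        """Frames where the total force, summed over all segments, has a local minimum.

        segment_forces : (T, 23) array of per-frame force magnitudes, one column per segment
        """
        total = segment_forces.sum(axis=1)
        mask = np.zeros(len(total), dtype=bool)
        mask[1:-1] = (total[1:-1] < total[:-2]) & (total[1:-1] < total[2:])
        return mask

    def joint_stabilized(joint_angles, window=10, threshold_deg=3.0):
        """True at frame t if the joint angle varies less than the threshold over the
        preceding window of frames (threshold and window are illustrative)."""
        angles = np.asarray(joint_angles)
        stable = np.zeros(len(angles), dtype=bool)
        for t in range(window, len(angles)):
            w = angles[t - window:t + 1]
            stable[t] = (w.max() - w.min()) < threshold_deg
        return stable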

cHMM

Θ_{c'c}: coupling weight from the jointHMM to the segmentHMM
d(t, i): distance between segment t and joint i
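The coupled-HMM equations themselves did not survive in this transcript, so the following is only a generic sketch of the coupling idea: the segment chain's transition distribution is blended with a coupling term driven by the joint chain's previous state (the Θ weights above), down-weighted by the segment-joint distance d(t, i). The blending rule and the 1/(1+d) scaling are assumptions, not the paper's formulation.

    import numpy as np

    N_SEG, N_JOINT = 23, 14       # segmentHMM and jointHMM state counts (from the slide)

    # Placeholder parameters: uniform transition rows and coupling weights.
    A_seg = np.full((N_SEG, N_SEG), 1.0 / N_SEG)     # segmentHMM transition matrix
    theta = np.full((N_JOINT, N_SEG), 1.0 / N_SEG)   # coupling weights: jointHMM -> segmentHMM

    def coupled_seg_transition(prev_seg, prev_joint, distance):
        """Distribution over the next segment state given the previous segment state,
        the previous joint state, and a segment-joint distance term.
        The convex blend and the 1/(1+d) weighting are illustrative choices."""
        w = 1.0 / (1.0 + distance)                   # closer joints couple more strongly
        p = (1.0 - w) * A_seg[prev_seg] + w * theta[prev_joint]
        return p / p.sum()                           # renormalize to a distribution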

Movement annotation

Movement annotation can be useful when teaching dance
The Anvil annotation software (http://www.dfki.de/~kipp/anvil) is used during training
Choreographers can use it to add/modify annotations and set gesture boundaries (a hedged export sketch follows)
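As a hedged illustration of feeding automatically detected boundaries into an annotation workflow, the snippet below writes gesture annotations to a simple XML file with Python's standard library. The element and attribute names are hypothetical and do not follow Anvil's actual specification files.

    import xml.etree.ElementTree as ET

    def export_gestures_xml(gestures, path):
        """Write gesture annotations to a simple XML file.

        gestures : list of (start_time_s, end_time_s, label) tuples
        The element names below are hypothetical, not Anvil's real schema.
        """
        root = ET.Element("annotation")
        track = ET.SubElement(root, "track", name="gestures")
        for start, end, label in gestures:
            el = ET.SubElement(track, "gesture", start=f"{start:.3f}", end=f"{end:.3f}")
            el.text = label
        ET.ElementTree(root).write(path, encoding="utf-8", xml_declaration=True)

    # Example: two detected gestures with choreographer-editable labels
    export_gestures_xml([(0.0, 2.4, "arm raise"), (2.4, 5.1, "turn")], "gestures.xml")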

Anvil

Motion annotation results

The proposed system is simple to use: an XML language and the Anvil interface
Manually annotating a 4-5 minute dance takes about 60 minutes; this system takes only 1 minute
A 6-9 percent improvement in accuracy

Future work

Extend this system to annotate generic human movements, e.g. walking, running, and washing utensils
Develop a common motion language with this kind of software
