Psych590 Presentation Interest Point Sujoy


Interest points: Cost-effective methods for capturing overt visual attention

Sujoy Kumar Chowdhury

Research areas covered

• Selective visual attention
• Saliency map
• Interest point
• Eye tracking: interpretation, comparison
• Low-cost usability techniques
• Cost: time, $, deployment overhead

Selective visual attention

• Stimulus-driven: automatic processing, subconscious, bottom-up, fast, exogenous

• Goal-driven: conscious, top-down, slow, endogenous

• Overt attention: co-located with fixation, observable

• Covert attention: not co-located with fixation
• Saccade
• Fixation

IM, SM, FM, (IP)

• IM = Interest point map (performance test)
• SM = Saliency map (predictive)
• FM = Fixation map (performance test)
• (IP = Interest point plot) (performance test)
• Eye-tracking deliverables: heat-map, gaze-plot
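An interest point map is essentially a smoothed density of participant-marked points. A minimal sketch of that idea in Python follows, assuming interest points were collected as (x, y) pixel coordinates on the stimulus; the smoothing width (sigma) is an illustrative choice, not a value from the sources.

import numpy as np
from scipy.ndimage import gaussian_filter

def interest_point_map(points, width, height, sigma=25):
    """Accumulate marked points into a 2-D histogram, then blur it into a smooth map."""
    counts = np.zeros((height, width))
    for x, y in points:
        if 0 <= x < width and 0 <= y < height:
            counts[int(y), int(x)] += 1              # one vote per marked point
    smoothed = gaussian_filter(counts, sigma=sigma)  # smoothing turns a plot into a map
    return smoothed / smoothed.max() if smoothed.max() > 0 else smoothed

# Hypothetical usage: five points from one participant on an 800 x 600 stimulus
im = interest_point_map([(120, 80), (400, 300), (410, 310), (600, 450), (630, 470)], 800, 600)

Without the smoothing step the same data is just the interest point plot (IP); with it, maps from different sources (interest points, fixations, saliency models) become directly comparable arrays.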

Qualitative, Quantitative, Formative, Summative

• Number of users: (30-39, 5-6)
• Monetary cost: cheap
• When done
• Time requirement: quick

How many users

Kara Pernice & Jakob Nielsen (2009)

Think aloud: Concurrent VS Retrospective

• Concurrent: may slow the user down, data not representative, not usually done with eye-tracking (Johansen & Hansen, 2006)

• Retrospective: user omits/forgets data

Key references

• Masciocchi, C. M., Mihalas, S., Parkhurst, D., & Niebur, E. (2009). Everyone knows what is interesting: Salient locations which should be fixated. Journal of Vision, 9(11), 1-22.

• Pernice, K., & Nielsen, J. (2009). Eyetracking Methodology: 65 Guidelines for How to Conduct and Evaluate Usability Studies Using Eyetracking.

• Johansen, S. A., & Hansen, J. P. (2006). Do we need eye trackers to tell where people look? In CHI '06 Extended Abstracts on Human Factors in Computing Systems (pp. 923-928). Montréal, Québec, Canada.

(Figure slides: Masciocchi, C. M., Mihalas, S., Parkhurst, D., & Niebur, E., 2009)

Variability in heat-map due to number of users

(Figure slides: Kara Pernice & Jakob Nielsen, 2009)

Why we need a better deliverable than a heat-map

(Figure slides: Kara Pernice & Jakob Nielsen, 2009)

Why we need a better deliverable than a heat-map: Recommendations

• 30 + 9 = 39 users required for a heat-map with representative data (85%)

• 24% extra users (9 users) required to account for eye-tracking data loss (see the worked check below)

• Better to watch live eye-tracking and listen to the user thinking aloud

• Good for slow-motion gaze-replay later

• Heat-maps can still be used, but only as illustration, not as primary data

• Test with a small number of users (6), test more frequently

Kara Pernice & Jakob Nielsen (2009)
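A small worked check of the recruitment arithmetic above, assuming (for illustration) that roughly a quarter of eye-tracking sessions are lost to calibration or tracking problems; the exact loss rate is an assumption, not a figure taken from the report.

import math

needed = 30    # usable recordings wanted for a representative heat-map
loss = 0.23    # assumed fraction of sessions lost (illustrative, not from the report)

recruit = math.ceil(needed / (1 - loss))            # -> 39 participants to recruit
extra = recruit - needed                            # -> 9 extra participants
print(recruit, extra, round(extra / recruit * 100)) # -> 39 9 23

The 9 extra participants are about a quarter of the recruited sample, in line with the ~24% figure on the slide once rounding is taken into account.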

Recommendations

“The way to happiness (or at least a high ROI) is to conserve your budget and invest most of it in discount usability methods. Test a small number of users in each study and rely on qualitative analysis and your own insight instead of chasing overly expensive quantitative data. The money you save can be spent on running many more studies. The two most fruitful things to test are your competitors’ sites and more versions of your own site. Use iterative design to try out a bigger range of design possibilities, polishing the usability as you go, instead of blowing your entire budget on one big study.”

Kara Pernice & Jakob Nielsen (2009)

Where did you look: Do we need eye-trackers?

Johansen, S. A., & Hansen, J. P. (2006).

Where did you look: Do we need eye-trackers?

• 10 users, 17 web designers, 8 web pages
• Self-reported gaze pattern (by user), predicted gaze pattern (by web designer)
• Users could reliably remember 70% of the web elements they had actually seen
• Web designers could only predict 46% of the elements typically seen (squint test)
• No difference between simple and complex web pages in number of remembered items
• Users were not good at remembering Area of Interest (AOI) sequence
• Memory difference between logo and other web elements

Johansen, S. A., & Hansen, J. P. (2006).

Comments

• Users repeated the eye movements; web designers used paper

• User might have thought: “Better look at things I could recall”

• N-gram analysis, Levenshtein distance 16 (SD = 12.8) (see the sketch after this slide)

Johansen, S. A., & Hansen, J. P. (2006).
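The Levenshtein distance mentioned above is an edit distance between two sequences, usable here for comparing AOI visit orders (e.g., a self-reported order against an eye-tracked one). A minimal sketch follows; the AOI labels are invented for illustration.

def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two sequences."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            cost = 0 if x == y else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

# Hypothetical AOI sequences (labels invented for illustration)
reported = ["logo", "nav", "hero", "price"]   # order the user says they looked
recorded = ["logo", "hero", "nav", "price"]   # order an eye tracker recorded
print(levenshtein(reported, recorded))        # -> 2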

What to ask

• What are the most interesting points?

• Where would you look to do this?
• Where did you look?

Johansen, S. A., & Hansen, J. P. (2006).

Masciocchi, C. M., Mihalas, S., Parkhurst, D., & Niebur, E. (2009)

Scan-path: Saliency model and goal-dependence

• Foulsham, T., & Underwood, G. (2008). What can saliency models predict about eye movements?

• Noton, D., & Stark, L. (1971). Scan-path theory: top-down recapitulation

Interesting objects are visually salient

• Elazary, L., & Itti, L. (2008), N = 78

StomperScrutinizer: A Squint test without squinting

ChalkMark: First Impression testing

Crazy Egg

Five second test

Interest point test: Unblurred

(Slide image: unblurred stimulus with five numbered interest points, each marked with an X)

Interest point test: Blurred/ Squinted

(Slide image: blurred stimulus with five numbered interest points, each marked with an X)

Interest point plot: Unblurred/ Unsquinted

Interest point plot: Blurred/ Squinted

Interest point heat-map: Unblurred/ Unsquinted

(This placeholder image is computationally generated or predictive, NOT based on a performance test)

Interest point heat-map: Blurred/ Squinted

Variant: Task-Interest point test: Blurred (Mark five probable points where the price could be located)

(Slide images: blurred and unblurred task variants, each with five numbered probable points marked with an X)

Variant: Task-Interest point test: Unblurred (Mark five points where you would look to get the price)

Interest point based usability tests: Recommendations based on hypotheses

• Sequence: blurred-squint tests before unblurred-unsquinted tests, retrospective
• Tasks: generic exploratory interest point test before task-centric test
• Questions: “Look freely”, “Where did you look”, “Where would you look to do this”, “What are the most interesting points”
• Key report: based on interest point plot (qualitative, formative, 5 users) rather than heat-map (quantitative, summative, 30 users)
• Implementation: static web app, dynamic URL, provision for Area of Interest (AOI) (sketched below)
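One way the “provision for Area of Interest (AOI)” bullet could be realised in a simple web-app implementation: a hit-test that maps each recorded interest point to a named AOI rectangle. A minimal sketch, with AOI names and coordinates invented for illustration:

# Hypothetical AOI definitions: name -> (left, top, right, bottom) in stimulus pixels
AOIS = {
    "logo":  (0,   0,   200, 80),
    "nav":   (0,   80,  800, 130),
    "price": (550, 300, 750, 380),
}

def aoi_for_point(x, y, aois=AOIS):
    """Return the name of the first AOI containing (x, y), or None if outside all AOIs."""
    for name, (left, top, right, bottom) in aois.items():
        if left <= x <= right and top <= y <= bottom:
            return name
    return None

# Hypothetical usage: classify recorded interest points by AOI
clicks = [(120, 40), (600, 350), (400, 500)]
print([aoi_for_point(x, y) for x, y in clicks])   # -> ['logo', 'price', None]

Tagging points with AOIs is what makes AOI-sequence comparisons (such as the Levenshtein analysis earlier) possible.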

Statistical comparison

• Between subjects
• ET vs. interest point plot (IP)
• ET vs. interest point map (IM)
• IP vs. IM (number of users)
• Squinted vs. unsquinted
• Exploratory vs. task-centric
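One concrete form these comparisons could take (an assumption, not the presenter's stated analysis plan): correlate two smoothed maps built over the same stimulus, e.g., an interest point map against an eye-tracking fixation map. A minimal sketch, assuming both maps are 2-D arrays of the same shape, such as those returned by the interest_point_map sketch earlier:

import numpy as np

def map_correlation(map_a, map_b):
    """Pearson correlation between two flattened attention maps of identical shape."""
    a = np.asarray(map_a, dtype=float).ravel()
    b = np.asarray(map_b, dtype=float).ravel()
    return float(np.corrcoef(a, b)[0, 1])

# Hypothetical usage: r = map_correlation(interest_map, fixation_map)
# A higher r would suggest the cheap interest point test approximates eye tracking well.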
