Transcript
Page 1: ICIS Rating Scales for Collective Intelligence (icis-idea-rating-v1.0-final)

Rating Scales for Collective Intelligence in Innovation Communities

> Christoph Riedl
Ivo Blohm
Jan Marco Leimeister
Helmut Krcmar

Why Quick and Easy Decision Making Does Not Get It Right

Page 2:

1. Problem Setting

Page 3:
Page 4:
Page 5:
Page 6:

So, there are large data pools… How do you select the best ideas?

Page 7:

2. Theory Background

Page 8:
Page 9:

Research Questions

Which rating mechanisms perform best for selecting innovation ideas?

Page 10:

Dimensions of Idea Quality

Idea quality comprises four dimensions:

• Novelty: an idea's originality and innovativeness
• Feasibility: ease of transforming an idea into a new product
• Relevance: an idea's value for the organization
• Elaboration: an idea's concretization and maturity

Source: [1, 2, 3]
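For illustration, the four dimensions could be aggregated into a single quality score. A minimal sketch, assuming equal weights; the weighting is an assumption for illustration, not a scoring model taken from [1, 2, 3]:

```python
# Hedged sketch: combine the four idea-quality dimensions into one score.
# Equal weighting is an illustrative assumption, not the cited papers' model.

def idea_quality(novelty, feasibility, relevance, elaboration):
    """Average the four dimension ratings (all assumed on the same scale)."""
    dims = (novelty, feasibility, relevance, elaboration)
    return sum(dims) / len(dims)

score = idea_quality(novelty=5, feasibility=3, relevance=4, elaboration=4)
print(score)  # 4.0
```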

Page 11:

3. Research Model

Page 12:

Research Model

H1: The granularity of the rating scale positively influences its rating accuracy.

H2: The granularity of the rating scale positively influences the users' satisfaction with their ratings.

[Model diagram: Rating Scale → Judgment Accuracy (H1+) and Rating Scale → Rating Satisfaction (H2+).]

Page 13:

Research Model

H3a: User expertise moderates the relationship between rating scale granularity and rating accuracy such that the positive relationship will be weakened for high levels of user expertise and strengthened for low levels of user expertise.

[Model diagram: Rating Scale → Judgment Accuracy (H1+) and Rating Scale → Rating Satisfaction (H2+), with User Expertise moderating the accuracy path (H3a).]

Page 14:

Research Model

H3b: User expertise moderates the relationship between rating scale granularity and rating satisfaction such that the positive relationship will be strengthened for high levels of user expertise and weakened for low levels of user expertise.

[Model diagram: Rating Scale → Judgment Accuracy (H1+) and Rating Scale → Rating Satisfaction (H2+), with User Expertise moderating both paths (H3a, H3b).]

Page 15:

Research Methodology

• Pool of 24 ideas from real-world idea competition

• Multi-method study

• Web-based experiment

• Survey measuring rating satisfaction of participants

• Independent expert (N = 7) rating of idea quality (based on the Consensual Assessment Technique, [1] and [2])
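The expert baseline can be sketched as follows; following the Consensual Assessment Technique, each idea's quality is taken as the mean of the independent expert ratings. The idea IDs and the ratings themselves are made-up illustrative values:

```python
# Hedged sketch of the expert baseline: each idea's quality is the mean of
# N = 7 independent expert ratings. All values below are made up.
from statistics import mean

expert_ratings = {                 # idea_id -> ratings from the 7 experts
    "idea-01": [4, 5, 4, 3, 5, 4, 4],
    "idea-02": [2, 1, 2, 3, 2, 2, 1],
}

baseline = {idea: mean(r) for idea, r in expert_ratings.items()}
print(baseline["idea-01"])  # 4.142857142857143
```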

Page 16:

4. Experiment

Page 17:

Participant Demographics

N = 313

Page 18:

Participant Demographics

Page 19:

Screenshot of system

Page 20:

Research Design

• Promote/Demote Rating
• Complex Rating
• 5-Star Rating

Page 21:


5. Results

Page 22:

Correct Identification of Good and Bad Ideas

[Bar chart: mean of correctly identified ideas by rating scale: 5.1, 3.9, and 2.5 (y-axis 0 to 6).]

Page 23:

Error Identifying Top Ideas as Good and Bottom Ideas as Bad

[Bar chart: mean of wrongly identified ideas by rating scale: 4.9, 3.6, and 1.0 (y-axis 0 to 6).]

Page 24:

Rating Accuracy (Fit-Score)

[Bar chart: mean adjusted fit score by rating scale: 0.2, 0.3, and 1.5 (y-axis 0 to 5).]
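The slide means are consistent with reading the adjusted fit score as correctly identified minus wrongly identified ideas. The sketch below illustrates that reading; both the formula and the neutral condition labels are inferences from the chart values, not something stated on the slides:

```python
# Sketch of the fit score as (correctly identified) - (wrongly identified)
# ideas. This reading is an inference consistent with the slide means,
# not a formula given on the slides; condition labels are placeholders.

slide_means = {
    # condition: (mean correct, mean wrong) from the two preceding charts
    "scale_a": (5.1, 4.9),
    "scale_b": (3.9, 3.6),
    "scale_c": (2.5, 1.0),
}

fit = {cond: round(c - w, 1) for cond, (c, w) in slide_means.items()}
print(fit)  # {'scale_a': 0.2, 'scale_b': 0.3, 'scale_c': 1.5}
```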

Page 25:

Participants’ Rating Satisfaction

[Bar chart: mean rating satisfaction by rating scale: 3.2, 3.9, and 3.7 (y-axis 0 to 5).]

Page 26:

ANOVA Results

Panel A. Effect of Rating Scale on Rating Accuracy

Source          df    Sum of Squares  Mean Squares  F        Hypothesis  Supported
Between Groups  2     121.23          60.61         9.05***  H1          Yes
Within Groups   310   2075.77         6.70
Total           312   2196.99

Panel B. Effect of Rating Scale on Rating Satisfaction

Source          df    Sum of Squares  Mean Squares  F        Hypothesis  Supported
Between Groups  2     7.44            3.72          4.52***  H2          Yes
Within Groups   310   253.36          0.82
Total           312   270.80

N = 313, *** significant with p < 0.001, ** significant with p < 0.01, * significant with p < 0.05
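A minimal sketch of the one-way ANOVA used in Panels A and B, computed from scratch on made-up accuracy scores for three rating-scale groups (the data are illustrative, not the study's):

```python
# Hedged sketch of a one-way ANOVA, as used in Panels A and B: three
# groups (one per rating-scale condition) with made-up accuracy scores.

def one_way_anova_f(groups):
    """Return (df_between, df_within, F) for a one-way ANOVA."""
    all_vals = [x for g in groups for x in g]
    n, k = len(all_vals), len(groups)
    grand = sum(all_vals) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_b, df_w = k - 1, n - k
    f = (ss_between / df_b) / (ss_within / df_w)
    return df_b, df_w, f

groups = [[4, 5, 6, 5], [3, 4, 3, 4], [7, 8, 7, 8]]  # illustrative data
df_b, df_w, f = one_way_anova_f(groups)
print(df_b, df_w, round(f, 2))  # 2 9 36.75
```

In the study itself, df between = 2 and df within = 310 follow the same pattern: three conditions (k = 3) and N = 313 participants.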

Page 27:

ANOVA Results

Post-hoc comparisons: the complex rating scale leads to significantly higher rating accuracy than the promote/demote rating and the 5-star rating (p < 0.001).

Page 28:

Regression Results

Panel A. Moderating Effect of User Expertise on Rating Scale & Rating Accuracy

Step  Independent Variables                        R²      ΔR²      Hypothesis  Supported
1     Expertise                                    0.02    -
2     + Dummy 1, Dummy 2                           0.11**  0.09***
3     + Expertise × Dummy 1, Expertise × Dummy 2   0.12**  0.01     H3a         No

Panel B. Moderating Effect of User Expertise on Rating Scale & Rating Satisfaction

Step  Independent Variables                        R²      ΔR²      Hypothesis  Supported
1     Expertise                                    0.03    -
2     + Dummy 1, Dummy 2                           0.08**  0.05**
3     + Expertise × Dummy 1, Expertise × Dummy 2   0.10*   0.02     H3b         No

N = 313, *** significant with p < 0.001, ** significant with p < 0.01, * significant with p < 0.05

Page 29:

Regression Results

There is no direct and no moderating effect of user expertise. The rating scale with the highest rating accuracy and rating satisfaction should therefore be used for all user groups.
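The three-step hierarchical regression behind H3a/H3b can be sketched as follows: step 1 enters expertise, step 2 adds the condition dummies, and step 3 adds the expertise-by-dummy interactions, comparing R² across steps. The simulated data (generated with no moderation effect, matching the null result) and all coefficients are illustrative assumptions:

```python
# Hedged sketch of the hierarchical (moderated) regression in Panels A/B.
# The data are simulated: condition dummies affect the outcome, but there
# is no expertise moderation, mirroring the reported null result for H3.
import numpy as np

rng = np.random.default_rng(0)
n = 300
expertise = rng.normal(size=n)
cond = rng.integers(0, 3, size=n)          # three rating-scale conditions
d1 = (cond == 1).astype(float)
d2 = (cond == 2).astype(float)
y = 0.5 * d1 + 1.0 * d2 + rng.normal(size=n)   # outcome, no interaction term

def r_squared(predictors, y):
    """OLS R-squared for an intercept plus the given predictor columns."""
    X = np.column_stack([np.ones(len(y))] + predictors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - (resid ** 2).sum() / ((y - y.mean()) ** 2).sum()

steps = [
    [expertise],                                          # step 1
    [expertise, d1, d2],                                  # step 2
    [expertise, d1, d2, expertise * d1, expertise * d2],  # step 3
]
r2 = [r_squared(p, y) for p in steps]
delta = [r2[0]] + [b - a for a, b in zip(r2, r2[1:])]
for i, (r, d) in enumerate(zip(r2, delta), 1):
    print(f"Step {i}: R2 = {r:.3f}, delta R2 = {d:.3f}")
```

With data like these, the ΔR² at step 3 stays near zero, which is the pattern that leads to rejecting H3a and H3b.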

Page 30:

6. Contribution

Page 31:

Limitations

• Expert ratings as baseline
• Forced choice

Page 32:
Page 33:

Theory

• Theory Building – Collective Intelligence
• Theory Extension – Creativity Research

Page 34:

Practice

• Design recommendation

Page 35:

Rating Scales for Collective Intelligence in Innovation Communities

> Christoph Riedl

Ivo Blohm

Jan Marco Leimeister

Helmut Krcmar

[email protected]

twitter: @criedl

Page 36:

Image credits:

Title background: Author collection

Starbucks Idea: http://mystarbucksidea.force.com/

The Thinker: http://www.flickr.com/photos/tmartin/32010732/

Information Overload: http://www.flickr.com/photos/verbeeldingskr8/3638834128/#/

Scientists: http://www.flickr.com/photos/marsdd/2986989396/

Reading girl: http://www.flickr.com/photos/12392252@N03/2482835894/

User: http://blog.mozilla.com/metrics/files/2009/07/voice_of_user2.jpg

Male Icon: http://icons.mysitemyway.com/wp-content/gallery/whitewashed-star-patterned-icons-symbols-shapes/131821-whitewashed-star-patterned-icon-symbols-shapes-male-symbol1-sc48.png

Harvard University: http://gallery.hd.org/_exhibits/places-and-sights/_more1999/_more05/US-MA-Cambridge-Harvard-University-red-brick-building-sunshine-grass-lawn-students-1-AJHD.jpg

Notebook scribbles: http://www.flickr.com/photos/cherryboppy/4812211497/

La Cuidad: http://www.flickr.com/photos/37645476@N05/3488148351/

Theory and Practice: http://www.flickr.com/photos/arenamontanus/2766579982

Papers:

[1] Amabile, T. M. (1996). Creativity in Context: Update to the Social Psychology of Creativity. Westview Press, Oxford, UK.

[2] Blohm, I., Bretschneider, U., Leimeister, J. M., and Krcmar, H. (2010). Does collaboration among participants lead to better ideas in IT-based idea competitions? An empirical investigation. In Proceedings of the 43rd Hawaii International Conference on System Sciences, Kauai, Hawaii.

[3] Dean, D. L., Hender, J. M., Rodgers, T. L., and Santanen, E. L. (2006). Identifying quality, novel, and creative ideas: Constructs and scales for idea evaluation. Journal of the Association for Information Systems, 7 (10), 646-698.