Real-time Human Interaction with Supervised Learning Algorithms for Music Composition and Performance
Rebecca Fiebrink
Princeton University / University of Washington
function [x flag hist dt] = pagerank(A,optionsu)
[m n] = size(A);
if (m ~= n)
    error('pagerank:invalidParameter', 'the matrix A must be square');
end;
options = struct('tol', 1e-7, 'maxiter', 500, 'v', ones(n,1)./n, ...
    'c', 0.85, 'verbose', 0, 'alg', 'arnoldi', ...
    'linsys_solver', @(f,v,tol,its) bicgstab(f,v,tol,its), ...
    'arnoldi_k', 8, 'approx_bp', 1e-3, 'approx_boundary', inf, ...
    'approx_subiter', 5);
if (nargin > 1)
    options = merge_structs(optionsu, options);
end;
if (size(options.v) ~= size(A,1))
    error('pagerank:invalidParameter', ...
        'the vector v must have the same size as A');
end;
if (~issparse(A))
    A = sparse(A);
end;
% normalize the matrix
P = normout(A);
switch (options.alg)
    case 'dense'
        [x flag hist dt] = pagerank_dense(P, options);
    case 'linsys'
        [x flag hist dt] = pagerank_linsys(P, options);
    case 'gs'
        [x flag hist dt] = pagerank_gs(P, options);
    case 'power'
        [x flag hist dt] = pagerank_power(P, options);
    case 'arnoldi'
        [x flag hist dt] = pagerank_arnoldi(P, options);
    case 'approx'
        [x flag hist dt] = pagerank_approx(P, options);
    case 'eval'
        [x flag hist dt] = pagerank_eval(P, options);
    otherwise
        error('pagerank:invalidParameter', ...
            'invalid computation mode specified.');
end;
useful algorithms
usable interfaces and appropriate interactions
Machine learning algorithms?
Outline
• Overviews of interactive computer music and machine learning
• The Wekinator software
• Live demo
• User studies
• Findings and Discussion
• Conclusions
interactive computer music
Interactive computer music
[Diagram] A human (with a microphone, sensors, a control interface, etc.) produces a sensed action; the computer interprets it and generates a response (audio synthesis or processing, visuals, etc.).
Example 1: Gesture recognition
[Diagram] The computer identifies a sensed action (“Gesture 1” or “Gesture 2”) and triggers the corresponding response: a bass drum for Gesture 1, a hi-hat for Gesture 2.
Model of sensed action to meaning
[Diagram] Inside the computer, a model maps the sensed action to a meaning, which determines the response.
Example 2: Continuous gesture-to-sound mappings
[Diagram] A human with a control interface produces a continuously sensed action; a mapping inside the computer interprets it and drives sound generation.
A composed system
[Diagram] The mapping/model/interpretation stage between sensed action and response is the part the composer designs, and it is where supervised learning can be applied.
Supervised learning
[Diagram] Training: a learning algorithm builds a model from training data, i.e., example inputs paired with outputs (e.g., sensor feature vectors labeled “Gesture 1”, “Gesture 2”, “Gesture 3”).
Running: the trained model maps new inputs to outputs (e.g., an unseen action is labeled “Gesture 1”).
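A minimal sketch of this train/run cycle in Python with scikit-learn (an illustration only, not the Wekinator's actual Weka-based internals; the feature values and labels below are made up):

from sklearn.neighbors import KNeighborsClassifier

# Training: each row is a feature vector extracted from one sensed action.
train_inputs = [[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.2]]
train_outputs = ["gesture_1", "gesture_1", "gesture_2", "gesture_2"]
model = KNeighborsClassifier(n_neighbors=1).fit(train_inputs, train_outputs)

# Running: the trained model labels a new, unseen input.
print(model.predict([[0.15, 0.85]]))  # -> ['gesture_1']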
Supervised learning is useful
• Models capture complex relationships from the data. (feasible)
• Models can generalize to new inputs. (accurate)
• Supervised learning circumvents the need to explicitly define mapping functions or models. (efficient)
It has been demonstrated to be useful in musical applications, but no usable, general-purpose tools exist for composers to apply these algorithms in their own work.
Weka: A model tool
• General-purpose
• GUI-based
• Cited 11,705 times!
Criteria for a supervised learning tool for composers
1. General-purpose
2. GUI-based
3. Runs in real time
4. Supports appropriate end-user interactions with the supervised learning process
Appropriate interactions
[Diagram: the training/running loop from above] The end user interacts with the supervised learning process by:
• creating training data
• evaluating the trained model
• modifying the training data (and repeating)
Interactive machine learning (IML)
• Training set editing for computer vision systems: Fails and Olsen 2003
• Application to other domains, e.g., Shilman et al. 2006; Fogarty et al. 2008; Amershi et al. 2009; Baker et al. 2009
• Other types of interaction, e.g., Talbot et al. 2009; Kapoor et al. 2010
Research questions for end-user IML
• Which interactions are possible and useful?
• What are the practical benefits and challenges of incorporating end-user interaction in applied machine learning?
• How can IML be useful in real-time and creative contexts?
Outline
• Overviews of interactive computer music and machine learning
• The Wekinator software
• Live demo
• User studies
• Findings and Discussion
• Conclusions
The Wekinator
• Built on the Weka API
• Downloadable: http://code.google.com/p/wekinator/
It meets the criteria above:
1. General-purpose
2. GUI-based
3. Runs in real time
4. Supports appropriate end-user interactions with the supervised learning process
Running models in real time
[Diagram] Feature extractor(s) stream feature vectors over time (e.g., 5, .01, 22.7, …) into the model(s), which stream output values over time (e.g., .01, .59, .03, …) into a parameterizable process.
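A rough sketch of this pipeline, assuming hypothetical extract_features() and set_synth_params() callbacks standing in for the sensing and synthesis modules (the Wekinator itself connects these pieces via OSC messages rather than direct function calls):

import time

def run_realtime(model, extract_features, set_synth_params, rate_hz=100):
    # Poll features, run the trained model, and update the sound process
    # once per period, for as long as the performance runs.
    period = 1.0 / rate_hz
    while True:
        features = extract_features()          # e.g., [5, .01, 22.7, ...]
        params = model.predict([features])[0]  # e.g., [.01, .59, .03, ...]
        set_synth_params(params)               # drive the parameterizable process
        time.sleep(period)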
Interactive data creation and model evaluation
[Diagram] The user records examples labeled “Gesture 1”, “Gesture 2”, and “Gesture 3” into the training data, trains a model, and then evaluates it by performing new inputs and observing its outputs (e.g., “Gesture 1”).
Real-time, iterative design
Under the hood
[Diagram] N features (e.g., joystick_x, joystick_y, webcam_1) feed M models (Model1 … ModelM), each of which produces one output (Parameter1 … ParameterM, e.g., pitch, volume): a continuous value such as 3.3098 or a class such as Class24.
Learning algorithms:
• Classification: AdaBoost.M1, J48 decision tree, support vector machine, k-nearest neighbor
• Regression: MultilayerPerceptron
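One way to realize this fan-out, sketched with scikit-learn's MLPRegressor standing in for Weka's MultilayerPerceptron (an illustrative assumption; each of the M models learns one output parameter from the same N features):

from sklearn.neural_network import MLPRegressor

def train_parameter_models(feature_rows, parameter_columns):
    # feature_rows: list of N-dimensional feature vectors.
    # parameter_columns: one list of target values per output parameter.
    # Returns M independently trained models, one per synthesis parameter.
    return [MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000)
            .fit(feature_rows, targets)
            for targets in parameter_columns]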
Tailored but not limited to music
The Wekinator:
• Built-in feature extractors for music & gesture
• ChucK API for feature extractor and synthesis classes
[Diagram] Open Sound Control (UDP) connects the Wekinator to other feature extraction modules and to other modules for sound synthesis, animation, …?
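Modules talk to the Wekinator by sending Open Sound Control messages over UDP. A minimal sketch of a feature extractor's side using the python-osc package (the address pattern and port here are illustrative assumptions, not documented Wekinator defaults):

from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 6448)          # hypothetical host/port
client.send_message("/features", [5.0, 0.01, 22.7])  # one frame of feature values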
Outline
• Overviews of interactive computer music and machine learning
• The Wekinator software
• Live demo
• User studies
• Findings and Discussion
• Conclusions
Study 1: Participatory design process with composers
• Process:
– 7 composers
– 10 weeks, 3 hours/week
– Discussion and brainstorming at each meeting
– Final questionnaire
• Outcomes:
– Focus on instrument-building
– 2 publicly-performed compositions
– Much-improved software and lots of feedback
Study 2: Teaching interactive systems building in an undergraduate course
• Princeton Laptop Orchestra (PLOrk)
• Midterm assignment
– Students built 1 continuous + 1 discrete system
– Logging + short-answer questions
• Outcomes:
– Successful project completion
– Used in midterm and final performances
– Logs from 21 students
Study 3: Bow gesture recognition
• Case study with a composer/cellist
• Task: Classify 8 types of standard bow gestures, e.g., up/down bow (2 classes), articulation (7 classes)
• Method:
– Tasks defined and directed by the cellist
– Logging, observations, final questionnaire
– Cellist assigned each iteration’s classifier a quality rating (1 to 10)
• Outcome: Successful classifiers created for all 8 tasks (rated “9” or “10”)
Study 4: Composer case studies
• Clapping Music Machine Variations (CMMV) by Dan Trueman, faculty
• The Gentle Senses / MARtLET by Michelle Nagai, graduate student
• G by Raymond Weitekamp, undergraduate
Outline
• Overviews of interactive computer music and machine learning
• The Wekinator software
• Live demo
• User studies
• Findings and Discussion
• Conclusions
Discussion of Findings
1. Users took advantage of interaction in their work with the Wekinator.
2. Users employed a variety of model evaluation criteria, and subjective evaluation did not always correlate with cross-validation accuracy.
3. Feedback from the Wekinator influenced users’ actions and goals.
4. The Wekinator was a useful and usable tool.
5. Interactive supervised learning can be a tool for supporting creativity and embodiment.
An iterative approach to model-building
[Bar chart: mean number of trainings (y-axis 0–6), shown per student for each PLOrk task (continuous, discrete) and per classification task for the K-Bow 1st and 2nd sessions]
Frequent modifications to training data in between re-trainings
[Bar charts: mean number of times each action was taken, as K-Bow per-task averages (y-axis 0–3) and PLOrk per-student averages for continuous and discrete tasks (y-axis 0–6); actions: Add Data, Edit Data, Delete Data, Clear All Data, Change Learner, Change Learner Params, Change Features]
Interaction and the training dataset
• Training data is the most appropriate interface for key tasks:
– defining the learning problem
– clarifying the learning problem by fixing errors
– communicating changes in the problem over time
– providing a “sketch” that the computer fills in
Playalong data recording
• Allowed training data to represent more fine-grained information
• Enabled composers to engage their musical and physical expertise
– Allowed practice and attention to “feel”
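A rough sketch of how play-along recording might work, assuming it samples (features, sound parameters) pairs over time while the user performs along with a playing sound sequence; get_features() and get_current_params() are hypothetical callbacks:

import time

def record_playalong(get_features, get_current_params, seconds, rate_hz=50):
    # While the sound plays, pair the performer's current features with
    # the currently sounding parameters to form fine-grained examples.
    examples = []
    end_time = time.time() + seconds
    while time.time() < end_time:
        examples.append((get_features(), get_current_params()))
        time.sleep(1.0 / rate_hz)
    return examples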
“Conventional” model evaluation
[Diagram] The available data is split into a training set and an evaluation set; the model is trained on the training set and evaluated on the held-out evaluation set. Cross-validation: repeat with different data partitions.
“Direct” evaluation in the Wekinator
[Diagram] The model is trained on the full training set, then evaluated directly: the user runs it (on training examples or new live input) and judges its behavior.
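The contrast between the two evaluation styles, sketched with scikit-learn (illustrative only; the Wekinator's direct evaluation is interactive, with the user performing live input and judging the output subjectively):

from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def conventional_eval(X, y):
    # Cross-validation: hold out part of the data, average over partitions.
    return cross_val_score(SVC(), X, y, cv=5).mean()

def direct_eval(X, y):
    # Direct: train on everything, then just run the model and observe.
    model = SVC().fit(X, y)
    return model.score(X, y)  # here scored on the training examples themselves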
Direct evaluation used most frequently
• Composers in participatory design and case studies: only direct evaluation
• K-Bow and PLOrk:
[Bar chart: mean number of times each action was taken (y-axis 0–6) for cross-validation accuracy, training accuracy, and direct evaluation, across PLOrk continuous, PLOrk discrete, K-Bow 1, and K-Bow 2; direct evaluation was the most frequent]
Roles of cross-validation and training accuracy
• K-Bow: cross-validation used to quickly and objectively compare different feature selections and learning algorithms
• PLOrk:
– Treated as reliable evidence that a model was performing well
– Used to validate the user’s own ability
Roles of direct evaluation
• Used to assess behavior of the model against subjective criteria
• Used to obtain feedback that shapes the users’ future interactions with the system
Discussion of Findings (continued)
2. Users employed a variety of model evaluation criteria, and subjective evaluation did not always correlate with cross-validation accuracy.
Subjective assessment of accuracy
• Important for gesture classifiers
– Accuracy = model outputs are correct according to the learning concept definition
• Still important for open-ended instrument-building (continuous) tasks
– Accuracy = matching a user’s expectations, especially on inputs like the training examples
Other evaluation criteria
• Discrete classifiers:
– Cost: consequences and locations of model errors
– Decision boundary smoothness
• Continuous mappings:
– Complexity and difficulty (these are good!)
– Unexpectedness and surprise
– “Feel”
Subjective evaluation criteria & CV
• K-Bow: cross-validation sometimes correlates with subjective quality, but sometimes it doesn’t!
Pearson’s correlation for tasks with > 3 iterations:
Task:  Horizontal Position  Vertical Position  Bow Direction  On/Off String  Speed  Articulation
R:     -0.59                -0.44              -0.74          -0.50          0.65   0.93
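How a correlation like this can be computed: Pearson's r between per-iteration cross-validation accuracies and the cellist's subjective quality ratings (the numbers below are placeholders, not study data):

from scipy.stats import pearsonr

cv_accuracy = [0.80, 0.85, 0.90, 0.95]  # hypothetical per-iteration CV accuracies
ratings = [7, 6, 8, 9]                  # hypothetical 1-10 subjective ratings
r, p_value = pearsonr(cv_accuracy, ratings)
print(r)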
Thoughts: Is generalization accuracy important?
• Yes!
– Human and environmental variations are inevitable
• …BUT it may not be the only or most important factor
• Generalization estimated from the training set (e.g., using cross-validation) is not always informative
• This implies that models designed for human use should be evaluated by human use.
What should be the goal of the learning algorithm?
• Most algorithms’ training process aims for a model with good generalization (sometimes appropriate)
• BUT the user is also employing the training data as an interface (not representative of future inputs)
• Better algorithms might:
– Optimize other criteria important to the user
– Privilege training accuracy (e.g., k-nearest neighbor)
– Provide parameters for interactive improvement against other subjective criteria (e.g., using a regularization parameter for boundary smoothness; see the sketch below)
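For that last point, a sketch of how a single exposed parameter can trade training-set fidelity against boundary smoothness, using scikit-learn's SVC regularization parameter C as an illustrative stand-in (smaller C regularizes more heavily; the data is a toy example):

from sklearn.svm import SVC

X_train = [[0.0], [0.2], [0.8], [1.0]]  # toy 1-D feature vectors
y_train = [0, 0, 1, 1]                  # two gesture classes
smooth_model = SVC(C=0.1).fit(X_train, y_train)   # smoother decision boundary
tight_model = SVC(C=100.0).fit(X_train, y_train)  # privileges training accuracy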
Discussion of Findings (continued)
3. Feedback from the Wekinator influenced users’ actions and goals.
Interaction involves control and feedback
[Diagram] The user exerts control over the machine learning algorithms and receives feedback from them: by running the models, and from cross-validation and training accuracy.
Running models informs future actions
• For example:
– locate errors → add correctly-labeled examples
– detect total failure → delete all the data
[Diagram] When a run produces a wrong label, the user records a new training example with the correct label.
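A sketch of that fix-the-error loop: when a run produces a wrong label, the user records the offending input with its correct label and retrains (illustrative Python; model is any classifier with a fit() method):

def correct_and_retrain(model, train_X, train_y, bad_input, correct_label):
    # Add a correctly-labeled example covering the observed error...
    train_X.append(bad_input)
    train_y.append(correct_label)
    # ...and retrain on the amended training set.
    return model.fit(train_X, train_y)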
Running models trains users to be more effective supervised learning practitioners
• Users especially learned to create better training datasets:
– Minimize noise
– Balance the number of examples in each class
– Vary examples along all the dimensions that might vary in performance
• Especially important for novice users
Running models informs users’ goals for machine learning
• Users liked being inspired by surprising behaviors of neural networks
• Users learned what was most easily accomplished… and exploited flexibility in the learning concept definition to create a model that most easily met their most important goals
• IML allows users to discover how goals might change
– and to communicate changes via the training set
Running models teaches users about themselves and their work
K-Bow cellist: the model’s confusion of spiccato and ricochet → realization that her spiccato was too much like ricochet → improved technique → improved classifiers
Discussion of Findings (continued)
4. The Wekinator was a useful and usable tool.
Barriers to usability
• Long training time
• Algorithms’ inability to model the desired concept [easily]
• Difficulty in debugging:
– No guidance on choosing a better algorithm or algorithm parameters
– Especially difficult for ML novices
Usability and usefulness: Study 1 composers
Statement (5-point Likert)                                                            Mean (std. dev.)
“The Wekinator allows me to create more expressive mappings than other techniques.”  4.5 (.8)
“The Wekinator allows me to create mappings more easily than other techniques.”      4.7 (.5)
Usability and usefulness: PLOrk students
Statement (5-point Likert)                                                          Mean (std. dev.)
“I can reliably predict what sound my model will make for a given input gesture.”  4.5 (.7)
“Wekinator eventually learned what I wanted it to.”                                4.3 (.9)
“My model provides reliable gesture classifications” (discrete task)               4.9 (.2)
“My model is musically expressive” (continuous task)                               4.1 (.7)
Usability and usefulness: PLOrk students
• Building working interactive systems was fast:
– 27.1 minutes for the continuous mapping
– 16.1 minutes for the discrete classifier
• Students enjoyed the Wekinator:
– “Learning by experimentation was a lot of fun!”
– “It’s so cool, the Wekinator rocks.”
Usability and usefulness: K-Bow
• Models successfully created for all 8 tasks:
Task                 Rating (1 to 10)  CV Accuracy (%)
Direction            10                87.3
On/Off String        10                83.5
Grip                 10                100.0
Roll                 10                98.2
Horizontal Position  10                89.3
Vertical Position    10                90.0
Speed                9                 87.5
Articulation         9                 98.8
Usability and usefulness: K-Bow
Statement (5-point Likert)                                                                     Response
“The Wekinator was able to create accurate bow stroke classifiers in our work so far”         4
“The Wekinator was able to create bow stroke classifiers more easily than other approaches”   “10 (so 5)”
Usability and usefulness: Case studies
[Bar chart: agreement (1–5) by Trueman, Nagai, and Weitekamp with the statements “The Wekinator allowed me to: create mappings more easily; create mappings that were more expressive; create a kind of music that isn’t possible or that is hard to create using other techniques; approach the process of composition in a new way”]
Discussion of Findings (continued)
5. Interactive supervised learning can be a tool for supporting creativity and embodiment.
“There is simply no way I would be able to manually create the mappings that the Wekinator comes up with; being able to playfully explore a space that I've roughly mapped out, but that the Wekinator has provided the detail for, is inspiring.”
“The ability to map sound and gesture, in a very immediate and intuitive (yet unpredictable) way is really the most inspiring and useful aspect of the wekinator for me right now. I can see the possibility of building interfaces or instruments as needed, flexibly, on the fly, for different kinds of projects, and being able to quickly map them out to existing sound sets with only minor programming changes.”
Supporting qualities important to composers
• Speed and ease of creating and exploring mappings (especially complex mappings)
– Demonstration can be faster and more efficient than coding.
• Access to surprise and discovery
– Neural networks fill in the details of the training data sketch.
• Balancing surprise and complexity with predictability and control
– Users can reliably steer model behavior using the training data.
Creativity support in HCI
• “Creativity support tool” guidelines proposed by Shneiderman (2000, 2007) and Resnick et al. (2005):
– Support exploration, discovery, and sketching
– Support diverse users (e.g., novices and experts) and applications
– Operate seamlessly with other [composition] tools
• IML is integral to the Wekinator’s realization of these guidelines
Embodiment is important
“I have never before been able to work with a musical interface … that allowed me to really ‘feel’ the music as I was playing it and developing it. The Wekinator allowed me to approach composing with electronics and the computer more in the way I might if I was writing a piece for cello, where I would actually sit down with a cello and try things out.”
Embodiment is important
• The Wekinator engaged users’ physical expertise as musicians
– And allowed them to create instruments that “felt right”
– Users were physically engaged in the creation of the data and the evaluation of the models
– The playalong interface further supported embodied design
Outline
• Overviews of interactive computer music and machine learning
• The Wekinator software
• Live demo
• User studies
• Findings and Discussion
• Conclusions
IML is feasible and useful in music composition and performance.
“Well, I had basically lost interest in the whole process of digital controller-based instrument building, so the Wekinator's very existence has enabled and inspired me to get back into the game... The Wekinator enables you to focus on what your primary sonic and physical concerns are, and takes away the need to address so many details, and it does so in such a way that even if you DID spend all the time on building the mappings manually, you would *never* come up with what the Wekinator comes up with. So, the process becomes more focused, more musical, more creative, more playful. I actually *want* to do it.”
End-user IML poses distinct requirements and challenges:
• Supporting machine learning novices
• Enabling fast training
• Supporting debugging
• Exposing meaningful parameters to users
• Matching algorithms to users’ goals
Interaction can play many important roles
Interaction with the training data:
• Engages physical/embodied expertise
• Allows changes to the learning problem
• Fixes errors
• Allows sketching
Interaction with the trained models:
• Informs edits to the algorithm & data
• Teaches users what an algorithm can learn
• Teaches users how to be better data providers
• Teaches users about their own technique
IML can support creativity and embodiment:
• Supporting exploration, sketching, rapid prototyping
• Providing access to surprise and discovery
• Supporting diverse users
• Supporting many applications
• Engaging a high-level approach to design
Final Conclusions
• IML has the potential to significantly improve the usability and usefulness of conventional learning algorithms, and to enable application to new problems by new users.
• Applied machine learning is an HCI problem.
Thanks!
Perry Cook, Dan Trueman, Dan Morris, Ken Steiglitz, Adam Finkelstein, Szymon Rusinkiewicz, Michelle Nagai, Cameron Britt, Konrad Kaczmarek, Michael Early, MR Daniel, Anne Hege, Raymond Weitekamp, all the PLOrk students, Meg Schedel, Andrew McPherson, Barry Threw, Keith McMillen Instruments, Ge Wang, Jeff Snyder, Xiaojuan Ma, Sonya Nikolova, Matt Hoffmann, Merrie Morris, Sumit Basu, Ichiro Fujinaga, and everyone else I’m forgetting.
Funding: National Science Foundation GRFP; Francis Lathrop Upton Fellowship; National Science Foundation grants 0101247 and 0509447; the Kimberly and Frank H. Moss ’71 Research Innovation Fund; the David A. Gardner ’69 Magic Project; the John D. and Catherine T. MacArthur Foundation.
Related publications
• Fiebrink, R. 2006. An exploration of feature selection as an optimization tool for musical genre classification. Master’s thesis, McGill University.
• Fiebrink, R., P. R. Cook, and D. Trueman. 2009. “Play-along mapping of musical controllers.” Proc. International Computer Music Conference.
• Fiebrink, R., M. Schedel, and B. Threw. 2010. “Constructing a personalizable gesture-recognizer infrastructure for the K-Bow.” International Conference on Music and Gesture (MG3).
• Fiebrink, R., D. Trueman, C. Britt, M. Nagai, K. Kaczmarek, M. Early, M. R. Daniel, A. Hege, and P. R. Cook. 2010. “Toward understanding human-computer interactions in composing the instrument.” Proc. International Computer Music Conference.
• Fiebrink, R., D. Trueman, and P. R. Cook. 2009. “A meta-instrument for interactive, on-the-fly learning.” Proc. New Interfaces for Musical Expression.
• Fiebrink, R., G. Wang, and P. R. Cook. 2007. “Don’t forget the laptop: Using native input capabilities for expressive musical control.” Proc. International Conference on New Interfaces for Musical Expression.
• Fiebrink, R., G. Wang, and P. R. Cook. 2008. “Support for MIR prototyping and real-time applications in the ChucK programming language.” Proc. International Conference on Music Information Retrieval.
• Wang, G., R. Fiebrink, and P. R. Cook. 2007. “Combining analysis and synthesis in the ChucK programming language.” Proc. International Computer Music Conference.
References
• Amershi, S., J. Fogarty, A. Kapoor, and D. Tan. 2010. “Examining multiple potential models in end-user interactive concept learning.” Proc. CHI 2010.
• Baker, K., A. Bhandari, and R. Thotakura. 2009. “Designing an interactive automatic document classification system.” Proc. HCIR 2009, pp. 30–33.
• Fails, J., and D. Olsen. 2003. “Interactive machine learning.” Proc. IUI, pp. 39–45.
• Fels, S. S., and G. E. Hinton. 1993. “Glove-Talk: A neural network interface between a data-glove and a speech synthesizer.” IEEE Trans. on Neural Networks, vol. 4.
• Lee, M., A. Freed, and D. Wessel. 1992. “Neural networks for simultaneous classification and parameter estimation in musical instrument control.” Adaptive and Learning Systems, vol. 1706, pp. 244–255.
• Raphael, C. 2001. “A probabilistic expert system for automatic musical accompaniment.” Journal of Computational and Graphical Statistics, vol. 10, no. 3, pp. 487–512.
• Shneiderman, B. 2000. “Creating creativity: User interfaces for supporting innovation.” ACM Trans. CHI, vol. 7, no. 1, pp. 114–138.
• Shneiderman, B. 2007. “Creativity support tools: Accelerating discovery and innovation.” Comm. ACM, vol. 50, no. 12, pp. 20–32.
• Witten, I., and E. Frank. 2005. Data Mining: Practical Machine Learning Tools and Techniques, 2nd ed. San Francisco: Morgan Kaufmann.
Training set size – why so small?
• Learning concepts were “easier”? (i.e., lower sample complexity)
• Users learned to provide the most useful training examples for representing the problem?
– Like active learning, but the user is in charge
• Users defined the learning concept so as to negotiate the tradeoffs between what they wanted and what was possible in the time available to create training data and train the algorithms?
Running models enables users to practice employing them more effectively
• Through practice, users learn to use models more effectively
• Users accepted or expected the need to adapt their behaviors
An HCI view on algorithms
• Algorithms afford certain possible interactions, control, and feedback
– i.e., they have an innate potential to be useful
• User interfaces can hide or expose these affordances
– And can expose them in more or less usable ways
• The Wekinator exploits the fact that supervised learning models can be manipulated through the training dataset
• Algorithms can be made more useful and usable:
– through more appropriate interfaces
– through affording more appropriate interactions, control, and feedback