
Digital Learning Projection - Learning state estimation from multimodal learning experiences.

Transcript
Page 1

Doctoral consortium LAK17

14th March 2017, Vancouver, Canada

Digital Learning Projection

Daniele DI MITRI [email protected]

Learning state estimation from multimodal learning experiences.

Page 2

Where does learning happen?

“Online learning does not happen online, it happens where the learner is. It can’t happen where the learner isn’t.”

- Peter Goodyear

Page 3

Background takeaways

1. The learning data currently available are not enough

2. Learning happens everywhere: it is ubiquitous and incidental

3. Sensors can collect multimodal data

4. Machine learning can help analyse these data

5. Models can automatically learn to estimate "learning states"

6. These estimates can generate feedback and recommend actions


Page 4

Machine Learning approach

[Diagram: multimodal data feeds a learning state estimation step, which outputs Learning States]
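To make the diagram concrete, here is a minimal sketch of the intended mapping: time-windowed multimodal feature vectors go in, estimated learning states come out. It uses scikit-learn's RandomForestClassifier as a stand-in supervised learner; the feature names, the three state labels and the random data are illustrative assumptions, not the project's actual setup.

```python
# Sketch: supervised estimation of learning states from multimodal features.
# Features, labels and data are placeholders (assumptions for illustration).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# One row per time window: [heart_rate, step_count, gaze_yaw, gaze_pitch]
X = rng.normal(size=(200, 4))
# One hypothetical learning-state label per window
y = rng.choice(["LS1", "LS2", "LS3"], size=200)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X[:150], y[:150])          # learn from historical, labelled windows
print(model.predict(X[150:155]))     # estimate states for new windows
```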

Page 5

Proposed Framework

Blueprint of Cognitive Inference

Back-track the intangible by projecting the tangible.

Page 6

Input space


Multimodal Data

Involuntary:
– EEG / focus
– Heart rate
– Sweat

Deliberate:
– Step count
– Gaze direction
– Head position
– Hand position

• Observable behaviour!

• Taxonomy: there are several ways to divide events (a data-structure sketch follows this list)
  – Deliberate vs. involuntary
  – Deterministic vs. stochastic (random)
  – Endogenous vs. exogenous
  – Interactive vs. reflective
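To make the taxonomy above concrete, here is a minimal sketch of how a single time-stamped multimodal sample could be represented, tagging each signal with the involuntary/deliberate category. All field names and units are hypothetical assumptions, not the project's actual data model.

```python
# Sketch: one time-stamped multimodal sample; field names and units are
# hypothetical and only mirror the modalities listed on the slide.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class MultimodalSample:
    timestamp: datetime
    # Involuntary signals
    eeg_focus: float                       # normalised focus index from an EEG headband
    heart_rate: float                      # beats per minute
    skin_conductance: float                # sweat / electrodermal activity
    # Deliberate signals
    step_count: int
    gaze_direction: tuple[float, float]    # yaw, pitch in degrees
    head_position: tuple[float, float, float]   # x, y, z in metres
    hand_position: tuple[float, float, float]   # x, y, z in metres

sample = MultimodalSample(
    timestamp=datetime.now(),
    eeg_focus=0.72, heart_rate=81.0, skin_conductance=0.4,
    step_count=0, gaze_direction=(12.0, -3.5),
    head_position=(0.1, 1.6, 0.0), hand_position=(0.3, 1.1, 0.2),
)
print(sample.heart_rate)
```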

Page 7

Output space


Can be defined as:
• Learning Gain
• Learning Progress
• Learning Performance

The output space can be divided into segments; these segments are the Learning States (a small sketch of this segmentation follows the figure note below).

Affective-Behaviour-Cognition (ABC) Learning Gains project, Open University UK, https://twitter.com/LearningGains

[Figure: the Affective (emotions), Cognition (feelings) and Behaviour (actions) dimensions combine into the Learning State (idea), segmented into LS1, LS2 and LS3]
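As a minimal sketch of this segmentation idea, the snippet below maps a continuous learning-performance score onto three discrete Learning States. The 0-1 score, the thresholds and the state interpretations are assumptions for illustration only.

```python
# Sketch: segmenting a continuous output space into discrete Learning States.
# Score range, thresholds and state meanings are illustrative assumptions.
def to_learning_state(performance: float) -> str:
    """Map a normalised learning-performance score to a Learning State label."""
    if performance < 0.33:
        return "LS1"   # e.g. low gain / struggling
    if performance < 0.66:
        return "LS2"   # e.g. steady progress
    return "LS3"       # e.g. high gain

print([to_learning_state(p) for p in (0.1, 0.5, 0.9)])   # ['LS1', 'LS2', 'LS3']
```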

Page 8

Research Challenges

C1 – Availability of Labels – defining and sampling the output space.

C2 – Architectural Design – designing an architecture that collects multiple heterogeneous modalities.

C3 – Sensor Fusion – aligning the multimodal data for analysis and prediction (a time-alignment sketch follows this list).

C4 – Appropriate Feedback – designing feedback that maximises learning.
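C3 is essentially a time-alignment problem: the modalities arrive at different rates and must be resampled onto a common clock before analysis. The sketch below aligns two streams with pandas resampling; the sampling rates, column names and timestamps are assumptions for illustration.

```python
# Sketch of sensor fusion by resampling two streams onto a common 1-second
# clock. Sampling rates, column names and values are assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
fast_idx = pd.date_range("2017-03-14 10:00", periods=600, freq="100ms")   # 10 Hz stream
slow_idx = pd.date_range("2017-03-14 10:00", periods=60, freq="1s")       # 1 Hz stream

heart = pd.DataFrame({"heart_rate": rng.normal(80, 5, 600)}, index=fast_idx)
steps = pd.DataFrame({"step_count": rng.integers(0, 3, 60)}, index=slow_idx)

# Downsample the fast stream (mean per second), sum the slow one, then join.
fused = heart.resample("1s").mean().join(steps.resample("1s").sum())
print(fused.head())
```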


Page 9

Research Questions

• Q1 – Learning States: Is it possible to represent the learning process as a numerable set of learning states that can be predicted?

• Q2 – Data Collection: What are the requirements for a multimodal sensor fusion architecture?

• Q3 – Data Analysis: Do multimodal data streams combined with the learner's action sequences improve the prediction of learning states?

• Q4 – Feedback Generation: How can we use multimodal data to generate feedback to support learning?


Page 10

Research Tasks


• T1 – Literature Review on multimodal data for learning

• T2 – 1st experiment: Learning Pulse

• T3 – 2nd experiment: WEKIT prototype

• T4 – 3rd experiment: using WEKIT for Learning States

Page 11

Mapping tasks with questions & challenges


[Diagram mapping Tasks to Questions and Challenges: the tasks (Literature Review, 1st Experiment, 2nd Experiment, 3rd Experiment) are linked to the questions (Q1: Learning States, Q2: Data Collection, Q3: Data Analysis, Q4: Feedback Generation) and to the challenges (C1: Availability of Labels, C2: Architecture Design, C3: Sensor Fusion, C4: Appropriate Feedback)]

Page 12

Literature review: Learning Blueprint

• Taxonomy of Multimodal Data for Learning

• Target: JCAL special issue on MMLA

• Look for:
  1. similar experiments
  2. techniques used to collect data
  3. multimodal data chosen in related studies
  4. learning performance indicators used
  5. data analysis approaches used
  6. results obtained


Page 13

1st Experiment: Learning Pulse LAK17


Di Mitri, D., Scheffel, M., Drachsler, H., Börner, D., & Specht, M. (2017). Learning Pulse: a machine learning approach for predicting performance in self-regulated learning using multimodal data.

Page 14

2nd Experiment: WEKIT


• Wearable Enhanced Knowledge Intensive Training

• Main task now: designing & building prototype

• Design of a multimodal architecture

• For more info: https://wekit.eu/

Page 15

3rd Experiment: First Aid Training with manikins

• full monitoring of the learning environment

• clear start and end

• performance measurement

• practical learning rather than more cognitive tasks

• closed set of actions (a scoring sketch follows this list)

• High practical significance!
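As a purely hypothetical illustration of how performance could be measured over a closed set of actions, the sketch below compares a trainee's recorded action sequence with a reference first-aid protocol and returns the fraction of protocol steps matched in order. The action names and the scoring rule are assumptions, not an established assessment scheme.

```python
# Hypothetical performance score over a closed action set: fraction of
# reference protocol actions that the trainee performed in the right order.
# Action names and the scoring rule are illustrative assumptions.
from difflib import SequenceMatcher

REFERENCE = ["check_response", "call_help", "open_airway",
             "check_breathing", "chest_compressions", "rescue_breaths"]

def performance(recorded: list[str]) -> float:
    matcher = SequenceMatcher(None, REFERENCE, recorded)
    matched = sum(block.size for block in matcher.get_matching_blocks())
    return matched / len(REFERENCE)

trainee = ["check_response", "open_airway", "chest_compressions"]
print(round(performance(trainee), 2))   # 0.5
```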


Page 16

3rd Experiment, Option B: Indigenous shelter building

• Proposal of collaboration

• Land & Learning Indigenous Technology Experience (LLITE) with Canadian partners

• Fit for WEKIT AR technology

• 3D holograms can be used

• The case scenario must be well defined


Page 17

Unique Selling Points

• Generative approach (actively collecting data) vs. relying on data that is already available

• Machine learning approach: learn from history

– Input space: multimodal data

– Output space: notion of Learning State

• “What to do with data?” vs “what data can be collected?”

• Real-time data collection & analysis vs ex-post

• Random action sequences

• AR technologies


Page 18

S.W.O.T.


Strengths: data-driven

Weaknesses: not enough data

Opportunities: personalisation

Threats: garbage in, garbage out

Page 19

Ethics & Privacy - ‘2084’

• Out of scope of my PhD

• But still really relevant!

• Great opportunity vs great risks

• Expand the LA Framework

• Follow-up of the project


Page 20

Q&A

Thanks for listening!

Daniele Di Mitri

[email protected]

@dimstudi0
