
Ultrasound-integrated pronunciation teaching and learning

Noriko Yamane, Jennifer Abel, Blake Allen, Strang Burton, Misuzu Kazama, Masaki Noguchi, Asami Tsuda, and Bryan Gick
University of British Columbia

The Problem: Second language (L2) pronunciation is a key element of L2 learning, but it is often hard to teach, classroom time is limited, and mapping from the acoustics of difficult sounds to their articulation can be challenging for learners.

The eNunciate Project (enunciate.arts.ubc.ca): Creating multimodal online pronunciation learning resources featuring ultrasound overlay videos.

Implementation: Used in Japanese and Linguistics courses.

Next Steps: Creating an interactive, real-time ultrasound-based tongue visualizer. Learners will be able to see their own productions in overlay format, and future video production can be automated.

Current manual workflow:
1. Double-simultaneous recording (face video and ultrasound)
2. Manual alignment (using Adobe Premiere)
3. Manual erasing and colouring of the ultrasound (US) image (using Adobe After Effects)
4. Manual overlay (using Premiere)

Automated real-time workflow:
1. Double-simultaneous video collection
2. Algorithm automates steps 2 and 3 of the manual workflow above in real time using OpenCV
3. Real-time overlay (a sketch of this loop follows)
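As a rough illustration of steps 1 and 3, a capture-and-display loop in Python with OpenCV might be structured as below. The camera device indices and the process_frames placeholder are illustrative assumptions, not details from the poster.

    import cv2

    def process_frames(face, ultrasound):
        # Naive stand-in for step 2: resize the ultrasound frame to match
        # the face frame and alpha-blend the two. A stage-by-stage version
        # is sketched after the pipeline diagram below.
        us = cv2.resize(ultrasound, (face.shape[1], face.shape[0]))
        return cv2.addWeighted(face, 0.7, us, 0.3, 0)

    face_cam = cv2.VideoCapture(0)  # camera on the speaker's face (index assumed)
    us_cam = cv2.VideoCapture(1)    # ultrasound video feed (index assumed)

    while True:
        ok_face, face = face_cam.read()  # step 1: double-simultaneous collection
        ok_us, us = us_cam.read()
        if not (ok_face and ok_us):
            break
        cv2.imshow("overlay", process_frames(face, us))  # step 3: real-time overlay
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
            break

    face_cam.release()
    us_cam.release()
    cv2.destroyAllWindows()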

[Pipeline diagram: face edge detection → ultrasound image processing → tongue region highlighting → scaling ultrasound to face → overlaying ultrasound on face]
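The diagram's five stages might be realized along the lines of the following sketch. The Canny thresholds, colours, and edge-based placement heuristic are illustrative guesses, not the project's published algorithm.

    import cv2
    import numpy as np

    def overlay_ultrasound(face, ultrasound):
        # FACE EDGE DETECTION: Canny edges as a crude stand-in for locating
        # the facial profile that anchors the overlay.
        gray = cv2.cvtColor(face, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 50, 150)
        ys, xs = np.nonzero(edges)
        if xs.size == 0:
            return face  # no facial structure found; return frame unchanged

        # Derive a rough oral-cavity box from the edge bounding box
        # (this placement heuristic is an illustrative guess).
        h, w = face.shape[:2]
        x0, x1, y0, y1 = xs.min(), xs.max(), ys.min(), ys.max()
        box_w = max((x1 - x0) // 2, 1)
        box_h = max((y1 - y0) // 3, 1)
        x = max(x0 + (x1 - x0 - box_w) // 2, 0)
        y = y0 + (y1 - y0) * 2 // 3
        box_w = min(box_w, w - x)
        box_h = min(box_h, h - y)

        # ULTRASOUND IMAGE PROCESSING: threshold away the dark background,
        # replacing the manual "erasing" done in After Effects.
        us_gray = cv2.cvtColor(ultrasound, cv2.COLOR_BGR2GRAY)
        _, mask = cv2.threshold(us_gray, 40, 255, cv2.THRESH_BINARY)

        # TONGUE REGION HIGHLIGHTING: colour the bright tongue-surface
        # pixels, replacing the manual colouring step.
        highlighted = ultrasound.copy()
        highlighted[mask > 0] = (0, 0, 255)  # red in BGR

        # SCALING ULTRASOUND TO FACE: shrink image and mask to the box.
        us_small = cv2.resize(highlighted, (box_w, box_h))
        mask_small = cv2.resize(mask, (box_w, box_h))

        # OVERLAYING ULTRASOUND ON FACE: blend only where tongue tissue
        # shows, leaving the rest of the face untouched.
        out = face.copy()
        roi = out[y:y + box_h, x:x + box_w]
        blended = cv2.addWeighted(roi, 0.4, us_small, 0.6, 0)
        roi[mask_small > 0] = blended[mask_small > 0]
        return out

In the real-time loop sketched above, process_frames would then simply delegate to overlay_ultrasound.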