Recently, I attended a small conference called AZCALL 2018 hosted by the CALL Club of Arizona State University. This was the first time the graduate students in the CALL Club at ASU had planned the one-day conference, and they anticipated about 60 attendees. To their surprise, actual registrations doubled that number! The best part of attending small conferences like this one is that they are usually highly impactful without being overwhelming. So I’m still jazzed about some of the topics discussed!
The conference opened with a keynote by Jonathon Reinhardt, Associate Professor of English at the University of Arizona, on the potential of multiplayer games for second language learners. If you go to his page, you’ll see that his recent research focuses on games and gameful educational techniques, which have been very hot topics in both second language pedagogy and instructional design circles.
Aside from the now-common themes of games for education, game-based learning, and gamification, virtual and augmented reality were represented in presentations by Margherita Berti, a doctoral candidate at the University of Arizona, and in the closing keynote by the always energetic Steven Thorne, among others. Berti won the conference award for best presentation for her talk on how she uses 360º YouTube videos and Google Cardboard to increase cultural awareness in her students of Italian. Check out her website, Italian Open Education, for more of her examples.
My personal favorite presentation was given by Heather Offerman from Purdue University, who spoke about her work using visualization of sound to give pronunciation feedback to Spanish language learners (with a linguistics tool called Praat). Her work is very close to some of the research I’m doing on the visualization of Chinese tones with Language Lesson, so I was excited to hear about the techniques she was using and how successful she feels they were as pedagogical interventions. It’s interesting that at the last few CALL conferences I’ve attended, I’ve seen more and more presentations on the need for explicit, structured teaching of L2 pronunciation, which might seem at odds with the trend toward teaching with Comprehensible Input (check out this 2014 issue of The Language Educator by ACTFL for more info on CI). But I argue that it’s possible – and possibly a good idea – to integrate explicit pronunciation instruction along with the CI methodology to get the best of both worlds. Everything in moderation, as my mom would say.
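(If you’re curious what that kind of pitch analysis looks like under the hood, here’s a minimal sketch using the praat-parselmouth Python library, which wraps Praat’s analysis engine. This is just my own illustration, not Offerman’s actual workflow, and the file name is a placeholder.)

```python
# Minimal sketch: extract a pitch (F0) contour from a learner recording
# using praat-parselmouth. "learner.wav" is a placeholder file name.
import parselmouth

snd = parselmouth.Sound("learner.wav")
pitch = snd.to_pitch()                      # default time step and pitch range

times = pitch.xs()                          # analysis frame times (seconds)
f0 = pitch.selected_array['frequency']      # F0 in Hz; 0 where unvoiced

for t, hz in zip(times, f0):
    if hz > 0:                              # skip unvoiced frames
        print(f"{t:.2f}s  {hz:.1f} Hz")
```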
Just like with all things, there is no silver-bullet technology for automatically evaluating student L2 speech and providing them with the perfect feedback to help them improve. Some have been focusing on Automatic Speech Recognition (ASR) technologies and have been using them in their L2 classrooms. However, the use of ASR is founded on the premise that if the machine can understand you, then your pronunciation is good enough. I’m not sure that’s the bar I want to set in my own language classroom. I’d rather give students much more targeted feedback on the segmentals of their speech, feedback that not only helps them notice where their speech might differ from the model, but also draws their attention to important aspects of the target language so they can gain a better socio-cultural understanding of verbal cues.
That is why I have been working on developing the pitch visualization component of Language Lesson. The goal is to help students who struggle to produce Chinese tones notice the variance between their speech and the model they are repeating, by showing them both the model’s pitch contour and their own. Soon, I hope to have a display that overlaps the two contours so that students can see the differences between them very clearly. Below are some screenshots of the pitch contours that I hope to integrate in the next six months.


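To give a sense of what that overlapped display involves, here’s a rough sketch of the idea rather than the Language Lesson code itself: extract both pitch contours, convert them to semitones relative to each speaker’s median (so different voice ranges are comparable), normalize the time axes, and draw them on the same graph. The file names are placeholders.

```python
# Rough sketch of overlaying a model and a learner pitch contour.
# Illustrative only; not the Language Lesson implementation.
import numpy as np
import matplotlib.pyplot as plt
import parselmouth

def contour(path):
    """Return (normalized time, semitones re speaker median) for one recording."""
    pitch = parselmouth.Sound(path).to_pitch()
    f0 = pitch.selected_array['frequency']
    t = pitch.xs()
    voiced = f0 > 0
    f0, t = f0[voiced], t[voiced]
    semitones = 12 * np.log2(f0 / np.median(f0))   # normalize for voice range
    t_norm = (t - t[0]) / (t[-1] - t[0])            # 0..1 so lengths can differ
    return t_norm, semitones

for path, label in [("model.wav", "model"), ("learner.wav", "learner")]:
    x, y = contour(path)
    plt.plot(x, y, label=label)

plt.xlabel("normalized time")
plt.ylabel("pitch (semitones re speaker median)")
plt.legend()
plt.show()
```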
I’m looking forward to spending part of this winter break working on a research project to assess the value of pitch contour visualization for Chinese L2 learners. I will be collecting the recordings I’ve captured over the past two years and producing a dataset for each group of students (some of whom had the pitch visualization and some of whom did not). I will be looking to see whether there are differing trends in the students’ production of Chinese tones across the treatment groups. Below are just a few of the articles I’ve read recently that have informed my research direction, followed by a rough sketch of what that group comparison might look like. It should be exciting work!
Elicited Imitation Exercises
Vinther, T. (2002). Elicited imitation: A brief overview. International Journal of Applied Linguistics, 12(1), 54–73. https://doi.org/10.1111/1473-4192.00024
Yan, X., Maeda, Y., Lv, J., & Ginther, A. (2016). Elicited imitation as a measure of second language proficiency: A narrative review and meta-analysis. Language Testing, 33(4), 497–528. https://doi.org/10.1177/0265532215594643
Erlam, R. (2006). Elicited Imitation as a Measure of L2 Implicit Knowledge: An Empirical Validation Study. Applied Linguistics, 27(3), 464–491. https://doi.org/10.1093/applin/aml001
Chinese Tone Acquisition
Rohr, J. (2014). Training naïve learners to identify Chinese tone: An inductive approach. In N. Jiang (Ed.), Advances in Chinese as a Second Language: Acquisition and Processing (pp. 157–178). Newcastle-upon-Tyne: Cambridge Scholars Publishing. Retrieved from http://ebookcentral.proquest.com/lib/carleton-ebooks/detail.action?docID=1656455
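And here is that hypothetical sketch of the group comparison. It assumes each recording has already been reduced to a single deviation score (say, how far the learner’s contour strays from the model’s) stored in a CSV; the metric, the file name, and the column names are all assumptions for illustration.

```python
# Hypothetical sketch: compare a per-recording tone-deviation score between
# the group that saw the pitch visualization and the group that did not.
# The deviation metric and the CSV layout are assumptions for illustration.
import pandas as pd
from scipy import stats

df = pd.read_csv("tone_scores.csv")          # assumed columns: group, deviation
viz = df[df["group"] == "visualization"]["deviation"]
ctrl = df[df["group"] == "control"]["deviation"]

t, p = stats.ttest_ind(viz, ctrl, equal_var=False)   # Welch's t-test
print(f"mean deviation: viz={viz.mean():.2f}, control={ctrl.mean():.2f}")
print(f"Welch t = {t:.2f}, p = {p:.3f}")
```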