New Article: Investigating English Language Teachers’ Beliefs and Stated Practices Regarding Bottom-up Processing Instruction for Listening in L2 English

An article based upon my MA dissertation has just been published in the Journal of Second Language Teaching & Research. It is Open Access, so you can read the full text, but here is a summary that is a bit longer than the abstract.

Language learners face difficulties in parsing what they hear into a meaningful message. There are still gaps in SLA research about how listeners do this and about how it can be taught, and there was nothing about bottom-up (phoneme-level and syllable-level) listening in particular. There is also not much research on what teachers do, or say that they do, so I wanted to find out about this.

Not much has changed since John Field (2008) said that “the Comprehension Approach” dominates how listening is taught. This was supported by Siegel (2014), who found that most teachers used comprehension-based activities.

Based on speech learning models (Best, 1995; Flege, 1995, 2007), it would be advisable to train learners to discern differences between sounds (phonemes) that form part of the language being learned but not of their first language. Ongoing practice with variation in these sounds would also be helpful.

With words and grammar, there might be a psychological process of cueing (Ellis, 2006), which might also explain lexical priming and collocation. However, making things stand out appears to be key. Making things stand out does not mean teaching only isolated, citation-form words, because this does not always carry over to listening skill acquisition (Bonk, 2000; Joyce, 2013). Instead, it needs to be balanced with listening to natural connected speech.

I wanted to find out whether teachers taught learners to decode single words and phrases, connected speech, and phonological differences between languages. I did this with a questionnaire, recruiting respondents via Twitter, then analysed the data in JASP and carried out some exploratory analyses.
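For illustration only: JASP handles this kind of analysis point-and-click, but the rank correlation suited to ordinal questionnaire items (Spearman's rho) can be sketched in plain Python. The item names and responses below are invented placeholders, not the study's actual data.

```python
# Spearman's rank correlation for two Likert-scale questionnaire items.
# All responses below are invented for illustration.

def ranks(xs):
    """Rank values from 1 upwards, giving tied values the average of their ranks."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        # Extend j over any run of tied values.
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg_rank
        i = j + 1
    return r

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    """Spearman's rho: Pearson correlation computed on the ranks."""
    return pearson(ranks(x), ranks(y))

# Invented 5-point Likert responses to two hypothetical items:
uses_l1_phonology = [1, 4, 2, 5, 3, 1, 4]
teaches_phonemes  = [2, 4, 2, 5, 3, 1, 5]
rho = spearman(uses_l1_phonology, teaches_phonemes)
```

A positive rho here would mirror the kind of association reported below between drawing on learners' L1 phonology and regularly teaching phonemes.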

Bottom-up instruction is not totally absent. Frequent work on stress corresponded with other bottom-up instruction. A minority of teachers in my sample drew on the phonology of their learners' first languages, and using this knowledge correlated with regular instruction in single sounds (phonemes) and connected speech. However, this remains a minority activity: most teachers in the sample said they did not consider differences between first and second language phonology, and were reluctant to teach, or did not regularly teach, the decoding of single words and phrases, though connected speech may be taught slightly more regularly.

References (in this summary)

Best, C. T. (1995). A direct realist view of cross-language speech perception. In Strange, W. (Ed.), Speech Perception and Linguistic Experience. Timonium, MD: York Press, pp. 171-206. Retrieved April 25th 2017 from http://www.haskins.yale.edu/Reprints/HL0996.pdf

Bonk, W. J. (2000). Second language lexical knowledge and listening comprehension. International Journal of Listening, 14(1), 14-31. DOI: 10.1080/10904018.2000.10499033

Ellis, N. (2006). Language acquisition as rational contingency learning. Applied Linguistics, 27, pp. 1-24.

Field, J. (2008). Listening in the Language Classroom (ebook). Cambridge: CUP.

Flege, J. (1995). Second-language speech learning: Theory, findings, and problems. In Strange, W. (Ed.), Speech Perception and Linguistic Experience: Issues in Cross-language Research. Timonium, MD: York Press, pp. 229-273.

Flege, J. (2007). Language contact in bilingualism: Phonetic system interactions. In Cole, J. & Hualde, J. I. (Eds.), Laboratory Phonology 9. Berlin: Mouton de Gruyter, pp. 353-380.

JASP Team (2017). JASP (Version 0.8.1.2) [Computer software]. Retrieved June 25th 2017 from https://jasp-stats.org/download/

Joyce, P. (2013). Word recognition processing efficiency as a component of second language listening. International Journal of Listening, 27(1), 13-24. DOI: 10.1080/10904018.2013.732407

Siegel, J. (2014). Exploring L2 listening instruction: examinations of practice. ELT Journal, 68(1), 22-30. DOI: 10.1093/elt/cct058

Rating learners’ pronunciation: how should it be done?

This goes into a bit more detail about phonetics than some people familiar with me might be comfortable with.

On Friday I went to Tokyo JALT’s monthly meeting (no link because I can’t find a permalink) to see three presentations on pronunciation (or more accurately, phonology, seeing as Alastair Graham-Marr covered both productive skills and receptive listening skills). All three presenters, Kenichi Ohyama, Yukie Saito and Alastair Graham-Marr, were interesting, but there was one particular point that stuck with me from Yukie Saito’s presentation.

She was talking about rating pronunciation and how it had often been carried out by ‘native speaker’ raters. She also said that it was often carried out according to rater intuition, on Likert scales of either ‘fluency’ (usually operationalised as speed of speech), ‘intelligibility’ (usually meaning phonemic conformity to a target-community norm) or ‘comprehensibility’ (how easily raters understand speakers).

What else could work is a question that needs answering, not only to make work in applied linguistics more rigorous but also to make the assessment of pronunciation less arbitrary. I have an idea: audio corpora could be gathered from speakers in target communities, the phonemes run through Praat, and typical acceptable ranges for formant frequencies derived. Learners would then be rated for comprehensibility by proficient speakers, ideally from the target community, and their speech also run through Praat to check that their phonemes fall within the acceptable formant ranges. The two sets of data would then be triangulated and a value assigned based on both.
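To make the triangulation step concrete, here is a rough Python sketch. The phonemes, F1/F2 ranges, and the 50/50 weighting are all invented placeholders; in practice the ranges would come from the corpus formant measurements in Praat, and the weighting would be an empirical decision.

```python
# Hypothetical sketch: triangulating acoustic conformity with rater judgements.
# All numeric ranges and weights below are made up for illustration.

# Acceptable F1/F2 ranges (Hz) per vowel phoneme, as might be derived from
# a target-community audio corpus measured in Praat (placeholder values).
CORPUS_RANGES = {
    "i": {"F1": (240, 360), "F2": (2100, 2500)},
    "ɑ": {"F1": (650, 850), "F2": (1000, 1300)},
}

def formant_conformity(phoneme, f1, f2):
    """Return the fraction of measured formants falling inside the corpus range."""
    ranges = CORPUS_RANGES[phoneme]
    hits = sum(lo <= value <= hi
               for value, (lo, hi) in ((f1, ranges["F1"]), (f2, ranges["F2"])))
    return hits / 2

def triangulated_score(conformity, rater_scores, acoustic_weight=0.5):
    """Combine acoustic conformity (0-1) with mean rater comprehensibility (1-5 Likert)."""
    mean_rating = sum(rater_scores) / len(rater_scores)
    rating_scaled = (mean_rating - 1) / 4  # rescale 1-5 Likert onto 0-1
    return acoustic_weight * conformity + (1 - acoustic_weight) * rating_scaled

# A learner's /i/ with both formants inside the (invented) corpus range,
# triangulated with three raters' comprehensibility judgements:
conformity = formant_conformity("i", f1=300, f2=2200)
score = triangulated_score(conformity, rater_scores=[4, 5, 4])
```

The design point is that the acoustic check and the human ratings enter as separate, inspectable components, so a disputed score can be traced back to either the formant data or the raters rather than to an opaque hunch.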

Now, I fully acknowledge that there are some major drawbacks to this. Gathering an audio corpus is a massive pain; running it all through Praat and collecting the data even more so; and doing the same with learners for assessment makes things yet more taxing. However, is it really better to rely on rater hunches and hope that raters generally agree? I don’t think so, because nothing in that approach makes the construct any less arbitrary, especially if assessment is done quickly. With the Praat data there is at least something quantifiable to show whether, for example, a learner-produced /l/ conforms to what is typically produced in the target community, and it would be triangulated with the rater data. It would also go some way to making sometimes baffling assessment methodologies a bit more transparent, at least to other researchers.