|Title:||Interacting with Visuals in L2 Listening Tests: An eye-tracking study|
|Author:||Ruslan Suvorov|
|Institution:||Iowa State University, Applied Linguistics and Technology|
|Linguistic Subfield(s):||Applied Linguistics|
|Abstract:||Visual information plays an important role in second language (L2) listening comprehension (Anderson & Lynch, 1988; Field, 2008; Rost, 2011), yet visuals have seen limited use in the assessment of L2 listening. This limited use can be partially attributed to the lack of solid empirical evidence about how visuals are viewed during such tests and what impact they have on test performance. In particular, existing research has produced mixed results regarding the effect of visual information on language learners' performance on media-enhanced L2 listening tests, while learners' viewing behavior during such assessments has not been explored in detail (Ockey, 2007; Wagner, 2007, 2010a). To address this gap, the present study employed eye-tracking technology to investigate the extent to which L2 learners view context and content videos during a Video-based L2 Academic Listening Test (VALT), learners' self-reported perceptions and use of the two video types, and the effect of these visuals on their test performance.
This mixed-methods study was based on Creswell and Plano Clark's (2007) data transformation model of the triangulation design and addressed five research questions that investigated (a) the appropriateness of the statistical properties of test scores for norm-referenced decisions, (b) differences between scores on the subtests associated with the different video types and between scores on the video and audio-only versions of the test, (c) learners' viewing patterns with regard to context and content videos, (d) learners' use of visual information when watching the two video types, and (e) learners' use of visual information when answering individual test questions. Three sets of data were collected and analyzed in the study. Test performance data consisted of L2 learners' scores on the VALT (n = 75) and on its audio-only version, the AALT (n = 46), which were analyzed using paired-samples and independent-samples t tests, as well as descriptive statistics, reliability analysis, item analysis, and distractor analysis. Eye-tracking data consisted of recordings of the participants' eye movements (n = 33), which were analyzed using descriptive statistics, paired-samples t tests, and correlation analysis. Retrospective verbal data, obtained via cued retrospective reporting, comprised 33 participants' verbalizations regarding their use of visual information when watching the two types of videos and their perceptions of the helpfulness of this information for answering individual test questions.
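To illustrate the kind of comparison described above, the following minimal sketch computes a paired-samples t test on made-up subtest scores (the study's actual data and analysis code are not reproduced here; the score values and the context/content framing below are purely hypothetical):

```python
# Illustrative sketch only: a paired-samples t test of the kind the
# abstract describes, comparing the same test takers' scores on two
# subtests (e.g., context-video vs. content-video items).
# All score values are invented for demonstration.
import math
import statistics

context_scores = [7, 8, 6, 9, 7, 8, 5, 9, 6, 7]
content_scores = [6, 7, 6, 8, 7, 7, 5, 8, 6, 6]

# A paired test operates on each test taker's score difference.
diffs = [a - b for a, b in zip(context_scores, content_scores)]
n = len(diffs)
mean_diff = statistics.mean(diffs)
sd_diff = statistics.stdev(diffs)  # sample standard deviation of differences
t_stat = mean_diff / (sd_diff / math.sqrt(n))

print(f"mean difference = {mean_diff:.2f}, t({n - 1}) = {t_stat:.2f}")
# → mean difference = 0.60, t(9) = 3.67
```

An independent-samples t test, used in the study to compare the separate VALT and AALT groups, would instead pool the variance across two unrelated samples rather than work on per-person differences.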
The results demonstrated that scores on the VALT and on the AALT, which were developed for this study, were appropriate for making norm-referenced decisions. While the analysis of test performance data found no effect of visuals on L2 learners' test scores, the use of eye-tracking technology was instrumental in detecting the different effects of the context and content visuals. Moreover, the results revealed differences between context and content videos in terms of their perceived use during the test-taking process and their perceived helpfulness for answering questions on the VALT. This study has valuable practical and theoretical implications with regard to the use of visuals for L2 listening instruction and assessment.