Benjamin Kremmel (University of Innsbruck) and Luke Harding (Lancaster University) have developed the Language Assessment Literacy Survey. The aim of this project was to create a comprehensive survey of language assessment literacy (LAL) that can be used for needs analysis, self-assessment, reflective practice, and research. The survey was developed in several stages; some background information (including conference presentations) is provided at the bottom of this page.
The survey is live; you can participate by following this link: https://lancasteruni.eu.qualtrics.com/jfe/form/SV_dgUnkyGlDhQGtNz
We would also be grateful if you could share the survey link with interested colleagues or professional networks. The survey is targeted at: language teachers, professional examiners, language assessment developers, language assessment researchers, score users (e.g., university admissions officers), policy-makers, learners/candidates, and parents of candidates.
Survey items
The full set of items used in the current survey (launched May 2017) is shown below. We are happy for others to make use of these items; however, if you do, please provide the following citation/attribution:
Kremmel, B. & Harding, L. (2020). Towards a comprehensive, empirical model of language assessment literacy across stakeholder groups: Developing the Language Assessment Literacy Survey. Language Assessment Quarterly, 17(1), 100-120.
1) how to use assessments to guide learning or teaching goals
2) how to use assessments to evaluate progress in language learning
3) how to use assessments to evaluate achievement in language learning
4) how to use assessments to evaluate language programs
5) how to use assessments to diagnose learners’ strengths and weaknesses
6) how to use assessments to motivate student learning
7) how to use self-assessment
8) how to use peer-assessment
9) how to interpret measurement error
10) how to interpret what a particular score says about an individual’s language ability
11) how to determine if a language assessment aligns with a local system of accreditation
12) how to determine if a language assessment aligns with a local educational system
13) how to determine if the content of a language assessment is culturally appropriate
14) how to determine if the results from a language assessment are relevant to the local context
15) how to communicate assessment results and decisions to teachers
16) how to communicate assessment results and decisions to students or parents
17) how to train others about language assessment
18) how to recognize when an assessment is being used inappropriately
19) how to prepare learners to take language assessments
20) how to find information to help in interpreting test results
21) how to give useful feedback on the basis of an assessment
22) how assessments can be used to enforce social policies (e.g., immigration, citizenship)
23) how assessments can influence the design of a language course or curriculum
24) how assessments can influence teaching and learning materials
25) how assessments can influence teaching and learning in the classroom
26) how language skills develop (e.g., reading, listening, writing, speaking)
27) how foreign/second languages are learned
28) how language is used in society
29) how social values can influence language assessment design and use
30) how pass-fail marks are set
31) the concept of reliability (how accurate or consistent an assessment is)
32) the concept of validity (how well an assessment measures what it claims to measure)
33) the structure of language
34) the advantages and disadvantages of standardized testing
35) the history of language assessment
36) the philosophy behind the design of a relevant language assessment
37) the impact language assessments can have on society
38) the relevant legal regulations for assessment in your local area
39) the assessment traditions in your local context
40) the specialist terminology related to language assessment
41) different language proficiency frameworks (e.g., the Common European Framework of Reference [CEFR], American Council on the Teaching of Foreign Languages [ACTFL])
42) different stages of language proficiency
43) different purposes for language assessment (e.g., proficiency, achievement, diagnostic)
44) different forms of alternative assessments (e.g., portfolio assessment)
45) your own beliefs/attitudes towards language assessment
46) how your own beliefs/attitudes might influence your assessment practices
47) how your own beliefs/attitudes may conflict with those of other groups involved in assessment
48) how your own knowledge of language assessment might be further developed
49) using statistics to analyse the difficulty of individual items (questions) or tasks
50) using statistics to analyse overall scores on a particular assessment
51) using statistics to analyse the quality of individual items (questions)/tasks
52) using techniques other than statistics (e.g., questionnaires, interviews, analysis of language) to get information about the quality of a language assessment
53) using rating scales to score speaking or writing performances
54) using specifications to develop items (questions) and tasks
55) scoring closed-response questions (e.g., multiple-choice questions)
56) scoring open-ended questions (e.g., short-answer questions)
57) developing portfolio-based assessments
58) developing specifications (overall plans) for language assessments
59) selecting appropriate rating scales (rubrics)
60) selecting appropriate items or tasks for a particular assessment purpose
61) training others to use rating scales (rubrics) appropriately
62) training others to write good quality items (questions) or tasks for language assessments
63) writing good quality items (questions) or tasks for language assessments
64) aligning tests to proficiency frameworks (e.g., the Common European Framework of Reference [CEFR], American Council on the Teaching of Foreign Languages [ACTFL])
65) determining pass-fail marks or cut-scores
66) identifying assessment bias
67) accommodating candidates with disabilities or other learning impairments
68) designing scoring keys and rating scales (rubrics) for assessment tasks
69) making decisions about what aspects of language to assess
70) piloting/trying out assessments before their administration
71) selecting appropriate ready-made assessments
Background
The initial idea for the survey was to empirically validate the language assessment literacy levels suggested in Taylor (2013). Our first version was a simple survey exploring stakeholders' views of their own LAL needs and the LAL needs of other stakeholders. The results of this first survey are described in the presentation below (given at the EALTA 2015 conference in Copenhagen).
Following that presentation, we decided that a more expansive survey would be required, with multiple items targeting each hypothesised dimension of LAL. The second version of the survey went through multiple iterations (including two stages of expert review and pre-testing). Part of that process is described in the presentation below (given at the CRELLA 2016 summer seminar).
https://www.beds.ac.uk/__data/assets/pdf_file/0008/509264/Luke-Harding-CRELLA-7-July-2016.pdf
The survey was refined further throughout 2016 and 2017, and officially launched in May 2017. Initial results from the first large-scale administration of the survey will be presented at LTRC in Bogotá in July 2017.