Update on researching media coverage of international large-scale assessments

A key rationale for carrying out international comparative surveys of skills such as PISA is that the findings can positively influence policy and therefore educational outcomes. For example, in 2013 the OECD expressed the following hopes for its new survey of adult skills (PIAAC):
“we are now able to base policies for adult skills on large-scale facts and figures. Just as PISA had deep repercussions for schools, we can expect the ground-breaking evidence provided by the Survey to have far-reaching implications, for the way skills are acquired, maintained, stepped-up and managed throughout the entire lifecycle – and ultimately how good Europe is at putting skills to work to create economic growth and jobs.”

Such claims implicate the media as part of a chain of influence. The argument runs that the media publicise the findings, which influence public opinion, and this in turn puts pressure on politicians to respond. The media can also track past successes, failures and improvements through a running commentary on trends in the test scores.

This is a significant topic given the increasing mediatisation of society. However, the impact of media on educational policy is assumed rather than widely researched. With colleagues in a range of countries I have been taking some first steps in this area, comparing media coverage of the OECD survey of adult skills (PIAAC) as it develops. In Yasukawa et al., 2016 we compared media coverage of the first PIAAC survey in France, Japan and the UK; in Hamilton, 2017 I examined coverage in the countries participating in the second survey, focussing particularly on Singapore, Greece, New Zealand and Slovenia. Our comparative research reveals some of the factors that determine the media’s place in the chain of influence.

While we can envisage positive roles for the media in policy formation, in practice journalists are often blamed for partial and sensational coverage of international survey findings. Researchers and agencies voice frustration at this, searching for ways to prevent misinterpretation of data and poor commentary. Our research suggests that we need a better, more sympathetic understanding of the constraints under which journalists work, along with a willingness to share responsibility for the ways in which data from international assessments are translated in the public sphere. There are often unproductive gaps between the research, practice and policy communities due to problems of communication and differences in logic and goals. In the same way, media professionals follow procedures and priorities that are specific to their industry.

I develop this argument in my blog post for UNESCO World Education. The most recent Global Education Monitoring Report focuses on accountability, so my blog addresses this theme by asking:

How far does media coverage of international large-scale assessments help hold governments to account for their education commitments?