Bill & Melinda Gates Foundation
Surprisingly, many low- and middle-income countries have no regular measure of foundational learning outcomes. Better measurement can help communicate the magnitude and urgency of the learning crisis and trigger action.
In most low- and middle-income countries (LMICs), we do not have the data to know how many children can read with meaning. One of the key policy lessons to emerge from the RISE Programme’s research is the need to measure learning in order to drive systems change and learning improvements.
This body of work suggests learning measurement should follow three R's:

- Regular: learning measures should allow the tracking of learning across grades and over time, beginning in the early primary grades.
- Relevant: learning measures should serve many actors in the system, from high-level strategic decision making to daily instruction in the classroom, and must cover relevant skills such as foundational learning in the early grades.
- Reliable: learning measures should avoid the distortions often introduced by high-stakes assessments, which can result in widespread cheating.
To what extent are these principles being applied, and what tools are available for making them a reality?
There is a learning crisis in the developing world: World Bank reports suggest that in Sub-Saharan Africa, 9 out of 10 children cannot read with comprehension by age 10. Yet most LMICs lack regular, relevant, and reliable measures of learning. Twenty-four countries in Sub-Saharan Africa do not have the data to report on learning poverty. In 86 percent of LMICs, we do not know how much learning has been lost due to COVID-19 school closures.
This is a problem because the measurement of learning plays an important role in driving commitment to addressing low learning. In places where we have seen substantial improvements in learning outcomes, publicly available learning measures helped focus attention on low learning and drive action to address it.
The significant improvements in schooling across the world were enabled in part by the ability to measure progress on schooling attainment in a regular, relevant, and reliable manner through Education Management Information Systems (EMIS). This helped generate political gains and sustain commitment to ongoing improvements in schooling access. Generating regular, relevant, and reliable measures of learning in national systems could similarly create a common understanding of learning levels and progress, and incentivise political action to improve learning.
Once a commitment to improving learning is firmly in place, learning measures are critical for policymakers to be able to answer important questions such as: ‘Are we on the right track to improve learning outcomes?’ and ‘What do we need to change?’.
It may seem surprising that most countries do not already have these types of learning measures. Most countries do indeed have assessments. However, these assessments typically fall short of the three R's in one or more ways.
It is worth noting that even research projects that measure learning use a wide variety of tools to do so, and therefore do not typically provide comparable learning measures across contexts (cf. Bertling et al., CGD working paper, forthcoming).
If learning measurement is a necessary (even if not sufficient) condition for learning improvements, then much more effort and investment should go into ensuring countries have regular, relevant, and reliable measures. The results of these assessments should be disseminated publicly to enable advocacy and accountability for action.
The good news is that there are more tools and approaches than before to do so.
Under the leadership of the UNESCO Institute for Statistics (UIS) and with support from partner organisations, solutions have been developed in the last few years to enable countries to improve their learning data, building on their existing assessments.
These new methods include, amongst others:

- Expanding comparability of learning data: regional assessments such as Africa's PASEC and Latin America's LLECE have now been rigorously linked to an international learning standard, allowing participating countries to report results on a comparable metric.
In addition to improving the quality of national assessments, more could be done to raise the political salience of the learning crisis and inform policy approaches to address it by analysing learning data in new ways. For example, learning trajectories are an approach to analysing learning data that examines the dynamics of children’s learning as they progress through school. They can show, among other things, when in primary school children begin to fall behind, and where and for whom action is needed.
Learning trajectories can be analysed using any data that covers children of multiple ages or grades. For instance, they have been constructed from UNICEF's MICS household surveys, which include a module measuring foundational learning. An online data visualisation tool, co-created by the UNESCO GEM Report team and the RISE Programme, also enables analysis of learning trajectories.
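To make the idea concrete, here is a minimal sketch of how a learning trajectory can be computed from survey-style data. The records below are invented for illustration (not MICS data), and the function name is our own; the point is simply that any dataset pairing a child's grade with a foundational-skill result can yield a trajectory.

```python
# Illustrative sketch: estimating a learning trajectory from
# hypothetical household-survey records. Each record pairs a child's
# highest grade attained with whether they passed a foundational
# reading task. All values below are invented for demonstration.

records = [
    # (highest_grade_attained, passed_foundational_reading)
    (1, False), (1, False), (2, False), (2, True),
    (3, False), (3, True), (3, True), (4, True),
    (4, False), (4, True), (5, True), (5, True),
    (6, True), (6, True), (6, False), (6, True),
]

def learning_trajectory(records):
    """Share of children who can read with meaning, by grade attained."""
    by_grade = {}
    for grade, passed in records:
        totals = by_grade.setdefault(grade, [0, 0])  # [passed, total]
        totals[0] += int(passed)
        totals[1] += 1
    return {g: round(p / n, 2) for g, (p, n) in sorted(by_grade.items())}

trajectory = learning_trajectory(records)
for grade, share in trajectory.items():
    print(f"Grade {grade}: {share:.0%} read with comprehension")
```

Plotting the resulting shares against grade shows where the curve flattens, i.e. where children begin to fall behind.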
Much progress has been made on developing tools to measure learning in a meaningful way to drive and inform action, but much remains to be done. Improving learning first requires an understanding of current learning levels, and a regular way to measure progress. As the overused (but accurate) adage goes, what gets measured gets done, and we want learning improvements to get done. Knowing and better using these available tools can be part of the solution.
Endnote: this article builds partly on an article by the Learning data compact partners: “Measure what matters: making progress on a common framework to measure learning”.
RISE blog posts and podcasts reflect the views of the authors and do not necessarily represent the views of the organisation or our funders.