How Many Children Know How to Read with Meaning? The Path towards Regular, Relevant, and Reliable Measures of Learning

Surprisingly, many low- and middle-income countries have no regular measures of foundational learning outcomes. Better measurement can help communicate the magnitude and urgency of the learning crisis and trigger action.


Clio Dintilhac

Bill & Melinda Gates Foundation

Michelle Kaffenberger

RISE Directorate

Blavatnik School of Government, University of Oxford

In most low- and middle-income countries (LMICs), we do not have the data to know how many children can read with meaning. One of the key policy lessons to emerge from the RISE Programme’s research is the need to measure learning in order to drive systems change and learning improvements.

This body of work suggests learning measurement should follow three R’s:

  1. Regular: measures should allow the tracking of learning across grades and over time, beginning in the early primary grades.
  2. Relevant: measures should serve many actors in the system, from informing high-level strategic decision making to informing daily instruction in the classroom, and must cover relevant skills such as foundational learning in the early grades.
  3. Reliable: measures should avoid the distortions often introduced by high-stakes assessments, which can result in widespread cheating.

To what extent are these principles being applied, and what tools are available for making them a reality?

Most LMICs lack regular, relevant, and reliable measures of learning, particularly for primary school

There is a learning crisis in the developing world: World Bank reports suggest that in Sub-Saharan Africa, 9 out of 10 children cannot read with comprehension by age 10. Yet most LMICs lack regular, relevant, and reliable measures of learning. Twenty-four countries in Sub-Saharan Africa do not have the data to report on learning poverty. In 86 percent of LMICs, we do not know how much learning has been lost due to COVID-19 school closures.

This is a problem because measuring learning plays an important role in driving commitment to addressing low learning. In places where we have seen substantial improvements in learning outcomes, publicly available learning measures helped focus attention on low learning and drive action to address it.

The significant improvements in schooling attainment across the world were enabled in part by the ability to measure progress on schooling in a regular, relevant, and reliable manner through education management information systems (EMIS). This helped generate political gains and supported sustained commitment to ongoing improvements in schooling access. Generating regular, relevant, and reliable measures of learning in national systems would similarly have the potential to create a common understanding of learning levels and progress and incentivise political action to improve learning.

Once a commitment to improving learning is firmly in place, learning measures are critical for policymakers to be able to answer important questions such as: ‘Are we on the right track to improve learning outcomes?’ and ‘What do we need to change?’.

Why do so few countries have regular, relevant, and reliable learning measures for the primary grades?

It may seem surprising that most countries do not already have these types of learning measures. Indeed, most countries do have assessments. However, these assessments typically:

  1. Don’t measure what matters: Most assessments do not measure the skills that lead to reading with comprehension and instead prioritise the measurement of content knowledge. The measurement of skills such as the ability to decode is important to allow education actors to identify and target the specific gaps amongst learners who are unable to read with meaning.
  2. Are not comparable over time: Many assessments are not designed to be psychometrically comparable over time or across grades. This also makes it harder to analyse learning trajectories (which trace the pace of children’s learning across grades). Learning trajectories are useful for informing when in the schooling process (as well as where, and for whom) action is needed.
  3. Are not comparable between countries: Different countries’ assessments test different skills at different grades and set different difficulty levels, making it difficult to learn from or benchmark against other countries.


Other assessment efforts also fall short of the three R’s:

  • International assessments may enable comparability, but they have low coverage in low-income and lower-middle-income countries, particularly for the early grades of primary school. Moreover, primary-grade regional assessments such as the Programme for the Analysis of Education Systems of CONFEMEN (PASEC) and the Southern and Eastern Africa Consortium for Monitoring Educational Quality (SEACMEQ) take place in cycles five to six years apart, often too long to provide regular information and inform decisions.
  • Learning assessments within donor projects, such as the Early Grade Reading Assessment (EGRA), are often limited to the beneficiaries and timeline of the projects and are not always easily available, limiting the utility of these assessment efforts.

It is worth noting that even research projects that measure learning use a wide variety of tools to do so, and therefore do not typically provide comparable learning measures across contexts (cf. Bertling et al., CGD working paper, forthcoming).

Tools for improved learning measurement

If learning measurement is a necessary (even if not sufficient1) condition for learning improvements, then much more effort and investment should go into ensuring countries have regular, relevant, and reliable measures. The results of these assessments should be disseminated publicly to enable advocacy and accountability for action.

The good news is that more tools and approaches than ever before are available to do so.

Under the leadership of the UNESCO Institute for Statistics (UIS) and with support from partner organisations, solutions have been developed in the last few years to enable countries to improve their learning data, building on their existing assessments:

  1. There is now a commonly agreed-upon framework, called the Global Proficiency Framework, to measure key education outcomes such as reading and mathematics. This framework gives expert advice on how to measure the skills that students should acquire on the pathway to mastery of reading and mathematics.
  2. Rigorous methods have been developed to strengthen existing learning assessments (national, regional, international, and household based) and link them to this common framework.

These new methods include amongst others:

  • Test booklets (e.g., the Assessments for Minimum Proficiency Levels modules), targeted at measuring the attainment of a minimum proficiency level in reading and mathematics, are made available for integration into national assessments to strengthen their relevance and reliability and to allow comparisons over time and across countries. These are currently being piloted in five countries.
  • An expert-judgment-based methodology, policy linking, which allows countries to use their national assessment results for global reporting, under certain conditions.

To expand the comparability of learning data, regional assessments such as Africa’s PASEC and Latin America’s LLECE have now been rigorously linked to an international learning standard, allowing participating countries to report results on a comparable metric.

In addition to improving the quality of national assessments, more could be done to raise the political salience of the learning crisis and inform policy approaches to address it by analysing learning data in new ways. For example, learning trajectories are an approach to analysing learning data that examines the dynamics of children’s learning as they progress through school. They can show, among other things, when in primary school children begin to fall behind, and where and for whom action is needed.

Learning trajectories can be analysed using any data that covers children of multiple ages or grades. For instance, they have been analysed using UNICEF’s MICS household surveys, which include a module measuring foundational learning. An online data visualisation tool, co-created by the UNESCO GEM Report team and the RISE Programme, also enables analysis of learning trajectories.
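As a rough sketch of the idea, a learning trajectory can be estimated from any cross-sectional dataset that records each child’s grade and whether they have mastered a foundational skill. The snippet below is purely illustrative (the records, field layout, and proficiency flag are hypothetical, not taken from MICS or any real survey); it computes the share of children reading with comprehension at each grade:

```python
# Illustrative sketch: estimate a learning trajectory from cross-sectional data.
# Each record is (grade, can_read), where can_read flags whether the child
# demonstrated foundational reading proficiency. The data here are made up.
from collections import defaultdict

def learning_trajectory(records):
    """Return {grade: share of children proficient}, ordered by grade."""
    totals = defaultdict(int)
    proficient = defaultdict(int)
    for grade, can_read in records:
        totals[grade] += 1
        proficient[grade] += int(can_read)
    return {g: proficient[g] / totals[g] for g in sorted(totals)}

# Hypothetical survey records: (grade, can_read)
sample = [(1, False), (1, False), (2, False), (2, True),
          (3, True), (3, False), (3, True), (4, True)]
print(learning_trajectory(sample))
```

Plotting these shares against grade gives the trajectory: a flat stretch between two grades signals that little learning is happening there, pointing to when and for whom action is needed.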


Much progress has been made on developing tools to measure learning in a meaningful way to drive and inform action, but much is left to be done. Improving learning first requires an understanding of current learning, and a regular way to measure progress. As the overused (but accurate) trope goes, what gets measured gets done, and we want learning improvements to get done. Knowing and better using these available tools can be part of the solution.

Endnote: This article builds partly on an article by the Learning Data Compact partners: “Measure what matters: making progress on a common framework to measure learning”.


  • 1 Recent studies, such as Singh and Muralidharan (2020), suggest that improving measurement alone is unlikely to lead to significant improvements.

RISE blog posts and podcasts reflect the views of the authors and do not necessarily represent the views of the organisation or our funders.