
What “Common Sense” Can’t Tell Us: Four Important and Surprising Findings from Recent Education Research

We discuss how RISE studies in Pakistan, Indonesia, India, and Vietnam produced counterintuitive results with larger implications for education research going forward.


Authors


Lant Pritchett

RISE Directorate

Blavatnik School of Government, University of Oxford


Lillie Kilburn

RISE Directorate

Blavatnik School of Government, University of Oxford

In daily life, common sense helps us, telling us to bring an umbrella when the sky is grey or to set a timer so we don’t burn dinner. But “common sense” can also be used to mean assumptions that just feel right—and this kind of “common sense” is not so helpful if we mistakenly take it as fact and don’t probe whether it really is right. Many theories in modern physics, such as quantum mechanics and general relativity, defy “common sense” yet are empirically validated: they generate accurate predictions.

Education research is a powerful tool for questioning “common sense” assumptions, allowing us to uncover the truths that will help us to improve education systems and learning for all children. Here, we delve into four moments when RISE researchers found results that contradicted what “common sense” might have predicted, and we explain why these findings are important for education systems research in the future.

Findings from Pakistan: Capital grants spur more competition—and improvement—when given to all private schools in a village

It is relatively easy to study the impact of giving additional resources, such as a capital grant, to a specific school and then track how that school is affected. A much tougher question is: what happens to all of the schools if we give a capital grant to every private school in a village? This requires not just data on a single school but a census of all the schools in a village and the ability to track them all.

Fortunately, the RISE Pakistan team has been following a set of villages, and all the schools in those villages, since 2006. This allowed them to run an experiment with different “interventions” in the local market: giving a cash grant to only selected schools in a village versus giving the grant to all the private schools in a village, and then examining what happened in this closed market with its rich intra-village dynamics.

One might assume (with “common sense”) that giving grants to all schools would keep the playing field level, and hence that a universal grant would generate little change. But the opposite was true. Andrabi et al. found that if just one school gets the grant, it tends to invest in infrastructure; then, on the basis of that investment, it attracts more students and its enrolment goes up, but not much else happens. In contrast, if all schools get the grant, then each school realises it needs to “up the ante” (as the authors put it) in its performance to attract students. When all schools get the grant, schools compete more on learning quality, and test scores in the village increase. As test scores increase, schools are also able to charge higher fees. Moreover, as schools attempt to raise quality, they compete for the limited pool of better teachers, so teacher wages increase as well.

This finding is important because it means that a policy of expanding access to capital for only targeted schools might have very limited impacts, while a general policy shift giving all schools better access to finance (like changing banking regulations to allow low-cost private schools to borrow) could have much larger and broader impacts.

Findings from Indonesia: An increase in schooling over a 14-year period was accompanied by an absolute decrease in mathematics performance

Nearly every country keeps track of the schooling of its population, both through administrative data on school enrolment and through census and labour force surveys. Yet there is a lack of data on most countries’ stock of skills, knowledge, and capabilities. The “common sense” assumption is that if schooling increases, learning also increases, and hence one might assume that tracking schooling is a good proxy for progress and that tracking learning directly is an optional frill.

The RISE Indonesia team decided to test this assumption. To explore how mathematics skills had changed in Indonesia as schooling increased, the team took advantage of the justly famous Indonesia Family Life Survey, a long-term survey that has followed the same households, and hence the same people, for quite a long time; its most recent round was in 2014/15.

As part of that survey, youth are asked some simple, grade-school-level arithmetic questions. This allowed the team to ask: “If we compare 2014 to 2000, how much better is the ability of youth to handle these simple arithmetic skills?” Surprisingly, they found that although youth in 2014 had much more schooling (completion of secondary school was almost 20 percentage points higher), the average mathematics performance of youth had actually decreased, not increased.

When Beatty et al. investigated why, they found two things that had never been documented before.

Firstly, performance on these questions improved from Grades 1 to 7—but then it stopped, with a fairly low percentage of the youth population having attained mastery. In other words, some children learned these skills in primary school, but many (for some tasks, even most) did not. And if they didn’t gain these skills in primary school, they never gained them, even though they stayed in school for many more years.

Secondly, when they compared how well children did at each grade, they found that performance was lower at every grade in 2014 than in 2000. A child in Grade 7 in 2014 was at the same level of arithmetic ability as a child in Grade 4 in the year 2000.
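To see how these two findings combine, consider a minimal back-of-the-envelope sketch in Python. All the numbers below are hypothetical, chosen only to mimic the shape of the results (a learning profile that flattens after Grade 7 and sits below the 2000 profile at every grade); they are not the IFLS estimates.

```python
# Hypothetical mastery-by-grade profiles: the share of youth who can
# answer the simple arithmetic questions after completing each grade.
# The numbers are invented so that (a) the profile flattens after
# Grade 7 and (b) the 2014 curve sits below the 2000 curve at every
# grade; note that Grade 7 mastery in 2014 equals Grade 4 mastery in 2000.
mastery_2000 = {4: 0.50, 7: 0.65, 9: 0.66, 12: 0.67}
mastery_2014 = {4: 0.35, 7: 0.50, 9: 0.51, 12: 0.52}

# Hypothetical attainment distributions: by 2014, far more youth
# complete secondary school (Grade 12) than in 2000.
attainment_2000 = {4: 0.20, 7: 0.30, 9: 0.20, 12: 0.30}
attainment_2014 = {4: 0.10, 7: 0.20, 9: 0.20, 12: 0.50}

def cohort_mastery(mastery, attainment):
    """Average mastery of the youth cohort, weighting each grade's
    mastery rate by the share of the cohort whose schooling stopped
    at that grade."""
    return sum(mastery[grade] * share for grade, share in attainment.items())

print(f"2000 cohort: {cohort_mastery(mastery_2000, attainment_2000):.2f}")  # 0.63
print(f"2014 cohort: {cohort_mastery(mastery_2014, attainment_2014):.2f}")  # 0.50
# Despite a large rise in secondary completion, average mastery falls:
# the flat profile means extra years add little, and the downward shift
# means each grade now delivers less than it did in 2000.
```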

The upshot was that substantial progress in expanding schooling over 14 years had not improved the basic mathematics skills of the youth cohort at all, despite much higher spending and numerous efforts at reform.

Findings from India: Schools in Madhya Pradesh that were randomly selected to create their own diagnostics and improvement plans did so, but an impact evaluation revealed that no change in the schools resulted

In 2014, the Indian state of Madhya Pradesh adopted a “School Improvement Plan” (SIP) approach in which every school would make its own diagnostic of its own challenges and then make a plan to address them. As the state prepared to take that programme to state-wide scale, researchers from the RISE India team asked the government to randomise the rollout so that they could compare the SIP schools with the non-SIP schools and see how much, and in what ways, the programme made a difference.

The results were striking. The first step of the SIP programme was implemented: the participating schools did do a diagnostic, and they did complete and file school improvement plans. However, Muralidharan and Singh’s comparison of the (randomly assigned) SIP schools and the non-SIP schools found that no changes happened beyond the filing of the plans. There were no changes in teaching practices, no changes in school supervision—and therefore, not surprisingly, no changes in student learning outcomes.

This research is important because at the implementation phase the temptation was to interpret the making of the school improvement plans as, in and of itself, a success. “Common sense” might lead one to assume that schools would implement their plans, supported by the districts and the state, and that outcomes would get better. Had there not been an impact evaluation, this reform might have been considered a success on the basis of tracking data showing compliance with completing the “input” of school diagnostics and plans.

Findings from Vietnam: Individual teachers’ effectiveness in nurturing cognitive skills has little relation to their effectiveness in nurturing non-cognitive skills, and vice versa

There is increasing concern that the impact of schooling should be measured not just by performance on tests but also by a variety of non-cognitive outcomes, such as students’ ability to learn on their own. However, parsing out the effects that individual teachers have on student outcomes is difficult. Since parents tend to seek out good teachers, one cannot naively look at teachers whose students have high learning outcomes and attribute those outcomes to the teachers’ superiority: such teachers may simply get high-performing students to start with, while their “value added” is small.
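To make the selection problem concrete, here is a minimal simulation in Python. The data are entirely synthetic, and the gain-score comparison at the end is a simplified stand-in for the value-added models used in this literature; it sketches the logic, not the RISE Vietnam team’s actual method.

```python
import random

random.seed(0)

# One teacher genuinely adds 5 points over the year; the other adds none.
TRUE_EFFECT = {"A": 5.0, "B": 0.0}

students = []
for _ in range(1000):
    baseline = random.gauss(50, 10)
    # Sorting: the higher a student's baseline score, the more likely
    # their parents get them into teacher A's classroom.
    p_a = min(max((baseline - 30) / 40, 0.0), 1.0)
    teacher = "A" if random.random() < p_a else "B"
    endline = baseline + TRUE_EFFECT[teacher] + random.gauss(0, 5)
    students.append((teacher, baseline, endline))

def mean(values):
    return sum(values) / len(values)

for t in ("A", "B"):
    endlines = [end for teach, base, end in students if teach == t]
    gains = [end - base for teach, base, end in students if teach == t]
    print(f"Teacher {t}: naive endline mean = {mean(endlines):.1f}, "
          f"mean gain = {mean(gains):.1f}")
# The naive endline gap between A and B is far larger than the true
# 5-point effect, because A starts with stronger students. Comparing
# gains (a crude value-added measure) recovers roughly the true effect,
# which is why data that follow the same students over time matter.
```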

The RISE Vietnam team produced a data set that followed children over time and measured both cognitive skills (maths and language) and a variety of non-cognitive skills. Their findings were unique in the world in their ability to estimate both reliably.

This forthcoming work found that, in the Vietnamese context, the difference in cognitive skill gains across teachers is much smaller than in many other countries (this low variance is, in many ways, good news, as it means there are relatively few really bad teachers). However, the team found that the variability in teacher impacts on non-cognitive skills was much larger, and that students with teachers who were good at promoting those skills had much larger gains. They also found very little association between teachers who were good at promoting cognitive skills and those who were good at promoting their students’ gains in non-cognitive domains.

“Common sense” might tell you that “good teachers are good teachers” and that those who are good at expanding one set of skills are also good at promoting other skills, because some common skills or practices used by “good teachers” apply to both cognitive and non-cognitive domains. And, in fact, there have been quite a large number of estimates of teachers’ “value added” in cognitive skills, but whether this extends to other elements of education had not yet been explored in the same depth.

These are first-of-their-kind findings and are potentially hugely important for how teachers are trained and evaluated, as teacher training could focus on individual teachers’ relative weaknesses in conveying either cognitive or non-cognitive skills, rather than training all teachers in one or the other. Reliable findings on important questions like these are impossible without reliable measures that follow students over time and hence can estimate their gains from having specific teachers.

In conclusion

These are just four of numerous examples from the RISE research programme that illustrate that doing the research—and doing it well—is important. For each of these four studies it is easy to imagine the opposite finding—and even to assume that the opposite finding was “common sense.”

There is a famous anecdote in sports about a coach who was asked by a reporter, in advance of a crucial game, who was going to win. His response: “I don’t know, that is why we play the game.” It is easy for people to view academic research as an expensive frill, one that isn’t really needed because the answers to important questions are already known or can be arrived at through “common sense.” But, as the world learned in the early 20th century with quantum mechanics, general relativity, and the axiomatising of the foundations of mathematics, neither “common sense” nor the accumulated wisdom of the existing experts should stand in the way of “playing the game”, as all of us, even those who have been working in the field for decades, can be very surprised by what actually wins.

RISE blog posts and podcasts reflect the views of the authors and do not necessarily represent the views of the organisation or our funders.