
Role for Phone-Based Assessments in Education Systems

Phone-based assessments can be a reliable mechanism for data collection and help governments strengthen the quality of learning outcomes data.

Authors


Pratibha Joshi

Central Square Foundation


Sudhanshu Sharma

Central Square Foundation

Assessments are integral to the teaching-learning process and to the education system as a whole. They act as gauges of students’ learning outcomes, which can be aggregated upwards to give a sense of the overall health of the education system at different levels. At the same time, conducting robust assessments at scale to understand the health of the education system is resource intensive, logistically challenging, and often has long feedback loops (Burdett, 2016). The COVID-19 pandemic was a shock to school education systems worldwide. With schools coming to a grinding halt in most parts of the world, at-home learning became increasingly important. The pandemic also induced innovations in assessments, as it was no longer feasible to conduct them in person. These innovations, mainly phone-based assessments, may have the potential to bring a paradigm shift in how assessments are deployed. This blog reflects on our recent experience of conducting phone-based assessments for early grade students and distils our learnings and insights on how they could be used.

Need for phone-based assessments

To contain the spread of the novel coronavirus, most countries, including India, closed their schools in early 2020. One year on, schools in many countries still remain closed. These school closures are likely to lead to learning loss1, exacerbate existing inequities, and increase school dropouts. Addressing these concerns within the COVID-19 constraints on physical interaction brought at-home learning to the fore. Though EdTech was central to the education response of Central and State Governments in India, the existing digital divide necessitated deploying a host of at-home learning initiatives ranging from high-tech (web content and apps) to low-tech (distribution of printed materials). In practice, however, the most prevalent at-home learning initiatives involved reaching out to parents and students through mobile phones to share educational content. To account for the different types of mobile phones people use, government education departments and NGOs used a variety of phone-based mechanisms, such as SMS, IVRS, WhatsApp, and phone calls, to reach teachers, parents, and students.

While the at-home learning initiatives made it possible to share educational content with students and keep them engaged during school closures, the extent of learning loss due to the disruption of at-school learning remains an open question. While one can hope that the past year was an aberration and not a permanent decline in students’ learning trajectories, understanding the extent of learning loss is crucial for designing appropriate remediation programmes for students when schools reopen. But how does one measure learning outcomes when schools are closed and in-person assessments are not feasible due to the pandemic? Phone-based assessments, modelled on the phone surveys that have been used extensively for data collection in recent years, are being developed to fill this gap. They open up the possibility of measuring the learning outcomes of students engaged in at-home learning. To be sure, phone-based assessments are still nascent. While they have been successfully deployed in recent research studies and by NGOs, adoption at a systemic level can take place once phone-based assessments and the associated processes have stabilised.

Our experience with phone-based assessments

In September 2020, Central Square Foundation (CSF), in collaboration with Saarthi Education, conducted a phone survey to measure the early grade numeracy skills of students enrolled in Saarthi’s at-home learning programme in New Delhi. Phone-based assessments were conducted as part of this survey and were administered to more than 300 children aged between 2 and 11 years2. Trained enumerators called the parents, sought their consent for the survey and for the assessment of the child, and explained that the assessment would not adversely affect the child in any way. After asking the parents a few background questions, we requested them to pass the phone to the child so that the phone-based assessment could be administered. Our experience provides the following insights:

  • Only a limited set of competencies can be assessed through phone-based assessments. This is especially true for early grade students, whose assessments are typically conducted verbally, face-to-face, and with the help of visual aids.
  • There are tradeoffs in terms of the length, scope, and type of assessments that can be administered over the phone.
  • There is a high likelihood of non-response. Even when the caller gets through to the parent, the child might not be at home or might not be willing to participate in the phone-based assessment.
  • To ensure that the child was comfortable during the phone-based assessment, parents were asked to stay close to their children in case the child needed help understanding a question.
  • Having parents around while the assessment was being conducted opened up the possibility that the parent might assist the child in responding. Enumerators were trained to inform parents in advance that they were not to help their children answer the assessment questions, and to record whether they sensed prompting of answers by parents or siblings during the assessment. We found that about 15% of the children were assisted in responding to at least one assessment question and only about 5% were consistently helped.

Promises and challenges of phone-based assessments

Cost effectiveness and scalability

Conventionally, assessments are administered in person or in a setting with active invigilation and are hence resource intensive, especially for early grades, where one-to-one administration is recommended. Conducting in-person assessments in India can cost between INR 400 and 1,000 (USD 5-13) per observation. In comparison, the cost of conducting phone-based assessments in our study was INR 255 (USD 3.5) per observation. By virtue of being conducted over the phone, these assessments are cost effective relative to in-person assessments and can thus be scaled to reach a wider sample.

To explore the feasibility of phone-based assessments at scale, consider the operational data from a call center set up by an Indian state to run a toll-free helpline for its public service delivery schemes. It suggests that it costs the state, with an estimated population of 28 million, about INR 24 (USD 0.32) per call, on average, to keep the helpline running. The operational costs of instituting a phone-based assessment system could be rationalised by leveraging the existing call centers set up for public service delivery by various state governments and expanding their capacity by training personnel to conduct phone-based assessments. Thus, phone-based assessments could provide a cost-effective and scalable means of gathering reliable learning outcomes data at more frequent intervals to gauge the health of the education system.
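To make these magnitudes concrete, the back-of-the-envelope sketch below compares what a single assessment round would cost under the per-observation figures quoted above (in-person INR 400-1,000, phone-based INR 255, and roughly INR 24 per call on an existing helpline). The sample size, the exchange rate, and the assumption of one call per assessed child are ours and purely illustrative.

```python
# Back-of-the-envelope cost comparison using the per-observation figures
# quoted in this post. Sample size, exchange rate, and the one-call-per-child
# assumption are illustrative only.

INR_PER_USD = 75  # assumed exchange rate, roughly consistent with the USD figures above

cost_per_observation_inr = {
    "in-person (lower bound)": 400,
    "in-person (upper bound)": 1000,
    "phone-based (our study)": 255,
    "existing state helpline (per call)": 24,
}

sample_size = 10_000  # hypothetical number of children assessed in one round

for mode, unit_cost in cost_per_observation_inr.items():
    total_inr = unit_cost * sample_size
    print(f"{mode:35s} INR {total_inr:>12,.0f}  (~USD {total_inr / INR_PER_USD:,.0f})")
```

Even on these rough assumptions, phone-based administration is several times cheaper per observation than in-person administration, which is what makes more frequent rounds of measurement plausible.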

Data reliability

The school education system in India generates learning outcome data primarily through in-person assessments. However, the reliability of this data is not always a given. In fact, Singh (2020) finds that scores on paper-based assessments proctored by teachers, in both government and private schools in an Indian state, are not reliable, as they are prone to significant distortions due to cheating. Cheating and discrepancies in test scores have also been documented in other country contexts, such as Indonesia (Berkhout et al, 2020) and the USA (Jacob and Levitt, 2003). Phone-based assessments can improve the reliability of learning outcome data in at least two ways: (a) by reducing the opportunity for cheating, and (b) by altering the incentives of the students and teachers involved in assessments. While it is still possible to cheat during a phone-based assessment, for example by consulting other individuals in the household for answers, we found the occurrence of cheating to be quite rare in our study: only 5% of the students were consistently helped. Moreover, the survey tools deployed for phone-based assessments can be designed to capture information on whether the students were helped by someone during the assessment, say by prompting the correct answers. This can act as an inbuilt data reliability mechanism.
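As a minimal sketch of what such an inbuilt check could look like (the field names, records, and summary below are hypothetical illustrations, not the actual instrument used in our study), an enumerator-facing survey tool could attach a "prompting observed" flag to each item response and summarise assisted cases afterwards:

```python
# Minimal sketch of an inbuilt reliability check for phone-based assessments.
# Field names and records are hypothetical, not the actual survey tool.

from dataclasses import dataclass
from typing import Dict, List


@dataclass
class ItemResponse:
    item_id: str
    correct: bool
    prompting_observed: bool  # enumerator flags suspected help from parents/siblings


@dataclass
class AssessmentRecord:
    child_id: str
    responses: List[ItemResponse]

    def assisted_on_any_item(self) -> bool:
        return any(r.prompting_observed for r in self.responses)

    def consistently_assisted(self) -> bool:
        # Read "consistently helped" as: prompting flagged on every administered item.
        return bool(self.responses) and all(r.prompting_observed for r in self.responses)


def reliability_summary(records: List[AssessmentRecord]) -> Dict[str, float]:
    """Share of children flagged as assisted (cf. the ~15% and ~5% figures above)."""
    n = len(records)
    return {
        "assisted_on_any_item": sum(r.assisted_on_any_item() for r in records) / n,
        "consistently_assisted": sum(r.consistently_assisted() for r in records) / n,
    }
```

Flags like these do not prevent assistance, but they allow analysts to quantify it and, if necessary, exclude or reweight affected observations when reporting learning outcomes.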

Scope of phone-based assessment is an evolving area

Determining the type and range of questions that can be reliably tested through phone-based assessments will require active piloting and iteration. This is relevant both for early grade literacy assessments, where visual aids are required but difficult to incorporate over the phone, and for higher grades, where questions become more complex and are harder to administer effectively in an audio-only format than on paper.

Implementation and response rates

Phone-based assessments have other unique implementation challenges. Non-functional, duplicate, and frequently changing phone numbers make it difficult to reach respondents and track them over time. Since these assessments are conducted entirely over the phone, attrition from the sample is likely to be higher than for assessments administered in person. These challenges can be a predicament in small-sample studies but are unlikely to be a limiting factor in large populations, i.e. at a systemic level.

Phone-based assessments from a systems perspective

Phone-based assessments can augment state capacity. As a means of state-led independent checks on the data being generated by the state education system, phone-based assessments could act as a mechanism to ensure the timeliness and quality of that data. They are likely to be less time and resource intensive than in-person assessments, and can provide a useful mechanism for structured experiential learning that informs programmes and policies (Pritchett et al, 2013). Shorter feedback loops can help policymakers assess the efficacy of interventions and recalibrate them if necessary, improve the targeting of existing programmes, and make decisions on resource deployment on the basis of timely, good-quality data. Beyond recalibrating policies, the information obtained through such phone assessments could also enable state governments to strengthen the quality of learning outcomes data. Once phone-based assessments mature into a reliable mechanism for collecting data on the health of the system, integrating them into existing systems through iterative feedback loops would yield a more robust state mechanism for independent checks on data, which can further support decision making.

References

Angrist, N., Bergman, P., Evans, D.K., et al. 2020. Practical lessons for phone-based assessments of learning. BMJ Global Health, 5:e003030. doi:10.1136/bmjgh-2020-003030

Research Group, Azim Premji Foundation. 2021. Loss of Learning during the Pandemic. Bengaluru: Azim Premji University. https://azimpremjiuniversity.edu.in/SitePages/pdf/Field_Studies_Loss_of_Learning_during_the_Pandemic.pdf

Berkhout, E. et al. 2020. From Cheating to Learning: An Evaluation of Fraud Prevention on National Exams in Indonesia. RISE Working Paper Series. 20/046. https://doi.org/10.35489/BSG-RISE-WP_2020/046

Burdett, N. 2016. The Good, the Bad, and the Ugly - Testing as a Key Part of the Education Ecosystem. RISE Working Paper Series. 16/010. https://doi.org/10.35489/BSG-RISE-WP_2016/010

Central Square Foundation (CSF). [Work in progress]. Assessments in the Times of COVID-19: Experiences from Phone-Based Assessment in India.

Jacob, B.A. and Levitt, S.D. 2003. Rotten apples: An investigation of the prevalence and predictors of teacher cheating. The Quarterly Journal of Economics, 118(3), 843-877.

Pritchett, L., Samji, S. and Hammer, J. 2013. “It’s All About MeE: Using Structured Experiential Learning (“e”) to Crawl the Design Space.” CGD Working Paper 322. Washington, DC: Center for Global Development. http://www.cgdev.org/publication/its-all-about-mee

Singh, A. 2020. Test Scores and Educational Opportunities: Panel Evidence from Five Developing Countries. RISE Working Paper Series. 20/042. https://doi.org/10.35489/BSG-RISE-WP_2020/042.

Footnotes

  • 1 There are at least two ways of thinking about learning loss due to school closures: first, the loss of a specific skill or ability by a student (Azim Premji Foundation, 2021); second, the years of schooling lost due to a decline in time spent on learning-related activities.
  • 2 94% of our sample was between 2 and 8 years of age. For more details on this study, please refer to the CSF report on at-home learning for early grade children (CSF 2020).

RISE blog posts and podcasts reflect the views of the authors and do not necessarily represent the views of the organisation or our funders.