
A cost study for mobile phone health surveys using interactive voice response for assessing risk factors of noncommunicable diseases

Abstract

Background

This is the first study to examine the costs of conducting a mobile phone survey (MPS) through interactive voice response (IVR) to collect information on risk factors for noncommunicable diseases (NCDs) in three low- and middle-income countries (LMICs): Bangladesh, Colombia, and Uganda.

Methods

This is a micro-costing study conducted from the perspective of the payer/funder with a 1-year horizon. The study evaluates the fixed and variable costs of implementing one nationally representative MPS on NCD risk factors in the adult population. We estimated the sample size of calls required to achieve a population-representative survey and the associated incentives. Cost inputs were obtained from direct economic costs incurred by a central study team, from country-specific collaborators, and from platform developers who participated in the deployment of these MPS during 2017. Costs are reported in US dollars (USD). A sensitivity analysis assessed different pricing and incentive strategies. Costs were also calculated for a survey targeting only adults younger than 45 years.

Results

Estimated fixed costs ranged between $47,000 and $74,000 USD. Variable costs ranged between $32,000 and $129,000 USD per nationally representative survey. The main cost driver was the number of calls required to meet the sample size, and its variability largely depends on the extent of mobile phone coverage and access in each country. Consequently, a larger number of calls was required to survey specific harder-to-reach sub-populations.

Conclusion

Mobile phone surveys have the potential to be a less expensive and more timely method of collecting survey information than face-to-face surveys, allowing decision-makers to deploy survey-based monitoring or evaluation programs more frequently than would be possible with face-to-face contact alone. The main driver of variable costs is survey time, and most of the variability across countries is attributable to the sampling differences associated with reaching population subgroups with low mobile phone ownership or access.


Contributions to the literature

  • This is the first study describing the costs of using nationally representative mobile phone surveys to monitor chronic conditions in three low- and middle-income countries.

  • This study identifies the main driver of variable costs for the deployment of these surveys.

  • This study identifies populations that are hard to reach and therefore increase the costs of deploying these surveys, and it provides alternatives for collecting data from these hard-to-reach populations.

Background

Noncommunicable diseases (NCDs) are increasing in low- and middle-income countries (LMIC) due to the epidemiologic and nutritional transition, and the change in the population pyramid, phenomena already experienced in high-income nations [1, 2]. Governments, international agencies, and research organizations are increasingly discussing NCDs as the next challenge on the global health horizon. Noncommunicable diseases currently account for two thirds of all deaths taking place in LMICs [3, 4], but only receive 1% of global health funding [4].

In order to address the increasing burden of NCDs in LMICs, robust data streams are needed to prioritize policy options [5]. Countries need strong monitoring and evaluation systems to periodically assess risk factors. In LMICs, the high costs and logistic challenges associated with conducting household surveys, a common monitoring approach, may result in delays in data collection, or even in the omission of such surveys altogether [6].

Population health surveys administered through mobile phone technology (mobile phone surveys, or MPS) are a relatively new tool that can potentially obtain data on NCD risk factors in a timely manner [7]. If these surveys can be shown to collect robust risk factor data at a lower cost than existing population-based approaches, they could favorably shift the calculus of return on investment (ROI) of deploying such surveys in LMICs [8, 9]. While MPS cannot currently replace face-to-face household surveys, given constraints around the collection of anthropometric or biochemical data, they have the potential to rapidly reach a large number of people in a dispersed set of locations. The shift in ROI and the possibility of near-real-time updates allow for more frequent deployments of MPS, or even continual deployment of a core set of survey questions and indicators. The United Nations has estimated that the use of MPS for NCD monitoring in LMICs may reduce survey costs by up to 60% [10].

Earlier studies have explored the sampling requirements, implementation, and effectiveness of NCD risk factor surveillance when conducted through surveys employing interactive voice response (IVR), consisting of prerecorded audio messages [11,12,13,14,15,16,17,18]. Some of these studies have shown that population surveys conducted through mobile phones can be less expensive than face-to-face surveys [19].

The Johns Hopkins University mHealth Initiative and the Bloomberg Philanthropies Data for Health Initiative (D4H) [20], along with local collaborators in Bangladesh, Colombia, and Uganda, have assessed the feasibility and validity of NCD risk-factor data collection through MPS using IVR in these countries by placing calls to randomly generated mobile phone numbers [21]. Nested in these surveys, we assessed the potential costs of developing and deploying one such MPS over a period of 1 year, using information about costs and resource utilization captured during our previous surveys. The aim of this study was to provide decision-makers and funders with information on the average accounting costs of conducting MPS for NCD risk factors in LMICs, in order to assess the economic feasibility and potential financial implications of their implementation.

Methods

We conducted a microcosting analysis of the economic costs incurred if population-representative IVR surveys were implemented at the national level in three countries: Bangladesh, Colombia, and Uganda. Microcosting analyses have previously been used to appraise the costs of conducting population surveys [19] and other types of mHealth interventions [22].

This microcosting study was done from the perspective of the payer/funder, with an analytic horizon of 1 year, during which an initial phase of adaptation and validation of the survey questions and structure was conducted, along with one deployment of the full IVR survey at the national level. For this reason, no discounting was used in this study.

For this microcosting study, we used direct resource utilization inputs for (1) the required resources to adapt and validate a full IVR survey and its deployment and (2) the number of required survey calls that would need to be deployed in order to have a nationally representative survey of the adult population for each of the three countries studied. Regarding costs, we calculated (1) fixed costs, which are costs incurred to set up and conduct field adaptation and validation of IVR surveys (these represent the work required to conduct the formative and technology testing phase for these surveys), and (2) variable costs, which are incurred in the deployment of the IVR surveys (including associated incentives) [23].

This study has been approved by the Institutional Review Board of the Johns Hopkins School of Public Health, the Ethics Committee of the Institute of Public Health of Universidad Javeriana in Colombia, the Institutional Review Board of the Institute of Epidemiology, Disease Control and Research in Bangladesh, and the Institutional Review Board of Makerere University School of Public Health in Uganda.

Sampling

To calculate sample size estimates for the projected nationally representative IVR surveys, we first used the Stepwise Approach to Surveillance (STEPS) sample size calculator provided by the World Health Organization [24] to calculate the overall sample size for nationally representative STEPS surveys in each of the three countries, assuming a 95% confidence level, a 0.05 margin of error, and an estimated risk factor prevalence of 50%. We then assessed the specific sample sizes required for each of the eight age-sex categories sampled in STEPS surveys (males and females, stratified into four age categories: 18–29, 30–44, 45–59, and 60 or older) [25].
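The calculation above follows the standard prevalence-based sample-size formula. As a minimal sketch (assuming a design effect of 1.0 and treating the eight age-sex strata symmetrically, which are simplifications rather than the calculator's exact configuration):

```python
import math

def steps_sample_size(z=1.96, p=0.5, e=0.05, deff=1.0, strata=8):
    """Completed surveys needed per stratum and overall for a prevalence estimate.

    z: z-score for the confidence level (1.96 for 95%)
    p: expected prevalence (0.5 maximizes the required sample)
    e: absolute margin of error
    deff: design effect (assumed 1.0 in this sketch)
    """
    n_stratum = math.ceil(deff * z**2 * p * (1 - p) / e**2)
    return n_stratum, n_stratum * strata

n_stratum, n_total = steps_sample_size()  # 385 per stratum, 3,080 overall
```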

Parameters on average time for both complete and incomplete surveys as well as paid calls required per complete survey were obtained from the IVR MPS study (Table 1). Data for this study was collected in Bangladesh between July 6, 2018, and February 18, 2019; in Colombia between November 14, 2018, and January 25, 2019; and in Uganda between June 22, 2018, and January 9, 2019.

Table 1 Call sample and calls needed per completed survey, by country

Using these parameters, we calculated weights based on the differences between the age-sex population distribution of the completed IVR surveys in each country and the corresponding national age-sex population distribution [23]. Weights were needed because some age-sex groups have lower levels of mobile phone use (and thus lower response rates); a larger number of calls is therefore necessary to achieve the minimum sample for those demographic strata.
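This weighting step amounts to dividing each stratum's national population share by its share among completed surveys. A minimal sketch, using hypothetical shares rather than the study's actual distributions:

```python
def poststratification_weights(pop_share, sample_share):
    """Weight per stratum = national population share / completed-survey share."""
    return {g: pop_share[g] / sample_share[g] for g in pop_share}

# Hypothetical shares for two of the eight age-sex strata:
weights = poststratification_weights(
    pop_share={"male_18_29": 0.20, "female_60_plus": 0.08},
    sample_share={"male_18_29": 0.35, "female_60_plus": 0.02},
)
# Over-represented strata get weights below 1; under-represented strata
# (e.g., older women with low mobile phone use) get weights above 1.
```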

Unlike face-to-face surveys, the IVR surveys were deployed using random digit dialing (RDD) [26]. RDD generates random digits that together form a random phone number, yielding a number selection with less risk of bias. Since we did not use quota sampling, even when a given age-sex category had already reached its target sample size, the platform continued accepting surveys from that group until both of the following were achieved: (1) the total sample size (summed over all age-sex categories) and (2) the specific sample size required for every other age-sex category [26]. This implies, for example, that younger male respondents, who have higher rates of mobile phone use, are likely to contribute a greater proportion of completed surveys than their corresponding proportion of the population. Conversely, age-sex categories with less mobile phone use (older women, for example) are less likely to be sampled and consequently require more outgoing calls, and a greater overall number of completed surveys, to achieve their target sample sizes. This sampling issue results in higher costs: the incentives paid for the additional completed surveys, plus the costs of the extra calls and extra survey time needed to reach the desired sample size in all age-sex strata.

Because this study was conducted under a non-quota sampling scenario, quota sampling might further reduce the costs of deploying these surveys.
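The cost implication of unequal response rates can be sketched by converting per-stratum completion probabilities into expected outgoing calls. The completion probabilities below are hypothetical, not the Table 1 values:

```python
import math

def required_dials(target_completes, completes_per_dial):
    """Expected outgoing calls per stratum to reach its target sample size."""
    return {g: math.ceil(target_completes[g] / completes_per_dial[g])
            for g in target_completes}

dials = required_dials(
    target_completes={"male_18_29": 385, "female_60_plus": 385},
    completes_per_dial={"male_18_29": 0.05, "female_60_plus": 0.005},
)
# With a tenfold lower completion probability, the harder-to-reach stratum
# needs ten times as many dials for the same target sample.
```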

Resource utilization

The MPS project that served as the basis for this costing analysis was divided into two phases: formative work and technology testing. Briefly, the formative work consisted of focus group discussions and user groups to validate the translation of the questionnaire, ensure its comprehensibility, and adapt it using country-specific examples. Using the adapted questionnaire and IVR platform from the formative work, the technology testing phase sent the IVR surveys to RDD participants to confirm that the platform could successfully deliver surveys and incentives and that data were extractable from the platform.

The calculation of resource utilization in this study is drawn from the actual resources used during these two phases. Resource utilization associated with one-time costs during the formative phase represents the actual resource utilization of the local partners. Resource utilization associated with survey deployment (i.e., variable costs, including calls deployed and incentives paid) was estimated based on the sampling strategy described above and the response rates observed during the project.

Cost inputs

Cost inputs for the initial formative and adaptation work were based on actual costs incurred by in-country partner institutions and by the IVR technology platform providers (setup costs). Fixed costs related to platform programming and setup and to the preceding formative work were obtained in local currency units (LCU) from our collaborators and converted to 2017 United States dollars (USD).

Variable costs, which were charged by survey providers and consist of survey-time and incentives, are presented only in USD because survey providers generate charges in this currency. Since there are no local market forces involved (countries would pay for MPS in USD regardless of their purchasing power parity), we decided to present variable costs only in USD$ (Table 1).

Table 1 shows the inputs used in our estimation, including call sample sizes, calls needed per completed survey by country, average minutes for an incomplete call, average minutes for a complete call, and price per minute. Further data on the age-sex distribution by country for complete calls, and for calls in which the respondent answered the age question (both complete and incomplete, under a fixed incentive), are provided in Appendix 1.

One-way sensitivity analyses and subgroup analysis

One-way sensitivity analyses were performed on the primary factors affecting variable costs, namely the survey pricing structure and the type of incentive offered. For the former, we assessed two scenarios: (1) a price per minute and (2) a flat fee per completed survey. For the first scenario, we calculated the average number of calls needed per completed survey (that is, the number of missed calls and incomplete surveys incurred to obtain one complete survey) and the average duration of both complete and incomplete surveys. In a price-per-minute scenario, the payer/funder pays for both completed and incomplete calls on an as-is basis. In the second scenario, we calculated the costs when the provider charges a flat fee per completed survey, thus shifting the costs of incomplete surveys from the payer/funder to the provider.
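The two pricing scenarios reduce to simple cost functions. A sketch with placeholder parameter values (not the Table 1 inputs):

```python
def cost_price_per_minute(completes, calls_per_complete,
                          min_complete, min_incomplete, price_per_min):
    """Scenario 1: the payer is billed for airtime on every call, complete or not."""
    complete_minutes = completes * min_complete
    incomplete_minutes = completes * (calls_per_complete - 1) * min_incomplete
    return (complete_minutes + incomplete_minutes) * price_per_min

def cost_flat_fee(completes, fee_per_complete):
    """Scenario 2: the provider internalizes incomplete calls in a per-survey fee."""
    return completes * fee_per_complete

# Placeholder values: 5,000 completes, 12 dials per complete survey, 10-minute
# complete calls, 1-minute incomplete calls, $0.10/min airtime vs. a $2.50 fee.
per_minute = cost_price_per_minute(5000, 12, 10.0, 1.0, 0.10)  # ~10,500 USD
flat = cost_flat_fee(5000, 2.50)                               # 12,500 USD
```

Which scenario is cheaper depends on how the flat fee compares with the expected airtime per completed survey, which is why neither dominated across countries.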

When reporting the costs of the incentives offered, we assessed two scenarios: (1) an incentive per completed survey and (2) a lottery incentive. Incentives were provided in all cases as mobile phone airtime credits. For the first scenario, we used the amount and type of incentive provided per completed survey as cost inputs, i.e., 0.6 USD, 1.58 USD, and 1.34 USD per completed survey for Bangladesh, Colombia, and Uganda, respectively, together with the number of complete surveys estimated from the sampling described above. In the second scenario, the incentive consisted of the chance of “winning” a lottery for mobile phone credit: one of every 20 completed surveys received the equivalent of 6 USD, 15.8 USD, and 26.8 USD for Bangladesh, Colombia, and Uganda, respectively [26]. This corresponds to 10 times the per-survey incentive in Bangladesh and Colombia and 20 times the per-survey incentive in Uganda. The amounts in each country were decided in consultation with local experts and aligned with incentives used in other studies in each country. As these were the actual structures and amounts of the incentives offered during the formative phase, we used the actual response rates obtained during the trials.
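Using the study's Bangladesh inputs (a 0.6 USD incentive per completed survey versus a 6 USD prize at 1-in-20 odds, for the 20,467 completed surveys estimated below), the expected incentive costs compare as follows:

```python
def fixed_incentive_cost(completes, amount):
    """Every completed survey receives an airtime credit of `amount`."""
    return completes * amount

def lottery_incentive_cost(completes, prize, odds=1 / 20):
    """One in every 1/odds completed surveys receives a larger prize."""
    return completes * odds * prize

fixed = fixed_incentive_cost(20467, 0.60)     # ~12,280 USD
lottery = lottery_incentive_cost(20467, 6.0)  # ~6,140 USD
# A 10x prize at 1-in-20 odds halves the expected incentive cost; a 20x prize
# (as in Uganda) makes the two structures cost the same in expectation.
```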

Given that costs increased significantly when trying to capture older populations (which have lower mobile phone ownership), we also assessed and compared variable costs for a population-level, nationally representative survey of all adults versus its counterpart restricted to adults younger than 45 years. This manuscript adheres to the Consolidated Health Economic Evaluation Reporting Standards (CHEERS) [27].

Results

We calculated the target sample of calls required for Bangladesh, Colombia, and Uganda to be 20,467, 5,800, and 4,947 completed IVR surveys, respectively. In Fig. 1, we present the different cost components assessed and how they are presented in the tables included in this study.

Fig. 1

Cost components assessed in this study. The figure relates each estimated cost component to its corresponding table, to guide the reading of the paper

In Tables 2 and 3, we present the estimated total and fixed costs, respectively, across the three countries (Bangladesh, Colombia, and Uganda) in USD. From these numbers, we see that Bangladesh has the highest total and fixed costs ($203,122 and $74,455, respectively), followed by Colombia ($86,049 and $47,038) and Uganda ($84,282 and $53,976).

Table 2 Distribution of total costs across countries
Table 3 Distribution of fixed costs across countries

Table 4 describes the variable costs of the survey, reported in USD, by (1) survey pricing structure and (2) type of incentive. No clear pattern emerged across the combinations of pricing mechanisms when incentives per completed survey were used. Of note, costs were overall lower and less heterogeneous when a lottery incentive was applied.

Table 4 Distribution of variable costs by type of incentive used and survey pricing mechanism, reported in USD

Table 5 shows the distribution of survey-time costs, reported in USD, across the different scenarios of survey-time pricing and incentive delivery. Survey-time costs were higher under a flat fee in Uganda but not in Bangladesh. Bangladesh had a much lower cost under the lottery incentive mechanism than under a fixed incentive, making the lottery a promising strategy to reduce survey costs. Uganda did not show similar results because the expected incentive amount was the same for both the lottery and the incentive per completed survey.

Table 5 Distribution of survey-time costs by type of incentive used and survey pricing mechanism, reported in USD

Table 6 reports the costs, in USD, of offering incentives to those who complete the surveys, under the two incentive mechanisms. The first, a fixed incentive per completed survey, is a smaller amount (equivalent to 1–2 USD in airtime credit) given to every participant who completes the survey. The second is delivered through a lottery, with 1 in 20 participants who complete the survey receiving a higher incentive amount. The lottery mechanism yielded a lower incentive cost in Bangladesh, while no cost difference was seen in Uganda (likely because the lottery odds were 1 in 20 and the lottery prize was 20 times the per-survey incentive). The pattern across countries was maintained: incentive costs were lowest in Colombia, followed by Uganda, with Bangladesh highest owing to the larger number of complete surveys required to obtain a population-representative survey under a non-quota scenario.

Table 6 Distribution of incentive costs based on type of incentive used and survey pricing mechanism, reported in USD

Tables 4, 5, and 6 make clear that the main driver of variable costs is survey time. As explained in the sampling section, survey time increases when reaching subpopulations with lower mobile phone use or ownership (e.g., older adults). We therefore assessed and compared the variable costs of a nationally representative survey of all adults versus a similar survey restricted to adults younger than 45 years (Table 7). Variable costs decreased by 25–58% when surveying only adults younger than 45 years compared with a nationally representative sample including all age categories.

Table 7 Estimate of variable costs incurred in the collection of nationally representative data for all age groups compared to surveys focusing on persons aged 18–44 years old, reported in US dollars

Discussion

Mobile phone surveys (MPS) are an emerging method for conducting population health surveys that may provide less expensive and more timely data for monitoring NCDs and NCD-related risk factors in LMIC settings. In general, we found that overall costs are proportional to the required sample sizes, and sample sizes are inversely correlated with mobile phone use; that being charged per minute of survey airtime is not always more expensive than being charged a flat fee; that lottery incentive mechanisms to encourage participation are less expensive than a fixed amount for each completion; and that sampling adult respondents younger than 45 years requires fewer random digit dials and is much less expensive.

We examined the various cost components of adapting and implementing an MPS in the differing contexts of Bangladesh, Colombia, and Uganda. We found that the fixed costs of adapting, refining, and setting up one nationally representative survey in these three countries ranged between 47,000 and 75,000 USD. Fixed costs are mostly related to implementation and largely homogeneous. Variable costs, which are effectively the sum of survey time (reflecting both the number of calls and their duration) and incentive costs, were more heterogeneous than fixed costs; the main variable cost driver is survey time. The total cost of deploying a nationally representative MPS is much lower than that of some comparable household surveys, which previous reports suggest can reach upwards of 1 million USD [28]. This is aligned with similar cost comparisons in the area of consumer research [29].

The number of calls required to obtain a nationally representative survey accounts for most of the variable-cost differences across countries. Bangladesh has the highest variable costs, mainly because it requires a larger number of calls to reach the minimum sample size for each age-sex category. This is associated mostly with lower mobile phone use among older adults. Furthermore, literacy levels and the demands of responding to an automated survey like IVR may be more cognitively challenging for the elderly and hence play a role in the lower response rates in this population group [30]. For this reason, MPS are less expensive to conduct in Colombia and Uganda than in Bangladesh. This is the opposite of what we would expect with a face-to-face household survey, where labor and transport costs would be expected to increase survey costs in those two countries.

We should note that these cost estimates were generated using a non-quota sample. Quota sampling methods may reduce costs even further by stopping survey delivery to individuals whose demographic group has already reached its target sample. This can be implemented, for example, by imposing quotas by age at the beginning of the survey, so that a call to a filled quota costs only about 1 minute instead of the average 10 minutes a fully answered survey might take.
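The airtime saved by such an early age screener can be approximated as follows; the surplus-call volume and price per minute are placeholders, while the 1-minute versus 10-minute durations follow the example above:

```python
def quota_airtime_saving(surplus_calls, full_min=10.0, screen_min=1.0,
                         price_per_min=0.10):
    """Airtime cost avoided when calls to already-filled strata end at the screener."""
    return surplus_calls * (full_min - screen_min) * price_per_min

saving = quota_airtime_saving(1000)  # ~900 USD saved on 1,000 surplus calls
```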

The limitations of this study stem mostly from the actual implementation of the MPS that served as the basis for the cost estimates, and from the types of data that can be collected via MPS. Because the IVR platform used to generate calls and provide incentives in Uganda and Bangladesh was not operative in Colombia, our study team could not use the same technology provider in all three study countries. Nonetheless, the platforms were generally similar, and the agreements reached with the two providers (one covering Bangladesh and Uganda, the other covering Colombia) were very similar. This costing study was conducted as part of the formative phase of the project, so costs are based on a deployment managed by collaborating academic/research institutions rather than national or international organizations. A full-scale national implementation may have components we have not accounted for, including further savings due to scale or additional unexpected fixed costs.

We evaluated the costs of an IVR MPS on its own; however, MPS need not be implemented to the exclusion of other household or site-based survey methods. It may be used in conjunction with such modalities to increase the efficiency of traditional methods for monitoring risk factors. Different phone-based delivery modalities can also be mixed: for example, IVR can be used for younger populations, while computer-assisted telephone interviewing (CATI) or a call center may be better options for surveying older populations, as these strategies are more interactive and give the respondent a chance to seek clarification from a human interviewer, which is not possible with IVR [18].

Mobile phone surveys can be included among efforts to reduce costs and improve responsiveness in the monitoring and evaluation of NCD risk factors. This study fills a gap in knowledge about deployment costs in three regionally and socio-demographically different countries, helping decision-makers make better-informed decisions about the potential use of these surveys to evaluate NCD policy impact, or to monitor NCD risk factors more generally, within fiscal constraints. It thereby provides data for a better understanding of the economic feasibility of conducting MPS for NCD risk factors in LMICs.

Conclusion

Broader patterns in survey costs are evident: mobile phone surveys conducted through IVR are relatively inexpensive; being charged per minute of survey airtime is not always more expensive than being charged a flat fee; lottery incentive mechanisms to encourage participation are generally less expensive than a fixed amount for each completion; and sampling adult respondents younger than 45 years requires fewer random digit dials and phone calls and is much less expensive.

Availability of data and materials

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations

CATI: Computer-assisted telephone interviewing
CHEERS: Consolidated Health Economic Evaluation Reporting Standards
D4H: Bloomberg Philanthropies Data for Health Initiative
IVR: Interactive voice response
LCU: Local currency units
LMIC: Low- and middle-income countries
MPS: Mobile phone surveys
NCD: Noncommunicable diseases
RDD: Random digit dialing
ROI: Return on investment
STEPS: Stepwise Approach to Surveillance
USD: United States dollars

References

  1. 1.

    Bygbjerg IC. Double burden of noncommunicable and infectious diseases in developing countries. Science. 2012;337(6101):1499–501. https://doi.org/10.1126/science.1223466.

    CAS  Article  PubMed  Google Scholar 

  2. 2.

    Roth GA, Abate D, Abate KH, Abay SM, Abbafati C, Abbasi N, et al. Global, regional, and national age-sex-specific mortality for 282 causes of death in 195 countries and territories, 1980–2017: a systematic analysis for the Global Burden of Disease Study 2017. The Lancet. 2018;392(10159):1736–88. https://doi.org/10.1016/S0140-6736(18)32203-7.

    Article  Google Scholar 

  3. 3.

    Nugent R, Bertram MY, Jan S, Niessen LW, Sassi F, Jamison DT, et al. Investing in non-communicable disease prevention and management to advance the Sustainable Development Goals. The Lancet. 2018;391(10134):2029–35. https://doi.org/10.1016/S0140-6736(18)30667-6.

    Article  Google Scholar 

  4. 4.

    Henning K. Addressing the gap in noncommunicable disease data with technology and innovation | Health Affairs. Health Aff Blog [Internet]. 2017 [cited 2019 Jan 21]; Available from: https://www.healthaffairs.org/do/https://doi.org/10.1377/hblog20170921.062094/full/

  5. 5.

    Alwan A, Maclean DR, Riley LM, d’Espaignet ET, Mathers CD, Stevens GA, et al. Monitoring and surveillance of chronic non-communicable diseases: progress and capacity in high-burden countries. Lancet Lond Engl. 2010;376(9755):1861–8. https://doi.org/10.1016/S0140-6736(10)61853-3.

    Article  Google Scholar 

  6. 6.

    Boerma JT, Stansfield SK. Health statistics now: are we making the right investments? Lancet. 2007;369(9563):779-86. https://doi.org/10.1016/S0140-6736(07)60364-X.

  7. 7.

    Gibson DG, Pereira A, Farrenkopf BA, Labrique AB, Pariyo GW, Hyder AA. Mobile phone surveys for collecting population-level estimates in low- and middle-income countries: a literature review. J Med Internet Res. 2017;19(5):e139.

    Article  Google Scholar 

  8. 8.

    Pariyo GW, Wosu AC, Gibson DG, Labrique AB, Ali J, Hyde AA. Moving the agenda on noncommunicable diseases: policy implications of mobile phone surveys in low and middle-income countries. J Med Internet Res. 2017;19(5):e115. https://doi.org/10.2196/jmir.7302.

    Article  PubMed  PubMed Central  Google Scholar 

  9. 9.

    Drury P. Guidance for Investing in Digital Health [Internet]. Asian Development Bank; 2018 [cited 2019 Jan 21]. Available from: https://www.adb.org/publications/guidance-investing-digital-health

  10. 10.

    United Nations Sustainable Development Solutions Network. Data for development: a needs assessment for SDG monitoring and statistical capacity development [Internet]. 2015. Available from: http://unsdsn.org/wp-content/uploads/2015/04/Data-for-Development-Full-Report.pdf

  11. 11.

    Gibson DG, Farrenkopf BA, Pereira A, Labrique AB, Pariyo GW. The development of an interactive voice response survey for noncommunicable disease risk factor estimation: technical assessment and cognitive testing. J Med Internet Res. 2017;19(5):e112. https://doi.org/10.2196/jmir.7340.

    Article  PubMed  PubMed Central  Google Scholar 

  12. 12.

    Galán I, Rodríguez-Artalejo F, Tobías A, Gandarillas A, Zorrilla B. Vigilancia de los factores de riesgo de las enfermedades no transmisibles mediante encuesta telefónica: resultados de la Comunidad de Madrid en el período 1995-2003. Gac Sanit. 2005;19(3):193-205.

  13. 13.

    Link MW, Mokdad AH. Alternative modes for health surveillance surveys: an experiment with Web, mail, and telephone. Epidemiology. 2005;16(5):701–4. https://doi.org/10.1097/01.ede.0000172138.67080.7f.

    Article  PubMed  Google Scholar 

  14. 14.

    Labrique A, Blynn E, Ahmed S, Gibson D, Pariyo G, Hyder AA. Health surveys using mobile phones in developing countries: automated active strata monitoring and other statistical considerations for improving precision and reducing biases. J Med Internet Res. 2017;19(5):e121. https://doi.org/10.2196/jmir.7329.

    Article  PubMed  PubMed Central  Google Scholar 

  15. 15.

    Carstensen LS, Tamason CC, Sultana R, Tulsiani SM, Phelps MD, Gurley ES, et al. The cholera phone: diarrheal disease surveillance by mobile phone in Bangladesh. Am J Trop Med Hyg. 2019;100(3):510–6. https://doi.org/10.4269/ajtmh.18-0546.

    Article  PubMed  PubMed Central  Google Scholar 

  16. 16.

    Lamanna C, Hachhethu K, Chesterman S, Singhal G, Mwongela B, Ng’endo M, et al. Strengths and limitations of computer assisted telephone interviews (CATI) for nutrition data collection in rural Kenya. PloS One. 2019;14(1):e0210050. https://doi.org/10.1371/journal.pone.0210050.

    CAS  Article  PubMed  PubMed Central  Google Scholar 

  17. 17.

    L’Engle K, Sefa E, Adimazoya EA, Yartey E, Lenzi R, Tarpo C, et al. Survey research with a random digit dial national mobile phone sample in Ghana: methods and sample quality. PLOS ONE. 2018;13(1):e0190902. https://doi.org/10.1371/journal.pone.0190902.

    CAS  Article  PubMed  PubMed Central  Google Scholar 

  18. 18.

    Pariyo GW, Greenleaf AR, Gibson DG, Ali J, Selig H, Labrique AB, et al. Does mobile phone survey method matter? Reliability of computer-assisted telephone interviews and interactive voice response non-communicable diseases risk factor surveys in low and middle income countries. PLOS ONE. 2019;14(4):e0214450. https://doi.org/10.1371/journal.pone.0214450.

  19.

    Mahfoud Z, Ghandour L, Ghandour B, Mokdad AH, Sibai AM. Cell phone and face-to-face interview responses in population-based surveys: how do they compare? Field Methods. 2015;27(1):39–54. https://doi.org/10.1177/1525822X14540084.

  20.

    Hyder AA, Wosu AC, Gibson DG, Labrique AB, Ali J, Pariyo GW. Noncommunicable disease risk factors and mobile phones: a proposed research agenda. J Med Internet Res. 2017;19(5):e133.

  21.

    Mundt JC, Katzelnick DJ, Kennedy SH, Eisfeld BS, Bouffard BB, Greist JH. Validation of an IVRS version of the MADRS. J Psychiatr Res. 2006;40(3):243–6. https://doi.org/10.1016/j.jpsychires.2005.05.002.

  22.

    Mangone ER, Agarwal S, L’Engle K, Lasway C, Zan T, van Beijma H, et al. Sustainable cost models for mHealth at scale: modeling program data from m4RH Tanzania. PLoS One. 2016;11(1):e0148011. https://doi.org/10.1371/journal.pone.0148011.

  23.

    Gibson DG, Pariyo GW, Wosu AC, Greenleaf AR, Ali J, Ahmed S, et al. Evaluation of mechanisms to improve performance of mobile phone surveys in low- and middle-income countries: research protocol. JMIR Res Protoc. 2017;6(5):e81. https://doi.org/10.2196/resprot.7534.

  24.

    World Health Organization. STEPS Sample Size Calculator [Internet]. WHO. 2019 [cited 2019 Jan 21]. Available from: http://www.who.int/ncds/surveillance/steps/resources/sampling/en/

  25.

    Riley L, Guthold R, Cowan M, Savin S, Bhatti L, Armstrong T, et al. The World Health Organization STEPwise approach to noncommunicable disease risk-factor surveillance: methods, challenges, and opportunities. Am J Public Health. 2016;106(1):74–8. https://doi.org/10.2105/AJPH.2015.302962.

  26.

    Waksberg J. Sampling methods for random digit dialing. J Am Stat Assoc. 1978;73(361):40–6. https://doi.org/10.1080/01621459.1978.10479995.

  27.

    Husereau D, Drummond M, Petrou S, Carswell C, Moher D, Greenberg D, et al. Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement. BMJ. 2013;346:f1049. https://doi.org/10.1136/bmj.f1049.

  28.

    Development Initiatives. Key facts on household surveys [Internet]. Development Initiatives. 2017 [cited 2019 Mar 29]. Available from: http://devinit.org/wp-content/uploads/2017/07/Key-facts-on-household-surveys.pdf

  29.

    Szolnoki G, Hoffmann D. Online, face-to-face and telephone surveys—comparing different sampling methods in wine consumer research. Wine Econ Policy. 2013;2(2):57–66. https://doi.org/10.1016/j.wep.2013.10.001.

  30.

    Torres-Quintero A, Vega A, Gibson DG, Rodriguez-Patarroyo M, Puerto S, Pariyo GW, et al. Adaptation of a mobile phone health survey for risk factors for noncommunicable diseases in Colombia: a qualitative study. Glob Health Action. 2020;13(1):1809841. https://doi.org/10.1080/16549716.2020.1809841.

Funding

This study was made possible by the generous support of Bloomberg Philanthropies (https://www.bloomberg.org/) and the people of Australia through the Department of Foreign Affairs and Trade (https://dfat.gov.au/pages/default.aspx) through award number 119668. The contents are the responsibility of the authors and do not necessarily reflect the views of Bloomberg Philanthropies or the Government of Australia. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Author information

Contributions

AV: conceived the design of the study, conducted the analysis, and contributed to the write-up. MN: facilitated the analysis and contributed to the write-up. KRK, SA, RT, DG, and IAK: collected data, contributed to the analysis, and contributed to the write-up. JA, AL, and GP: contributed to the design of the study and to the write-up. ER: contributed to the design of the study, to the analysis, and to the write-up. The authors read and approved the final manuscript.

Corresponding author

Correspondence to Andres I. Vecino-Ortiz.

Ethics declarations

Ethics approval and consent to participate

As described in the manuscript, this study was approved by the institutional review board (IRB) of the main institution and by the IRBs of each of the local partner institutions.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

Appendix 1

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Vecino-Ortiz, A.I., Nagarajan, M., Katumba, K.R. et al. A cost study for mobile phone health surveys using interactive voice response for assessing risk factors of noncommunicable diseases. Popul Health Metrics 19, 32 (2021). https://doi.org/10.1186/s12963-021-00258-z

Keywords

  • Mobile phone surveys
  • Noncommunicable chronic diseases
  • Cost study
  • Surveillance