Introduction
Timely detection of diseases, epidemics, and pandemics enables public health authorities to implement preventive measures such as quarantine, vaccination campaigns, and public awareness initiatives. 1 Hence, early warning systems for epidemics play a crucial role in preventing the rapid spread of infectious diseases and mitigating their impact on public health; they serve as a proactive defence against the escalation of epidemics, allowing for a faster and more targeted response to protect communities and save lives. These systems utilize a combination of surveillance, data analysis, and technology to detect and monitor the emergence of potential outbreaks. To speed up the detection of early signs of epidemics, it is necessary to analyze various data sources, including reports of unusual symptoms, laboratory results, and even social media trends, to identify patterns indicative of a potential epidemic. 2
Integrating modern technologies like artificial intelligence (AI) and machine learning can speed up data analysis and anomaly detection. 3 Indeed, AI can process vast amounts of data from diverse sources, such as medical records, social media, and environmental monitoring devices, to detect patterns and anomalies that may indicate the onset of a disease, an infection outbreak, an epidemic, or a pandemic. Furthermore, AI can enhance the accuracy and efficiency of early warning systems, as it allows for detecting complex patterns in large and small data sets. 4 AI models can continuously learn and adapt, improving their ability to identify potential threats. Integrating AI into early warning systems enhances the speed and efficiency of epidemic detection by accounting for the complex factors contributing to disease outbreaks. By utilizing the capabilities of AI, early warning systems become more proactive, enabling authorities to implement timely interventions and preventive measures, ultimately contributing to the containment and management of epidemics.
However, the use of AI includes some risks, such as technical challenges (e.g., clinical integration), bias and lack of fairness, 5 limited generalization, 6 data privacy and security, 7 and ethical considerations (e.g., consent). 8
To date, no framework exists for developing AI for a pandemic, nor have best practices, essential features to include, or pitfalls to avoid been documented. Given the importance of AI in the development of epidemic and pandemic early warning systems, the evidence demonstrating the effectiveness of AI in health, the rise of research on AI for health (including during the COVID-19 pandemic), and the benefits and risks inherent in the use of AI for health, it is essential to synthesize the available evidence regarding the use of AI for epidemic and pandemic early warning systems. While there is a systematic review on the use of deep learning for epidemic detection, 9 a narrative overview on the use of AI in public health for early warning systems, 10 a systematic review on the use of AI for the detection of the COVID-19 pandemic, 11 and a systematic review on the effectiveness of early warning systems in detecting infectious disease outbreaks, 12 to our knowledge, there is no review on the use of AI for epidemic and pandemic early warning systems.
To address this knowledge gap, we conducted a systematic scoping review 13 of the literature on AI and disease, epidemic, and pandemic outbreaks. The study objectives were to examine whether AI is adequate for effective early warning systems and to draw lessons from the available evidence. The questions that guided us were as follows: (1) “What type of AI did early warning systems use?”, (2) “Was the use of AI effective in detecting outbreaks?” and (3) “What type of strategies (if any) were implemented to mitigate the risks of AI bias?”. The results will inform the design and development of future AI-based early warning systems. This review provides public health practitioners, AI specialists, and policymakers with insights on the items to consider when building AI-based early warning systems.
Methods
Search strategy
We followed the PRISMA Extension for Scoping Reviews (PRISMA-ScR) guideline. 14 The following five online databases were used to conduct the systematic scoping review: IEEE Xplore, ACM Digital Library, Web of Science, PubMed, and PsycINFO. The search strategy was built around key terms established with the advice of a supervisor: surveillance, forecast, foresee, pandemic, epidemic, outbreak, artificial intelligence, AI, and machine learning. The search was conducted on August 11, 2023. Of the 1450 citations retrieved, 363 were duplicates or retracted, leaving 1087 articles. All the search strategies are in the Appendix.
Eligibility criteria
The inclusion criteria were predetermined: studies published in the last 5 years, limited to journal articles and conference papers focusing on how artificial intelligence or machine learning is used to develop an early warning system for disease outbreaks. Studies were excluded if they were not written in English or were not scholarly publications. As this was a scoping review, the search terms were limited to the titles and abstracts of the publications.
Selection process
The inclusion and exclusion criteria were applied to the 1087 abstracts. Two authors (the second and third authors) screened the 1087 articles using the Rayyan software, 15 and 64 were identified as potentially eligible. Of the 64, one article was not retrievable; both authors reviewed the full-text versions of the 63 retrieved articles to make the final selection. Thirty-three articles fit the inclusion criteria and were selected for review and synthesis. The literature search and article collection were conducted by the second and third authors; the literature review and data extraction were conducted by both and reviewed by the first author. A discussion was held to reach an agreement in case of ambiguities or disagreements. This review was not registered in a prospective review registry.
Results
In total, 1450 articles were identified, 363 of which were duplicates or retracted articles. Of the 1087 unique articles, 1023 were excluded based on the content of their abstracts, and one article was not available for retrieval. The inclusion criteria were then applied to the full texts of the remaining 63 articles, and 33 were included in the analysis (Figure 1). A complete summary of the included studies is provided in Table 1.
For the data charting process, we utilized spreadsheets to systematically organize and extract relevant data from the included studies. This process was conducted independently by two reviewers to ensure accuracy and consistency. Any discrepancies between the reviewers were resolved through discussion and consensus.
The data items collected from each study included the studies’ models, performance, and outcomes, as well as the dataset characteristics (see Tables 2 and 3).
We synthesized the results by collating and summarizing the extracted data to identify patterns and trends across the included studies. This narrative synthesis was complemented by visual representations (i.e., tables) to effectively convey the range and nature of the evidence. The findings were then interpreted in the context of the existing literature, discussing their implications for practice, policy, and future research directions.
Early warning systems description: Countries, purpose, algorithms, and data characteristics
Countries
The 33 studies covered 23 countries. Thirteen studies (40%) were conducted in Asia, five (15%) in Latin America, four (12%) in North America, five (15%) globally across multiple countries, three (9%) in Africa, and three (9%) in Europe (Table 1).
Purpose
Thirteen studies (40%) aimed at predicting outbreaks of dengue fever,16–28 nine studies (27%) of COVID-19,29–37 four studies (12%) of influenza,38–41 three studies (9%) of malaria,42–44 one study (3%) of cholera, 45 one study (3%) of tuberculosis, 46 one study (3%) of Zika virus, 47 and most notably only one system was generalist, targeting multiple diseases. 48
Algorithms
Seven studies (21%) used classification techniques to predict the occurrence of an outbreak, 25 studies (76%) used regression techniques to predict the number of cases, and one study (3%) used a classification technique to predict the occurrence of an outbreak on top of a regression technique that predicted the number of cases. Regression models produced weekly or monthly case estimates, while classification models predicted the occurrence of outbreaks one or more months in advance; both types of modelling targeted weekly, monthly, or yearly predictions. The algorithms used varied widely, topped by Random Forest and Neural Networks, used in six studies each, followed by Long Short-Term Memory (LSTM) networks in five studies, boosting-based models in five, linear regression in three, and probabilistic approaches in two. Support Vector Machine (SVM), an SVM-based ensemble technique, K-Nearest Neighbours (KNN), Logistic Regression, and Seasonal Auto-Regressive Integrated Moving Average (SARIMA) were each used in one study. One study was unique in that it combined a SARIMA model and a boosting (XGBoost) model. A complete summary of the studies’ best-performing models, performance measurements, and outcomes can be found in Table 2.
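To make the regression-then-threshold pattern concrete, here is a minimal, purely illustrative Python sketch (our own, not code from any reviewed study; the moving-average “model”, the case counts, and the threshold are hypothetical stand-ins):

```python
# Illustrative sketch (not from any reviewed study): turning weekly case
# regression estimates into a binary outbreak alert via a fixed threshold.

def moving_average_forecast(history, window=4):
    """Naive regression stand-in: forecast next week's cases as the
    mean of the last `window` observed weeks."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def outbreak_alert(predicted_cases, threshold):
    """Classification step: flag an outbreak when the predicted
    case count exceeds an epidemic threshold."""
    return predicted_cases > threshold

weekly_cases = [12, 15, 14, 30, 55, 80]   # hypothetical surveillance counts
forecast = moving_average_forecast(weekly_cases)
alert = outbreak_alert(forecast, threshold=40)
```

In practice, the regression step would be one of the reviewed model families (e.g., Random Forest, LSTM, or SARIMA), and the threshold would come from an epidemiological definition of an outbreak rather than a fixed constant.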
Dataset characteristics
The data sources for the studies can be categorized into three groups: health, meteorological/climatic, and miscellaneous (e.g., search engines and social media); the sources and their types are summarized per study in Table 3. Notably, many studies combined health data with meteorological or environmental data: 19 studies (58%) used climate data in addition to health data. Only three studies (9%) supplemented their features with human interaction data from online sources, namely Google search trends from Google Trends 41 and Baidu search engine keywords,38,40 with one of the latter two studies also using microblogs from Sina Weibo. 38
AI techniques: Types, effectiveness and performance
The 33 studies encompass a broad range of applications of machine learning and AI techniques in predicting and managing various disease outbreaks. There were commonalities and differences among these studies in terms of types of AI techniques used, their effectiveness, and the factors influencing their performance.
Types of AI techniques
The 33 studies employed a variety of AI techniques with a diversity of approaches. There were six types of algorithms: time series, instance-based models, linear models, neural networks (including recurrent neural network), Ensemble learning, and hybrid models (see Table 1).
Time series algorithms (SARIMA) and instance-based learning (KNN) algorithms were used in one study each (3%), linear model algorithms in three studies (9%), neural networks and ensemble learning in nine studies each (27%), and hybrid approaches in 10 studies (30%).
Deep learning models were prevalent due to their ability to detect complex relationships within the data; in particular, Long Short-Term Memory (LSTM) networks (five of the nine neural network studies) were used for time-series forecasting due to their capacity to capture temporal dependencies.26,27,29,40,41,49
Additionally, ensemble methods, which combine multiple models, particularly Random Forest (four of the nine ensemble studies),36,46,50,51 were employed for their ability to enhance prediction accuracy by reducing variance and bias.
Effectiveness of the AI techniques
The effectiveness of these AI techniques varied across studies, but they demonstrated promising results in predicting disease outbreaks. Advanced machine learning techniques like Random Forests and Deep Learning models consistently outperformed traditional statistical methods, achieving accuracy rates ranging from 55.6% 46 to 95%. 51
It is worth noting that deep learning models, despite their high accuracy, were often considered “black boxes,” making their predictions difficult to interpret.18,42,43,50 This trade-off between accuracy and interpretability highlights a well-known challenge.
Performance of the AI techniques
Several factors influenced the performance of the AI techniques; they are detailed in the following sections.
System challenges
The studies covered in this review underscore the complex challenges epidemic and outbreak prediction models face, ranging from data quality and availability issues to model generalizability and adaptability limitations in dynamic scenarios. System challenges were not reported in eight studies.28,30,35–39,43
Model’s biases
Generalizability
One limitation involves a model’s focus on a specific population (infants under 5 years old), reducing its generalizability to the entire population, along with a tendency to underestimate severe and overestimate mild outbreaks.18,44 In the case of tuberculosis prediction models, the generalizability challenge was also encountered due to the geopolitical clustering of the data (while the disease knows no borders) and the exclusion of data from children. 46 In malaria prediction, errors are recognized due to diverse climate types and the inherent incompleteness and vagueness of health-related datasets, emphasizing the need for cautious use of prediction tools. 42 Similarly, studies on dengue prediction note the challenge of diminishing accuracy with forward forecasting, especially for small events, and the model’s limited adaptability to various scenarios (e.g., data from cities with different populations). 17 In one study assessing cholera outbreaks, overfitting was a concern as the model performed markedly better in districts and states with superior healthcare, potentially compromising its generalizability. 45 Limitations in real-time or dynamic outbreak prediction are also highlighted; additionally, one study’s focus on a specific region (Tamil Nadu, India) raises concerns about the model’s generalizability to different geographical areas. 23
One study notes that its model tends to underestimate the number of cases during severe outbreaks and overestimate them in mild outbreaks despite correctly identifying the occurrence of outbreaks. 18 Another study faces a significant disadvantage as the model fails to accurately identify primary rises in case numbers, resulting in the omission of significant outbreaks. 21
False positives and false negatives
A model with a high percentage of false negatives (i.e., low recall) raises the risk of unpreparedness for outbreaks and potential loss of lives, while too many false positives erode confidence and can rapidly deplete limited resources. Only four studies reported on such errors in their AI models for epidemic and outbreak prediction.
Four studies report on false negatives. Harvey et al. noted a high number of false negatives when using lower percentiles to trigger alerts for malaria epidemics, resulting in many true epidemics being missed despite improved recall at higher percentiles. 52 Aleixo et al. discuss a case where their model predicted 72 dengue cases against an actual value of 79, highlighting the challenge of predicting borderline cases. 18 Campbell et al. identify false negatives in their cholera risk prediction model, particularly in districts with fewer outbreaks, underscoring the difficulty of accurate prediction in areas with limited data. 50 Similarly, Nguyen et al. report that their LSTM-ATT model missed true dengue outbreak months in some provinces, leading to false negatives despite generally high accuracy and specificity. 27 This highlights the ongoing challenge of accurately detecting all true outbreaks, especially in regions with limited data or less frequent occurrences.
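To illustrate how an alert threshold trades off against false negatives, the following self-contained Python sketch (our own construction with hypothetical data, not taken from Harvey et al. or any reviewed study) flags weeks whose case counts exceed a chosen percentile of the series and counts the true epidemic weeks that go unflagged:

```python
# Illustrative sketch (our own, hypothetical data): percentile-based alert
# thresholds and the resulting false-negative count.

def percentile(values, p):
    """Nearest-rank percentile of a list of numbers (p in [0, 100])."""
    ordered = sorted(values)
    rank = max(0, min(len(ordered) - 1, round(p / 100 * (len(ordered) - 1))))
    return ordered[rank]

def count_false_negatives(cases, true_epidemic_weeks, p):
    """Alert when weekly cases exceed the p-th percentile of the series;
    a false negative is a true epidemic week that received no alert."""
    threshold = percentile(cases, p)
    misses = 0
    for week, count in enumerate(cases):
        if week in true_epidemic_weeks and count <= threshold:
            misses += 1
    return misses

cases = [5, 7, 6, 9, 40, 52, 8, 6, 45, 7]   # hypothetical weekly counts
epidemic_weeks = {4, 5, 8}                   # hypothetical ground truth
```

Raising the percentile raises the threshold and, in this toy series, turns previously detected epidemic weeks into misses, which is the trade-off the four studies describe.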
The complexity of societal and political dimensions
As is well known in public health, the social and political dimensions of human life influence behaviour, decisions, outbreaks, and outbreak responses. These dimensions were not modelled in any of the 33 studies; however, a few reflected upon them. Societal and political dimensions were reported as a challenge during COVID-19 prediction. 31 Additionally, another study acknowledges that accuracy is affected by the complex nature of transmission, which is influenced by the interplay of factors including human behaviour, healthcare infrastructure, and public health measures. 33 Finally, a third study focusing on infectious disease transmission acknowledges limitations such as incomplete datasets, the uncertainty of pandemics, and the challenges posed by the interconnected and complex nature of the world. 29
Data challenges
Data challenges might impact the effectiveness and reliability of AI-based early warning systems. Challenges related to data volume and variety affect the accuracy of predictions and the evaluation of AI effectiveness, data velocity issues affect real-time prediction capabilities, and ensuring data availability and granularity is fundamental for implementing and testing AI techniques. Additionally, tackling data granularity and variety helps mitigate biases, ensuring fair and accurate predictions across different populations. Many aspects of early warning systems relate to data availability, be it in the form of lack of data volume, velocity, variety, availability, or granularity.
Data volume
The dependency between dataset size and model performance is well known. 48 In the context of dengue prediction, studies encountered challenges related to dataset size and resource constraints: analyzing only confirmed cases reduces the dataset, 24 and obtaining entomological data, crucial for accurate spatiotemporal predictions, is costly and time-consuming. 25 Another limitation identified is the small datasets available for deep learning models and the extended training time for LSTM models, impacting their effectiveness; also, models could not be effective for some cities, possibly because relevant socio-economic factors were not present in the dataset. 26 A study focusing on Zika virus prediction highlights the limitation of relying solely on publicly available data, emphasizing the need for additional information from government and healthcare institutions. 47 The lack of data on geographical scale, temperature, rainfall, or other attributes impacts a model’s accuracy. 48
Data velocity
Delays in obtaining the right data present a unique challenge in one study, requiring a 3-week time series prediction to forecast events 1 week ahead of real-time due to a 2-week data lag. 41
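The arithmetic behind such lag compensation can be sketched as follows (illustrative Python, our own; the helper names and the toy series are hypothetical, not from the cited study):

```python
# Illustrative sketch (our own): with a 2-week reporting lag, forecasting
# 1 week ahead of real time requires a 3-week-ahead prediction counted
# from the latest *available* data point.

def required_horizon(reporting_lag_weeks, lead_weeks):
    """Weeks ahead the model must predict, counted from the most
    recent week for which data are available."""
    return reporting_lag_weeks + lead_weeks

def make_supervised_pairs(series, horizon, n_lags=3):
    """Turn a weekly case series into (lag-features, target) pairs where
    the target sits `horizon` weeks after the last feature week."""
    pairs = []
    for t in range(n_lags, len(series) - horizon + 1):
        features = series[t - n_lags:t]
        target = series[t + horizon - 1]
        pairs.append((features, target))
    return pairs

horizon = required_horizon(reporting_lag_weeks=2, lead_weeks=1)
pairs = make_supervised_pairs(list(range(10)), horizon)
```

The longer effective horizon is what degrades accuracy: the model is trained to jump three weeks even though the operational requirement is only one week of lead time.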
Data variety
A study on influenza prediction also indicates the need for more available and detailed influencing factors and symptom surveillance data (e.g., circulating virus strains, specimen collection rates, case selection bias, and healthcare–seeking behaviours). 40 Linked to variety, challenges were observed in addressing reporting space and time heterogeneity. 19 Additionally, accuracy could be improved by incorporating various data such as geographical scale, temperature, rainfall, or other attributes impacting individual epidemics. 48
Lack of availability
Disparities between data reporting rates and methodologies across different locations were reported as a limitation. 27
Lack of granularity
One study’s scope was constrained to a limited number of cities, primarily due to the absence of granular data. Non-climatic factors, including population demographics, community immunity levels, socioeconomic conditions, healthcare accessibility, and environmental intervention strategies, might not be considered even though they are important in modelling outbreaks. Time-variant factors like changes in mosquito density, population movements, and vector control measures were sometimes overlooked. 22 Similarly, a forecasting model applied in a study focused on COVID-19 resurgence ignored crucial parameters, including the number of lockdowns, people vaccinated, social distancing measures, and self-isolation behaviours. 34
Discussion
We will discuss the following findings in relation to our three research questions: (1) “What type of AI did early warning systems use?”, (2) “Was the use of AI effective in detecting outbreaks?” and (3) “What type of strategies (if any) were implemented to mitigate the risks of AI bias?”.
Type of AI early warning systems
Most of the studies relied on regression (25 studies, or 76%) to compute the number of cases in a week, month, or year. Classification was used less often, to detect the presence or absence of an epidemic (seven studies, or 21%). The only study (3%) that used both techniques computed the number of cases and applied a threshold to decide on the occurrence of an outbreak. Three studies gathered data from search engine trends (Google 41 and Baidu38,40) and the Chinese Weibo microblogs. 38 Natural Language Processing (NLP) was not used in any case. One study in China included as a feature the weekly aggregate of several keywords related to influenza in the Baidu search index, along with the weekly counts of blogged words related to the disease, 38 while another study in China considered solely the word “influenza” in the Baidu index. 40 In the third study, conducted in the United States, Google Trends key phrases were stripped down to four terms (flu, sore throat, cough, and Tamiflu) and their equivalents in Spanish and Portuguese, and were provided to the algorithm as features alongside the number of influenza cases sourced from the World Health Organization.
As presented in the results, the algorithms used varied across the studies, with Neural Networks and Random Forest as the most used, followed closely by LSTM. While LSTM is a natural fit for time-series data and neural networks for complex patterns, the prominence of Random Forest may come as a surprise; however, it is in line with research demonstrating that the Random Forest algorithm provides not only equivalent but enhanced predictive ability over other time-series models, such as retrospective and prospective ARIMA, for infectious disease outbreak prediction. 53 Random Forest is thus worth including among the machine learning algorithms considered when building future early warning systems.
AI effectiveness in detecting outbreaks
The reviewed studies demonstrate the ability of machine learning to be used for early warning of a pandemic or epidemic. With the exception of two studies that lacked detailed reporting,32,37 all studies showed very good performance for the developed models. A deterioration in model performance was noticed with longer prediction horizons, which is expected; for instance, the Mean Absolute Error (MAE) in the study by Valter et al. increased from 3.18 infected for 1-week predictions to 4.07 for 5 weeks and 5.62 for 10 weeks. 17 Otherwise, the models showed high, satisfactory performance.
The effectiveness of AI models is measured using different performance metrics (e.g., AUC, precision, recall, MAE). However, a model’s performance can deteriorate with longer prediction horizons. This decline in performance over time underscores the challenge of maintaining performance in long-term forecasts, which is a critical factor in the practical application of AI models for outbreak detection.
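As an illustration of how horizon-dependent deterioration can be quantified, the sketch below (our own Python with hypothetical numbers, not data from any reviewed study) computes MAE separately per forecast horizon:

```python
# Illustrative sketch (our own, hypothetical numbers): computing Mean
# Absolute Error (MAE) per forecast horizon to quantify how performance
# deteriorates as predictions reach further ahead.

def mean_absolute_error(actual, predicted):
    """MAE: average absolute difference between observed and predicted."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

actual = [20, 35, 50, 40, 30]            # hypothetical observed weekly cases
forecasts_by_horizon = {                 # hypothetical model output
    1: [21, 33, 52, 41, 28],             # 1-week-ahead forecasts
    5: [25, 30, 44, 47, 35],             # 5-week-ahead forecasts
}
mae_by_horizon = {h: mean_absolute_error(actual, f)
                  for h, f in forecasts_by_horizon.items()}
```

Reporting one error value per horizon, rather than a single aggregate, makes the long-horizon degradation visible and comparable across studies.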
The effectiveness of these AI models is evaluated not only through offline validation but also prospectively in real-world scenarios. Only one study implemented and tested its self-adaptive AI model (SAAIM) for real-time influenza forecasting in Chongqing, China and the model’s performance demonstrated its practical utility in guiding public health decisions and resource allocation during flu seasons. 38 However, there is a need for continuous monitoring and improvement of AI models to ensure their adaptability to evolving data and real-world conditions. While offline validation provides an initial assessment of model performance, real-world implementation often reveals additional challenges, such as data reporting delays and varying epidemiological patterns, that must be addressed to enhance the reliability and effectiveness of AI-based early warning system.
Strategies implemented to mitigate the risks of AI Bias
Measurement and mitigation of AI bias are crucial for any use of AI for health purposes, including public health, as they help avoid unintended negative consequences of using AI in the health domain. A bias-free early warning system is reliable and trustworthy, increasing the chances of a timely and effective public health intervention by stakeholders. While addressing bias is a recent question in AI, it is an essential factor to consider when developing AI for health, as indicated by the COVID-19 pandemic, during which racial/ethnic minority groups and those of low socioeconomic status were more vulnerable. 54
While 12 of the reviewed studies touched on predictive models’ bias in terms of generalizability and complexity of the phenomenon to detect,16–18,21,23,29,31,33,42,44–46 none measured the model’s bias in terms of imbalanced decision-making affecting a specific population (e.g., people with disabilities, people with low socioeconomic status). This highlights the need to measure and mitigate bias in future studies related to early warning systems.
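A minimal example of the kind of subgroup audit that was missing is sketched below (our own illustrative Python; the “urban”/“rural” groups and all values are hypothetical), comparing alert recall across populations:

```python
# Illustrative sketch (our own, hypothetical data): a minimal bias audit
# comparing alert recall across population subgroups, a check none of the
# reviewed studies reported.

def recall(truths, alerts):
    """Share of true outbreak periods that received an alert."""
    hits = sum(1 for t, a in zip(truths, alerts) if t and a)
    positives = sum(truths)
    return hits / positives if positives else None

# Per-subgroup ground truth (1 = outbreak period) and model alerts.
by_group = {
    "urban": ([1, 1, 0, 1, 0], [1, 1, 0, 1, 0]),
    "rural": ([1, 1, 0, 1, 0], [1, 0, 0, 0, 0]),
}
recall_by_group = {g: recall(t, a) for g, (t, a) in by_group.items()}
gap = recall_by_group["urban"] - recall_by_group["rural"]
```

A large recall gap between subgroups, as in this toy example, would signal that the system systematically misses outbreaks affecting one population, which is exactly the imbalanced decision-making the reviewed studies did not measure.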
Challenges and future perspectives
Model explainability
Timely intervention and understanding the factors contributing to a model’s predictions are paramount in epidemic outbreak interventions. Hence, an explainable predictive model in an early warning system is crucial because it allows stakeholders to have an interpretable model. An interpretable model enables healthcare professionals and policymakers to grasp the rationale behind predictions, fostering trust in the evidence provided. 55
The interpretability of a predictive model of an early warning system adds much value to the associated predictions. Explainability allows for understanding the different variables (i.e., features) in play, providing targeted preventive measures and more fine-tuned policymaking. Moreover, explainability and interpretability enhance communication between stakeholders (e.g., public health officials, health professionals, and policymakers) as they facilitate communication and trust in the decisions being made.
In our review, only one study out of 33 (3%) addressed the explainability of the developed model, 18 which indicates the urgent need to embed explainability as a factor in early warning systems.
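One model-agnostic way to provide such explainability is permutation-style feature importance. The sketch below (our own illustrative Python, not code from the reviewed study) uses a deterministic column rotation as a stand-in for random permutation and a hand-written linear “model” with hypothetical rainfall/temperature features:

```python
# Illustrative sketch (our own): permutation-style feature importance,
# a model-agnostic explainability check. The linear `predict` function
# stands in for any trained predictor; all data are hypothetical.

def predict(row):
    """Stand-in model: cases driven mostly by rainfall (feature 0),
    weakly by temperature (feature 1)."""
    return 5.0 * row[0] + 0.5 * row[1]

def mae(rows, targets, model):
    return sum(abs(model(r) - t) for r, t in zip(rows, targets)) / len(rows)

def rotation_importance(rows, targets, model, feature):
    """Increase in MAE when one feature column is cyclically rotated
    (a deterministic stand-in for random permutation): the larger the
    increase, the more the model relies on that feature."""
    column = [r[feature] for r in rows]
    rotated = column[1:] + column[:1]
    perturbed = [list(r) for r in rows]
    for r, v in zip(perturbed, rotated):
        r[feature] = v
    return mae(perturbed, targets, model) - mae(rows, targets, model)

rows = [[1, 10], [2, 12], [3, 9], [4, 15], [5, 11]]   # [rainfall, temperature]
targets = [predict(r) for r in rows]    # baseline error is zero by design
rain_importance = rotation_importance(rows, targets, predict, feature=0)
temp_importance = rotation_importance(rows, targets, predict, feature=1)
```

Reporting such per-feature importances alongside predictions would let public health stakeholders see which variables (here, rainfall over temperature) drive an alert, which is the interpretability the reviewed systems mostly lacked.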
SDG consideration
The United Nations Sustainable Development Goals (SDGs) were created to attain a fair, just, and equitable world. 56 Developing an AI-based early warning system assists in achieving SDG 3 (Good Health and Well-being), especially if it has a mechanism to detect and mitigate bias. Indeed, such a system allows the implementation of mechanisms for monitoring, early detection, and response to an epidemic or pandemic, facilitating outbreak containment and reducing its impact on the population, especially disadvantaged communities. Moreover, AI-based early warning systems can allow for optimized, targeted resource allocation. Finally, if used on a global scale, early warning systems can advance SDG 3 globally.
Going beyond health and climate: Use of social data
Disease outbreaks are complex phenomena affected by climate, weather, human behaviour, vector density, virus serotypes, and public health prevention and control. A multitude of data sources integrating such variables is essential for the success of an AI-based early warning system. Online data have proved effective in detecting diseases 57 and can be used in public health 58 ; yet in our review, only three studies ventured to use data related to human behaviour, from search engines 41 and microblogs. 59 By incorporating social media and online data sources, early warning systems could offer more nuanced insights.
Expanding the data sources to include online sources was rarely observed in our review, yet it was a source of data enrichment that led to accurate predictions. Expanding datasets beyond conventional health and climate indicators to integrate online social data is a potentially beneficial route for future work, allowing for a holistic approach to epidemic/pandemic predictive models.
Adaptability and continuous model monitoring and improvement
While limited data at the beginning of a pandemic is a challenge, innovative solutions such as data augmentation could be beneficial. This underscores the adaptability required when dealing with evolving data situations. Moreover, preparing for future epidemics/pandemics means continuously monitoring and improving AI models’ performance to adapt to a continually changing environment. AI-based early warning systems must be adaptive and constantly refined.
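As a sketch of what such data augmentation could look like for a short early-outbreak series (our own illustrative Python; the function name, parameters, and counts are hypothetical, not from any reviewed study), synthetic variants can be generated by scaling and jittering the observed curve:

```python
# Illustrative sketch (our own): simple time-series augmentation for the
# data-scarce start of an outbreak, generating synthetic case curves by
# scaling and jittering an observed weekly series.

import random

def augment_series(series, n_copies=3, scale_range=(0.9, 1.1),
                   jitter=0.05, seed=42):
    """Create `n_copies` synthetic variants of `series`: each copy is
    multiplied by a global scale factor and perturbed point-wise."""
    rng = random.Random(seed)
    copies = []
    for _ in range(n_copies):
        scale = rng.uniform(*scale_range)
        copy = [max(0.0, v * scale * (1 + rng.uniform(-jitter, jitter)))
                for v in series]
        copies.append(copy)
    return copies

observed = [3, 5, 9, 17, 30, 52]   # hypothetical early outbreak counts
synthetic = augment_series(observed)
```

Such synthetic curves can pad a training set in the first weeks of an outbreak, though they only reshape existing signal and cannot substitute for genuinely new surveillance data.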
Study limitations
This is the first scoping review to address AI-based early warning systems; however, it has several limitations. One key limitation is that the search was limited to the English language; studies in other languages may have been missed.
Also, scoping reviews are designed to map existing literature and identify gaps rather than provide detailed synthesis or meta-analysis. Consequently, the depth of analysis might be limited, and nuanced insights into specific factors influencing AI model performance might be underexplored.
Another significant limitation is the heterogeneity of the included studies. The diversity in study designs, AI models, evaluation metrics, and disease focus areas makes it difficult to draw uniform conclusions. This variability can obscure specific insights about the strengths and weaknesses of AI applications in particular settings or for specific diseases.
The review also faces temporal constraints due to the rapid pace of technological advancement in AI and machine learning; newer models may surpass those reviewed. This means the review might not fully capture the latest best practices in AI-based outbreak detection.
Additionally, the review might not adequately address implementation challenges of AI models in real-world settings. While offline validation results are often promising, practical hurdles such as data quality issues and integration with existing public health infrastructure are not always thoroughly examined in the different studies.
Conclusion
The evidence reviewed suggests that AI can be used for pandemic preparedness and for designing and developing early warning systems for disease, epidemic, and pandemic outbreaks. However, while advanced models like deep learning and ensemble methods offer high accuracy, they also come with challenges related to interpretability and resource requirements. The choice of model should balance these factors, considering the specific context and available resources. AI-based early warning systems face data volume, velocity, variety, availability, and granularity challenges. When implementing such a system, several challenges need to be addressed, including data quality, model explainability, bias measurement and mitigation, sustainable development goals, use of social data, system adaptability, and continuous monitoring and improvement.
Acknowledgements
The authors express their sincere gratitude to colleagues within the Global South AI for Pandemic and Epidemic Preparedness & Response Network (AI4PEP), who exemplify an equitable, inclusive, innovative, and supportive environment, which has been instrumental in fostering this research. The authors are also deeply grateful for the vibrant intellectual community within York University, the Beirut Arab University, and the Lebanese Hospital-Geitaoui University Medical Center, which inspired this research and enriched our experience as scholars.
Authors’ contributions
CE, AS, YE, and ES designed the study and received the funds. CE supervised DO and YA, who performed and reported the summaries. CE verified the analysis and prepared the first draft. All authors provided critical feedback and revised it; they all approved the final version and agreed to be accountable for all aspects of the submitted paper.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This research is funded by Canada’s International Development Research Centre (IDRC) (Grant No. 10998).
