
Effect of Integrated Capacity-Building Interventions on Malaria Case Management by Health Professionals in Uganda: A Mixed Design Study with Pre/Post and Cluster Randomized Trial Components

  • Martin Kayitale Mbonye ,

    mbonyemarti@yahoo.com

    Affiliation Training Department, Infectious Diseases Institute, Makerere University College of Health Sciences, Kampala, Uganda

  • Sarah M. Burnett,

    Affiliation Accordia Global Health Foundation, Washington DC, United States of America

  • Aldomoro Burua,

    Affiliation Management Sciences for Health, Kampala, Uganda

  • Robert Colebunders,

    Affiliations Department of Epidemiology and Social Medicine, Faculty of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium, Department of Clinical Sciences, Institute of Tropical Medicine, Antwerp, Belgium

  • Ian Crozier,

    Affiliation Accordia Global Health Foundation, Washington DC, United States of America

  • Stephen N. Kinoti,

    Affiliations Center for Human Services, University Research Co., LLC, Bethesda, Maryland, United States of America, Fio Corporation, Toronto, Ontario, Canada

  • Allan Ronald,

    Affiliation Department of Medicine, University of Manitoba, Winnipeg, Manitoba, Canada

  • Sarah Naikoba,

    Affiliation Training Department, Infectious Diseases Institute, Makerere University College of Health Sciences, Kampala, Uganda

  • Timothy Rubashembusya,

    Affiliation Training Department, Infectious Diseases Institute, Makerere University College of Health Sciences, Kampala, Uganda

  • Jean-Pierre Van geertruyden,

    Affiliation Department of Epidemiology and Social Medicine, Faculty of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium

  • Kelly S. Willis,

    Affiliation Accordia Global Health Foundation, Washington DC, United States of America

  • Marcia R. Weaver

    Affiliation Departments of Global Health and Health Services, University of Washington, Seattle, Washington, United States of America

Abstract

Background

The Integrated Infectious Diseases Capacity Building Evaluation (IDCAP) designed two interventions: Integrated Management of Infectious Disease (IMID) training program and On-Site Support (OSS). We evaluated their effects on 23 facility performance indicators, including malaria case management.

Methodology

IMID, a three-week training with two follow-up booster courses, was for two mid-level practitioners, primarily clinical officers and registered nurses, from 36 primary care facilities. OSS consisted of two days of training and continuous quality improvement activities each month for nine months at 18 facilities, in which all health workers were invited to participate. Facilities were randomized as clusters 1∶1 to parallel OSS “arm A” or control “arm B”. Outpatient data on four malaria case management indicators were collected for 14 months. Analysis compared changes before and during the interventions within arms (relative risk = RR). The effect of OSS was measured with the difference in changes across arms (ratio of RR = RRR).

Findings

The proportion of patients with suspected malaria for whom a diagnostic test result for malaria was recorded decreased in arm B (adjusted RR (aRR) = 0.97; 99%CI: 0.82, 1.14) during IMID, but increased 25% in arm A (aRR = 1.25; 99%CI: 0.94, 1.65) during IMID and OSS relative to baseline (aRRR = 1.28; 99%CI: 0.93, 1.78). The estimated proportion of patients that received an appropriate antimalarial among those prescribed any antimalarial increased in arm B (aRR = 1.09; 99%CI: 0.87, 1.36) and arm A (aRR = 1.50; 99%CI: 1.04, 2.17) (aRRR = 1.38; 99%CI: 0.89, 2.13). The proportion of patients with a negative diagnostic test result for malaria prescribed an antimalarial decreased in arm B (aRR = 0.96; 99%CI: 0.84, 1.10) and arm A (aRR = 0.67; 99%CI: 0.46, 0.97) (aRRR = 0.70; 99%CI: 0.48, 1.00). The proportion of patients with a positive diagnostic test result for malaria prescribed an antibiotic did not change significantly in either arm.

Interpretation

The combination of IMID and OSS was associated with statistically significant improvements in malaria case management.

Introduction

In 2010, the World Health Organization (WHO) estimated that, globally, 219 million people had malaria and between 490,000 and 836,000 died of the disease [1]. In the same year, WHO estimated that in Uganda between 5 and 14 million malaria episodes and between 13,288 and 25,723 deaths due to malaria occurred [1]. Malaria remains the major cause of morbidity and one of the leading causes of mortality in Uganda [2]. Malaria also accounted for up to 50 percent of outpatient visits at health facilities, 20 percent of all hospital admissions and over 20 percent of all hospital deaths [3]–[5].

Current WHO guidelines call for parasitological diagnosis of malaria and Artemisinin-based Combination Therapies (ACTs) as first-line treatment of uncomplicated malaria [6]. Within Africa, however, presumptive treatment remains common practice [7], as staff and supplies for good quality diagnosis remain in short supply [8]. In some instances, health workers even ignore parasitological diagnosis and prescribe (often inappropriate) malaria treatment for patients with a negative diagnostic test result for malaria [7]–[10]. Clinical diagnosis has its limitations: it can lead to misdiagnosis of malaria, mismanagement of non-malaria febrile illnesses, wastage of antimalarial drugs, and exposure of patients to the potential risk of developing resistance [6], [11].

Recently, the Joint Uganda Malaria Program (JUMP) and Uganda Malaria Surveillance Program (UMSP) evaluated the effects of an integrated team-based malaria training and surveillance program on facility-level performance in eight sites. In 2006–2007, integrated team-based malaria training and surveillance significantly increased referral for parasitological diagnosis of malaria among patients with suspected malaria, and decreased prescription of antimalarial treatment for patients with a negative diagnostic test result for malaria [9], [12]. These interventions, however, did not improve the percentage of patients with a positive diagnostic test result for malaria prescribed an antimalarial treatment or the percentage of patients with suspected malaria prescribed an appropriate antimalarial.

The Integrated Infectious Diseases Capacity Building Evaluation (IDCAP) sought to build on the JUMP results with two capacity-building interventions that had a wider scope. The interventions were: 1) the Integrated Management of Infectious Disease (IMID) training program for mid-level practitioners (MLP) and 2) on-site support (OSS). Their scope was malaria, pneumonia, tuberculosis, HIV and related infectious diseases. IMID consisted of courses and distance learning, while OSS was an educational outreach and Continuous Quality Improvement (CQI) package. The interventions reflected the latest understanding of how clinicians build both routine and complex reasoning skills as described in Miceli et al. [13].

We evaluated the effects of the interventions on 23 facility performance measures at 36 health facilities. Results of additional measures have also been reported elsewhere [14]. Two MLP at each facility received IMID. Health facilities were randomized as clusters (1∶1) to parallel arms: 18 sites in arm A received OSS in Time 1 from April 2010 to December 2010, and 18 sites in arm B served as a control. The combined effect of IMID and OSS was measured by the pre/post change in indicators in arm A between Time 0 (November 2009 to March 2010) and Time 1, and the effect of IMID was measured by the pre/post change in arm B. The cluster randomized trial component measured the additional effect of OSS as the difference in the pre/post change across arms. The facilities were randomized as clusters, because the JUMP evaluation showed that the performance indicators depended on a team of clinicians, laboratory professionals and data entry staff rather than individuals.
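For reference, the three effect measures used throughout this article can be written compactly (notation ours):

```latex
\begin{aligned}
\mathrm{RR}_{B} &= \frac{p_{B,1}}{p_{B,0}} && \text{pre/post change in arm B: effect of IMID}\\[2pt]
\mathrm{RR}_{A} &= \frac{p_{A,1}}{p_{A,0}} && \text{pre/post change in arm A: combined effect of IMID and OSS}\\[2pt]
\mathrm{RRR}    &= \frac{\mathrm{RR}_{A}}{\mathrm{RR}_{B}} && \text{ratio of relative risks: incremental effect of OSS}
\end{aligned}
```

where \(p_{a,t}\) denotes the proportion of patients managed appropriately on a given indicator in arm \(a\) during time period \(t\) (Time 0 or Time 1).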

The protocol for the interventions and evaluation is described in Naikoba et al. [15]. The full protocol and supporting CONSORT checklist are available as supporting information; see Checklist S1 and Protocol S1. The primary objective of this article was to report the detailed analysis of the effect of IMID and OSS on four facility performance indicators for malaria case management. The secondary objective was to conduct exploratory analyses of alternative performance indicators for malaria diagnoses.

Methods

Participants

The 36 sites were health center IVs (HCIV) or comparable facilities drawn from all four administrative regions of Uganda: central, east, north and west. The district health system of Uganda is organized by health subdistrict, which is usually led by an HCIV. Each HCIV provides basic preventive and curative care to a population of 100,000 people, as well as referral services for the health subdistrict [2]. MLP are among the cadres listed in the staffing norms for an HCIV, along with medical officers, pharmacists, nursing assistants, other allied health professionals, administrative staff, support staff, and other staff. Two MLP from each site participated in IMID. The facility and MLP inclusion criteria were described in Miceli et al. and Naikoba et al. [13], [15]. Briefly, two key inclusion criteria for facilities were a functioning laboratory and accreditation to prescribe anti-retroviral therapy. Two key inclusion criteria for MLP were cadre (clinical officer, registered nurse, or registered midwife) and devoting the majority of their time to clinical care. All facility staff were invited to participate in OSS, and all outpatients participated during the normal process of receiving care.

Interventions

IMID training program.

The IMID training program began with a three-week core course at the Infectious Diseases Institute (IDI) in Kampala, followed by two one-week booster courses at 12 and 24 weeks after the core course, and distance learning, as described in Miceli et al. and Naikoba et al. [13], [15]. Building on the WHO's integrated approach to training, such as the Integrated Management of Childhood Illness (IMCI) and Integrated Management of Adolescent and Adult Illness (IMAI) curricula, IDCAP developed a training program for malaria, tuberculosis, HIV and related infectious diseases for children, adults and pregnant women.

The IMID curriculum was case-based, and fever and malaria case management were the focus of six of 39 sessions [13]. The six sessions on malaria are displayed in Table 1. Sessions 5 and 6 were based on the JUMP curriculum, which was in turn based on guidelines issued by the Uganda Ministry of Health Malaria Control Program [16] and the World Health Organization [6], including IMCI. Subsequent sessions introduced cases of increasing complexity. Malaria was also discussed in several other sessions with reference to differential diagnoses. Two clinical decision-making guides (CDG) on fever case management and malaria case management summarized the key decision-making steps to consider when managing patients with fever. The IDCAP training materials, including the CDGs and a distance-learning version with audio lectures, can be requested at http://www.accordiafoundation.org/IDCAP/innovations-in-training.

Table 1. Malaria case management sessions in the IMID curriculum.

https://doi.org/10.1371/journal.pone.0084945.t001

OSS sessions.

The OSS sessions were delivered for two days every month for nine consecutive months by four-person mobile teams each consisting of a medical officer with expertise in continuous quality improvement (CQI), a clinical officer, a laboratory technologist and a registered nurse. Each monthly OSS session was devoted to a specific topic and all OSS topics are reported in Miceli et al, and Naikoba et al. [13], [15]. The second monthly topic was fever case management. During day one of the OSS session on fever case management, a multi-disciplinary team (MDT) session for all the health cadres at the facility sought to empower health workers with knowledge and skills to assess a febrile patient, properly diagnose malaria, and treat uncomplicated malaria. Three breakout sessions were organized: 1) a session for clinical officers and registered nurses on clinical management of complicated malaria, 2) a session for enrolled nurses and midwives on malaria in pregnancy, and 3) a laboratory staff session on laboratory testing for malaria. Then, individual mentoring sessions with selected clinicians and laboratory professionals reinforced key competencies required for proper management of malaria. Day two was devoted to CQI activities, as well as additional mentoring sessions. The two CQI activities were a meeting of facility CQI teams to review data on facility performance indicators, and an MDT session on patient flow and processes of care for patients with fever, in order to identify problems and implement strategies to address them.

Because IDCAP was a training and CQI program, the medical interventions were not at the discretion of the investigators. For CQI, the facilities were assigned to a set of goals that were associated with performance indicators, but not to medical interventions. The facilities chose six of 13 goals and then created or adopted the processes of care to attain them. CQI is based on the philosophy that facility teams are more motivated when they select their own goals and that, because they are most familiar with their work environment, they create or adopt more effective processes. The two goals for malaria case management were: 1) all patients with suspected malaria to have results for blood smear or rapid diagnostic test, and 2) to reduce the proportion of patients with a negative diagnostic test result for malaria treated with antimalarials. Fifteen of the 18 sites in arm A chose to focus on goal 1 and 12 on goal 2. This article reports the analysis of all facilities in arm A. Weaver et al. (unpublished manuscript) conducted a sensitivity analysis with the arm A sites that chose to focus on the malaria goals, and the results were similar to those reported in this article.

Outcomes

The four facility performance indicators for malaria case management are presented in Figure 1; these and the three alternative indicators for malaria diagnosis are defined in Table 2.

Information about drug availability was missing for some patients; as a result, the number of patients who received an appropriate antimalarial treatment was estimated from two intermediate measures: (a) the proportion of patients prescribed an appropriate antimalarial among those with any antimalarial prescription, and (b) the proportion of patients that received an appropriate antimalarial among those with an appropriate prescription and data about drug availability. The number of patients who received an appropriate antimalarial was estimated for each facility-month as the product of the number of patients prescribed an appropriate antimalarial and (b).
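In symbols, the estimate for each facility-month combines the count of appropriate prescriptions with the availability-adjusted proportion (notation ours, with \(f\) indexing facilities and \(m\) months):

```latex
\widehat{n}^{\,\mathrm{received}}_{fm}
  \;=\; n^{\,\mathrm{appropriate\;Rx}}_{fm}
  \times \underbrace{\frac{n^{\,\mathrm{received}}_{fm,\mathrm{obs}}}
                          {n^{\,\mathrm{appropriate\;Rx}}_{fm,\mathrm{obs}}}}_{(b)}
```

where the subscript "obs" restricts the counts to patients with data about drug availability.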

Data Sources and Variable Definitions

Data were collected using a Uganda Ministry of Health Medical Form 5 (MF5), initially modified by UMSP [9] to link the clinical data to laboratory data, as well as to include tick boxes for history, laboratory investigations, diagnoses and drug prescriptions. It was further revised by IDCAP to capture detailed information on drug availability among other things. The form was reported in Mbonye et al. (unpublished manuscript). The data relevant for malaria case management included: fever or history of fever in the history section; blood smear for malaria, parasite density, and rapid diagnostic test for malaria in the laboratory section; malaria diagnosis (during and not during pregnancy) in the diagnosis section; and the treatment data described below in the treatment section.

Patients with suspected malaria were defined as all patients with a fever, referred for malaria laboratory testing, or given a clinical diagnosis of malaria as evidenced by either a record of malaria diagnosis or an antimalarial prescription.

An appropriate antimalarial referred to quinine or artesunate and the following ACTs: artemether & lumefantrine, artesunate & amodiaquine, or dihydroartemisinin & piperaquine phosphate (Duocotecxin®).

Any antimalarial treatment included the appropriate antimalarials listed above and three drugs that did not comply with Uganda national guidelines: amodiaquine alone, chloroquine, and sulfadoxine/pyrimethamine (SP) (Fansidar®).

Any antibiotic treatment referred to 12 drugs listed on the MF5: Amoxicillin, Benzyl Penicillin, Chloramphenicol, Ciprofloxacin, Cloxacillin, Cotrimoxazole, Doxycycline, Erythromycin, Gentamicin, Metronidazole, PPF/Procaine Penicillin, and Tetracycline. Data on these drugs were elicited by checking boxes on the MF5. It also included 19 antibiotics recorded as “other drugs”: Ampiclox® (Ampicillin & Cloxacillin), Ampicillin, Ampicillin & Gentamicin, Azithromycin, Cefalexin, Cefixime, Ceftriaxone, Cefuroxime, Co-amoxiclav, Dapsone, Dicloxacillin, Gatifloxacin, Levofloxacin, Nalidixic acid, Nitrofurantoin, Ofloxacin, Penicillin (generic), Pefloxacin, and Phenoxymethyl Penicillin.
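As an illustration of how these definitions translate into analyzable variables, the sketch below classifies a single outpatient record; the field and drug names are hypothetical stand-ins for the MF5 data elements described above, not the actual IDCAP database schema.

```python
# Illustrative sketch only: field names are hypothetical, not the IDCAP schema.
APPROPRIATE_ANTIMALARIALS = {
    "quinine", "artesunate",
    "artemether-lumefantrine", "artesunate-amodiaquine",
    "dihydroartemisinin-piperaquine",
}
OTHER_ANTIMALARIALS = {"amodiaquine", "chloroquine", "sulfadoxine-pyrimethamine"}
ANY_ANTIMALARIAL = APPROPRIATE_ANTIMALARIALS | OTHER_ANTIMALARIALS


def suspected_malaria(record: dict) -> bool:
    """Fever, malaria test ordered, malaria diagnosis, or any antimalarial prescribed."""
    prescribed = set(record.get("drugs_prescribed", []))
    return (
        record.get("fever", False)
        or record.get("malaria_test_ordered", False)
        or record.get("malaria_diagnosis", False)
        or bool(prescribed & ANY_ANTIMALARIAL)
    )


def prescribed_any_antimalarial(record: dict) -> bool:
    return bool(set(record.get("drugs_prescribed", [])) & ANY_ANTIMALARIAL)


def prescribed_appropriate_antimalarial(record: dict) -> bool:
    return bool(set(record.get("drugs_prescribed", [])) & APPROPRIATE_ANTIMALARIALS)
```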

Data Collection

Individual data were collected on every outpatient from November 2009 to December 2010. The MF5 was completed by the various people involved in the process of care, including but not limited to: records and clinical staff at the patient reception desks; clinicians during history taking, diagnosis, prescription and/or referral; laboratory professionals during laboratory investigations; and pharmacists/dispensers when dispensing prescribed drugs. Completed MF5 forms were captured electronically using Epi Info Version 3.2™ (U.S. Centers for Disease Control and Prevention, Atlanta, GA). Beginning in March 2010, data were entered by a Data Entry Assistant (DEA) stationed at each site and then transmitted electronically by an internet modem or a smartphone to IDI for further cleaning and analysis. The data from each site were merged using Microsoft Excel® (Microsoft Corporation, Redmond, WA, USA) and, after merging, exported to Stata® version 11 (Stata Corp, College Station, Texas, USA) for analysis. Using systematic random sampling, 5% of the completed MF5 forms were selected, re-entered by a data technical support team from IDI, and compared with those entered by the DEA using the Epi Info “data compare” command. Results indicated over 99% concordance.
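A minimal sketch of the double-entry quality check is shown below, assuming hypothetical lists of form identifiers and a simple field-by-field comparison; the project itself used Epi Info's “data compare” command rather than custom code.

```python
import random


def systematic_sample(form_ids, fraction=0.05):
    """Systematic random sample: random start, then every k-th form."""
    k = int(round(1 / fraction))          # sampling interval; 20 for a 5% sample
    start = random.randrange(k)
    return form_ids[start::k]


def concordance(first_entry, second_entry, fields):
    """Share of fields that agree between two independent data entries.

    Both arguments map form_id -> {field: value}; forms in `second_entry`
    are assumed to exist in `first_entry`.
    """
    matches = total = 0
    for form_id in second_entry:
        for field in fields:
            total += 1
            matches += first_entry[form_id].get(field) == second_entry[form_id].get(field)
    return matches / total if total else float("nan")
```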

Randomization

The 36 sites were randomized as clusters (1∶1) to parallel arms: 18 sites in arm A (OSS in Time 1, from April 2010 to December 2010) and 18 sites in arm B (control during Time 1). The sites were randomized in two strata to balance the allocation of two other on-site interventions across arms: a) previous participation in a national CQI program for HIV prevention and care, and b) previous or current participation in the Baylor International Pediatric AIDS Initiative (BIPAI; see http://www.bipai.org/Uganda/ for more information). The effects of these on-site interventions could have been confounded with OSS, which was also an on-site intervention. Site identification and selection were done by the Principal Investigator (MRW) and program managers. Randomization was conducted by the IDCAP biostatistician on 23rd February 2010. It was not possible to conceal site allocation from project staff and participating health professionals during the intervention.
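The allocation procedure can be illustrated with a short sketch of 1∶1 randomization within strata; the site lists and the even split per stratum are simplifying assumptions, and the actual allocation was performed by the IDCAP biostatistician.

```python
import random


def randomize_sites(sites_by_stratum, seed=None):
    """Assign half of each stratum to arm A (OSS in Time 1) and half to arm B (control)."""
    rng = random.Random(seed)
    allocation = {}
    for stratum, sites in sites_by_stratum.items():
        shuffled = list(sites)
        rng.shuffle(shuffled)
        half = len(shuffled) // 2          # assumes an even number of sites per stratum
        for site in shuffled[:half]:
            allocation[site] = "A"
        for site in shuffled[half:]:
            allocation[site] = "B"
    return allocation
```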

Ethical Considerations

IDCAP was reviewed and approved by the School of Medicine Research and Ethics Committee of Makerere University (reference number 2009-175) and the Uganda National Council of Science and Technology (reference number HS-722). IDCAP proposed to electronically capture and extract data from the Ministry of Health's Health Management Information System (HMIS) to evaluate the effects of the IDCAP interventions on facility performance indicators; this process began after the approvals. IDCAP reinforced the HMIS, which routinely collects patient data at the facilities, a process that was ongoing before the committees' approvals and after IDCAP ended. Data collected for the HMIS were part of routine surveillance at the health facilities and did not require ethical approval. Informed consent of participants was not required, because the interventions were evaluated on facility performance using HMIS forms and registers rather than individual performance. Informed consent of patients was waived for the indicators reported in this article. The University of Washington Human Subjects Division determined that IDCAP did not meet the regulatory definition of research under 45 CFR 46.102(d).

Sample Size

Sample size calculations were based on testing the effect of OSS on facility performance with the facility, rather than the patient, as the unit of analysis, as described in Naikoba et al. [15]. The patient data were anonymous, and we could not control for multiple visits by the same patient. Estimates based on the patient as the unit of analysis would have underestimated the standard errors. We used the facility as the unit of analysis, where the observation was a percentage, such as the percentage of patients with suspected malaria referred for parasitological diagnosis. It is rare to have indicator data on multiple facilities, and we were fortunate to have the JUMP data on the average percentage and standard error across facilities for the sample size calculations [12].

Briefly, the calculations were based on JUMP results showing roughly a 20% mean absolute improvement in two malaria indicators: 1) percentage of patients with suspected malaria referred for parasitological diagnosis of malaria, and 2) percentage of patients with a negative diagnostic test result prescribed antimalarial treatment [12]. To detect a 20% mean absolute difference between the intervention and control arm with a power of 80% and an alpha of 0.05, 18 sites were needed in each arm. The calculations assumed a Gaussian distribution of the indicators and were performed with Stata version 10.
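A back-of-envelope check of this calculation is sketched below, assuming (hypothetically) a between-facility standard deviation of about 20 percentage points, so that a 20% absolute difference corresponds to a standardized effect size near one; the published calculation was performed in Stata and may have used different inputs.

```python
from statsmodels.stats.power import TTestIndPower

effect_size = 0.20 / 0.20          # mean absolute difference / assumed SD across facilities
n_per_arm = TTestIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.80, alternative="two-sided"
)
print(round(n_per_arm))            # about 17 facilities per arm, consistent with 18 enrolled
```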

Statistical Methods

Descriptive statistics were used to describe the patient populations across arms, time periods, and age groups. The effects of IMID and OSS were analyzed as binomial experiments with the facility-month as the unit of analysis. For example, the number of patients with suspected malaria for whom a diagnostic test result for malaria was recorded would be the numerator or number of “successes”, and the number of patients with suspected malaria would be the denominator or number of “trials”. We used a generalized linear model for the proportion of patients managed appropriately for a given indicator, with main effects for arm and time period and their interaction. To analyze the effects of the interventions on each indicator, the pre/post difference in arm B measured the effect of IMID, the pre/post difference in arm A measured the combined effect of IMID and OSS, and the incremental difference between arms A and B measured the effect of OSS. In contrast to analyzing indicators as a proportion, the binomial experiments allowed the precision of the estimates to vary across facilities with different numbers of patients. All regression analyses were clustered on the facility with robust standard errors to adjust for over-dispersion, using the Poisson family instead of the binomial with a log link to estimate relative risks (RR) [17]. Results for the interventions were presented with 99% confidence intervals (99% CI). Tests were based on a 1% level of significance because there were multiple comparisons. All analyses were performed with Stata® version 11.
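A minimal sketch of this facility-month model is shown below, using Python and statsmodels in place of Stata; the column names (`successes`, `trials`, `arm_A`, `time1`, `facility`) are hypothetical, and covariates are omitted for brevity.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf


def fit_indicator(df: pd.DataFrame):
    """Modified Poisson regression with a log link for one facility-month indicator."""
    model = smf.glm(
        "successes ~ arm_A * time1",            # main effects for arm and time, plus interaction
        data=df,
        family=sm.families.Poisson(),
        offset=np.log(df["trials"]),            # denominator ("trials") enters as an offset
    )
    result = model.fit(cov_type="cluster",      # robust standard errors clustered on facility
                       cov_kwds={"groups": df["facility"]})
    # Exponentiated coefficients: exp(time1) is the pre/post RR in arm B,
    # and exp(arm_A:time1) is the RRR attributable to OSS.
    return np.exp(result.params), np.exp(result.conf_int(alpha=0.01))
```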

Independent variables considered in the analysis were: patient age, facility level, facility type, BIPAI supported, CQI experienced, DEA stationed at site, malaria endemicity, and three staffing variables. The covariates apply to the sample as a whole; we did not interact the covariates with the variables for the main effects. Each site was assigned to one of four categories for endemicity as reported in Yeka et al. [18]: 1) very low or no malaria (prevalence <5% in children), 2) low (prevalence 5–10% in children), 3) medium-high (prevalence 10 to 50% in children except during seasonal peaks), and 4) very high (prevalence greater than 50% in children). The staffing variables were measured in quartiles: 1) proportion of ideal clinical staff assigned to the facility at baseline, where ideal was defined by Uganda Ministry of Health staffing norms [2], 2) number of clinical staff who saw at least five patients during a month divided by the number of patients at the facility during that month, and 3) proportion of laboratory professionals assigned to the facility at baseline as per the staffing norms.

The pre/post time periods were not the same for the two arms, because the MLP in arm A attended the first two IMID sessions (March 15th–April 2nd and April 12th–30th, 2010), while those in arm B attended the last two sessions (May 3rd–21st and June 7th–25th, 2010). Therefore, in arm A, baseline (Time 0) was five months, from November 2009 to March 2010; in arm B it was seven months, from November 2009 to May 2010. In arm A, the intervention period (Time 1) began in April 2010 and extended for nine months to December 2010; in arm B it began in June 2010 and extended for seven months to December 2010.

Several sensitivity analyses were performed, including estimating variance with bootstrapping rather than robust standard errors. SP monotherapy is considered appropriate for Intermittent Preventive Treatment during pregnancy, so we conducted a sensitivity analysis that omitted women aged 15–49 treated with SP monotherapy and not diagnosed with malaria, who make up 4.67% (1,754) of all inappropriate diagnoses. In addition, estimates were repeated without variables, such as DEA stationed at site and the staffing variables, that could potentially have been collinear with the main effects. Residual plots and Cook's distance regression diagnostics were examined, and the main model estimates were repeated with outliers and influential observations, respectively, omitted.
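For the bootstrap sensitivity analysis, resampling is most naturally done at the facility level so that within-facility correlation is preserved; the sketch below assumes the same hypothetical facility-month data frame as in the model sketch above, not the actual IDCAP procedure.

```python
import numpy as np
import pandas as pd


def cluster_bootstrap(df: pd.DataFrame, estimate_fn, n_boot=1000, seed=0):
    """Percentile 99% interval for a statistic computed by `estimate_fn` (e.g. an RRR)."""
    rng = np.random.default_rng(seed)
    facilities = df["facility"].unique()
    estimates = []
    for _ in range(n_boot):
        sampled = rng.choice(facilities, size=len(facilities), replace=True)
        boot_df = pd.concat([df[df["facility"] == f] for f in sampled], ignore_index=True)
        estimates.append(estimate_fn(boot_df))   # e.g. refit the model and return one coefficient
    return np.percentile(estimates, [0.5, 99.5])
```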

Results

Participant Flow

A total of 36 out of 38 sites that met the inclusion criteria were enrolled in IDCAP, as shown in Figure 2; 17 and 10 sites were in the CQI and BIPAI strata, respectively. In the random allocation process, the sites were distributed roughly evenly between arms by facility type (three private-not-for-profit sites in each arm) and malaria endemicity (10 sites with very high endemicity in arm A and 11 in arm B). Only one of the five hospitals, however, was randomly assigned to arm A. Two MLP from each site participated in IMID. A total of 45 clinical officers (24 in arm A and 21 in arm B), 23 registered nurses (12 in arm A and 11 in arm B) and four registered midwives (all in arm B) participated in IMID. One MLP in arm A did not participate in the second booster course, and three MLP in arm B did not participate in at least one of the booster courses.

Figure 2. Consort Flow Diagram – Recruitment and Randomization.

https://doi.org/10.1371/journal.pone.0084945.g002

For arm A, during the second OSS session on fever case management, 276 (64%) of 431 eligible clinical and laboratory staff attended the MDT session, 107 (60.1%) of 178 attended the mentorship sessions, 101 (35.8%) of 282 attended cadre-specific breakout sessions, and 266 (61.7%) of 431 attended the CQI session.

During the 14 months included in the analysis, data on 777,667 outpatients were collected and 753,074 were analyzed. Data on age were missing for 24,593 (3.3%) outpatients, who were omitted from the analysis.

Recruitment

The sites were recruited between March and September 2009. Identification and recruitment of IMID participants took place between June 2009 and February 2010. Their registration and consent process took place between December 2009 and March 2010. Recruitment and registration of OSS participants began in April 2010 and continued during the intervention; all staff were encouraged to attend OSS sessions irrespective of previous attendance. Outpatients were seen when they sought care and their consent process was waived.

Baseline

Table 3 summarizes the patient population by age and entomological inoculation rate. At Time 0, data were collected on 290,183 outpatients; the smaller share of these patients was in arm A (33%), largely because more hospitals were randomly assigned to arm B and Time 0 was two months longer in arm B. In arm A and arm B respectively, 28% and 31% of the patients were children under five years. The proportion of patients with suspected malaria was generally higher among children under five years (85% in arm A and 87% in arm B) than in older patients (61% in arm A and 58% in arm B). The positivity rate of diagnostic tests for malaria was also generally higher in children under five years than in older patients. Baseline results for each indicator are reported in Tables 4, 5, and 6.

Table 3. Outpatient population by age and entomological inoculation rate.

https://doi.org/10.1371/journal.pone.0084945.t003

Table 4. Adjusted relative risk across time periods and arms for parasitological diagnosis of malaria.

https://doi.org/10.1371/journal.pone.0084945.t004

Table 5. Adjusted relative risk across time periods and arms for appropriate antimalarial treatment.

https://doi.org/10.1371/journal.pone.0084945.t005

Table 6. Adjusted relative risk across time periods and arms of prescribing based on diagnostic malaria test result.

https://doi.org/10.1371/journal.pone.0084945.t006

Outcomes and Estimation

Parasitological diagnosis.

The results for the proportion of patients with suspected malaria for whom a diagnostic test result for malaria was recorded are reported in Table 4. The indicator was reported in two steps as the proportion of patients with suspected malaria: a) for whom a diagnostic test for malaria was ordered, and b) for whom a diagnostic test result for malaria was recorded. The steps distinguish the clinicians' practices from the laboratory capacity and laboratory personnel practices. Between Time 0 and Time 1, the proportion of patients with suspected malaria for whom a diagnostic test for malaria was ordered increased by 28% in arm A compared to a 1% decrease in arm B. The proportion of patients with suspected malaria for whom a malaria diagnostic test result was recorded increased by 25% in arm A compared to a 3% decrease in arm B. The 28% difference between the changes in arm A and arm B was attributable to OSS (adjusted ratio of relative risks (aRRR) = 1.28; 99%CI: 0.93, 1.78). Patients under five years with suspected malaria were 14% (aRR = 1.14; 99%CI: 1.00, 1.30) more likely to have a diagnostic test result for malaria recorded than older patients.
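Up to rounding and covariate adjustment, the reported aRRR is simply the ratio of the arm-specific changes; for the recorded-test-result indicator, for example:

```latex
\mathrm{aRRR} \;\approx\; \frac{\mathrm{aRR}_{A}}{\mathrm{aRR}_{B}}
            \;=\; \frac{1.25}{0.97} \;\approx\; 1.28
```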

Appropriate antimalarial treatment.

Table 5 shows results for (a) the proportion of patients prescribed an appropriate antimalarial among those with any antimalarial prescription, (b) the proportion of patients that received an appropriate antimalarial among those with an appropriate antimalarial prescription and data about drug availability, and (c) the estimated proportion of patients that received an appropriate antimalarial among those prescribed any antimalarial treatment. In arm A, the proportion of patients prescribed an appropriate antimalarial increased by 10% and the proportion of patients who received an appropriate antimalarial among those with an appropriate antimalarial prescription and data on drug availability increased by 51% between Time 0 and Time 1. At the same time, the estimated proportion of patients who received an appropriate antimalarial among those prescribed any antimalarial increased by 50%. The increases were statistically significant for indicators (b) and (c) in arm A. The indicators were relatively stable in arm B. The difference between arms attributed to OSS was 11% (aRRR = 1.11; 99%CI: 0.99, 1.25), 38% (aRRR = 1.38; 99%CI: 0.94, 2.04), and 38% (aRRR = 1.38; 99%CI: 0.89, 2.13), respectively.

Antimalarial treatment among patients with a negative diagnostic test result for malaria.

Table 6 shows that the proportion of patients with a negative diagnostic test result for malaria prescribed an antimalarial decreased significantly by 33% in arm A and decreased by 4% in arm B, with a 30% difference attributed to OSS (aRRR = 0.70, 99%CI: 0.48, 1.00). Patients under five years with a negative diagnostic test result for malaria were 39% more likely to be prescribed an antimalarial than older patients (aRR = 1.39, 99%CI: 1.25, 1.55).

Antibiotic treatment among patients with a positive diagnostic test result for malaria.

Table 6 shows that the proportion of patients with a positive diagnostic test result for malaria prescribed an antibiotic (with or without an accompanying antimalarial) did not change in either arm. Patients under five years with a positive diagnostic test result for malaria were, however, 25% (aRR = 1.25; 99%CI: 1.14, 1.36) more likely to be prescribed an antibiotic than older patients.

Sensitivity Analysis

In sensitivity analyses, the results were robust to estimating variance with bootstrapping and to omitting outliers and influential observations, with one exception. For the estimated proportion of patients that received an appropriate antimalarial, the standard error of the coefficient increased and the p-value increased to 0.05. For this same indicator, the results did not change when the women who were prescribed SP monotherapy were omitted from the analysis. In estimates without the variable for DEA stationed at the site, the coefficients for the interventions generally showed larger effect sizes and smaller standard errors.

Measure of Dispersion

Figures 3 to 6 show the baseline values and improvement for all sites for each of the four indicators. The value at Time 0 is on the horizontal axis and the absolute change in percentage points between Time 0 and Time 1 is on the vertical axis. Arm A sites are marked with an “x” and arm B sites with a dot.
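A short plotting sketch for this layout is shown below, assuming a hypothetical per-facility summary table with columns `arm`, `baseline_pct`, and `change_pct`; it is not the code used to produce Figures 3 to 6.

```python
import matplotlib.pyplot as plt


def dispersion_plot(summary, indicator_label):
    """Baseline value (horizontal) vs. absolute change (vertical): 'x' for arm A, dots for arm B."""
    fig, ax = plt.subplots()
    for arm, marker in (("A", "x"), ("B", "o")):
        rows = summary[summary["arm"] == arm]
        ax.scatter(rows["baseline_pct"], rows["change_pct"], marker=marker, label=f"Arm {arm}")
    ax.axhline(0, linewidth=0.5)                 # reference line at no change
    ax.set_xlabel(f"{indicator_label} at Time 0 (%)")
    ax.set_ylabel("Absolute change, Time 0 to Time 1 (percentage points)")
    ax.legend()
    return fig
```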

Figure 3. Percentage of patients with suspected malaria who had a diagnostic test result for malaria recorded, dispersion by facility and arm.

https://doi.org/10.1371/journal.pone.0084945.g003

Figure 4. Percentage of patients with a negative diagnostic test result for malaria who were prescribed an antimalarial, dispersion by facility and arm.

https://doi.org/10.1371/journal.pone.0084945.g004

Figure 5. Percentage of patients with a positive diagnostic test result for malaria who were prescribed an antibiotic, dispersion by facility and arm.

https://doi.org/10.1371/journal.pone.0084945.g005

Figure 6. Estimated percentage of patients who received an appropriate antimalarial, dispersion by facility and arm.

https://doi.org/10.1371/journal.pone.0084945.g006

As shown in Figure 3, Time 0 values for the percentage of patients with suspected malaria for whom a test result was recorded ranged from 2% to 80% in arm A and from 4% to 58% in arm B, with the majority of sites (12 in arm A and 13 in arm B) recording results for less than 50% of the patients with suspected malaria. Regardless of performance at Time 0, sites in arm A had greater improvements than sites in arm B, as shown by the “x”s above the dots for most baseline values.

Figure 4 shows that the range of values at Time 0 for the estimated percentage of malaria cases that received an appropriate antimalarial was broad in both arms. Again, sites in arm A had greater improvements than sites in arm B, with 10 sites in arm A showing improvements of more than 20% compared to seven sites in arm B.

Figure 5 shows the percentage of patients with a negative diagnostic test result for malaria prescribed an antimalarial. For this indicator, the distribution of sites was comparable across arms at baseline. Regardless of performance at Time 0, sites in arm A had greater improvements than sites in arm B as shown by the “x's” below the dots for most baseline values.

Finally, Figure 6 shows that the percentage of patients with a positive diagnostic test result for malaria prescribed an antibiotic at Time 0 clustered between 20% and 80% in both arms. Changes over time were relatively small with no clear difference across arms.

Exploratory Analysis

Results in Table 7 revealed that the proportion of patients with a negative diagnostic test result for malaria who were subsequently diagnosed with malaria was relatively high among children under five years (41% in arm A and 52% in arm B) and adults (31% in arm A and 38% in arm B) at Time 0. This proportion decreased by 35% in arm A but remained the same in arm B at Time 1. Compared with the results in Table 6 for patients with a negative diagnostic test result for malaria, the number given a malaria diagnosis was always lower than the number prescribed an antimalarial. Children under five years with a negative diagnostic test result for malaria were 34% (aRR = 1.34, 99%CI: 1.23, 1.45) more likely to be diagnosed with malaria than older patients.

The proportion of patients with a malaria diagnosis among those with a positive diagnostic test result for malaria was high in both arms at Times 0 and 1 and did not increase significantly in either arm.

Interestingly, the proportion of patients with a positive diagnostic test result for malaria among those with a malaria diagnosis was low in both arms at Time 0. The proportion increased by 34% in arm A in Time 1, while it decreased by 12% in arm B. Children under five years with a malaria diagnosis were 50% (aRR = 1.50, 99%CI: 1.23, 1.84) more likely to have a positive diagnostic test result for malaria than older patients.

Discussion

Interpretation

This article focused on four malaria case management indicators out of the 23 facility performance indicators analyzed for IDCAP. The pre/post analysis showed that IMID was not associated with large or statistically significant improvements in those indicators. The combination of IMID and OSS was, however, associated with statistically significant improvements in two indicators: the estimated proportion of patients who received an appropriate antimalarial among those prescribed any antimalarial treatment, and the proportion of patients with a negative diagnostic test result for malaria prescribed an antimalarial.

These results for patients with a negative diagnostic test result for malaria were encouraging after several previous studies showed that increased use of microscopy did not guarantee reduced prescription of antimalarials among patients with a negative diagnostic test result for malaria [8], [19][23]. The IDCAP results suggest that clinicians trusted the laboratory test results during the OSS intervention that included team-based training, as well as training and mentoring for laboratory professionals. In addition, reducing prescriptions of antimalarials among patients with a negative diagnostic test result for malaria may leave a larger supply of drugs available to patients with a positive diagnostic test result for malaria.

The effect of OSS measured by the randomized trial was not statistically significant at the 1% level for any of the four indicators. The effect sizes were, however, large for three of the four indicators: a 28% increase in the proportion of patients with suspected malaria for whom a diagnostic test result for malaria was recorded, a 38% increase in the estimated proportion of patients that received an appropriate antimalarial, and a 30% decrease in the proportion of patients with a negative test result for malaria prescribed an antimalarial treatment.

The IDCAP results compared favorably to the earlier JUMP results [12]. IDCAP's pre/post analysis of IMID and OSS showed a 25% increase in the proportion of patients with suspected malaria for whom a diagnostic test result for malaria was recorded, compared to JUMP's 16% and 19% increases in the percentage of children under five years and older patients, respectively, referred for microscopy. IDCAP showed a 49% increase in the estimated proportion of patients who received an appropriate antimalarial, compared to JUMP's increases of 3% and 1%, respectively, which were not statistically significant. IDCAP showed a 33% decrease in the proportion of patients with a negative diagnostic test result for malaria prescribed an antimalarial, compared to JUMP's decreases of 28% and 23%, respectively. The JUMP tests were based on a 5% level of significance, and two of these IDCAP results were significant at that level. Finally, nine months of follow-up data in arm A were analyzed in IDCAP compared to four months in JUMP. UMSP later showed continued improvements with continued data collection [9].

There were important differences between the IDCAP interventions and JUMP besides the scope. Only two MLP from each site attended IMID, whereas a multi-disciplinary team of clinicians, laboratory professionals, and records staff attended JUMP. The multidisciplinary OSS activities were structured over the course of two days per month, whereas the JUMP follow-up visits focused on data collection and were briefer. To the extent that IMID affected the performance of the two MLP, the effect may not have been reflected in overall facility performance. OSS would however, reflect the effects of multi-disciplinary team training on overall facility performance. Future analyses will compare the performance of the two MLP who attended IMID with the other clinicians.

IDCAP was implemented at the same time as the data collection system that measured its effects, which could be considered an intervention. For the JUMP evaluation, it was not possible to distinguish the effects of the training program from UMSP's data collection [9], [24]. IDCAP controlled for the effects of the data collection system, which was based on the UMSP platform, and offered a model for addressing the simultaneous effects of capacity-building and data collection. The IDCAP data management system was introduced at all the sites by November 2009, and the DEA variable measured the effect of the full-time DEA at the sites beginning in March 2010. The DEA were not associated with significant changes in the malaria case management indicators at the 1% level of significance. If we relax the standard of evidence to a 5% level of significance, the DEA were associated with a 17% increase in the proportion of patients with suspected malaria for whom a diagnostic test result for malaria was recorded.

Neither IMID nor the combination of IMID and OSS affected antibiotic prescription among patients with a positive diagnostic test result for malaria. The IDCAP interventions admittedly focused more on case management of patients with a negative diagnostic test result for malaria than on limiting antibiotic use among patients with a positive diagnostic test result for malaria. Antibiotic use appeared high, however; for both IDCAP arms and time periods combined, 51% of children under five years with a positive diagnostic test result for malaria were prescribed an antibiotic. In comparison, Batwala et al. recently reported that in Uganda 26% of children with a positive rapid diagnostic test result and 18% with positive laboratory test results were prescribed antibiotic treatment [25]. In their investigation of antibiotic use among patients with a positive diagnostic test result for malaria, Means et al. (unpublished manuscript) conducted separate analyses for patients with a clinical indication for antibiotics and patients without one. This distinction could be the basis for a revised indicator for antibiotic use among patients with positive diagnostic test results for malaria.

The exploratory analysis showed that a high proportion of patients with a negative diagnostic test result for malaria, especially children under five years, were diagnosed with malaria. Even though this proportion decreased by more than 30% after OSS, convincing clinicians to trust laboratory results remained a challenge. In addition, close to 20% of patients with a positive test result for malaria were not diagnosed with malaria. These results improved following OSS, and the presence of the DEA contributed significantly to the improvements. The number of patients diagnosed with malaria without any laboratory confirmation remained very high.

Limitations

In the definition of a patient with suspected malaria, we considered patients meeting any of the following four criteria: fever, malaria test ordered, malaria diagnosis, or any malaria treatment prescribed. This definition is slightly different from the definitions currently used by the Uganda Ministry of Health and could have overestimated the total number of patients with suspected malaria. The variable for appropriate malaria treatment was based on the Uganda national guidelines, but did not capture appropriate dose or duration of treatment.

Generalizability

IDCAP's eligibility criteria focused on HCIVs that were not currently participating in ongoing national CQI programs for HIV prevention and care, in order to isolate the effect of OSS. Although the criteria restrict generalizability of the results to only a handful of other facilities in Uganda, the confounding effect of an HIV program may be less relevant for malaria case management. The results may generalize to other primary care facilities that serve populations at risk for malaria in Africa.

The statistical tests addressed whether or not the results would generalize to other primary care facilities. The IDCAP tests were based on a 1% level of significance to adjust for multiple comparisons, because we tested the effects of the integrated intervention on 23 facility-performance indicators. Only two indicators met this standard of evidence in the pre/post analysis.

Replication also provides evidence on whether the results would generalize to other primary care facilities. Both JUMP and IDCAP showed that training and on-site support significantly improved malaria case management, albeit at the 5% level of significance.

Conclusions

The combination of IMID and OSS was associated with statistically significant improvements in case management of malaria. A series of papers will provide results on other performance indicators and cost-effectiveness.

Supporting Information

Checklist S1.

CONSORT checklist for documentation of the article content.

https://doi.org/10.1371/journal.pone.0084945.s002

(DOCX)

Acknowledgments

Accordia Global Health Foundation is leading IDCAP in partnership with four organizations: the Ugandan Ministry of Health, the Infectious Diseases Institute, the International Training and Education Center on Health, and University Research Co., LLC.

The authors would like to thank the IDCAP Steering Committee for guidance: Drs. Geoffrey Bisoborwa, Alex Coutinho, Beatrice Crahay, Warner Greene, King Holmes, Nigel Livesly, Fred Wabwire-Mangen, M. Rachad Massoud, Alex Opio, W. Michael Scheld, and Gisela Schneider. The authors would also like to thank the IDCAP data management team for their tireless efforts in ensuring collection of quality data, the mobile teams for delivering OSS, and the IDI training department for conducting the IMID training and developing its content. We acknowledge the health facilities for their participation in the study, including patient management, data collection and management, and training.

All remaining errors and omissions are the responsibilities of the authors.

The findings and conclusions contained within are those of the authors and do not necessarily reflect positions or policies of the Bill & Melinda Gates Foundation.

Author Contributions

Conceived and designed the experiments: AR KSW MRW. Performed the experiments: MKM SMB AB IC SNK TR MRW. Analyzed the data: MKM SMB MRW. Contributed reagents/materials/analysis tools: MKM SN KSW MRW. Wrote the paper: MKM MRW. Reviewed the manuscript to meet submission requirements: RC JPVG. Reviewed the manuscript before resubmissions during the different stages of peer review: MKM SMB AB RC IC SNK AR SN TR JPVG KSW MRW. Shared relevant literature material: MKM SMB AB RC IC SNK AR SN TR JPVG KSW MRW.

References

  1. World Health Organization (2012) World Malaria Report 2012. Geneva.
  2. Government of Uganda (2004) Health Sector Strategic Plan II, 2004/05–2009/10. Ministry of Health. Available: http://aidsalliance.3cdn.net/e9266246309cee49e2_qbm6bt9wn.pdf. Accessed 2013 Sep 14.
  3. Government of Uganda (2010) Uganda National Household Survey 2009/10. Uganda Bureau of Statistics.
  4. Government of Uganda (2011) Uganda National Malaria Control Strategic Plan 2010/11–2014/15. Ministry of Health.
  5. Government of Uganda (2011) Annual Health Sector Performance Report 2010/2011. Ministry of Health.
  6. World Health Organization (2006) Guidelines for the treatment of malaria: second edition. Geneva.
  7. Ndyomugyenyi R, Magnussen P, Clarke S (2007) Malaria treatment-seeking behaviour and drug prescription practices in an area of low transmission in Uganda: implications for prevention and control. Trans R Soc Trop Med Hyg 101: 209–215. doi:10.1016/j.trstmh.2006.06.004.
  8. Hamer DH, Ndhlovu M, Zurovac D, Fox M, Yeboah-Antwi K, et al. (2007) Improved diagnostic testing and malaria treatment practices in Zambia. JAMA 297: 2227–2231. doi:10.1001/jama.297.20.2227.
  9. Sserwanga A, Harris JC, Kigozi R, Menon M, Bukirwa H, et al. (2011) Improved malaria case management through the implementation of a health facility-based sentinel site surveillance system in Uganda. PLoS One 6: e16316. doi:10.1371/journal.pone.0016316.
  10. Osterholt DM, Rowe AK, Hamel MJ, Flanders WD, Mkandala C, et al. (2006) Predictors of treatment error for children with uncomplicated malaria seen as outpatients in Blantyre district, Malawi. Trop Med Int Health 11: 1147–1156. doi:10.1111/j.1365-3156.2006.01666.x.
  11. Opoka RO, Xia Z, Bangirana P, John CC (2008) Inpatient mortality in children with clinically diagnosed malaria as compared with microscopically confirmed malaria. Pediatr Infect Dis J 27: 319–324. doi:10.1097/INF.0b013e31815d74dd.
  12. Ssekabira U, Bukirwa H, Hopkins H, Namagembe A, Weaver MR, et al. (2008) Improved malaria case management after integrated team-based training of health care workers in Uganda. Am J Trop Med Hyg 79: 826–833.
  13. Miceli A, Sebuyira LM, Crozier I, Cooke M, Naikoba S, et al. (2012) Advances in clinical education: a model for infectious disease training for mid-level practitioners in Uganda. Int J Infect Dis 16: e708–e713. doi:10.1016/j.ijid.2012.07.003.
  14. Weaver MR, Crozier I, Eleku S, Makanga G, Mpanga SL, et al. (2012) Capacity-building and clinical competence in infectious disease in Uganda: a mixed-design study with pre/post and cluster-randomized trial components. PLoS One 7: e51319. doi:10.1371/journal.pone.0051319.
  15. Naikoba S, Colebunders R, van Geertruyden JP, Willis SK, Kinoti NS, et al. (2013) Design of a cluster randomized trial assessing integrated infectious diseases training and on-site support for midlevel practitioners in Uganda. Journal of Clinical Care Pathways 0: 1–8.
  16. Government of Uganda (2005) National Policy on Malaria Treatment 2005. Ministry of Health.
  17. Lumley T, Kronmal R, Ma S. Relative Risk Regression in Medical Research: Models, Contrasts, Estimators, and Algorithms. University of Washington Biostatistics Working Paper #293. Available: http://www.bepress.com/uwbiostat/paper293. Accessed 2013 Sep 4.
  18. Yeka A, Gasasira A, Mpimbaza A, Achan J, Nankabirwa J, et al. (2012) Malaria in Uganda: challenges to control on the long road to elimination: I. Epidemiology and current control efforts. Acta Trop 121: 184–195. doi:10.1016/j.actatropica.2011.03.004.
  19. Yeka A, Banek K, Bakyaita N, Staedke SG, Kamya MR, et al. (2005) Artemisinin versus nonartemisinin combination therapy for uncomplicated malaria: randomized clinical trials from four sites in Uganda. PLoS Med 2: e190. doi:10.1371/journal.pmed.0020190.
  20. Reyburn H, Mbakilwa H, Mwangi R, Mwerinde O, Olomi R, et al. (2007) Rapid diagnostic tests compared with malaria microscopy for guiding outpatient treatment of febrile illness in Tanzania: randomised trial. BMJ 334: 403. doi:10.1136/bmj.39073.496829.AE.
  21. Reyburn H, Ruanda J, Mwerinde O, Drakeley C (2006) The contribution of microscopy to targeting antimalarial treatment in a low transmission area of Tanzania. Malar J 5: 4. doi:10.1186/1475-2875-5-4.
  22. Zurovac D, Midia B, Ochola SA, English M, Snow RW (2006) Microscopy and outpatient malaria case management among older children and adults in Kenya. Trop Med Int Health 11: 432–440. doi:10.1111/j.1365-3156.2006.01587.x.
  23. Barat L, Chipipa J, Kolczak M, Sukwa T (1999) Does the availability of blood slide microscopy for malaria at health centers improve the management of persons with fever in Zambia? Am J Trop Med Hyg 60: 1024–1030.
  24. Namagembe A, Ssekabira U, Weaver MR, Blum N, Burnett S, et al. (2012) Improved clinical and laboratory skills after team-based, malaria case management training of health care professionals in Uganda. Malar J 11: 44. doi:10.1186/1475-2875-11-44.
  25. Batwala V, Magnussen P, Nuwaha F (2011) Antibiotic use among patients with febrile illness in a low malaria endemicity setting in Uganda. Malar J 10: 377. doi:10.1186/1475-2875-10-377.