
Diagnostic Accuracy of Point-of-Care Tests for Hepatitis C Virus Infection: A Systematic Review and Meta-Analysis

Abstract

Background

Point-of-care tests provide a plausible diagnostic strategy for hepatitis C infection in economically impoverished areas. However, their utility depends upon the overall performance of individual tests.

Methods

A literature search was conducted using the metasearch engine Mettā, a query interface for retrieving articles from five leading medical databases. Studies were included if they employed point-of-care tests to detect antibodies of hepatitis C virus and compared the results with reference tests. Two reviewers performed a quality assessment of the studies and extracted data for estimating test accuracy.

Findings

Thirty studies that had evaluated 30 tests fulfilled the inclusion criteria. The overall pooled sensitivity, specificity, positive likelihood-ratio, negative likelihood-ratio and diagnostic odds ratio for all tests were 97.4% (95% CI: 95.9–98.4), 99.5% (99.2–99.7), 80.17 (55.35–116.14), 0.03 (0.02–0.04), and 3032.85 (1595.86–5763.78), respectively. This suggested a high pooled accuracy for all studies. We found substantial heterogeneity between studies, but none of the subgroups investigated could account for the heterogeneity. Genotype diversity of HCV had no or minimal influence on test performance. Of the seven tests evaluated in the meta-regression model, OraQuick had the highest test sensitivity and specificity and showed better performance than a third generation enzyme immunoassay in seroconversion panels. The next highest test sensitivities and specificities were from TriDot and SDBioline, followed by Genedia and Chembio. The Spot and Multiplo tests produced poor test sensitivities but high test specificities. Nine of the remaining 23 tests produced poor test sensitivities and specificities and/or showed poor performances in seroconversion panels, while 14 tests had high test performances with diagnostic odds ratios ranging from 590.70 to 28822.20.

Conclusions

Performances varied widely among individual point-of-care tests for diagnosis of hepatitis C virus infection. Physicians should consider this while using specific tests in clinical practice.

Introduction

Hepatitis C is a global health problem [1]. Approximately 2–3% of the world's population is chronically infected with hepatitis C virus (HCV), which amounts to an estimated 170 million persons. Chronic hepatitis C is associated with significant morbidity and mortality. HCV contributes to 27% of cirrhosis cases and 25% of hepatocellular carcinoma cases, and causes more than 350,000 deaths each year [2]. Screening for HCV infection is therefore mandatory in many high-risk epidemiologic settings [3,4]. In addition, testing of blood and blood products is essential for preventing HCV infection of recipients [5]. The use of an enzyme immunoassay (EIA) to detect HCV antibodies (anti-HCV) followed by nucleic acid testing for HCV RNA is standard practice for diagnostic evaluation of HCV infection in developed countries [6,7]. These tests require sophisticated equipment, trained technicians, a continuous supply of electricity, and high facility costs. Hence they are unsuitable for use in regions with limited resources [8].

Since the 1990s, several point-of-care tests have been developed that primarily use serum, plasma, whole blood and oral fluid to test for anti-HCV [9]. Manufacturers claim that these tests have high clinical and analytical sensitivity, so these tests are used widely in many settings in developing countries, including blood banks. However, several vital uncertainties remain regarding their use, including: (i) the accuracy of individual tests, (ii) the comparative efficacy of the different tests, and (iii) how the performances of tests vary based on HCV genotype diversity, HIV/HCV co-infections, HCV performance panels, or other factors [10].

A recent meta-analysis of the accuracy of point-of-care tests for hepatitis C attempted to address some of those uncertainties [11]. However, the study’s relevance was limited for many reasons. (i) A well-conducted, major study of the accuracies of point-of-care tests had been published by then but was not included in the meta-analysis [12]. (ii) The study did not address the issue of inconclusive test results, or how to analyze and report such data. We believe that complete transparency about the handling of inconclusive results in meta-analyses is essential for the reader to understand how key summary statistics have been calculated [13]. (iii) It did not evaluate heterogeneity of the data (i.e., the differences in reported estimates among studies) and its potential causes, an important component of meta-analyses [14]. (iv) The analytical sensitivities of the tests based on seroconversion panels, HCV genotypes, and cross-reactive sera (especially for HIV) were not assessed, which would have affected conclusions about the tests being evaluated. (v) The accuracies of individual tests were not evaluated, which obfuscates the real purpose of the meta-analysis.

We believe that recommendations for the use of point-of-care tests can have far-reaching effects on healthcare in developing countries, so they must be made carefully. For example, recommending tests of low analytical sensitivity for blood banks can pose a serious threat to recipients from infected donors. Accordingly, we conducted a comprehensive systematic review and meta-analysis of studies that assessed the diagnostic accuracy and applicability of point-of-care tests for hepatitis C.

Materials and Methods

Protocol

We conducted a systematic review and meta-analysis of studies that evaluated the accuracy of point-of-care tests for HCV. We established a protocol that specified several aspects of the meta-analysis, following the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines (S1 PRISMA Checklist) [15]. The protocol has been published and is available at: http://www.crd.york.ac.uk/PROSPERO/display_record.asp? [ID = CRD42014008919] (S1 Protocol).

Acquisition of Data

On March 15th, 2014 we conducted a literature search using the metasearch engine “Mettā” (accessible at http://mengs1.cs.binghamton.edu/metta/search.action) [16]. Mettā is a query interface that helps systematic reviewers to retrieve, filter, and assess articles from five leading medical databases: PubMed, EMBASE, CINAHL, PsycINFO, and the Cochrane Central Register of Controlled Trials. Medical Subject Headings (MeSH) terms used for key and text word searches included “Hepatitis C” OR “Hepatitis C Antibodies” OR “Hepatitis C Virus” AND “Point-of-Care Tests” OR “Rapid Test” OR “Rapid Assay”. We also searched the bibliographies and reference lists of eligible papers and related reviews, consulted experts in the field, and contacted several authors from the included articles to locate additional studies. The titles and abstracts of all of the articles identified in the primary search were evaluated, and we established a list of eligible potential studies for consideration in the full-text review. Studies that fulfilled all of the selection criteria were included in the systematic review and meta-analysis.

Criteria for Inclusion of Studies in the Meta-Analysis

To recreate the 2×2 diagnostic table for estimating test accuracy, we included studies that employed point-of-care tests to detect anti-HCV (i.e., the index test), compared the results with a reference standard, and reported the results. A point-of-care test was defined as any commercially available assay that identified anti-HCV at or near the site of patient care. The test had to have a quick turnaround time (less than 30 min), allow for easy sampling, execution and reading of results, and have no requirement, or a minimal requirement, for cold chain and specialized equipment. The test results had to be available to the patient, physician and care team within an hour, which allowed for clinical management decisions to be made during the clinical encounter. Acceptable reference standards included a third-generation enzyme immunoassay (EIA), a microenzyme immunoassay, or a chemiluminescent immunoassay for the detection of anti-HCV. Two additional tests (recombinant immunoblot assay [RIBA] and nucleic acid testing) were considered to improve the reference standard [6]. We included studies of adults (>18 years old) published as abstracts or as full-text articles using any study design and conducted in any study setting (i.e., laboratory or field-based). Studies were not excluded based on sample size, study location, language of publication, or country of origin of the test. However, we excluded studies that dealt with the accuracy of laboratory-based tests, those with data that were unsuitable for recreating the 2×2 diagnostic table, reports from manufacturers and package inserts that could be subject to overt conflict-of-interest, and duplicate reports.

Data Extraction

Two independent reviewers conducted the literature search, performed a quality assessment of the studies included in our analysis, and extracted the data necessary for estimating test accuracy. Any discrepancies were referred to a third reviewer, who adjudicated any unresolved discrepancies. The following information was extracted from each study: author, year of publication, location of study, index test (one or more), reference standard, study design, source of sera, sample size, the characteristics of the population sampled for sera collection, the cross-reactive sera included in the panel, and the analytical sera included in the evaluation of test performance. Detailed information about the index test was extracted from each study, including: the name of the test, the country of origin and the name of the manufacturer, the time required to read the results, the specimen type (serum, plasma or blood) sampled for the test, the volume of the sample (μL) needed for the test, the storage conditions for maintaining the test kit, special equipment needed (if any) to perform the test, and the shelf life of the test kit.

Quality Assessment

We assessed the quality of the studies using the checklist of the QUADAS-2 (Quality Assessment of Diagnostic Accuracy Studies) tool [17] and STARD (Standards for the Reporting of Diagnostic Accuracy Studies) [18]. The QUADAS-2 sheet was completed following stepwise guidelines to assess the risk of bias (in four domains) and any concerns about applicability (in three domains) in each study. The STARD checklist consisted of 25 questions that were weighted equally (yes = 1, no = 0). The total score (out of 25) was calculated for each study.

Data Synthesis

Data were extracted to construct 2×2 tables (reference test results vs. index test results; S1 Table). We defined anti-HCV positive subjects as those with disease and anti-HCV negative subjects as those without disease, based on the reference test results. The index test results were reported as true positive, false positive, false negative, or true negative. Valid inconclusive index test results were combined with either the false negative results (i.e., sera that were anti-HCV positive according to the reference test and inconclusive by the index test) or the false positive results (i.e., sera that were anti-HCV negative according to the reference test and inconclusive by the index test) [13].
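
To make this tallying rule concrete, the following minimal sketch (in Python; not part of the original analysis, and the function name and sample data are hypothetical) shows how a 2×2 table can be built when inconclusive index test results are assigned to the false negative or false positive cell according to the reference result.

    # Minimal illustrative sketch: tally a 2x2 table when inconclusive index-test
    # results are folded into the error cells, as described above.
    def build_2x2(results):
        """results: iterable of (reference, index) pairs; reference is 'pos' or
        'neg', index is 'pos', 'neg', or 'inconclusive'."""
        tp = fp = fn = tn = 0
        for reference, index in results:
            if reference == "pos":
                # A negative or inconclusive index result on reference-positive
                # sera counts as a false negative.
                if index == "pos":
                    tp += 1
                else:
                    fn += 1
            else:
                # A positive or inconclusive index result on reference-negative
                # sera counts as a false positive.
                if index == "neg":
                    tn += 1
                else:
                    fp += 1
        return tp, fp, fn, tn

    # Hypothetical example: two inconclusive results, one per reference status.
    sample = [("pos", "pos"), ("pos", "inconclusive"), ("neg", "neg"),
              ("neg", "inconclusive"), ("neg", "neg"), ("pos", "pos")]
    print(build_2x2(sample))  # -> (2, 1, 1, 2)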

Statistical Analysis

To estimate test accuracy, we calculated the sensitivities and specificities, positive and negative likelihood ratios (LRs), and diagnostic odds ratios (DOR), along with 95% confidence intervals (CIs) (S1 File) [19]. We pooled test estimates using a bivariate random-effects regression model [20]. The model was used to draw hierarchical summary receiver operating characteristic (HSROC) curves. The HSROC curve of a test plots sensitivity (the true positive rate) against 1 − specificity (the false positive rate) as the positivity threshold is varied across all possible values. To assess the levels of heterogeneity among test estimates, we calculated the inconsistency index (I2) [14]. We considered values of ≤ 25%, > 25–50%, > 50–75%, and > 75% to represent low, moderate, substantial, and considerable statistical heterogeneity, respectively. Multiple variables were selected a priori and examined to investigate the potential sources of heterogeneity. Summary estimates and 95% CIs for each covariate were generated and compared in the meta-regression model. The DOR summarizes test accuracy as a single number and can be used in a subgroup/meta-regression model to derive statistical values. A P value below 0.05 for differences in DORs was considered to indicate a significant difference among the levels of a particular covariate [21]. To compare the relative efficacy of tests, we pooled the estimates of individual tests that had three or more data points and compared the estimates of individual tests with each other in the meta-regression model [21].
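
For illustration, the per-study accuracy measures defined above can be computed from a single 2×2 table as in the following sketch (Python; the counts are hypothetical, and the log-scale Wald interval shown for the DOR is a simple textbook formula, not the bivariate random-effects pooling used to produce the summary estimates).

    # Illustrative per-study estimates from a 2x2 table (tp, fp, fn, tn).
    # Note: if any cell is zero, a 0.5 continuity correction is commonly added.
    import math

    def accuracy_estimates(tp, fp, fn, tn, z=1.96):
        sens = tp / (tp + fn)                # sensitivity
        spec = tn / (tn + fp)                # specificity
        lr_pos = sens / (1 - spec)           # positive likelihood ratio
        lr_neg = (1 - sens) / spec           # negative likelihood ratio
        dor = (tp * tn) / (fp * fn)          # diagnostic odds ratio = LR+ / LR-
        se_log_dor = math.sqrt(1/tp + 1/fp + 1/fn + 1/tn)   # SE of ln(DOR)
        ci = (math.exp(math.log(dor) - z * se_log_dor),
              math.exp(math.log(dor) + z * se_log_dor))
        return {"sensitivity": sens, "specificity": spec,
                "LR+": lr_pos, "LR-": lr_neg, "DOR": dor, "DOR 95% CI": ci}

    print(accuracy_estimates(tp=95, fp=2, fn=5, tn=198))  # hypothetical counts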

We performed all statistical analyses using the software program Meta-Analyst (Tufts Medical Center, Boston, MA) [22].
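
As a further illustration, the inconsistency index I2 can be derived from Cochran's Q for a set of per-study effect sizes, as in the sketch below (Python; the log DORs and standard errors are hypothetical, and the fixed-effect inverse-variance weighting is used only to define Q, following [14], not to reproduce the software's pooled estimates).

    # Sketch of the inconsistency index I^2 derived from Cochran's Q [14].
    import math

    def i_squared(effects, std_errors):
        """effects: per-study log DORs; std_errors: their standard errors."""
        weights = [1 / se ** 2 for se in std_errors]     # inverse-variance weights
        pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
        q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))  # Cochran's Q
        df = len(effects) - 1
        return max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

    # Hypothetical per-study DORs and standard errors of their logs.
    log_dors = [math.log(d) for d in (1200, 3500, 800, 5200, 250)]
    std_errs = [0.45, 0.60, 0.50, 0.70, 0.40]
    print(f"I^2 = {i_squared(log_dors, std_errs):.1f}%")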

Results

Literature Search & Study Characteristics

Our search returned 1300 reports, of which 30 satisfied all of the inclusion criteria (Fig. 1) [12, 23–51]. Twenty full-text articles were excluded for the reasons shown in S2 Table. Twenty-five studies were published as full-length manuscripts in peer-reviewed journals [12,23–28,33,34,36–51], two were published as official reports [29,30], one was a draft report of the WHO [31], and two were published as letters to the editor [32,35]. One study was published in Spanish [27] and the remaining 29 studies were in English. Sixteen studies were conducted in developing countries [23,24,26,28,32–37,43,44,47–49,51] and 14 studies were conducted in developed countries. The samples of 11 studies were collected from healthy blood donors, the general population, or low-risk populations [12,24,29–31,35,37,38,48,50,51]. Those from a further 17 studies were taken from hospital inpatients, medical and surgical clinic patients, or high-risk populations. The nature of the patient population was not known in two studies [34,44]. The sample size in each study varied from 60 to 2754 subjects. In 17 studies, the type of specimen tested was serum alone [12,23,24,26–31,33,36,40,44,46–49]. Plasma alone was tested in three studies [34,50,51], whole blood alone was tested in five studies [25,32,35,37,45], oral fluid alone was tested in one study [42], oral fluid plus serum was tested in one study [43], and oral fluid plus whole blood was tested in one study [41]. The remaining two studies tested other combinations of those specimens [38,39] (Table 1).

Fig 1. Flow diagram of study selection.

List of 20 full-text excluded articles, with the reasons for exclusion shown in S2 Table.

https://doi.org/10.1371/journal.pone.0121450.g001

The index test characteristics are listed in Table 2. The 30 studies used 30 test brands that generated 73 data points. A single data point was generated by each of 16 tests. Two data points were generated for each of six tests (SPAN, ImmunoRAPIDO, SM-HCV, Bioeasy, Immunocomb, and Hexagon), and three or more data points were generated for each of seven tests (OraQuick, Genedia, SDBioline, TriDot, Chembio, Spot, and Multiplo). The estimates for TriDot and TriDot 4th were similar and were combined. The numbers of data points generated for each specimen type were: serum (42 data points), whole blood/finger stick (13 data points), plasma (nine data points), and oral fluid (nine data points).

Study Quality

Ten studies employed a cross-sectional design [24,26,33,36–38,40–42,51], and 20 studies used a case-control design. Nine studies used EIA alone as the reference standard [26,32,35,37,43,44,46,50,51]. Twenty-one studies used EIA in combination with RIBA and/or PCR. All of the studies administered the same reference test to all patients, thus avoiding partial or differential verification biases. Four studies reported that the index test readers were blinded [24,25,32,50]. The other studies made no reference to blinding of the index test readers. One study tested all samples twice [28]. The results of the index tests were independently read by more than one technician in five studies [12,29,30,40,41]. Eight studies received funding from, or reported another financial relationship with, the pharmaceutical industry [25,28,34,38,42,46,48,49]. Nine studies received test kits from manufacturers [23,29,30,37,40,41,45,51], and five studies were funded by non-profit official organizations [33,36,43,44,50]. Seven studies explicitly declared no conflict of interest [41,43–45,49–51]. The remaining studies made no disclosure of conflict of interest.

The QUADAS-2 assessment of the risk of bias in patient selection, the index test, and the reference standard is shown for each study in Table 3. Among the included studies, bias in patient selection resulted from either (i) a case-control study design, or (ii) a poor description of patient selection and the clinical scenario. Bias in the index test was primarily due to a lack of reported blinding while reading test results. Bias in the reference standard was due to the inability to use a Centers for Disease Control and Prevention (CDC)-recommended reference standard (EIA plus RIBA and/or nucleic acid testing). The quality of study reporting ranged from poor to good (i.e., STARD scores from 8 to 23, out of a maximum possible score of 25), with a number of items missing from the reporting of diagnostic accuracy.

Pooled Test Accuracy

A forest plot of the sensitivity and specificity estimates and 95% confidence intervals (CIs) for the 30 studies, stratified by their test brands, is shown in S1 Figure. Tests were ranked in descending order based on the test estimates. The sensitivity (range: 16.0–99.9%) and specificity (range: 77.8–99.7%) of individual tests were heterogeneous and varied widely. The overall pooled sensitivity, specificity, positive LR, negative LR, and DOR for all tests were 97.46% (95% CI: 95.92–98.43), 99.58% (99.28–99.75), 80.17 (55.35–116.14), 0.03 (0.02–0.04), and 3032.85 (1595.86–5763.78) respectively (Fig. 2, S2 Figure). This suggested a high pooled accuracy among all studies. The ROC curve also indicated high sensitivity with high specificity, as the curve approached the upper left hand corner of the graph where both measures equal 1 (Fig. 3).

Fig 2. Forest plot of the diagnostic odds ratio on a log scale and 95% confidence intervals (CIs) of 30 studies stratified by specimen type.

Whole blood, finger stick, plasma, and serum samples generated 64 data points (subgroup B) and oral fluid samples generated 9 data points (subgroup OF). Diagnostic odds ratios for each data point are shown as solid squares. Solid lines are 95% CIs. Squares are proportional to the weights based on the random effect model. Pooled estimates and 95% CIs are denoted by the diamond at the bottom of each subgroup list. I2 and P values indicate the heterogeneity of the studies.

https://doi.org/10.1371/journal.pone.0121450.g002

Fig 3. The summary receiver operating characteristic (SROC) plot based on a bivariate random effects model.

The sensitivities and specificities of rapid point-of-care tests with the reference test for Anti-HCV are shown. The sensitivity of a test is plotted against 1-specificity, allowing the comparison of multiple tests at the same time. The circles represent single test data points from individual studies. The sizes of the circles are proportional to the number of patients included in the study. The curved line is the regression of the ROC curve summarizing the overall diagnostic accuracy.

https://doi.org/10.1371/journal.pone.0121450.g003

Sources of Heterogeneity

We found substantial heterogeneity among studies in the pooled sensitivity (I2 = 94%), specificity (I2 = 88%), positive LR (I2 = 89%), negative LR (I2 = 90%), and DOR (I2 = 90%) (Table 4). Multiple covariates were used in subgroup analyses and tested statistically in the meta-regression model to determine the reason(s) for the heterogeneity. None of the following factors accounted for the heterogeneity: specimen tested (whole blood, plasma, serum or oral fluid), year of publication (before vs. after 2005), location of study (developed vs. developing countries), reference standard (EIA alone vs. EIA with a recombinant immunoblot assay or nucleic acid test), study design (cross-sectional vs. case-control), source of sera (blood banks vs. hospitals/clinics), or study quality (STARD score high, >15 vs. low, ≤15). Studies conducted in developed and developing countries both had high pooled accuracy values; however, DORs were 4.8 times higher in studies conducted in developed countries (5348.36, 95% CI: 2332.97–12261.18) than in developing countries (1113.39, 95% CI: 450.40–2752.33; P = 0.015). None of the other variables had a significant effect on test performance.

Qualitative Analysis

Four studies evaluated the impact of HCV genotype on index test performance [12,29,30,33] (S3 Table). These studies found that HCV genotype diversity had no or minimal influence on the performances of the index tests. Three studies evaluated the performances of four index tests (Spot, OraQuick, Chembio, and Multiplo) in HCV sera panels admixed with HIV cross-reactive sera [36,40,41] (S4 Table). OraQuick was the only index test brand that did not show false positive results in HCV-reactive sera or oral fluid samples that were cross-reactive for HIV. In addition, OraQuick's detection of anti-HCV in oral fluid was not affected by conditions in the mouth, chemical substances (bilirubin, hemoglobin, lipids, etc.), or conditions of storage or testing (i.e., hot storage, cold storage, or hot testing). Eight studies evaluated 21 tests (TriDot, TriDot 4th, Advanced, Serodia, Spot, SeroCard, Genedia, Acon, Hexagon, HepaScan, i+Lab, Dipstick, Assure, SPAN, ImmunoRAPIDO, OraQuick, SDBioline, CORE, Instant View, Axiom, FirstVue) against seven commercially available seroconversion panels [12,29,30,38,39,43,47,50] (S5 Table). OraQuick, evaluated by four studies, was the only test that detected HCV antibodies in seroconversion panels earlier than the reference test; it had the most consistent performance compared with the reference test [38,39,43,50]. All of the other tests detected HCV antibodies in seroconversion panels later than the reference test. Eight tests (Serodia, SeroCard, Genedia, TriDot 4th, CORE, Instant View, Axiom, and FirstVue) failed to pick up one or more positive panels and/or showed inconsistent panel results.

Individual Test Accuracy

Seven tests (OraQuick, Genedia, SDBioline, TriDot, Chembio, Spot, and Multiplo) each generated three or more (range: 3–12) data points for tests of serum, plasma, and whole blood. OraQuick and Chembio generated six and three data points, respectively, for tests of oral fluid (Table 5). The estimates for these tests were evaluated statistically in the meta-regression model. OraQuick's performance in tests of serum, plasma, and whole blood (12 data points) showed the highest pooled sensitivity and specificity, with moderate heterogeneity among studies (I2 = 36%). The pooled DOR of OraQuick was significantly higher than those of the other six tests. After OraQuick, the pooled DORs for TriDot and SDBioline were the next highest, followed by those of Genedia and Chembio. The Spot and Multiplo tests showed poor sensitivity estimates (S3 Figure).

Table 5. Accuracy estimates of 7 tests with three or more data points evaluated by the meta-regression model.

https://doi.org/10.1371/journal.pone.0121450.t005

The pooled estimates of OraQuick from tests of serum, plasma, and whole blood were significantly higher than those obtained from tests of oral fluid samples, while OraQuick had significantly higher test estimates for oral fluids than Chembio (S4 Figure).

Of the 23 tests that could not be evaluated in the meta-regression model, we identified nine that were unlikely to be useful in routine assays, based on the following parameters: sensitivity < 90% (BiDot, Hexagon, i+Lab), specificity < 95% (Onsite), −LR > 0.1 (BiDot, FirstVue, Hexagon, and i+Lab), +LR < 10 (Onsite), and poor performance in seroconversion panels (CORE, Instant View, Axiom, and FirstVue) [19]. The DORs of the remaining 14 tests (range: 590.70–28822.20) are shown in order of their estimates to illustrate comparative test performance (S5 Figure).
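
For illustration, the screening criteria listed above can be expressed as a simple rule, as in the sketch below (Python; the function and example values are hypothetical and serve only to restate the thresholds).

    # Illustrative rule applying the thresholds stated above to flag tests
    # unlikely to be useful in routine assays.
    def flag_test(sens, spec, lr_pos, lr_neg, poor_seroconversion=False):
        reasons = []
        if sens < 0.90:
            reasons.append("sensitivity < 90%")
        if spec < 0.95:
            reasons.append("specificity < 95%")
        if lr_neg > 0.1:
            reasons.append("-LR > 0.1")
        if lr_pos < 10:
            reasons.append("+LR < 10")
        if poor_seroconversion:
            reasons.append("poor performance in seroconversion panels")
        return reasons  # an empty list means the test is not flagged

    print(flag_test(sens=0.85, spec=0.99, lr_pos=85, lr_neg=0.15))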

Discussion

The meta-analysis that we conducted has six major strengths, as it was based on: (i) a global and complete literature search, with strict inclusion criteria; (ii) a defined strategy for including and analyzing inconclusive results; (iii) the use of a bivariate statistical model; (iv) a statistical comparison of summary estimates of diagnostic accuracy within subgroups using a meta-regression model, to support relevant conclusions; (v) an evaluation of heterogeneity and its potential sources in the meta-regression model; and (vi) an assessment of the tests' analytical sensitivity, which included the influences of genotype, cross-reactive sera, interfering substances, and seroconversion panels on the performances of various test brands. We found sufficient information to evaluate the performances of several individual tests, and hence disagree with an earlier report that stated that the accuracy of individual tests could not be evaluated due to a lack of data [11]. We believe a bivariate model was appropriate for our meta-analysis for three reasons. First, our reference standards were enzyme immunoassays that employed consistent standard positive thresholds across the studies, as per the manufacturers' guidelines. Second, the index tests yielded positive and negative results at consistent cut-offs on the device (the appearance of a colored line or dot) across the studies. Finally, all studies administered the same reference standard to all patients, thereby avoiding partial or differential verification biases [52].

The reference standard used by each study was not found to influence the accuracy of the index test. Third-generation enzyme immunoassays incorporate HCV antigens from the core, NS3, NS4, and NS5 regions, and have a diagnostic sensitivity and specificity of >99% [53]. The CDC has recommended RIBA as an additional, more specific test for the detection of anti-HCV in serum or plasma specimens found reactive by ELISA [6]. However, RIBA assays have low sensitivity, are costly, require complex manual processing and long turnaround times, and do not differentiate active infection from past exposure with spontaneous clearance [53]. Manufacturers have therefore discontinued RIBA HCV, and the CDC has updated its algorithm for HCV diagnosis to recommend nucleic acid testing for HCV RNA in subjects who test anti-HCV positive [7].

Our meta-analysis showed high pooled accuracy of point-of-care tests for the detection of anti-HCV. However, the sensitivities (range: 16.0–99.9%) and specificities (range: 77.8–99.7%) of individual tests were heterogeneous and varied widely. We must be cautious when evaluating the accuracy of tests, as our meta-analysis was subject to the detection, spectrum, and sampling biases of the original studies [14]. A case-control design is apt to overestimate accuracy, with the potential for spectrum bias. In addition, only four studies explicitly mentioned blind reading of the index test results, suggesting that there could have been detection bias in the remaining studies. This could artificially inflate the sensitivity and specificity estimates of the index tests. Lastly, there was substantial heterogeneity among studies in the pooled estimates, and none of the covariates tested accounted for the observed heterogeneity.

In our meta-analysis, all of the tests performed better in studies that were conducted in developed countries than in developing countries. It is well-known that the performance characteristics of any test vary markedly with the prevalence of the condition in the population being assessed [54,55]. This should be kept in mind given that these tests are primarily meant to be employed in regions of the world where resources are limited. In addition, HCV has substantial genetic diversity, with six known genotypes, and HCV genotype is known to have a significant effect on diagnostic accuracy. Four studies in our analysis evaluated the impact of HCV genotype on index test performance [12,29,30,33], but this factor did not influence the performances of the assays.

The impact of co-infections (with HIV, HBV, syphilis, etc.) on test performance must be considered when evaluating any of these tests. Among them, HIV/HCV co-infection has special importance due to its significant burden worldwide, especially among injection drug users and within the Asia-Pacific region [56]. All three of the studies that addressed this issue found that false positive HCV results for the Spot, Chembio, and Multiplo test brands were associated with HIV positivity. Data on this important issue are limited, and further studies are needed to evaluate the influence of HIV, HBV, and syphilis co-infection on test performance. This issue will become even more important once multiplex point-of-care tests for HBV, HCV, and HIV are marketed for integrated HIV and sexually transmitted disease screening programs [57].

Another consideration for the evaluation of test performance is the time at which the anti-HCV test becomes positive during the course of acute HCV infection. Eight studies used seroconversion panels, and determined the time difference (in days) between the first sample from the panel that was detected to be positive with the index test and that of the reference test. With the exception of OraQuick, all tests lagged behind the reference test in the detection of the first positive sample by a few days to a few weeks. Several tests performed poorly and failed to pick up one or more panels, and/or showed inconsistent results in some panels. Thus, when there is a high clinical suspicion of infection but a point-of-care test is negative, further testing should be conducted with a conventional laboratory test or nucleic acid test.

Based on the accuracy and analytical performance of OraQuick, we conclude that this test has the potential to be used as a rapid diagnostic test for HCV infection. The results obtained with it are comparable to (and possibly better than) those of third-generation enzyme immunoassays. In fact, the U.S. Food and Drug Administration (FDA) has approved OraQuick as the first rapid blood test for HCV antibodies in individuals 15 years and older [58]. Subsequently, the CDC included OraQuick in its updated guidance as a first-line rapid screening test for HCV infection using finger stick capillary blood and venipuncture whole blood [7]. The Clinical Laboratory Improvement Amendments (CLIA) waiver provides wider testing access to persons at risk for HCV infection, permitting the use of the assay in physician offices, clinics, emergency rooms, and other counseling and testing sites.

It is essential for policymakers and hospital administrators to know that the performances of point-of-care tests for the detection of anti-HCV vary widely. Among the seven tests evaluated in our meta-regression, OraQuick had the highest test sensitivity and specificity and showed better performance than a third-generation enzyme immunoassay in seroconversion panels. The next highest test sensitivities and specificities were from TriDot and SDBioline, followed by Genedia and Chembio. The Spot and Multiplo tests produced poor test sensitivities but high specificities. Nine of the remaining 23 tests produced poor test sensitivities and specificities and/or poor performances in seroconversion panels, while 14 tests had high test estimates, with diagnostic odds ratios ranging from 590.70 to 28822.20. Many of these tests also had problems in the qualitative analysis, namely interference by cross-reactive HIV sera and/or poor performances in seroconversion panels.

Oral fluid-based point-of-care tests have significant advantages. Sampling does not require staff training in venipuncture, and noninvasive sampling would improve compliance and make HCV screening more acceptable [59]. However, such oral tests have lower sensitivities than blood-based tests, possibly because of a lower concentration of antibodies in oral fluid than in blood, or dilution of the sample by the collection buffer. Oral fluid HCV positivity may also be affected by oral pathology, or by the collection of oral fluid after the use of mouthwash or acidic beverages. Lee et al. [39] examined this possibility with the OraQuick test and found no interference with positive or negative oral fluid samples in the presence of gingivitis with bleeding gums, the wearing of dentures, the use of tobacco, and similar oral conditions. Similar studies are needed with other oral fluid-based point-of-care tests, which have the potential to be used as first-line tests in expanded screening initiatives. Blood-based tests could then be used to detect infections missed by oral fluid tests.

While the studies included in the present meta-analysis addressed the accuracy of point-of-care tests for HCV antibodies, we found little information on the evaluation of these tests for HCV-related, patient-centered outcomes. The most important information would be the feasibility and outcomes of these tests in settings where anti-HCV testing is routinely conducted. Future studies are needed to assess how these tests perform in various situations, including: (i) seroepidemiologic studies in low- and high-endemic zones; (ii) screening of blood and blood products for anti-HCV and the prevention of transfusion-transmitted HCV infection; (iii) diagnosis and follow-up of acute hepatitis and fulminant hepatic failure; and (iv) diagnosis of HCV-related chronic hepatitis and cirrhosis and the evaluation of anti-viral therapy. This information will be fundamental to shaping global diagnostic policies as they become more patient-centered over time. There is also a need for studies that address how accuracy varies with the prevalence of infection in different populations and settings.

Global hepatitis C treatment is evolving quickly, and there is an urgent need to increase screening for this infection in high-risk populations [60]. Point-of-care tests offer major benefits for the screening and control of HCV infection, especially in the widely dispersed regions of the world that struggle with endemic poverty [10]. At present, practitioners are guided in the selection of individual tests only by manufacturers' claims, which can be expected to be biased. For the first time, our meta-analysis critically evaluated the performances of commercially available, individual point-of-care tests for hepatitis C. We found that the performances of individual tests varied widely. Health care policy makers, health administrators, and laboratorians should consider these data when selecting appropriate point-of-care tests for extended screening of hepatitis C in areas with limited financial resources.

Supporting Information

S1 Figure. Forest plot of the sensitivity and specificity estimates and 95% confidence intervals (CIs) of 30 studies stratified by test brands.

Tests are placed in descending order based on the test estimates. Thirty studies used 30 test brands and generated 73 data points. Estimates of TriDot and TriDot 4th were similar and have been combined. Estimates of two tests (OraQuick and Chembio) obtained on oral fluid testing are shown separately. Estimates of sensitivity and specificity from each study are shown as solid squares. Solid lines represent the 95% CIs. Squares are proportional to the weights based on the random effect model. Pooled estimates and 95% CIs are denoted by the diamond at the bottom. I^2 and P values represent the heterogeneity of the studies.

https://doi.org/10.1371/journal.pone.0121450.s002

(TIF)

S2 Figure. Forest plot of the sensitivity and specificity estimates and 95% confidence intervals (CIs) of 30 studies stratified by specimen collected for testing.

Whole blood, finger stick, plasma, and serum samples generated 64 data points (b) and oral fluid samples generated 9 data points (of). Estimates of sensitivity and specificity from each study are shown as solid squares. Solid lines represent the 95% CIs. Squares are proportional to the weights based on the random effect model. Pooled estimates and 95% CIs are denoted by the diamond at the bottom. I^2 and P values represent the heterogeneity of the studies.

https://doi.org/10.1371/journal.pone.0121450.s003

(TIF)

S3 Figure. Forest plot of the diagnostic odds ratio on a log scale of 7 tests with ≥3 data points arranged in descending order.

Diagnostic odds ratios for each data point are shown as solid squares. Solid lines represent the 95% CIs. Squares are proportional to the weights based on the random effect model. Pooled estimates and 95% CIs are denoted by the diamond at the bottom. I^2 and P values represent the heterogeneity of the studies.

https://doi.org/10.1371/journal.pone.0121450.s004

(TIF)

S4 Figure. Forest plot of the diagnostic odds ratio on a log scale of 2 tests with oral fluid as the test sample.

Diagnostic odds ratios for each data point are shown as solid squares. Solid lines represent the 95% CIs. Squares are proportional to the weights based on the random effect model. Pooled estimates and 95% CIs are denoted by the diamond at the bottom. I^2 and P values represent the heterogeneity of the studies.

https://doi.org/10.1371/journal.pone.0121450.s005

(TIF)

S5 Figure. Forest plot of the diagnostic odds ratio on a log scale of 14 tests with less than three data points.

Diagnostic odds ratios for each data point are shown as solid squares. Solid lines represent the 95% CIs. Squares are proportional to the weights based on the random effect model. Pooled estimates and 95% CIs are denoted by the diamond at the bottom. I^2 and P values represent the heterogeneity of the studies.

https://doi.org/10.1371/journal.pone.0121450.s006

(TIF)

S1 File. Methodology document.

Systematic review methodology & definitions of relevant accuracy estimates.

https://doi.org/10.1371/journal.pone.0121450.s007

(DOCX)

S1 Table. 2×2 data table of included 30 studies evaluating 30 test brands and 73 data points.

https://doi.org/10.1371/journal.pone.0121450.s009

(DOCX)

S2 Table. List of 20 full-text excluded articles, with the reasons for exclusion.

https://doi.org/10.1371/journal.pone.0121450.s010

(DOCX)

S3 Table. Performance of Index tests in relation to HCV genotype diversity.

https://doi.org/10.1371/journal.pone.0121450.s011

(DOCX)

S4 Table. Performance of Index tests for anti-HCV in relation to cross-reactive HIV sera, oral pathologies and conditions, biological substances and storage and testing conditions.

https://doi.org/10.1371/journal.pone.0121450.s012

(DOCX)

S5 Table. Performance of Index test in seroconversion panels.

https://doi.org/10.1371/journal.pone.0121450.s013

(DOCX)

Acknowledgments

The authors acknowledge Dr. Khuroo’s Medical Trust.

Author Contributions

Analyzed the data: Mohammad Sultan Khuroo, Mehnaaz Sultan Khuroo. Wrote the paper: Mohammad Sultan Khuroo, Mehnaaz Sultan Khuroo. Conception and design: Mehnaaz Sultan Khuroo. Data abstraction: Mehnaaz Sultan Khuroo, Naira Sultan Khuroo. Accepted final draft: Mehnaaz Sultan Khuroo, Naira Sultan Khuroo, Mohammad Sultan Khuroo.

References

  1. Hanafiah KM, Groeger J, Flaxman AD, Wiersma ST. Global epidemiology of hepatitis C virus infection: New estimates of age specific antibody to HCV sero-prevalence. Hepatology. 2013;57: 1330–1342.
  2. Lee M-H, Yang H-I, Yuan Y, L'Italien G, Chen C-J. Epidemiology and natural history of hepatitis C virus infection. World J Gastroenterol. 2014;20(28): 9270–9280. pmid:25071320
  3. Centers for Disease Control and Prevention. Recommendations for prevention and control of hepatitis C virus (HCV) infection and HCV-related chronic disease. MMWR. 1998;47(RR-19): 1–33. pmid:9790221
  4. Centers for Disease Control and Prevention. Vital signs: evaluation of hepatitis C virus infection testing and reporting—eight U.S. sites, 2005–2011. MMWR. 2013;62(RR-18): 357–361.
  5. Ward JW. Testing for HCV: the first step in preventing disease transmission and improving health outcomes for HCV-infected individuals. Antivir Ther. 2012;17: 1397–1401. pmid:23321543
  6. Centers for Disease Control and Prevention. Guidelines for laboratory testing and result reporting of antibody to hepatitis C virus. MMWR. 2003;52(RR-3).
  7. Centers for Disease Control and Prevention. Testing for HCV infection: An update of guidance for clinicians and laboratorians. MMWR. 2013;62(RR-18): 362–365.
  8. John AS, Price CP. Economic Evidence and Point-of-Care Testing. Clin Biochem Rev. 2013;34: 61–74. pmid:24151342
  9. Smith BD, Jewett A, Drobeniuc J, Kamili S. Rapid diagnostic HCV antibody assays. Antivir Ther. 2012;17: 1409–1413. pmid:23322678
  10. Ward R, Willcox M, Price CP, Abangma G, Heneghan C, Thompson M, et al. Diagnostic Technology: Point-of-care testing for Hepatitis C virus. Horizon Scan Report 0018. Dated September 27th 2011.
  11. Shivkumar S, Peeling R, Jafari Y, Joseph L, Pal NP. Accuracy of Rapid and Point-of-Care Screening Tests for Hepatitis C. A Systematic Review and Meta-analysis. Ann Intern Med. 2012;157: 558–566. pmid:23070489
  12. Scheiblauer H, El-Nageh M, Nick S, Fields H, Prince A, Diaz S. Evaluation of the performance of 44 assays used in countries with limited resources for the detection of antibodies to hepatitis C virus. Transfusion. 2006;46: 708–718. pmid:16686838
  13. Shinkins B, Thompson M, Mallett S, Perera R. Diagnostic accuracy studies: how to report and analyze inconclusive test results. BMJ. 2013;346: f2778. pmid:23682043
  14. Higgins JP, Thompson SG. Quantifying heterogeneity in a meta-analysis. Stat Med. 2002;21: 1539–1558. pmid:12111919
  15. Moher D, Liberati A, Tetzlaff J, Altman DG, the PRISMA Group. Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement. PLoS Med. 2009;6(7): e1000097. pmid:19621072
  16. Smalheiser NR, Lin C, Jia L, Jiang Y, Cohen AM, Yu C, et al. Design and implementation of Metta, a metasearch engine for biomedical literature retrieval intended for systematic reviewers. Health Information Science and Systems. 2014;2: 1.
  17. Whiting PF, Rutjes AW, Westwood ME, Mallett S, Deeks JJ, Reitsma JB, et al; QUADAS-2 Group. QUADAS-2: a revised tool for the quality assessment of diagnostic accuracy studies. Ann Intern Med. 2011;155: 529–536. pmid:22007046
  18. Bossuyt PM, Reitsma JB, Bruns DE, Gatsonis CA, Glasziou PP, Irwig LM, et al. Standards for Reporting of Diagnostic Accuracy. Towards complete and accurate reporting of studies of diagnostic accuracy: The STARD Initiative. Ann Intern Med. 2003;138: 40–44. pmid:12513043
  19. Vamvakas EC. Applications of Meta-Analysis in Pathology Practice. Am J Clin Pathol. 2001;116(Suppl 1): S47–S64.
  20. Reitsma JB, Glas AS, Rutjes AW, Scholten RJ, Bossuyt PM, Zwinderman AH. Bivariate analysis of sensitivity and specificity produces informative summary measures in diagnostic reviews. J Clin Epidemiol. 2005;58: 982–990. pmid:16168343
  21. Glas AS, Lijmer JG, Prins MH, Bonsel GJ, Bossuyt PM. The diagnostic odds ratio: a single indicator of test performance. J Clin Epidemiol. 2003;56: 1129–1135. pmid:14615004
  22. Wallace BC, Schmid CH, Lau J, Trikalinos TA. Meta-Analyst: Software for meta-analysis of binary, continuous and diagnostic data. BMC Med Res Methodol. 2009;9: 80. pmid:19961608
  23. Poovorawan Y, Theamboonlers A, Chumdermpadetsuk S, Thong CP. Comparative results in detection of HCV antibodies by using a rapid HCV test, ELISA and immunoblot. Southeast Asian J Trop Med Public Health. 1994;25: 647–649. pmid:7545315
  24. Mvere D, Constantine NT, Katsawde E, Tobaiwa O, Dambire S, Corcoran P. Rapid and simple hepatitis assays: encouraging results from a blood donor population in Zimbabwe. Bull World Health Organ. 1996;74: 19–24. pmid:8653812
  25. Montebugnoli L, Borea G, Miniero R, Sprovieri G. A rapid test for the visual detection of anti-hepatitis C virus antibodies in whole blood. Clin Chim Acta. 1999;288: 91–96. pmid:10529461
  26. Kaur H, Dhanao J, Oberoi A. Evaluation of rapid kits for detection of HIV, HBsAg and HCV infections. Indian J Med Sci. 2000;54: 432–434. pmid:11262859
  27. Buti M, Cotrina M, Chan H, Jardi R, Rodriguez F, Costa X, et al. Rapid method for the detection of anti-HCV antibodies in patients with chronic hepatitis C. Rev Esp Enferm Dig. 2000;92(3): 140–146. pmid:10799944
  28. Yuen MF, Hui CK, Yuen JC, Young JL, Lai CL. The accuracy of SM-HCV rapid test for the detection of antibody to hepatitis C virus. Am J Gastroenterol. 2001;96: 838–841. pmid:11280561
  29. Department of Blood Safety and Clinical Technology. Hepatitis C Assays: Operational Characteristics (Phase I) Report 1. Geneva: World Health Organization; 2001.
  30. Department of Blood Safety and Clinical Technology. Hepatitis C Assays: Operational Characteristics (Phase I) Report 2. Geneva: World Health Organization; 2001.
  31. Department of Blood Safety and Clinical Technology. WHO Evaluation (Phase 1) of SDHCV Bioline [draft report]. Geneva: World Health Organization; 2002.
  32. Hui AY, Chan FK, Chan PK, Tam JS, Sung JJ. Evaluation of a new rapid whole-blood serological test for hepatitis C virus [Letter]. Acta Virol. 2002;46: 47–48. pmid:12197634
  33. Daniel HDJ, Abraham P, Raghuraman S, Vivekanandan P, Subramaniam T, Sridharan G. Evaluation of a rapid assay as an alternative to conventional enzyme immunoassays for detection of hepatitis C virus-specific antibodies. J Clin Microbiol. 2005;43: 1977–1978. pmid:15815036
  34. Njouom R, Tejiokem MC, Zanga MC, Pouillot R, Ayouba A, Pasquier C, et al. A cost-effective algorithm for the diagnosis of hepatitis C virus infection and prediction of HCV viremia in Cameroon. J Virol Methods. 2006;133: 223–226. pmid:16360220
  35. Torane VP, Shastri JS. Comparison of ELISA and rapid screening tests for the diagnosis of HIV, hepatitis B and hepatitis C among healthy blood donors in a tertiary care hospital in Mumbai [Letter]. Indian J Med Microbiol. 2008;26: 284–285. pmid:18695340
  36. Nyirenda M, Beadsworth MB, Stephany P, Hart CA, Hart IJ, Munthali C, et al. Prevalence of infection with hepatitis B and C virus and coinfection with HIV in medical inpatients in Malawi. J Infect. 2008;57: 72–77. pmid:18555534
  37. Ivantes CA, Silva D, Messias-Reason I. High prevalence of hepatitis C associated with familial history of hepatitis in a small town of south Brazil: efficiency of the rapid test for epidemiological survey. Braz J Infect Dis. 2010;14: 483–488. pmid:21221477
  38. Lee SR, Yearwood GD, Guillon GB, Kurtz LA, Fischl M, Friel T, et al. Evaluation of a rapid, point-of-care test device for the diagnosis of hepatitis C infection. J Clin Virol. 2010;48: 15–17. pmid:20362493
  39. Lee SR, Kardos KW, Schiff E, Berne CA, Mounzer K, Banks AT, et al. Evaluation of a new, rapid test for detecting HCV infection, suitable for use with blood or oral fluid. J Virol Methods. 2011;172: 27–31. pmid:21182871
  40. Smith BD, Teshale E, Jewett A, Weinbaum CM, Neaigus A, Hagan H, et al. Performance of premarket rapid hepatitis C virus antibody assays in 4 national human immunodeficiency virus behavioral surveillance system sites. Clin Infect Dis. 2011;53: 780–786. pmid:21921221
  41. Smith BD, Jewett A, Drobeniuc J, Kamili S. Rapid diagnostic HCV antibody assays. Antivir Ther. 2012;17: 1409–1413. pmid:23322678
  42. Drobnik A, Judd C, Banach D, Egger J, Konty K, Rude E. Public health implications of rapid hepatitis C screening with an oral swab for community-based organizations serving high-risk populations. Am J Public Health. 2011;101: 2151–2155. pmid:21940910
  43. Cha YJ, Park Q, Kang ES, Yoo BC, Park KU, Kim JW, et al. Performance Evaluation of the OraQuick Hepatitis C Virus Rapid Antibody Test. Ann Lab Med. 2013;33: 184–189. pmid:23667844
  44. Maity S, Nandi S, Biswas S, Sadhukhan SK, Saha MK. Performance and diagnostic usefulness of commercially available enzyme linked immunosorbent assay and rapid kits for detection of HIV, HBV and HCV in India. Virol J. 2012;9: 290. pmid:23181517
  45. Jewett A, Smith BD, Garfein RS, Cuevas-Mota J, Teshale EH, Weinbaum CM. Field-based performance of three pre-market rapid hepatitis C virus antibody assays in STAHR (Study to Assess Hepatitis C Risk) among young adults who inject drugs in San Diego, CA. J Clin Virol. 2012;54(3): 213–217. pmid:22560051
  46. Kant J, Möller B, Heyne R, Herber A, Bohm S, Maier M, et al. Evaluation of a rapid on-site anti-HCV test as a screening tool for hepatitis C virus infection. Eur J Gastroenterol Hepatol. 2013;25(4): 416–420. pmid:23211286
  47. Kim MH, Kang SY, Lee WI. Evaluation of a new rapid test kit to detect hepatitis C virus infection. J Virol Methods. 2013;193(2): 379–382. pmid:23871756
  48. Al-Tahish G, El-Barrawy MA, Hashish MH, Heddaya Z. Effectiveness of three types of rapid tests for the detection of hepatitis C virus antibodies among blood donors in Alexandria, Egypt. J Virol Methods. 2013;189: 370–374. pmid:23541785
  49. da Rosa L, Dantas-Corrêa EB, Narciso-Schiavon JL, Schiavon Lde L. Diagnostic Performance of Two Point-of-Care Tests for Anti-HCV Detection. Hepat Mon. 2013;13(9): e12274. pmid:24282422
  50. O'Connell RJ, Gates RG, Bautista CT, Imbach M, Eggleston JC, Bardsley SG, et al. Laboratory evaluation of rapid test kits to detect hepatitis C antibody for use in predonation screening in emergency settings. Transfusion. 2013;53(3): 505–517. pmid:22823283
  51. Tagny CT, Mbanya D, Murphy EL, Lefrère JJ, Laperche S. Screening for hepatitis C virus infection in a high prevalence country by an antigen/antibody combination assay versus a rapid test. J Virol Methods. 2014;199: 119–123. pmid:24487098
  52. Reitsma JB, Glas AS, Rutjes AW, Scholten RJ, Bossuyt PM, Zwinderman AH. Bivariate analysis of sensitivity and specificity produces informative summary measures in diagnostic reviews. J Clin Epidemiol. 2005;58: 982–990. pmid:16168343
  53. Kamili S, Drobeniuc J, Araujo AC, Hayden TM. Laboratory Diagnostics for Hepatitis C Virus Infection. Clin Infect Dis. 2012;55(S1): S43–48.
  54. Leeflang MM, Bossuyt PM, Irwig L. Diagnostic test accuracy may vary with prevalence: implications for evidence-based diagnosis. J Clin Epidemiol. 2009;62(1): 5–12. pmid:18778913
  55. Khuroo MS, Khuroo NS, Khuroo MS. Accuracy of Rapid Point-of-Care Diagnostic Tests for Hepatitis B Surface Antigen-A Systematic Review and Meta-Analysis. Journal of Clinical and Experimental Hepatology. 2014;4(3): 226–240. http://dx.doi.org/10.1016/j.jceh.2014.07.08 pmid:25755565
  56. Soriano V, Vispo E, Labarga P, Medrano J, Barreiro P. Viral hepatitis and HIV co-infection. Antiviral Res. 2010;85(1): 303–315. pmid:19887087
  57. Nantachit N, Thaikruea L, Thongsawat S, Leetrakool N, Fongsatikul L, Sompan P, et al. Evaluation of a multiplex human immunodeficiency virus-1, hepatitis C virus, and hepatitis B virus nucleic acid testing assay to detect viremic blood donors in northern Thailand. Transfusion. 2007;47(10): 1803–1808. pmid:17880604
  58. Klein R, Struble K (Food and Drug Administration). FDA Approves Rapid Test for Antibodies to Hepatitis C Virus. Press release. June 25, 2010. www.fda.gov/NewsEvents/Newsroom/.../ucm217318.htm
  59. Yeh CK, Christodoulides NJ, Floriano PN, Miller CS, Ebersole JL, Weigum SE, et al. Current Development of Saliva/Oral fluid-based Diagnostics. Tex Dent J. 2010;127(7): 651–661. pmid:20737986
  60. Nelson DR. Hepatitis C drug development at a crossroads [Editorial]. Hepatology. 2009;50: 997–999. pmid:19787813