
Automated Detection of Healthcare Associated Infections: External Validation and Updating of a Model for Surveillance of Drain-Related Meningitis

  • Maaike S. M. van Mourik ,

    M.s.m.vanmourik-2@umcutrecht.nl

    Affiliation Department of Medical Microbiology, University Medical Center Utrecht, Utrecht, The Netherlands

  • Karel G. M. Moons,

    Affiliation Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht, The Netherlands

  • Wouter W. van Solinge,

    Affiliation Department of Clinical Chemistry and Hematology, University Medical Center Utrecht, Utrecht, The Netherlands

  • Jan-Willem Berkelbach-van der Sprenkel,

    Affiliation Department of Neurosurgery, Rudolf Magnus Institute of Neuroscience, University Medical Center Utrecht, Utrecht, The Netherlands

  • Luca Regli,

    Affiliation Department of Neurosurgery, Rudolf Magnus Institute of Neuroscience, University Medical Center Utrecht, Utrecht, The Netherlands

  • Annet Troelstra,

    Affiliation Department of Medical Microbiology, University Medical Center Utrecht, Utrecht, The Netherlands

  • Marc J. M. Bonten

    Affiliations Department of Medical Microbiology, University Medical Center Utrecht, Utrecht, The Netherlands, Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht, The Netherlands

Abstract

Objective

Automated surveillance of healthcare-associated infections can improve efficiency and reliability of surveillance. The aim was to validate and update a previously developed multivariable prediction model for the detection of drain-related meningitis (DRM).

Design

Retrospective cohort study using traditional surveillance by infection control professionals as reference standard.

Patients

Patients receiving an external cerebrospinal fluid drain, either ventricular (EVD) or lumbar (ELD), in a tertiary care medical center. Children and patients with simultaneous drains, less than one day of follow-up or pre-existing meningitis were excluded, leaving 105 patients in the validation set (2010–2011) and 653 in the updating set (2004–2011).

Methods

For validation, the original model was applied and discrimination, classification and calibration were assessed. For updating, data from all available years were used to re-estimate the coefficients and to determine whether extension with new predictors was necessary. The updated model was validated and adjusted for optimism (overfitting) using bootstrapping techniques.

Results

In model validation, the rate of DRM was 17.4/1000 days at risk. All cases were detected by the model. The area under the ROC curve was 0.951. The positive predictive value was 58.8% (95% CI 40.7–75.4) and calibration was good. The revised model also includes Gram stain results. The area under the ROC curve after correction for optimism was 0.963 (95% CI 0.953–0.974). Group-level prediction was adequate.

Conclusions

The previously developed multivariable prediction model maintains its discriminatory power and calibration in an independent patient population. The updated model incorporates all available data and performs well, even after elaborate adjustment for optimism.

Introduction

Surveillance and feedback of healthcare-associated infection (HAI) rates to healthcare workers is considered a cornerstone of infection prevention programs [1], [2]. Policy makers and the public increasingly demand transparent reporting of infection rates to quantify quality of healthcare, for example through surveillance networks such as the National Healthcare Safety Network (NHSN) in the United States or the PREZIES network in the Netherlands [3]–[6]. Because of the potential impact of HAI rates on healthcare utilization and reimbursement, the development of efficient and reliable surveillance methods is of increasing importance. In many circumstances, manual chart review of all patients is still the only available method for surveillance, although it is prone to error due to effort-dependent case-finding and the possibility of inconsistent interpretation of case definitions [7], [8]. Possibilities for automated surveillance of HAI using a variety of data sources have been investigated over the past two decades with varying success [9].

An HAI for which routine surveillance is implemented in our institution is drain-related meningitis (DRM), a relatively frequent complication of the use of external ventricular (EVD) and lumbar (ELD) cerebrospinal fluid drains in neurosurgical patients. DRM rates range from 2 to 25% per drain placed [10]–[12], or 7.5 to 32 infections per 1000 days at risk (DAR) [13]–[15]. Causative micro-organisms are often skin flora, such as coagulase-negative staphylococci and Staphylococcus aureus, although in some settings Gram-negative micro-organisms (e.g. Enterobacteriaceae) play an important role [11], [13]. Infection rates also depend on the definition applied. Since surveillance aims to generate insight into the rates and characteristics of DRM, definitions are not necessarily identical to a clinical diagnosis entailing treatment consequences. Importantly, some case definitions, including the CDC-definition for healthcare-associated meningitis, allow for diagnosis of an infection without the presence of bacterial growth in clinical cultures [16], [17].

Recently, an accurate prediction model for the automated surveillance of DRM has been proposed which combines predictors from multiple sources to identify those patients who have a high probability of having developed DRM during their admission, including both cases of DRM with and without documented pathogens in microbiological cultures (Figure 1) [14]. Such a model can provide more timely and reliable rates of DRM, and manual chart review can then be limited to high-risk patients (with a high predicted probability of DRM) while maintaining sensitivity of detection. Importantly, the predictors are all collected during routine clinical care, which facilitates application of the model in practice [18].

Figure 1. Previously derived prediction rule for drain-related meningitis.

For each individual patient, the model returns a predicted probability of DRM which can be used to classify patients. Abbreviations: P(DRM) – probability of drain-related meningitis, LP – linear predictor, EVD – external ventricular drain, CRP – C-reactive protein, CSF – cerebrospinal fluid.

https://doi.org/10.1371/journal.pone.0051509.g001
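To illustrate how a logistic prediction rule of this form is applied, the sketch below (in R, the language used for the analyses) computes P(DRM) from a linear predictor and flags patients above a review threshold. The coefficients, predictor names and threshold are illustrative placeholders only; the actual values are those given in Figure 1.

```r
# Minimal sketch of applying a rule of the form P(DRM) = 1 / (1 + exp(-LP)).
# All coefficients, predictor names and the review threshold are illustrative
# placeholders, not the published values from Figure 1.
predict_drm <- function(evd, crp_max, csf_culture_pos, antibiotic_days) {
  lp <- -5.0 +                     # intercept (placeholder)
    1.0  * evd +                   # external ventricular (vs. lumbar) drain
    0.01 * crp_max +               # highest CRP during the surveillance episode
    2.5  * csf_culture_pos +       # positive CSF culture
    0.3  * antibiotic_days         # days of antimicrobial therapy
  1 / (1 + exp(-lp))               # logistic transformation to a probability
}

# Flag a patient for manual chart review when the predicted probability exceeds a cut-off
p <- predict_drm(evd = 1, crp_max = 120, csf_culture_pos = 0, antibiotic_days = 4)
needs_review <- p > 0.10
```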

Prediction models require validation in independent patient populations to assess their validity and performance in future use [19], [20]. This research presents the temporal validation of the DRM prediction model. Besides validation, optimal model performance in future patients can be achieved by updating the model using both derivation and validation data [21], [22]. Several newly available predictors were also considered in model updating.

Methods

Ethics statement

As described previously, use of anonymized data from the clinical data warehouse has been exempted from review by the institutional review board of our institution [23].

Development study details

For details on model development, please refer to [14]. In brief, logistic regression was used to develop a prediction model aimed at identifying patients who developed DRM after placement of an EVD or ELD. The study was conducted at the University Medical Center Utrecht, a 1042-bed tertiary medical center. Patients who entered the routine surveillance performed by the department of hospital hygiene and infection control between January 1st 2004 and December 31st 2009 were included, with the exception of children, patients with less than one day of follow-up, patients with known meningitis at the time of placement of the first drain, patients admitted with a drain in situ or with multiple simultaneous drains, and multiple (independent) admissions within the study period (n = 537 in the analysis). All EVDs were placed in operating theatres and were tunneled five centimeters under the skin. Drains were not coated with antibiotics, and all patients received perioperative antibiotic prophylaxis. ELDs were inserted either in the operating theatres or under sterile conditions on the neurology ward. Drains were not exchanged on a prophylactic basis, and CSF samples were collected for culture and biochemical analysis only when infection was clinically suspected. Clinical care data were obtained from the Utrecht Patient Oriented Database (UPOD), a clinical data warehouse developed for research purposes that links patient characteristics to results from the clinical chemistry and medical microbiology laboratories and to pharmacy records [23]. Missing data were imputed using multiple imputation, and internal validation was performed [24], [25].

Outcome

As in model development, the outcome or reference standard was the development of DRM, defined as the occurrence of meningitis while the drain is in situ or within seven days of drain removal. Meningitis is defined according to the CDC-definition for healthcare-associated meningitis as applied by the department of hospital hygiene and infection control during routine manual surveillance. Presence of healthcare-associated meningitis requires either a positive culture or a combination of clinical signs, cerebrospinal fluid (CSF) analysis indicative of meningitis and initiation of empiric antimicrobial therapy by the physician. Importantly, this definition allows for classification as meningitis without bacterial growth from microbiological cultures and requires that cultures with skin flora are evaluated for possible contamination (Figure 2) [14]–[16], [26]. All charts were manually reviewed, and possible cases of infection were reviewed by at least two infection control professionals. In case of disagreement, consensus was reached through discussion.

Figure 2. Modified CDC-definition for healthcare-associated meningitis (reference standard).

https://doi.org/10.1371/journal.pone.0051509.g002

Model validation and patient population

The previously developed model for the prediction of DRM was validated on an independent cohort of consecutive patients who received an EVD or ELD, selected from the same center but from a later time period (January 2010 to June 11th 2011), a so-called temporal validation [19], [20]. In this period, surveillance for ELDs was limited to drains placed in operating theatres in 2010 and discontinued in 2011. Data on device utilization are currently collected manually using electronic operating theatre and ICU records. Children (n = 13), patients with meningitis at drain placement (n = 11), patients who died within 24 hours (n = 5) or who received multiple simultaneous drains (n = 2) and those who were admitted with a drain already in situ (n = 1) were excluded from the analysis, leaving 105 patients in the validation set. Approximately three-quarters of the EVD patients (75 of 99) received a drain for hydrocephalus after intracranial hemorrhage; nine percent received an EVD to treat increased intracranial pressure caused by a tumor. Five of the six ELDs were placed as a peroperative preventive measure. For each patient in the validation set, the reference standard was determined and predictor data were collected.

Predictors

Predictors were defined, collected and interpreted as in model development [14]. Predictors were selected for their ability to predict the development of DRM, irrespective of a causal association. Patient characteristics, administrative data (e.g. length of stay, ICU admissions) and the clinical parameters used in the prediction model (Figure 1) were extracted from the clinical data warehouse. Besides the predictors obtained during model development, Gram stain results, the location to which the patient was discharged (i.e. deceased, home or other care facility) and urgency of admission as recorded in administrative files were also available. For each patient, all results obtained throughout the surveillance episode (duration of drainage plus seven days or up to discharge) were retrieved. For each predictor, the value most indicative of infection was used as parameter value; for example, for the peripheral blood leukocyte count the highest value measured during the surveillance episode was entered in the prediction rule.
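As a small illustration of this aggregation step, the sketch below takes, for each patient and laboratory test, the maximum value measured during the surveillance episode. The data frame and column names are hypothetical and not those of the UPOD warehouse.

```r
# Toy long-format laboratory data; patient_id, test and value are hypothetical column names.
labs <- data.frame(
  patient_id = c(1, 1, 1, 2, 2),
  test       = c("leukocytes", "leukocytes", "crp", "leukocytes", "crp"),
  value      = c(9.8, 14.2, 120, 7.1, 35)
)

# For each patient and test, keep the value most indicative of infection
# (here the maximum observed during the surveillance episode).
worst <- aggregate(value ~ patient_id + test, data = labs, FUN = max)
```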

Statistical analyses

Missing data were imputed using multiple imputation (10 imputed data sets) to prevent the bias that would have occurred if the analysis had been limited to complete cases only [24]. Table S1 gives a comparison of cases with and without missing data. For model validation, imputation was performed on the validation set only. For the model update (see below), a new imputation was run on the development and validation sets combined. The original model depicted in Figure 1 was validated, and discrimination, classification and prediction at the group level (calibration-in-the-large) were assessed.
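A minimal sketch of such an imputation workflow is given below, assuming the mice package in R; the paper does not state which imputation routine was used, and the toy data and variable names are hypothetical.

```r
# Toy data with some missing values; variable names are illustrative, not the UPOD fields.
set.seed(5)
n   <- 300
dat <- data.frame(evd = rbinom(n, 1, 0.8), crp = rexp(n, 1/80), csf_culture = rbinom(n, 1, 0.1))
dat$drm <- rbinom(n, 1, plogis(-4 + 2 * dat$csf_culture + 0.01 * dat$crp))
dat$crp[sample(n, 30)] <- NA           # introduce missingness in CRP

library(mice)
imp    <- mice(dat, m = 10, seed = 1, printFlag = FALSE)              # 10 imputed data sets
fits   <- with(imp, glm(drm ~ evd + crp + csf_culture, family = binomial))
pooled <- pool(fits)                                                  # Rubin's rules
summary(pooled)
```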

Model updating

Since datasets from both model development and validation were available, we investigated whether the original model could be improved or updated using both datasets combined, and hence make maximal use of all data [21], [27]. In contrast to the model derivation, patients with multiple simultaneous drains were no longer excluded, since they are not expected to differ with respect to the diagnosis of DRM. Furthermore, during the updating process one misclassification error in the model development data was resolved and all data were adapted accordingly, slightly improving the performance characteristics obtained during model development. All predictors from the original model were included in the revised model to re-estimate their coefficients. In addition, the new predictors (Gram stain, urgency of admission and discharge destination) were added to the model if they significantly improved it (likelihood ratio test, p-value of 0.05). Gram stain results were combined with the CSF culture result, and CRP was included as a fractional polynomial to accommodate the non-linear association between CRP and the risk of DRM [28]. Estimates were derived from the 10 imputation sets and pooled using Rubin's rules, a method that takes into account variation within and between multiply imputed data sets [25].
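The sketch below illustrates this extension step under stated assumptions: the original predictors are forced into a logistic model on simulated toy data, and a candidate predictor (Gram stain here) is retained only if a likelihood ratio test is significant at p < 0.05. Variable names and data are illustrative, and the fractional-polynomial step is only indicated in a comment.

```r
# Toy data so the sketch runs stand-alone; all variable names are illustrative.
set.seed(1)
n   <- 500
dat <- data.frame(evd = rbinom(n, 1, 0.8), crp = rexp(n, 1/80),
                  csf_culture = rbinom(n, 1, 0.1), antibiotics = rbinom(n, 1, 0.4),
                  gram_stain  = rbinom(n, 1, 0.08))
dat$drm <- rbinom(n, 1, plogis(-4 + 2 * dat$csf_culture + 0.01 * dat$crp + 2 * dat$gram_stain))

# Original predictors are forced in; a candidate predictor is kept only if the
# likelihood ratio test is significant at p < 0.05.
base_fit <- glm(drm ~ evd + crp + csf_culture + antibiotics, family = binomial, data = dat)
ext_fit  <- glm(drm ~ evd + crp + csf_culture + antibiotics + gram_stain,
                family = binomial, data = dat)
lr <- anova(base_fit, ext_fit, test = "Chisq")
keep_gram_stain <- lr$`Pr(>Chi)`[2] < 0.05

# A fractional polynomial for CRP could be explored with, for example, the 'mfp' package.
```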

Then, internal validation was performed by bootstrapping (100 samples per imputation set, including predictor selection using all predictors considered in model development and update), and a uniform shrinkage factor was applied to prevent over-optimism and to make the model generalizable to future patient populations [29]. The final model is presented along with its optimism-corrected performance characteristics. Analyses were performed with SPSS® version 19 (SPSS Inc, Chicago IL) and R version 2.14.1 (www.r-project.org).
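A simplified sketch of this optimism correction is shown below: each bootstrap sample refits the model, and the calibration slope of its linear predictor in the original data is averaged into a uniform shrinkage factor. It glosses over details such as resampling within each imputation set and repeating the predictor selection, and the toy data and variable names are again illustrative.

```r
# Toy data of the same form as in the previous sketch (illustrative only).
set.seed(2)
n   <- 500
dat <- data.frame(evd = rbinom(n, 1, 0.8), crp = rexp(n, 1/80),
                  csf_culture = rbinom(n, 1, 0.1), gram_stain = rbinom(n, 1, 0.08))
dat$drm <- rbinom(n, 1, plogis(-4 + 2 * dat$csf_culture + 0.01 * dat$crp))

# Simplified bootstrap estimate of a uniform shrinkage factor (calibration slope):
# refit the model in each bootstrap sample and evaluate its linear predictor in the original data.
slopes <- replicate(100, {
  idx  <- sample(n, replace = TRUE)
  boot <- glm(drm ~ evd + crp + csf_culture + gram_stain, family = binomial, data = dat[idx, ])
  lp   <- predict(boot, newdata = dat, type = "link")
  coef(glm(dat$drm ~ lp, family = binomial))["lp"]
})
uniform_shrinkage <- mean(slopes)   # the model coefficients are multiplied by this factor
```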

Results

Model validation

Model validation was performed on 105 patients who received 134 drains. Nearly all patients in the surveillance received an EVD (94.3%), due to the discontinuation of ELD surveillance. The infection rate in the validation period was 17.3 per 1000 drainage days at risk (DAR). All infections occurred in patients receiving an EVD. In fifty percent of infections, no positive culture was obtained. Median age in the validation set was 59.3 years (model development 58.5 years), 65.7% of patients were female (model development 54.0%) and 71.5% received a drain to treat hydrocephalus after subarachnoid bleeding, (intraventricular) hemorrhage or infarction (model development 49.0%). In-hospital mortality, after exclusion of patients who died within 24 hours of drain placement, was 21.9%. The area under the ROC curve, a measure of discrimination, was 0.951 (95% confidence interval (CI) 0.914 to 0.988); during model development an area under the ROC curve of 0.976 (95% CI 0.965–0.987, without correction for optimism) was observed [14]. Calibration-in-the-large, which compares the predicted with the observed total number of infections in a specified time period, yielded 13.46 predicted infections in 2010 (13 observed) and 6.06 predicted infections between January 1st and June 10th 2011 (7 observed). Table 1 gives the contingency table obtained after application of the original prediction model and threshold.

Table 1. Contingency table with results of model validation with 95% confidence intervals for sensitivity, specificity and predictive values.

https://doi.org/10.1371/journal.pone.0051509.t001

Model update

The model was updated to incorporate newly available data and to optimize performance in new patients. The total 2004–2011 dataset included 653 patients who received 863 drains. The observed infection rate was 14.1/1000 DAR (16.7/1000 DAR for EVDs, 6.0/1000 DAR for ELDs). Baseline characteristics and the results of model re-estimation are presented in Table 2. Patients who developed DRM received multiple courses of antibiotics during their surveillance episode, most likely because they suffered from, or were suspected of, other concomitant infections. The higher mortality in the non-affected group is in part explained by the shorter duration of follow-up in deceased patients; hence they had less time to develop DRM.

Table 2. Model update results for 2004–2011 data, including baseline characteristics and results of univariable and multivariable analysis.

https://doi.org/10.1371/journal.pone.0051509.t002

In the multivariable analysis, using a fractional polynomial to model the association with CRP did not lead to the inclusion of higher-power terms, and only the linear term was retained, albeit with a reversed direction. This is most likely because patients with a very high CRP level suffered from an infection other than DRM. The area under the ROC curve of the updated model was 0.972 before correction for optimism and 0.963 (95% CI 0.953–0.974) after correction for over-optimism. Table 3 shows classification results with varying predicted probability cut-offs.

Table 3. Model classification results with different predicted probability cut-offs.

https://doi.org/10.1371/journal.pone.0051509.t003
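For illustration, the sketch below shows how a table of this kind can be generated from predicted probabilities and observed outcomes; the simulated data and the cut-off values are arbitrary examples, not those used in Table 3.

```r
# Toy predicted probabilities and outcomes; values are illustrative only.
set.seed(3)
p <- runif(200)^3                      # skewed predicted probabilities
y <- rbinom(200, 1, p)                 # simulated 0/1 DRM outcomes

# Classification results at several (arbitrary) cut-offs, analogous to Table 3.
classify_at <- function(p, y, cutoffs = c(0.025, 0.05, 0.10, 0.20)) {
  t(sapply(cutoffs, function(cut) {
    flagged <- p >= cut
    c(cutoff      = cut,
      sensitivity = sum(flagged & y == 1) / sum(y == 1),
      ppv         = sum(flagged & y == 1) / max(sum(flagged), 1),
      n_review    = sum(flagged))      # number of charts to review manually
  }))
}
classify_at(p, y)
```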

Finally, yearly infection rates can be estimated by summing predicted probabilities (calibration-in-the-large) for all patients in each year group (Figure 3).

Figure 3. Observed and predicted group-level infection rates using updated model, per 1000 days at risk with 95% confidence intervals.

Abbreviations: DRM – drain-related meningitis, DAR – days at risk, N pat – number of patients, N DAR – number of days at risk, N DRM – number of cases of drain-related meningitis.

https://doi.org/10.1371/journal.pone.0051509.g003
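A minimal sketch of this group-level calculation is given below: per calendar year, the expected number of infections is the sum of the predicted probabilities, and dividing by the days at risk gives a rate per 1000 DAR. The input vectors are simulated placeholders, not study data.

```r
# Toy inputs; all values illustrative.
set.seed(4)
year         <- sample(2004:2011, 200, replace = TRUE)
days_at_risk <- rpois(200, 10) + 1
p            <- runif(200)^3          # predicted probabilities of DRM
y            <- rbinom(200, 1, p)     # observed 0/1 outcomes

# Group-level (calibration-in-the-large) estimates per calendar year.
expected_n <- tapply(p, year, sum)                                   # predicted number of DRM cases
observed_n <- tapply(y, year, sum)                                   # observed number of DRM cases
rate_1000  <- 1000 * expected_n / tapply(days_at_risk, year, sum)    # predicted rate per 1000 DAR
```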

Discussion

The results of the present study demonstrate that the previously proposed model for the surveillance of DRM, in unaltered form, maintains its high discriminatory power and adequate group-level prediction in a new patient population from the same center. Patients included in the validation set were on average more seriously ill than those in the derivation set, probably due to the discontinuation of surveillance of patients receiving ELDs. However, this did not affect model performance. A model update was performed to include predictors that recently became available and to optimize the model. Performance of the updated model is similar to that of the original model. As shown in Table 3, a trade-off needs to be made between sensitivity and specificity when selecting a predicted probability cut-off; increasing the cut-off reduces the number of charts to review at the cost of sensitivity. Since this model is applied to patients retrospectively and does not affect clinical decision making, it is worthwhile to accept a sensitivity of 95.2% rather than 100.0%, which reduces the workload for manual review from 223 to 159 charts. As in model development, longitudinal surveillance at the group level can be performed using this model. Comparison of the original and revised model regression coefficients shows similar directions and slightly more conservative magnitudes, due to the more stringent shrinkage procedure used in the model update.

The observed rates of DRM in this study are at the upper end of the published range. The use of a broad definition that includes infections in which no micro-organisms were cultured from CSF (26% of the infections) may play a role [10]. Furthermore, as opposed to benchmarking data from Germany [30], both ICU and non-ICU patients are included in the surveillance and follow-up is extended beyond ICU discharge. The lower infection rate observed in patients who received an ELD (6.0 vs 16.7/1000 DAR for EVDs) may be explained by the less severe underlying disease in these patients. Schade et al. also found lower DRM rates in patients receiving an ELD as compared to an EVD [31]; as in our population (data not shown), these patients often received an ELD for the prevention or treatment of CSF leakage. In other studies, a higher DRM rate was found in patients receiving ELDs, which may be due to the inclusion of almost exclusively patients with underlying intracranial hemorrhage [15].

The updated model presented in this research is, to our knowledge, the only model developed specifically to survey meningitis complicating the use of external CSF drains that has undergone temporal validation. Compared with other automated surveillance systems for (procedure-specific) HAI, it is one of the few that combine data from multiple sources in a multivariable model that weights the individual predictors to generate a prediction. This contrasts with the commonly used binary classification algorithms, which rely on fewer data sources and often require positive cultures for case-finding [9], [32], [33]. In this model, positive cultures and antibiotic use are important predictors but not an absolute requirement for the detection of infection, making it possible to identify infections for which no positive culture was obtained or for which the patient was not treated with standard empiric therapy. The study presented here confirms that this multivariable approach is valid for the surveillance of HAI and may be applicable to other infections as well. Currently, use of the model requires extraction of predictor data from the electronic medical records and subsequent data processing prior to application of the prediction rule; ongoing developments in healthcare information technology are expected to facilitate the widespread implementation of such systems.

Since the number of external drains placed each year is limited, the validation could only be performed on a relatively small patient population. Therefore, performing multiple imputation on this dataset required very relaxed settings, which may have caused unstable results. However, the model was subsequently revised and extended using the total population, one of the largest DRM cohorts to date, to make optimal use of the available data and return the most reliable model possible. Although the model update considered several new variables that have become available in the data warehouse, not all potential risk factors and diagnostic markers of DRM could be included. For example, there are no (field-defined) data on whether the drains were placed during an emergency procedure, how often drains were manipulated or whether there was cerebrospinal fluid leakage at the insertion site [10], [11], [34]. Markers of meningitis under investigation, such as procalcitonin and interleukins [35]–[37], are not routinely determined and thus not included. Furthermore, since the model depends on clinical practices, such as culture frequency and antibiotic use policies, it may need to be adapted when implemented in new settings. However, the model will not be affected by differences in the occurrence of causal risk factors, assuming that clinical presentation and diagnostic workup remain unaffected. The effect on model performance of differences that may affect clinical presentation, such as the use of antibiotic-coated catheters, will need to be investigated further. When interpreting the results of this study, it must be realized that the model was developed for retrospective infection surveillance, not for real-time detection of infections. Several studies have attempted to identify parameters that can predict the onset of DRM, but with inconclusive results [37], [38]. The current model could be used for more timely feedback of infection rates and may return more consistent results than manual surveillance.

This model for the surveillance of drain-related meningitis has now been temporally validated in a single center and maintained its performance despite small changes in the case-mix of the validation set. Multi-center validation is currently ongoing to investigate transportability to other hospitals and validity in patients with a different case-mix; the effect of the use of antibiotic-coated catheters on model performance will also be assessed. Several challenges remain before implementation in routine surveillance can be achieved. Methods for handling missing data in future patients need to be tested, and with implementation in multiple centers, risk-adjustment methods will be necessary to allow valid comparison between centers. Another aspect that will require attention is the quantification of device utilization rates, so that infection rates have reliable numbers in both the numerator (this model) and the denominator.

Supporting Information

Table S1.

Comparison of patients with and without missing data. Complete cases had different underlying diseases and were more likely to have developed DRM than cases with missing data.

https://doi.org/10.1371/journal.pone.0051509.s001

(DOC)

Acknowledgments

The authors would like to thank H.E.M. Blok, H. den Breeijen and O. Cremer, for their contribution to data collection, as well as the infection control professionals from the department of hospital hygiene and infection control (M.C.E. van der Jagt-Zwetsloot, S.M. van Dijk).

Author Contributions

Conceived and designed the experiments: MvM KM MB. Analyzed the data: MvM KM. Wrote the paper: MvM KM AT WvS JWB LR MB. Coordinated the clinical data warehouse: WvS.

References

  1. Haley RW, Culver DH, White JW, Morgan WM, Emori TG, et al. (1985) The efficacy of infection surveillance and control programs in preventing nosocomial infections in US hospitals. Am J Epidemiol 121: 182–205.
  2. Gaynes R, Richards C, Edwards J, Emori TG, Horan T, et al. (2001) Feeding back surveillance data to prevent hospital-acquired infections. Emerg Infect Dis 7: 295–8.
  3. Rosenthal MB (2007) Nonpayment for performance? Medicare's new reimbursement rule. N Engl J Med 357: 1573–5.
  4. HAI Reporting Laws and Regulations. 2011 Jul 6 [cited 2012 Feb 13]. Available: http://www.apic.org/Resource_/TinyMceFileManager/Advocacy-PDFs/HAI_map.gif.
  5. van der Kooi TI, Mannien J, Wille JC, van Benthem BH (2010) Prevalence of nosocomial infections in The Netherlands, 2007–2008: results of the first four national studies. J Hosp Infect 75: 168–72.
  6. Tokars JI, Richards C, Andrus M, Klevens M, Curtis A, et al. (2004) The changing face of surveillance for health care-associated infections. Clin Infect Dis 39: 1347–52.
  7. Gastmeier P, Kampf G, Hauer T, Schlingmann J, Schumacher M, et al. (1998) Experience with two validation methods in a prevalence survey on nosocomial infections. Infect Control Hosp Epidemiol 19: 668–73.
  8. Lin MY, Hota B, Khan YM, Woeltje KF, Borlawsky TB, et al. (2010) Quality of traditional surveillance for public reporting of nosocomial bloodstream infection rates. JAMA 304: 2035–41.
  9. Klompas M, Yokoe DS (2009) Automated surveillance of health care-associated infections. Clin Infect Dis 48: 1268–75.
  10. Lozier AP, Sciacca RR, Romagnoli MF, Connolly ES Jr (2002) Ventriculostomy-related infections: a critical review of the literature. Neurosurgery 51: 170–81.
  11. Chi H, Chang KY, Chang HC, Chiu NC, Huang FY (2010) Infections associated with indwelling ventriculostomy catheters in a teaching hospital. Int J Infect Dis 14: e216–e219.
  12. Hoefnagel D, Dammers R, Ter Laak-Poort MP, Avezaat CJ (2008) Risk factors for infections related to external ventricular drainage. Acta Neurochir (Wien) 150: 209–14.
  13. Arabi Y, Memish ZA, Balkhy HH, Francis C, Ferayan A, et al. (2005) Ventriculostomy-associated infections: incidence and risk factors. Am J Infect Control 33: 137–43.
  14. van Mourik MS, Groenwold RH, Berkelbach van der Sprenkel JW, van Solinge WW, Troelstra A, et al. (2011) Automated detection of external ventricular and lumbar drain-related meningitis using laboratory and microbiology results and medication data. PLoS One 6: e22846.
  15. Scheithauer S, Burgel U, Ryang YM, Haase G, Schiefer J, et al. (2009) Prospective surveillance of drain associated meningitis/ventriculitis in a neurosurgery and neurological intensive care unit. J Neurol Neurosurg Psychiatry 80: 1381–5.
  16. Horan TC, Andrus M, Dudeck MA (2008) CDC/NHSN surveillance definition of health care-associated infection and criteria for specific types of infections in the acute care setting. Am J Infect Control 36: 309–32.
  17. Holloway KL, Barnes T, Choi S, Bullock R, Marshall LF, et al. (1996) Ventriculostomy infections: the effect of monitoring duration and catheter exchange in 584 patients. J Neurosurg 85: 419–24.
  18. Moons KG, Royston P, Vergouwe Y, Grobbee DE, Altman DG (2009) Prognosis and prognostic research: what, why, and how? BMJ 338: b375.
  19. Altman DG, Vergouwe Y, Royston P, Moons KG (2009) Prognosis and prognostic research: validating a prognostic model. BMJ 338: b605.
  20. Toll DB, Janssen KJ, Vergouwe Y, Moons KG (2008) Validation, updating and impact of clinical prediction rules: a review. J Clin Epidemiol 61: 1085–94.
  21. Janssen KJ, Moons KG, Kalkman CJ, Grobbee DE, Vergouwe Y (2008) Updating methods improved the performance of a clinical prediction model in new patients. J Clin Epidemiol 61: 76–86.
  22. Steyerberg EW (2009) Updating for a new setting. In: Steyerberg EW, editor. Clinical Prediction Models. New York: Springer. p. 361–90.
  23. ten Berg MJ, Huisman A, van den Bemt PM, Schobben AF, Egberts AC, et al. (2007) Linking laboratory and medication data: new opportunities for pharmacoepidemiological research. Clin Chem Lab Med 45: 13–9.
  24. Donders AR, van der Heijden GJ, Stijnen T, Moons KG (2006) Review: a gentle introduction to imputation of missing values. J Clin Epidemiol 59: 1087–91.
  25. Rubin DB (1987) Multiple Imputation for Nonresponse in Surveys. Hoboken: J. Wiley & Sons. 288 p.
  26. Leverstein-Van Hall MA, Hopmans TEM, Van Der Sprenkel JWB, Blok HEM, Van Der Mark WAMA, et al. (2010) A bundle approach to reduce the incidence of external ventricular and lumbar drain-related infections: Clinical article. J Neurosurg 112: 345–53.
  27. Steyerberg EW, Borsboom GJ, van Houwelingen HC, Eijkemans MJ, Habbema JD (2004) Validation and updating of predictive logistic regression models: a study on sample size and shrinkage. Stat Med 23: 2567–86.
  28. Royston P, Ambler G, Sauerbrei W (1999) The use of fractional polynomials to model continuous risk variables in epidemiology. Int J Epidemiol 28: 964–74.
  29. Steyerberg EW, Harrell FE, Borsboom GJ, Eijkemans MJ, Vergouwe Y, et al. (2001) Internal validation of predictive models: efficiency of some procedures for logistic regression analysis. J Clin Epidemiol 54: 774–81.
  30. KISS (2012) ITS-KISS Reference data. Available: http://www.nrz-hygiene.de/fileadmin/nrz/module/its/200701_201112_ITS_reference_NEUROCHIRURGISCH.pdf. Accessed 2012 September 4.
  31. Schade RP, Schinkel J, Visser LG, Van Dijk JM, Voormolen JH, et al. (2005) Bacterial meningitis caused by the use of ventricular or lumbar cerebrospinal fluid catheters. J Neurosurg 102: 229–34.
  32. Trick WE, Zagorski BM, Tokars JI, Vernon MO, Welbel SF, et al. (2004) Computer algorithms to detect bloodstream infections. Emerg Infect Dis 10: 1612–20.
  33. Pokorny L, Rovira A, Martin-Baranera M, Gimeno C, Alonso-Tarres C, et al. (2006) Automatic detection of patients with nosocomial infection by a computer-based surveillance system: a validation study in a general hospital. Infect Control Hosp Epidemiol 27: 500–3.
  34. Korinek AM, Reina M, Boch AL, Rivera AO, De BD, et al. (2005) Prevention of external ventricular drain-related ventriculitis. Acta Neurochir (Wien) 147: 39–45.
  35. Martinez R, Gaul C, Buchfelder M, Erbguth F, Tschaikowsky K (2002) Serum procalcitonin monitoring for differential diagnosis of ventriculitis in adult intensive care patients. Intensive Care Med 28: 208–10.
  36. Lopez-Cortes LF, Cruz-Ruiz M, Gomez-Mateos J, Viciana-Fernandez P, Martinez-Marcos FJ, et al. (1995) Interleukin-8 in cerebrospinal fluid from patients with meningitis of different etiologies: its possible role as neutrophil chemotactic factor. J Infect Dis 172: 581–4.
  37. Schade RP, Schinkel J, Roelandse FW, Geskus RB, Visser LG, et al. (2006) Lack of value of routine analysis of cerebrospinal fluid for prediction and diagnosis of external drainage-related bacterial meningitis. J Neurosurg 104: 101–8.
  38. Pfisterer W, Muhlbauer M, Czech T, Reinprecht A (2003) Early diagnosis of external ventricular drainage infection: results of a prospective study. J Neurol Neurosurg Psychiatry 74: 929–32.