Does Attrition during Follow-Up of a Population Cohort Study Inevitably Lead to Biased Estimates of Health Status?

  • Rosie J. Lacey,

    r.lacey@keele.ac.uk

    Affiliation Research Institute for Primary Care & Health Sciences, Keele University, Keele, Staffordshire, United Kingdom

  • Kelvin P. Jordan,

    Affiliation Research Institute for Primary Care & Health Sciences, Keele University, Keele, Staffordshire, United Kingdom

  • Peter R. Croft

    Affiliation Research Institute for Primary Care & Health Sciences, Keele University, Keele, Staffordshire, United Kingdom

Abstract

Attrition is a potential source of bias in cohort studies. Although attrition may be inevitable in cohort studies of older people, there is little empirical evidence as to whether bias due to such attrition is also inevitable. Anonymised primary care data, routinely collected in clinical practice and independent of any cohort research study, represents an ideal unselected comparison dataset with which to compare primary care data from consenting responders to a cohort study. Our objective was to use this method as a novel means to assess whether (i) responders at follow-up stages in a cohort study remain representative of responders at baseline and (ii) attrition biases estimates of longitudinal associations. We compared primary care consultation morbidities and prescription prevalences among circa 32,000 patients aged 50+ who contribute to an anonymised general practice database (Consultations in Primary Care Archive (CiPCA)) with those from patients aged 50+ in the North Staffordshire Osteoarthritis Project (NorStOP) cohort, United Kingdom (2002–2008; n = 16,159). 8,197 (51%) persons responded to the NorStOP baseline survey and consented to medical record review; 5,121 and 3,311 responded at the 3- and 6-year follow-ups, respectively. Differences in consulting prevalence of non-musculoskeletal morbidities between NorStOP responders and the CiPCA comparison population did not increase over the two follow-up points, except for ischaemic heart disease. Differences observed at baseline for osteoarthritis-related consultations were generally unchanged at the two follow-ups (standardised prevalence ratios for osteoarthritis (1.09–1.13) and joint pain (1.12–1.23)). Age and gender adjusted associations between baseline consultation for chronic morbidity and future new osteoarthritis and related consultations were similar in CiPCA (adjusted Hazard Ratio: 1.40; 95% Confidence Interval: 1.34, 1.47) and in NorStOP 6-year responders (1.32; 1.15, 1.51). There was little evidence that responders at follow-ups represented any further selection bias beyond that present at baseline. Attrition in cohort studies of older people does not inevitably indicate bias.

Introduction

Cohort studies of health investigate the link between factors measured at baseline and subsequent health events. Selective recruitment into a cohort study may result in a difference in the prevalence of baseline characteristics between the ‘selected’ cohort and the wider population from which it was derived [1]–[5]. However, simulation studies suggest that the validity of associations between baseline exposures and future outcomes is relatively unaffected by such baseline selectivity [6].

Of more potential importance is the future loss or drop-out of initially recruited cohort participants (attrition). Although such attrition may be inevitable in cohort studies of older people as health and social difficulties develop with age, there is little empirical evidence as to whether bias due to such attrition is also inevitable: specifically, whether (i) responders at follow-up stages in a cohort study remain representative of responders at baseline and (ii) attrition biases estimates of longitudinal associations. If cohort attrition results in data that is missing not at random (MNAR), i.e. the probability of drop-out depends on the outcome of interest and cannot be explained by the observed exposures, then simulation studies indicate this may lead to biased estimates of longitudinal associations [7], [8]. Although complete follow-up of all baseline participants with no attrition is the best protection against possible bias, this is rarely achieved in practice.
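
To make the mechanism concrete, the toy simulation below (not part of the original study; all numbers are invented for illustration) generates a baseline exposure that roughly doubles the odds of a follow-up outcome, then applies drop-out whose probability depends on both the outcome and the exposure. The association estimated among those retained is attenuated relative to the full cohort, illustrating how such attrition can bias longitudinal estimates.

```python
# Toy simulation (illustrative only, not from the paper): outcome-dependent
# drop-out biases the estimated exposure-outcome association.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

exposure = rng.binomial(1, 0.3, n)                      # baseline exposure
# True model: exposure roughly doubles the odds of the follow-up outcome.
p_outcome = 1 / (1 + np.exp(-(-2.0 + np.log(2.0) * exposure)))
outcome = rng.binomial(1, p_outcome)

def odds_ratio(exp, out):
    a = np.sum((exp == 1) & (out == 1)); b = np.sum((exp == 1) & (out == 0))
    c = np.sum((exp == 0) & (out == 1)); d = np.sum((exp == 0) & (out == 0))
    return (a * d) / (b * c)

# Attrition that depends on the outcome, and more strongly so among the exposed.
p_drop = 0.2 + 0.4 * outcome + 0.2 * outcome * exposure
retained = rng.binomial(1, p_drop) == 0

print("full-cohort OR:    ", round(odds_ratio(exposure, outcome), 2))
print("after-attrition OR:", round(odds_ratio(exposure[retained], outcome[retained]), 2))
```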

Predictors of attrition in cohort studies have often been investigated by comparing the baseline characteristics of those who participate at follow-up with those who drop out [4], [9]–[16]. Other methods have been used to estimate the impact of non-response: some studies have assumed that later responders (e.g. those who respond to the second or third mailings of a questionnaire) are more like non-responders than those who respond to the first mailing [17], whilst others have collected minimum data from those not responding to standard survey reminders [18], [19].

An alternative approach is to directly compare independent data from cohort responders with a comparison population that represents the underlying population from which the cohort was drawn. Routinely collected sources of information on health, such as medical records from primary care or national registers, have been used as comparison populations to assess response bias in cross-sectional studies [5], [20]–[23] and initial selection into a birth cohort study [3]. We report here a novel extension of this methodology. It involves routine primary care medical record data from survey responders, distinct and separate from their survey data, and an available comparison population, also with routinely collected primary care data, drawn from an anonymised general population medical record dataset that was broader than, but included, the full populations from which survey participants had been drawn. The comparison is performed at baseline and at each subsequent follow-up stage of the cohort study. In this way, consultation and prescription patterns in cohort responders can be compared with those in the underlying population as an estimate of the presence and extent of attrition bias.

We selected an existing cohort study of older people to test this method empirically. The hypotheses tested were that, if there is no systematic selectivity with respect to health among responders to follow-up in the cohort study, then (i) consultation-recorded morbidity in such responders, collected from routine clinical practice and independently of the cohort research, will not differ from routine consultation morbidity frequency in a larger but comparable general population unselected by study participation, and (ii) associations between record-based baseline measures and record-based outcomes will be similar in the two populations.

Methods

Ethics Statement

For the North Staffordshire Osteoarthritis Project (NorStOP), ethical approval was obtained from the North Staffordshire Research Ethics Committee UK and written consent for medical records to be reviewed was given by NorStOP participants. For the Consultations in Primary Care Archive (CiPCA) database, ethical approval was given by the North Staffordshire Research Ethics Committee, UK to download and store anonymised medical record information for research use from participating general practices. All general practices participating in CiPCA inform their patient populations that their anonymised records will be used in this way and all patients are offered the opportunity to withdraw their records from inclusion in CiPCA.

The Comparison Population

CiPCA is a database of routinely collected and anonymised primary care data from 13 general practices in North Staffordshire, United Kingdom (UK), for which the completeness and consistency of morbidity recording across all consultations by their registered populations has been established in published work [24]. In the UK, about 98% of persons are registered with a general practice [25] for all routine primary care, and general practice age-sex registers are considered to be representative of the general population [26].

Annual consultation figures for musculoskeletal conditions drawn from the CiPCA database have been shown to be similar to those from national databases [27]. Data from the 11 CiPCA practices which contributed data continuously from 2001 to 2008 were included in this study. These 11 practices include the 5 practices that participated in NorStOP and were used for the current analysis.

The NorStOP Cohort

NorStOP is a prospective cohort study of joint pain and general health in older adults [28]. As part of this cohort study in North Staffordshire, during 2002 and 2003 all people (n = 16,159) aged 50 years and over who were registered with five general practices were sent a postal questionnaire which incorporated a range of validated self-report measures regarding joint pain, general health, disability, psychological status and socio-demographics [28]. The questionnaire also asked for consent to view medical records. Electronic consultation and prescribing records for participants consenting to review of their medical records were linked to their questionnaire self-report data. To limit the possibility of people with joint pain being more likely to take part in the study, the questionnaire was entitled “Health Questionnaire” and the covering letter stated “We are very interested in your reply even if you have not had any pain or other symptoms in the recent past”, although some reference to the study topic was made: “Researchers…are trying to find out about joint pain and other symptoms experienced by people…”. A 3-year follow-up questionnaire was sent in 2005/2006 to those who responded to the baseline survey and were still alive and registered with the practices. A 6-year follow-up questionnaire was sent in 2008/2009. Details of response within the NorStOP cohort at each time point have been published previously [29]–[33].

The Analysis Design

The objective of the current analysis was to compare the consultation morbidity and prescription prevalence obtained anonymously from the routine health care records of the CiPCA comparison population with the consultation morbidity and prescription prevalence obtained from the routine health care records of those NorStOP responders at baseline, three years and six years who had consented to use of their medical records. The time periods for each comparison were determined by the timing of the NorStOP surveys: i) the two years prior to the baseline survey; ii) the two years prior to the 3-year follow-up survey; and iii) the two years prior to the 6-year follow-up survey.

Consultations for nine specific morbidities were identified for each time period based on recorded Read codes (see Appendix S1). Read codes are a system of morbidity recording commonly used in UK primary care [34]. Around 95% of consultations with a general practitioner in CiPCA practices are Read coded. Consultations for osteoarthritis (OA) and joint pain were included as this was the main focus of the NorStOP study. Five other chronic problems (ischaemic heart disease, diabetes, chronic obstructive pulmonary disease (COPD), asthma and depression) were included to give an indication of the general health of the participants in NorStOP compared with the CiPCA comparison population. Two acute conditions (otitis media and upper respiratory tract infection (URTI)) were included as markers of general consultation propensity and frequency.
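
As an illustration of how such morbidity flags can be derived from coded consultation records, the sketch below uses hypothetical column names and an illustrative code prefix; the actual Read code lists are those given in Appendix S1, and the date column is assumed to be in datetime format.

```python
# Sketch only: hypothetical columns (patient_id, date, read_code);
# the real Read code lists are in Appendix S1.
import pandas as pd

def consulted_in_window(consultations: pd.DataFrame,
                        code_prefixes: list,
                        window_end: str,
                        years: int = 2) -> pd.Series:
    """Boolean flag, indexed by patient_id, for at least one matching
    consultation in the `years` before `window_end`."""
    end = pd.Timestamp(window_end)
    start = end - pd.DateOffset(years=years)
    in_window = consultations["date"].between(start, end)
    matches = consultations["read_code"].str.match("|".join(code_prefixes))
    patients = consultations.loc[in_window & matches, "patient_id"].unique()
    return pd.Series(True, index=patients)

# e.g. a two-year osteoarthritis consultation flag before a baseline survey date
# (illustrative prefix and date only):
# oa_flag = consulted_in_window(consultations, ["N05"], "2002-06-01")
```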

Prescriptions for pain medication were also identified from medical records in each time period. A hierarchical classification of analgesia developed previously [35], [36] was used, and prescriptions were identified for any pain medication, basic analgesia (e.g. paracetamol, topical non-steroidal anti-inflammatory drugs (NSAIDs)), weak or moderate strength analgesia (e.g. coproxamol, codeine less than 30 mg), strong or very strong analgesia (e.g. codeine 30 mg, morphine) and oral NSAIDs.

For analysis of the comparison population, only patients registered at the CiPCA practices at both the start and end of each time period were included. For the first time period (the two years prior to the date of the baseline NorStOP survey), the comparison population consisted of all fully registered patients who were aged 50 years and over at the end of the period. For the second (the two years prior to the 3-year follow-up NorStOP survey) and third (the two years prior to the 6-year follow-up NorStOP survey) time periods, the comparison population comprised all registered patients aged 53 years and over, and 56 years and over, respectively.

For analysis of the NorStOP responders, analysis of medical records for the first time period was performed on all responders at baseline who consented to record review. For the second period, the analysis was undertaken on the subgroup who also responded at three years. For the third period, analysis was further restricted to those who also responded at six years (Table 1).

Table 1. The three time periods and denominator populations for NorStOP and CiPCA, North Staffordshire, UK (2000–2008).

https://doi.org/10.1371/journal.pone.0083948.t001

Data Availability

The Research Institute for Primary Care & Health Sciences has established data sharing arrangements to support joint publications and other research collaborations. Applications for access to anonymised data from our research databases are reviewed by the Institute's Data Custodian and Academic Proposals Committee, and a decision regarding access to the data is made subject to the ethical approval first provided for the study by the National Research Ethics Service Research Ethics Committee and to the new analysis being proposed. Further information on our data sharing procedures can be found on the Institute's website (http://www.keele.ac.uk/pchs/publications/datasharingresources/) or by emailing the Institute's data manager (primarycare.datasharing@keele.ac.uk).

Statistical Analysis

For each time period, the two-year consultation prevalence for both the NorStOP responders and CiPCA comparison population was defined as the number of people with a record of consulting primary care at least once for a morbidity during the relevant two year time period, and is reported per 1,000 persons. 95% confidence intervals (95% CI) were calculated for the consultation prevalences within the NorStOP responder population assuming a Poisson distribution. If the 95% CI included the prevalence for the equivalent time period in the CiPCA comparison population, then this suggested the estimates were similar.
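
For example, an exact (Garwood) Poisson interval for a consultation count, scaled to a prevalence per 1,000 persons, could be computed as in the sketch below. This is illustrative only: the published analysis was run in Stata, and the example numbers are invented.

```python
# Sketch of a Poisson-based 95% CI for a two-year consultation prevalence,
# reported per 1,000 persons (illustrative; the authors used Stata).
from scipy.stats import chi2

def poisson_prevalence_ci(cases: int, denominator: int, alpha: float = 0.05):
    """Exact (Garwood) Poisson CI for the count, scaled to per 1,000 persons."""
    lower = chi2.ppf(alpha / 2, 2 * cases) / 2 if cases > 0 else 0.0
    upper = chi2.ppf(1 - alpha / 2, 2 * (cases + 1)) / 2
    scale = 1000 / denominator
    return cases * scale, lower * scale, upper * scale

# e.g. 410 people consulting among 8,197 responders (invented numbers):
print(poisson_prevalence_ci(410, 8197))
```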

NorStOP responders at each time period were then compared again to the equivalent CiPCA comparison population with respect to consultation prevalence using age and gender standardised prevalence ratios (SPR), the CiPCA comparison population being the standard. A standardised prevalence ratio is the ratio of the prevalences of consultation for a particular morbidity in each of the two populations, standardised for age and gender using indirect standardisation. This analysis was repeated for prescription prevalence based on the number of people prescribed each type of pain medication during a time period, again reported per 1,000 persons.
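
Under indirect standardisation, the CiPCA age-gender stratum-specific prevalences are applied to the NorStOP responders' age-gender structure to give an expected number of consulters, and the SPR is the observed count divided by that expectation. A minimal sketch, assuming hypothetical stratum tables with one row per age-gender stratum, is shown below; one common approach to a 95% CI for an SPR (not necessarily the one used here) is a Poisson interval on the observed count, as in the previous sketch, divided by the expected count.

```python
# Sketch of an indirectly standardised prevalence ratio (SPR), with CiPCA as
# the standard. Inputs are hypothetical tables with columns: stratum, persons, cases.
import pandas as pd

def standardised_prevalence_ratio(norstop: pd.DataFrame,
                                  cipca: pd.DataFrame) -> float:
    cipca = cipca.set_index("stratum")
    standard_rates = cipca["cases"] / cipca["persons"]            # CiPCA stratum prevalences
    expected = (norstop.set_index("stratum")["persons"] * standard_rates).sum()
    observed = norstop["cases"].sum()
    return observed / expected                                    # SPR
```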

Finally, to assess whether there was any bias in longitudinal associations identified in NorStOP responders compared to the CiPCA comparison population, the association of chronic morbidity at baseline with a future new record of a consultation for OA or joint pain during the 6-year follow-up was assessed in NorStOP 6-year responders. Chronic morbidity was defined as a consultation for ischaemic heart disease, diabetes, COPD, asthma or depression in the two years prior to the baseline survey. Analysis was restricted to those without a consultation for OA or joint pain in the two years prior to the baseline survey. Time from baseline to a new diagnosis of OA or joint pain was identified in the medical records, and the association of baseline morbidity with a future diagnosis of OA or joint pain was evaluated using Cox proportional hazards regression, adjusted for age and gender. The proportionality assumption was assessed graphically and using Schoenfeld residuals [37], and was deemed reasonable for these data. The analysis was then repeated with OA as the sole outcome in those without a record of OA prior to the baseline survey, with further adjustment for a record of joint pain consultation in the two years prior to the baseline survey. These analyses were then also performed in the CiPCA comparison population in those registered for all of 2001 and 2002 (defined for this analysis as the “prior to baseline” period), with follow-up from 2003–2008. Patients in CiPCA were censored at the point of death or leaving the practices.
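
The corresponding survival model could look like the sketch below, written with the Python lifelines package purely for illustration (the authors used Stata 12.1, and all column names here are hypothetical). Time runs from baseline to the first new OA/joint pain record, with censoring at death or leaving the practice, and the proportional hazards check is based on scaled Schoenfeld residuals.

```python
# Illustrative only: the published analysis used Stata; column names are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

def fit_attrition_model(df: pd.DataFrame) -> CoxPHFitter:
    # Expected columns: time (years to new OA/joint pain record or censoring),
    # event (1 = new record), chronic_morbidity (0/1), age, female (0/1).
    model_df = df[["time", "event", "chronic_morbidity", "age", "female"]]
    cph = CoxPHFitter()
    cph.fit(model_df, duration_col="time", event_col="event")
    # Schoenfeld-residual-based check of the proportional hazards assumption.
    cph.check_assumptions(model_df, p_value_threshold=0.05)
    return cph

# fit_attrition_model(norstop_records).print_summary()  # age/gender adjusted HRs
```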

Analysis was performed using Stata 12.1 for Windows.

Results

Figure 1 shows the responders at each stage of the NorStOP study. 11,209 responded at baseline; 8,197 of these consented to review of their medical records and formed the NorStOP cohort for comparison in this analysis (i.e. 50.7% of the original target survey population). Compared to those who responded but did not consent to record review, consenters were slightly younger (mean 66.2 vs. 67.4 years), less likely to be female (53% vs. 62%) and reported more joint pain (79% vs. 70%). 5,121 (62.5%) of the 8,197 responded at three years. Of the three-year responders, 3,311 (65%) responded again at six years. Those who consented to record review constituted 91% of all responders at three years and 92% of responders at six years.

Figure 1. Flow diagram of responders to the North Staffordshire Osteoarthritis Project, United Kingdom (2002–2008).

a Unadjusted percentage responding before removing those who had moved or died.

https://doi.org/10.1371/journal.pone.0083948.g001

The CiPCA comparison population numbered 32,647 for the first time period, 32,830 for the second time period, and 30,280 for the third time period. Mean age and gender distribution were similar between the comparison population and the NorStOP responders for each time period (Table 1).

Compared with the CiPCA comparison population, NorStOP responders had similar or slightly higher levels of consultation in each time period for diabetes, COPD, asthma, otitis media and URTI, with no change in the differences from the comparison population over the two follow-up points (Tables 2 and 3). The comparison population had a slightly higher depression consultation prevalence in the first time period than the NorStOP population. Ischaemic heart disease consultation prevalence was higher in the NorStOP responders than in the CiPCA comparison population at the 3-year and 6-year follow-ups (SPRs 1.18–1.25).

Table 2. Two year consultation and prescription prevalence per 1,000 persons at each survey point in CiPCA comparison population and NorStOP responders.

https://doi.org/10.1371/journal.pone.0083948.t002

Table 3. Age and gender standardised prevalence ratios (95% CI) comparing NorStOP responders at each survey point to CiPCA comparison population.

https://doi.org/10.1371/journal.pone.0083948.t003

The two year consultation prevalence of OA was 13% higher for the NorStOP responders than the CiPCA comparison population at baseline, and this difference remained at both follow-up points (SPRs 1.09–1.13 over the three time periods; Table 3). There was a similar pattern, although with a slightly larger difference between NorStOP and the CiPCA comparison population, for joint pain consultation (SPRs 1.12–1.23).

There were consistently slightly higher two year prescription prevalences of any pain medication (SPRs 1.07–1.12) and weak/moderate analgesia (SPRs 1.07–1.17) in NorStOP responders compared with the CiPCA comparison population. NSAID prescription prevalence was 13% higher for NorStOP responders at baseline, increasing to 25% higher at 6 years, but strong analgesia prescription prevalence fell from 29% higher in NorStOP responders at baseline to 12% higher at 6 years (Table 3). Basic analgesics were prescribed to a similar proportion of NorStOP responders and patients in the comparison population (SPRs 0.94–0.97) (Table 3).

The age and gender adjusted risk of a future new consultation for OA or joint pain in those with baseline chronic morbidity was similar in the CiPCA comparison population (adjusted HR 1.40; 95% CI 1.34, 1.47) and the NorStOP 6-year responders (1.32; 1.15, 1.51; Table 4). This was also true when the outcome was restricted to a new diagnosis of OA (comparison population adjusted HR 1.25; 95% CI 1.16, 1.34; 6-year responders 1.23; 1.00, 1.52).

Table 4. Association of chronic morbidities with new record during follow-up of osteoarthritis, and osteoarthritis or joint pain, in NorStOP and CiPCA.

https://doi.org/10.1371/journal.pone.0083948.t004

Discussion

This study provides evidence that directly comparing consultation data from cohort responders with that from an available comparison population is a useful method for empirically investigating attrition during follow-up in a cohort study of older people. Our analysis of an existing cohort study shows that attrition did not result in any substantial selection bias at follow-up with respect to routinely recorded morbidities over a six year period. We did, however, find evidence of initial baseline selectivity at cohort recruitment among responders to the baseline survey: baseline participants had consulted more frequently about the topic of the study (OA and joint pain), and had received more and stronger analgesia prescriptions, than the comparison population. However, there was little evidence that the cohort responders at follow-up represented any further selectivity with respect to the general population, as represented by the CiPCA comparison population, despite one-third attrition at each stage of the NorStOP cohort follow-up. Furthermore, most other morbidities showed little difference in baseline consultation prevalence between NorStOP cohort responders and the CiPCA comparison population, nor did the responder-to-comparison population morbidity ratios alter in a consistent pattern at follow-up, apart from the particular example of ischaemic heart disease, which became relatively more frequent in the NorStOP population than in the comparison population at 6-year follow-up. The latter is the one example in our data where initial selectivity into the cohort (i.e. a higher proportion of people at baseline with OA) may have resulted in additional selectivity at follow-up; a recent prospective study also found an increased risk of ischaemic heart disease among individuals with OA [38]. Finally, the estimated association between baseline chronic morbidities and the future outcome (OA or joint pain) was no different in the cohort responders and the comparison population, showing that attrition did not result in biased estimates of the longitudinal associations between the morbidities included and the study outcome.

One explanation for initial selectivity into the cohort was the baseline questionnaire documentation which, although focused on general health, contained cues sufficient to encourage patients “with an interest” in joint pain to participate. Previous studies have found that survey responders who have the condition under investigation are more likely to consent to medical record access [39]–[41] or to consult [17]; however, there is also evidence that survey participants do not change their consulting behaviour after completing a health-related questionnaire [42]. By contrast, the evidence on differences in health between participants and non-participants in cohort studies of older adults shows that non-participation at follow-up is more likely in those reporting poorer health [4], [9], [12] and cognitive impairment [4], [10], [11], [43], which provides one explanation of the finding that baseline participants had been less likely to consult about depression. However, there is no complete consensus on this issue [11], [44], [45].

One consequence of baseline selectivity is that absolute percentages of people with joint pain and OA, and of people taking analgesia, derived from the baseline survey may overestimate rates in the general population. However, this does not affect the main questions we were concerned with in the current analysis: the subsequent impact of attrition during follow-up on the internal validity of longitudinal analysis of the cohort. As long as there is variation in the predictor variable of interest at baseline, and sufficient persons are either exposed or not exposed to it to make the cohort study viable and powerful enough to detect differences if they exist, the actual representativeness of the baseline recruited sample of a putative population may not inevitably influence the associations derived from analysis between baseline and subsequent follow-up stages.

Our study has several strengths. Firstly, the existing cohort was a large general population survey of older people with a high response to the questionnaire at each stage. Secondly, although we have previously reported the levels of attrition at each stage of the NorStOP cohort [29]–[33], our novel method of empirically examining the impact of attrition on the selectivity of the followed-up population and on estimates of longitudinal associations now provides evidence that applying a simple “follow-up rate criterion” to cohort studies may be unreasonable. Thirdly, the quality of CiPCA data is high owing to the cycle of training, assessment and feedback undertaken [24], and the prevalence of musculoskeletal conditions is comparable to that in larger national general practice databases [27].

A limitation of this study is that we report only the results of applying our method within one existing cohort. Therefore the absence of major attrition bias found in this study may not be generalisable to other cohort studies with, for example, younger participants or other morbidities. However, routinely collected primary care data of high quality from the wider general population in countries such as the UK is becoming increasingly available to researchers [46], [47]. Hence, there is the opportunity both to repeat our empirical investigation in other settings and to use this method more widely to assess selectivity and bias in cohort studies where consent for linkage to medical records has been obtained and comparison population health care data is available. One practical conclusion from our study is that routinely collected health care data provide one independent source of validation of follow-up in cohort studies and trials. Although such data has not often been linked to cohort studies in the past, the increasing harnessing and linkage of large health care datasets for epidemiological purposes, for example in cardiovascular disease [48], offer the potential for this to become a more widely available component of cohort study design and validation in the future.

A further limitation is that our analysis is based on cohort responders consenting to medical record review, who differed on some baseline factors, such as age and self-reported joint pain, from those who responded but did not consent. However, as over 90% of all responders at the two follow-up points consented to record review, they can be considered to fairly reflect the responding population as a whole. In particular, as we have shown before, the possible loss to follow-up from refusal to consent to use of medical records (as distinct from providing replies to questionnaires) does not seem to introduce bias into longitudinal samples [41].

In conclusion, this study provides evidence for a useful method for investigating bias due to attrition in a cohort study of older people, and for its potential application to assess selectivity and bias in other cohort studies where consent for linkage to medical records has been obtained and comparison population data is available. Our results also contribute empirical evidence that, although the occurrence of attrition in a cohort study should always be investigated, as recommended by guidelines such as STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) [49], attrition in cohort studies of older people is not an inevitable indicator of selectivity, bias or a flawed result.

Supporting Information

Appendix S1.

Read codes and terms used for morbidities. Read codes and terms for the nine specific morbidities included in the study. Read codes are a system of morbidity recording commonly used in UK primary care [34].

https://doi.org/10.1371/journal.pone.0083948.s001

(DOCX)

Acknowledgments

The authors would like to thank the general practitioners and staff at the participating general practices. The authors wish to acknowledge the Keele GP Research Partnership, the contributions of all members of the NorStOP research team to the data collection and study design, Dr Elaine Thomas and Dr John McBeth for advice on the manuscript, the informatics team and the administrative staff at the Research Institute for Primary Care & Health Sciences at Keele University.

Author Contributions

Conceived and designed the experiments: RJL KPJ PRC. Performed the experiments: RJL KPJ. Analyzed the data: KPJ. Wrote the paper: RJL KPJ PRC.

References

  1. Goldberg M, Chastang JF, Leclerc A, Zins M, Bonenfant S, et al. (2001) Socioeconomic, demographic, occupational, and health factors associated with participation in a long-term epidemiologic survey: a prospective study of the French GAZEL cohort and its target population. Am J Epidemiol 154(4): 373–384.
  2. Korkeila K, Suominen S, Ahvenainen J, Ojanlatva A, Rautava P, et al. (2001) Non-response and related factors in a nation-wide health survey. Eur J Epidemiol 17(11): 991–999.
  3. Nohr EA, Frydenberg M, Henriksen TB, Olsen J (2006) Does low participation in cohort studies induce bias? Epidemiology 17(4): 413–418.
  4. Vega S, Benito-León J, Bermejo-Pareja F, Medrano MJ, Vega-Valderrama LM, et al. (2010) Several factors influenced attrition in a population-based elderly cohort: neurological disorders in Central Spain Study. J Clin Epidemiol 63(2): 215–222.
  5. Nummela O, Sulander T, Helakorpi S, Haapola I, Uutela A, et al. (2011) Register-based data indicated nonparticipation bias in a health study among aging people. J Clin Epidemiol 64(12): 1418–1425.
  6. Pizzi C, De Stavola B, Merletti F, Bellocco R, dos Santos Silva I, et al. (2011) Sample selection and validity of exposure-disease association estimates in cohort studies. J Epidemiol Community Health 65(5): 407–411.
  7. Kristman V, Manno M, Cote P (2004) Loss to follow-up in cohort studies: how much is too much? Eur J Epidemiol 19(8): 751–760.
  8. Kristman VL, Manno M, Cote P (2005) Methods to account for attrition in longitudinal data: do they work? A simulation study. Eur J Epidemiol 20(8): 657–662.
  9. Kempen GI, van Sonderen E (2002) Psychological attributes and changes in disability among low-functioning older persons: does attrition affect the outcomes? J Clin Epidemiol 55(3): 224–229.
  10. Matthews FE, Chatfield M, Freeman C, McCracken C, Brayne C, et al. (2004) Attrition and bias in the MRC cognitive function and ageing study: an epidemiological investigation. BMC Public Health 4: 12.
  11. Chatfield MD, Brayne CE, Matthews FE (2005) A systematic literature review of attrition between waves in longitudinal studies in the elderly shows a consistent pattern of dropout between differing studies. J Clin Epidemiol 58(1): 13–19.
  12. Young AF, Powers JR, Bell SL (2006) Attrition in longitudinal studies: who do you lose? Aust N Z J Public Health 30(4): 353–361.
  13. Haring R, Alte D, Volzke H, Sauer S, Wallaschofski H, et al. (2009) Extended recruitment efforts minimize attrition but not necessarily bias. J Clin Epidemiol 62(3): 252–260.
  14. Brilleman SL, Pachana NA, Dobson AJ (2010) The impact of attrition on the representativeness of cohort studies of older people. BMC Med Res Methodol 10: 71.
  15. Schmidt CO, Raspe H, Pfingsten M, Hasenbring M, Basler HD, et al. (2011) Does attrition bias longitudinal population-based studies on back pain? Eur J Pain 15(1): 84–91.
  16. Van Loon AJ, Tijhuis M, Picavet HS, Surtees PG, Ormel J (2003) Survey non-response in the Netherlands: effects on prevalence estimates and associations. Ann Epidemiol 13(2): 105–110.
  17. Papageorgiou AC, Croft PR, Ferry S, Jayson MI, Silman AJ (1995) Estimating the prevalence of low back pain in the general population. Evidence from the South Manchester Back Pain Survey. Spine 20(17): 1889–1894.
  18. Hoeymans N, Feskens EJ, van den Bos GA, Kromhout D (1998) Non-response bias in a study of cardiovascular diseases, functional status and self-rated health among elderly men. Age & Ageing 27(1): 35–40.
  19. Peat G, Thomas E, Handy J, Wood L, Dziedzic K, et al. (2006) The Knee Clinical Assessment Study-CAS(K). A prospective study of knee pain and knee osteoarthritis in the general population: baseline recruitment and retention at 18 months. BMC Musculoskelet Disord 7: 30.
  20. Rockwood K, Stolee P, Robertson D, Shillington ER (1989) Response bias in a health status survey of elderly people. Age & Ageing 18(3): 177–182.
  21. Freudenstein U, Arthur AJ, Matthews RJ, Jagger C (2001) Community surveys of late-life depression: who are the non-responders? Age & Ageing 30(6): 517–521.
  22. Jacobsen SJ, Mahoney DW, Redfield MM, Bailey KR, Burnett Jr JC, et al. (2004) Participation bias in a population-based echocardiography study. Ann Epidemiol 14(8): 579–584.
  23. Wells TS, Jacobson IG, Smith TC, Spooner CN, Smith B, et al. (2008) Prior health care utilization as a potential determinant of enrollment in a 21-year prospective study, the Millennium Cohort Study. Eur J Epidemiol 23(2): 79–87.
  24. Porcheret M, Hughes R, Evans D, Jordan K, Whitehurst T, et al. (2004) Data quality of general practice electronic health records: the impact of a program of assessments, feedback, and training. J Am Med Inform Assoc 11(1): 78–86.
  25. Bowling A (2009) Research Methods in Health. 3rd ed. Maidenhead: Open University Press.
  26. Walsh K (1994) Evaluation of the use of general practice age-sex registers in epidemiological research. Br J Gen Pract 44(380): 118–122.
  27. Jordan K, Clarke AM, Symmons DP, Fleming D, Porcheret M, et al. (2007) Measuring disease prevalence: a comparison of musculoskeletal disease using four general practice consultation databases. Br J Gen Pract 57(534): 7–14.
  28. Thomas E, Wilkie R, Peat G, Hill S, Dziedzic K, et al. (2004) The North Staffordshire Osteoarthritis Project–NorStOP: prospective, 3-year study of the epidemiology and management of clinical osteoarthritis in a general population of older adults. BMC Musculoskelet Disord 5: 2.
  29. Thomas E, Peat G, Harris L, Wilkie R, Croft PR (2004) The prevalence of pain and pain interference in a general population of older adults: cross-sectional findings from the North Staffordshire Osteoarthritis Project (NorStOP). Pain 110(1–2): 361–368.
  30. Jordan KP, Thomas E, Peat G, Wilkie R, Croft P (2008) Social risks for disabling pain in older people: a prospective study of individual and area characteristics. Pain 137(3): 652–661.
  31. Thomas E, Mottram S, Peat G, Wilkie R, Croft P (2007) The effect of age on the onset of pain interference in a general population of older adults: prospective findings from the North Staffordshire Osteoarthritis Project (NorStOP). Pain 129(1–2): 21–27.
  32. Menz HB, Roddy E, Thomas E, Croft PR (2011) Impact of hallux valgus severity on general and foot-specific health-related quality of life. Arthritis Care Res (Hoboken) 63(3): 396–404.
  33. Jordan KP, Sim J, Moore A, Bernard M, Richardson J (2012) Distinctiveness of long-term pain that does not interfere with life: an observational cohort study. Eur J Pain 16(8): 1185–1194.
  34. NHS Information Authority (2000) The Clinical Terms Version 3 (The Read Codes). Birmingham: NHS Information Authority.
  35. Bedson J, Belcher J, Martino OI, Ndlovu M, Rathod T, et al. (2013) The effectiveness of national guidance in changing analgesic prescribing in primary care from 2002 to 2009: An observational database study. Eur J Pain 17(3): 434–443.
  36. Green DJ, Bedson J, Blagojevic-Burwell M, Jordan KP, van der Windt D (2013) Factors associated with primary care prescription of opioids for joint pain. Eur J Pain 17(2): 234–244.
  37. Grambsch PM, Therneau TM (1994) Proportional hazards tests and diagnostics based on weighted residuals. Biometrika 81(3): 515–526.
  38. Rahman MM, Kopec JA, Anis AH, Cibere J, Goldsmith CH (2013) The risk of cardiovascular disease in patients with osteoarthritis: A prospective longitudinal study. Arthritis Care Res (Hoboken) Aug 7. doi: https://doi.org/10.1002/acr.22092. [Epub ahead of print].
  39. Petty DR, Zermansky AG, Raynor DK, Vail A, Lowe CJ, et al. (2001) “No thank you”: why elderly patients declined to participate in a research study. Pharm World Sci 23(1): 22–27.
  40. Harris T, Cook DG, Victor C, Beighton C, DeWilde S, et al. (2005) Linking questionnaires to primary care records: factors affecting consent in older people. J Epidemiol Community Health 59(4): 336–338.
  41. Dunn KM, Jordan K, Lacey RJ, Shapley M, Jinks C (2004) Patterns of consent in epidemiologic research: evidence from over 25,000 responders. Am J Epidemiol 159(11): 1087–1094.
  42. Jeffery A, Jinks C, Jordan K (2006) The influence of completing a health-related questionnaire on primary care consultation behaviour. BMC Health Serv Res 6: 101.
  43. Van Beijsterveldt CE, van Boxtel MP, Bosma H, Houx PJ, Buntinx F, et al. (2002) Predictors of attrition in a longitudinal cognitive aging study: the Maastricht Aging Study (MAAS). J Clin Epidemiol 55(3): 216–223.
  44. Bhamra S, Tinker A, Mein G, Ashcroft R, Askham J (2008) The retention of older people in longitudinal studies: a review of the literature. Qual Ageing 9(4): 27–35.
  45. Mein G, Johal S, Grant RL, Seale C, Ashcroft R, et al. (2012) Predictors of two forms of attrition in a longitudinal health study involving ageing participants: An analysis based on the Whitehall II study. BMC Med Res Methodol 12: 164.
  46. Jordan KP, Croft P (2008) Opportunities and limitations of general practice databases in pain research. Pain 137(3): 469–470.
  47. Jordan K, Porcheret M, Kadam UT, Croft P (2006) The use of general practice consultation databases in rheumatology research. Rheumatology (Oxford) 45(2): 126–128.
  48. Denaxas SC, George J, Herrett E, Shah AD, Kalra D, et al. (2012) Data resource profile: cardiovascular disease research using linked bespoke studies and electronic health records (CALIBER). Int J Epidemiol 41: 1625–1638.
  49. von Elm E, Altman DG, Egger M, Pocock SJ, Gotzsche PC, et al. (2007) STROBE Initiative. The strengthening the reporting of observational studies in epidemiology (STROBE) statement: guidelines for reporting observational studies. Lancet 370: 1453–1457.