Research Article

Identifying Drivers of Overall Satisfaction in Patients Receiving HIV Primary Care: A Cross-Sectional Study

  • Bich N. Dang,

    Affiliations: Houston Veterans Affairs Health Services Research and Development Center of Excellence, Houston, Texas, United States of America, Michael E. DeBakey Veterans Affairs Medical Center, Houston, Texas, United States of America, Department of Medicine, Baylor College of Medicine, Houston, Texas, United States of America, Harris County Hospital District, Houston, Texas, United States of America

  • Robert A. Westbrook,

    Affiliation: Jesse H. Jones Graduate School of Business, Rice University, Houston, Texas, United States of America

  • Maria C. Rodriguez-Barradas,

    Affiliations: Michael E. DeBakey Veterans Affairs Medical Center, Houston, Texas, United States of America, Department of Medicine, Baylor College of Medicine, Houston, Texas, United States of America

  • Thomas P. Giordano

    Affiliations: Houston Veterans Affairs Health Services Research and Development Center of Excellence, Houston, Texas, United States of America, Michael E. DeBakey Veterans Affairs Medical Center, Houston, Texas, United States of America, Department of Medicine, Baylor College of Medicine, Houston, Texas, United States of America, Harris County Hospital District, Houston, Texas, United States of America

  • Published: August 13, 2012
  • DOI: 10.1371/journal.pone.0042980



Abstract

This study seeks to understand the drivers of overall patient satisfaction in a predominantly low-income, ethnic-minority population of HIV primary care patients. The study’s primary aims were to determine 1) the component experiences that contribute to patients’ evaluations of their overall satisfaction with care received, and 2) the relative contribution of each component experience in explaining patients’ evaluation of overall satisfaction.


We conducted a cross-sectional study of 489 adult patients receiving HIV primary care at two clinics in Houston, Texas, from January 13–April 21, 2011. The participation rate among eligible patients was 94%. The survey included 15 questions about various components of the care experience, 4 questions about the provider experience and 3 questions about overall care. To ensure that the survey was appropriately tailored to our clinic population and the list of component experiences reflected all aspects of the care experience salient to patients, we conducted in-depth interviews with key providers and clinic staff and pre-tested the survey instrument with patients.


Patients’ evaluation of their provider correlated most strongly with their overall satisfaction (standardized β = 0.445, p<0.001) and accounted for almost half of the explained variance. Access and availability factors, such as clinic hours and ease of calling the clinic, also correlated with overall satisfaction, but less strongly. Wait time and parking, despite receiving low patient ratings, did not correlate with overall satisfaction.


The patient-provider relationship far exceeds other component experiences of care in its association with overall satisfaction. Our study suggests that interventions to improve overall patient satisfaction should focus on improving patients’ evaluation of their provider.


Introduction

The use of self-reported patient evaluations as a quality measure signifies a paradigm shift in American medicine. The Consumer Assessment of Healthcare Providers and Systems (CAHPS®) Hospital Survey, developed by the Centers for Medicare and Medicaid Services (CMS) and the Agency for Healthcare Research and Quality, represents the first standardized, nationwide measurement system for tracking patients’ perception of their care [1]. CMS reports CAHPS® Hospital Survey results publicly, and starting October 2012, Medicare will distribute value-based incentive payments for acute care services based partly on how patients rate their care experience [2]. In addition, the American Board of Internal Medicine requires recertifying physicians to complete at least one Practice Improvement Module®, one of which entails soliciting 25 patient evaluation surveys.

The focus on patient satisfaction stems from longstanding interest in the business sector, where most large firms regularly monitor the satisfaction of their customers. The emphasis on improving customer experience is based on evidence that higher levels of customer satisfaction lead to higher customer loyalty, greater repeat purchasing and more favorable referrals, all of which result in improved market share, greater revenues and higher profitability [3]. Customer satisfaction serves as a key metric for judging firm performance and informs management on how to improve customer experiences with their firm’s offerings and sales channels.

In the context of health care, patient satisfaction is an individual’s evaluation of his or her experiences in receiving health care in a specific delivery setting (e.g. hospital, primary care clinic, outpatient surgery, etc.) [4]. A limited number of cross-sectional studies show a positive relationship between patient satisfaction and adherence to medications [5]–[9]. Likewise, adherence to medications clearly impacts clinical outcomes [10], [11]. Satisfaction has also been associated with patient switching behavior with regard to providers and insurance plans [12], [13]. Furthermore, studies using national CAHPS® Hospital Survey data show a significant albeit modest correlation between patient satisfaction and objective clinical performance measures [14], [15].

Leaders seeking to improve satisfaction typically apply the attribute model of satisfaction [16], [17]. Attribute models have been used to study satisfaction across a wide spectrum of human experience, including customer, job and life satisfaction [18], [19]. In the context of health care, this model incorporates the following concepts: 1) overall patient satisfaction describes a distinct and separate global evaluation of a set of component experiences; 2) given a set of experiences, patients weigh each experience differently in rating their overall satisfaction; 3) the stronger the association between a component experience and overall satisfaction, the greater the presumed impact of that component experience. The attribute model of satisfaction provides insight into the relative importance of different component experiences to patients and the trade-offs patients may make in exchange for excellence in other areas. Furthermore, component experiences attributed the most importance represent critical points of intervention for improving the care experience and ultimately increasing overall satisfaction.

Relatively few patient satisfaction studies take place in the outpatient primary care setting and in the context of chronic diseases. While the CAHPS® database of national hospital survey data provides a glimpse of patients’ hospital care experience, CMS does not require the public reporting of outpatient clinic survey data. HIV affects over 1.1 million people in the United States and represents a chronic disease well-suited for studying the drivers of overall patient satisfaction [20]. Management of HIV infection occurs mostly in the outpatient setting and requires frequent visits with a primary care provider. In this study, we seek to understand the drivers of overall patient satisfaction in a predominantly low-income, ethnic-minority population of HIV primary care patients, a group not well represented in patient satisfaction studies. Specifically, we applied the attribute model of satisfaction to determine 1) the component experiences that contribute to patients’ evaluation of their overall satisfaction with care received, and 2) the relative contribution of each component experience in explaining patients’ evaluation of overall satisfaction.



Methods

We conducted a cross-sectional study of patients receiving outpatient HIV primary care at Thomas Street Health Center (TSHC) and the Michael E. DeBakey Veterans Affairs Medical Center (VAMC) in Houston, Texas. Patients enrolled in the study from January 13 to April 21, 2011. Inclusion criteria included: 1) age 18 years or older; 2) having at least one HIV primary care visit in the past year; and 3) having an “index” visit at least one year prior to enrollment. An “index” visit was defined as an HIV primary care visit with a doctor, advanced nurse practitioner, or physician assistant. Exclusion criteria included: 1) incarceration >30 days in the past year; 2) mental or physical inability to complete the survey; and 3) inability to complete the survey in English or Spanish. These criteria ensured that patients had sufficient exposure to the clinic to assess their overall satisfaction.

Survey Instrument

Satisfaction questions were adapted from validated patient self-report survey instruments. The survey measured patients’ evaluations of their component experiences and overall satisfaction with care in clinic. Questions measured cumulative satisfaction over the most recent 12-month time frame. The survey included 15 questions about various components of the care experience, 4 questions about the provider experience, and 3 questions about the overall care experience. Provider referred to a doctor, nurse practitioner or physician assistant. Table 1 shows the exact wording of the items.


Table 1. Distribution of item responses and reliability of multi-item constructs.


The questions about recommendation and trust were adapted from the Ambulatory Care Experiences Survey [21]. The questions about feelings towards the provider and overall care were based on the Delighted–Terrible 7-point scale, a validated measure of life satisfaction [19]. The switching question was based on brand and product switching behavior frequently cited in marketing research [22]. For ease of interpretation, the satisfaction responses were transformed to a 0- to 10-point scale.
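As a minimal sketch of the rescaling step, the transformation below assumes a simple linear map from the original response anchors onto 0–10; the paper does not report the exact formula used, so this is illustrative only:

```python
# Illustrative only: linear rescaling of an ordinal rating onto a 0-10 scale.
# The study does not report its exact transformation; `low` and `high` are
# assumed to be the anchors of the original response scale.
def rescale_to_0_10(response, low=1, high=7):
    """Linearly map a rating on [low, high] onto [0, 10]."""
    if not low <= response <= high:
        raise ValueError("response outside the original scale")
    return 10 * (response - low) / (high - low)

# A 7-point Delighted-Terrible item: 1 -> 0.0, 4 -> 5.0, 7 -> 10.0
print([rescale_to_0_10(r) for r in (1, 4, 7)])  # [0.0, 5.0, 10.0]
```

The same function handles items with other anchor ranges by changing `low` and `high` (e.g. a 5-point item via `rescale_to_0_10(3, low=1, high=5)`).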

Given the time constraints involved with administering surveys in a waiting room setting, we used validated, single-item questions to identify individuals with possible depression, excessive alcohol use, and illegal or prescription drug abuse [23]–[25]. We also used a validated, single-item question to measure health status, “In general, how would you rate your overall health?” [26]–[28]. Pearlin’s 7-item Self-Mastery Scale assessed self-efficacy [29]–[31]. Commonly used questions provided data on patient demographics and HIV risk factors [21], [32].

To ensure a comprehensive list of component experiences that affect how patients evaluate their overall care, we performed in-depth interviews with key clinic staff. We interviewed physicians, nurses, administrators, clerical personnel and the physician assistant to gain insight into the various components of the clinic care process. The breadth of component experiences was confirmed during pre-testing of the survey instrument.

With the help of a health communications specialist, we simplified the wording and flow of the survey so that patients with low health literacy could complete it with ease [33]. The survey was translated into Spanish, and the translation was reviewed by two Spanish speakers. The survey has a Flesch-Kincaid 6th-grade reading level and takes about 10 minutes to complete.

Pre-testing the Survey Instrument

We conducted one-on-one, face-to-face cognitive interviews to pretest the survey instrument and ensure that the survey was appropriately tailored to our population of mostly low-income, ethnic-minority patients living with HIV. These interviews confirmed survey comprehension and conceptual equivalence of the English and Spanish versions of the survey [34]. Finally, we asked open-ended questions about component experiences that shape how patients evaluate their overall care in clinic. This served to verify that the list of component experiences was complete.

Pre-testing continued until redundancy in data was reached. The convenience sample included 11 English- and 10 Spanish-speaking patients. Interviews were audio-taped and lasted about 1 hour. Each participant received $10. We revised the survey as follows, based on results of cognitive interviews: 1) wording of certain survey questions was modified; 2) laboratory and social work services were added as component experiences; and 3) the response scale anchors for the questions about feelings toward the provider and care were changed from “delighted - terrible” to “completely satisfied - completely dissatisfied.”

Survey Administration

Preliminary study eligibility was determined by reviewing medical records. Using a systematic sampling method, we approached patients who met the inclusion criteria based on their check-in times. When a member of the research team became available to administer the survey, he or she approached the eligible patient with the most recent check-in time. Patients were told that personal data would be kept confidential and that answers would not affect their care. Patients completed the surveys while they waited for their appointment. Survey mode was coded as interviewer-administered if the patient received assistance in completing the survey.

Clinical and Demographic Characteristics

Electronic medical records and administrative data were reviewed. The following data were abstracted for participants: age, date of initial visit at the clinic, appointment data, CD4 cell count, and HIV viral load. For eligible patients who declined to participate, data abstraction was limited to age, race, sex, and ethnicity.

Quality Control

A single staff member performed double data entry of completed surveys. Five percent of medical records and administrative data were reviewed manually to make sure the data were abstracted correctly.

Statistical Analysis

Exploratory factor analysis.

Exploratory factor analysis of the 15 clinic and staff attributes determined whether certain attributes were similar enough to group into multi-item constructs. We used principal components analysis with the communality estimates set to one. Analysis revealed two interpretable factors, which explained 58% of the variance in these attributes. The factor solutions were rotated using an orthogonal rotation (varimax method) and interpreted using factor loadings of 0.6 or higher and similar thematic content [35]. The first factor reflected evaluations of the facility environment (noise, cleanliness, concern for patient privacy and clinic hours). The second factor reflected evaluations of the staff (person making appointment, front desk staff, and nurse). Based on these findings, facility and staff were defined as separate multi-item constructs. The third factor was not interpretable, and the remaining attributes were treated as single-item measures.

Provider satisfaction was distinguished from overall satisfaction with care by performing a principal components analysis of the items measuring these constructs. Analysis revealed two factors, which explained 66% of the variance. The first factor reflected evaluations of the provider (likelihood of recommending provider, trust in provider, feelings about provider, and intention to switch provider). The second factor reflected evaluations of overall satisfaction with care received in the clinic (likelihood of recommending the clinic, feelings about care, and intention to switch clinic). These findings confirmed the treatment of provider and overall satisfaction as separate multi-item constructs.
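The extraction-and-rotation procedure described above can be sketched with NumPy; the data below are simulated for illustration (four hypothetical items driven by two latent factors), while the steps (principal components of the correlation matrix, varimax rotation, interpretation at loadings of 0.6 or higher) follow the text:

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Orthogonal (varimax) rotation of a factor-loading matrix."""
    p, k = loadings.shape
    rotation = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        L = loadings @ rotation
        u, s, vh = np.linalg.svd(
            loadings.T @ (L**3 - (gamma / p) * L @ np.diag((L**2).sum(axis=0)))
        )
        rotation = u @ vh
        d_new = s.sum()
        if d_new < d * (1 + tol):  # criterion stopped improving
            break
        d = d_new
    return loadings @ rotation

# Simulated ratings: items 0-1 driven by one latent factor, items 2-3 by another.
rng = np.random.default_rng(0)
n = 500
f1, f2 = rng.normal(size=n), rng.normal(size=n)
X = np.column_stack([
    f1 + 0.3 * rng.normal(size=n),
    f1 + 0.3 * rng.normal(size=n),
    f2 + 0.3 * rng.normal(size=n),
    f2 + 0.3 * rng.normal(size=n),
])

# Principal components of the correlation matrix (communalities set to one).
R = np.corrcoef(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

k = 2  # retain two factors, as in the study
explained = eigvals[:k].sum() / eigvals.sum()
loadings = eigvecs[:, :k] * np.sqrt(eigvals[:k])
rotated = varimax(loadings)

# Interpret each factor from items with loadings of 0.6 or higher.
for j in range(k):
    items = np.where(np.abs(rotated[:, j]) >= 0.6)[0]
    print(f"factor {j}: items {items}, total variance explained {explained:.0%}")
```

With this strong simulated structure, each item loads above 0.6 on exactly one rotated factor, mirroring how the facility and staff constructs were identified.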

Validity and reliability.

The validity of the four multi-item constructs (facility, staff, provider and overall satisfaction) was assessed by examining Pearson’s correlations between the individual items comprising the constructs. The multi-item constructs demonstrated satisfactory levels and patterns of intra-scale correlation (Table 2). High correlations between items intended to measure a given construct suggested the convergent validity of those measures. Likewise, substantially lower correlations between items within a given construct and items intended to measure other constructs suggested the discriminant validity of these measures.


Table 2. Item intercorrelations of patients’ component experiences and overall satisfaction.


We tested internal consistency reliability by estimating Cronbach’s alpha coefficient. The Cronbach’s alpha coefficient for the facility, staff, provider and overall satisfaction constructs were 0.85, 0.91, 0.84, and 0.70, respectively (Table 1). Scores for each multi-item construct were obtained by averaging the responses to all items comprising the construct.
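Cronbach’s alpha and the construct scoring described here are straightforward to compute; a minimal sketch with hypothetical response data (not the study’s):

```python
import numpy as np

def cronbach_alpha(items):
    """Internal consistency of an (n_respondents, n_items) response matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Hypothetical ratings for a 3-item construct (rows are respondents).
responses = np.array([
    [9, 8, 9],
    [7, 7, 8],
    [10, 9, 10],
    [6, 5, 6],
    [8, 8, 9],
])
alpha = cronbach_alpha(responses)
print(round(alpha, 2))

# As in the study, each construct score is the mean of its items.
construct_scores = responses.mean(axis=1)
```

Items that rise and fall together inflate the variance of the total score relative to the sum of the item variances, which is what pushes alpha toward 1.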

Bivariate analyses.

Bivariate analyses between potential confounders and overall satisfaction with care were performed. The demographic, health status, behavioral, and clinic utilization characteristics listed in Table 3 were included in the regression model of overall satisfaction as potential confounding variables if their bivariate correlation or t-test reached a significance level of p<0.10. We used a significance threshold of p<0.10 instead of p<0.05 to minimize the risk of omitting a potential confounding variable from the final regression.


Table 3. Baseline characteristics of participants at Thomas Street Health Center and the Veterans Affairs Medical Center in Houston, Texas (N = 489).
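The entry rule above (retain a covariate if its bivariate test reaches p<0.10) can be sketched as follows; the variable names are hypothetical and the data simulated:

```python
import numpy as np
from scipy import stats

P_ENTRY = 0.10  # liberal threshold, to avoid omitting a potential confounder

def keep_continuous(candidate, satisfaction):
    """Screen a continuous covariate (e.g. age) via Pearson correlation."""
    _, p = stats.pearsonr(candidate, satisfaction)
    return bool(p < P_ENTRY)

def keep_binary(group, satisfaction):
    """Screen a binary covariate (e.g. survey mode) via a two-sample t-test."""
    _, p = stats.ttest_ind(satisfaction[group == 1], satisfaction[group == 0])
    return bool(p < P_ENTRY)

rng = np.random.default_rng(3)
n = 489
satisfaction = rng.normal(7.5, 1.5, size=n)
age = satisfaction * 2 + rng.normal(size=n)   # correlated -> should enter
coin_flip = rng.integers(0, 2, size=n)        # unrelated -> usually excluded

print(keep_continuous(age, satisfaction))  # True
print(keep_binary(coin_flip, satisfaction))
```

Covariates passing either screen would then be carried forward as control variables in the regression model.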

Multiple regression analyses.

We performed multiple linear regression analyses to determine the strength of association between component experiences and overall satisfaction. Specifically, we estimated two regression models in a hierarchical fashion. Model 1 consists of only control variables and serves to evaluate their predictive ability as a group. Model 2 consists of control variables and the predictors of interest (i.e. component experiences), and serves to show the incremental explanation achieved by the component experiences beyond that of the control variables alone. The key assumptions of multiple regression analysis (i.e. linearity, normal distribution of residuals, constant variance of residuals, and absence of multicollinearity) were tested and verified. Pair-wise deletion of missing data was conducted.
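The hierarchical comparison of Model 1 (controls only) against Model 2 (controls plus component experiences) can be sketched with ordinary least squares in NumPy; the data below are simulated, and the variable groupings are stand-ins for the study’s actual covariates:

```python
import numpy as np

def adjusted_r2(y, X):
    """Fit OLS by least squares and return adjusted R-squared.
    X excludes the intercept; one is added here."""
    n = len(y)
    Xc = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
    resid = y - Xc @ beta
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    r2 = 1 - ss_res / ss_tot
    k = Xc.shape[1] - 1  # predictors, excluding intercept
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# Simulated sketch: controls alone (Model 1) vs controls plus component
# experiences (Model 2); the gain in R-squared is the incremental explanation.
rng = np.random.default_rng(1)
n = 489
controls = rng.normal(size=(n, 3))     # stand-ins for age, health status, ...
components = rng.normal(size=(n, 4))   # stand-ins for provider, facility, ...
y = 0.5 * components[:, 0] + 0.2 * components[:, 1] + rng.normal(size=n)

r2_model1 = adjusted_r2(y, controls)
r2_model2 = adjusted_r2(y, np.column_stack([controls, components]))
print(r2_model2 - r2_model1)  # incremental variance explained by Model 2
```

Because the simulated outcome depends only on the component experiences, Model 2 gains substantial explanatory power over Model 1, which is the pattern the hierarchical design is built to detect.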

Statistical analyses were performed with SAS version 9.2 (SAS Institute Inc, Cary, NC) and SPSS version 19 (SPSS Inc, Chicago, IL).

The Institutional Review Board for Baylor College of Medicine and Affiliated Institutions approved this study. Participants provided verbal informed consent.


Results

Study Population Characteristics

Of 553 patients approached, 28 declined, 26 met the exclusion criteria, 4 did not finish the survey in the allotted time, and 6 did not meet the inclusion criteria on further record review. A total of 489 patients were included in the analyses, 101 from the VAMC and 388 from TSHC. The participation rate among eligible patients was 94% (489/521). Participants were similar to eligible non-participants in terms of age, race, sex, and ethnicity. As shown in Table 3, a majority of the participants were men (71%), non-Hispanic black (61%), had a household income of ≤ $10,000 (54%), and reported unprotected heterosexual contact as an HIV risk factor (50%). A total of 97% of participants reported having a “regular personal HIV provider.”

Item Nonresponse

A total of 73.2% of participants had no missing items, and 94.1% had a missing item rate of <5%. The average rate of missing items for a given participant was 1.4%. Rates of nonresponse for individual survey items were low, ranging from 0.2% to 4.1%.

Ratings of Component Experiences and Overall Satisfaction

Table 1 displays the distribution of patient ratings. The ratings for the component experiences were generally high, with the exception of parking and wait time (mean scores were 2.7 and 3.0, respectively, on a 5-point scale). Patients reported high levels of overall satisfaction with care received. For example, over 90% of patients stated that they would probably (23.4%) or definitely (69.8%) “recommend this clinic to other patients with HIV,” and over 80% stated that they felt mostly satisfied (26.7%) or completely satisfied (57.3%) with the care they received in clinic.

Relationship between Component Experiences and Overall Satisfaction

Age, health status, depression, relationship status, education, self-efficacy, Spanish language preference, time enrolled in the clinic and survey mode met the criteria for entry into the multiple regression analysis as control variables. Table 4 shows the linear regression model of patient component experiences on overall satisfaction with care received in clinic. Listwise and mean substitution treatment of missing values yielded comparable results. Results did not differ between the clinic sites.


Table 4. Multiple regression of patients’ component experiences on overall satisfaction.


The regression model had an adjusted R square of 0.487 (p<0.001), indicating that the component experiences account for almost half of the explained variation in overall patient satisfaction. The top four component experiences driving overall satisfaction were: 1) satisfaction with the HIV provider (standardized β = 0.445, p<0.001), 2) facility environment (standardized β = 0.171, p = 0.038), 3) ease of calling the clinic and getting answers (standardized β = 0.124, p = 0.038), and 4) staff (standardized β = 0.161, p = 0.062). Large β coefficients reflect a strong relationship between the component experience and overall satisfaction. The size of the unstandardized B coefficient for each component experience indicates the rate of change in overall satisfaction for each unit change in that component experience. For example, satisfaction with the provider has an unstandardized B coefficient of 0.480, meaning that a 1-point increase in satisfaction with the provider is associated with a 0.480-point gain in overall satisfaction. Patients’ evaluations of wait time, parking, ease of getting to clinic, social work services, pharmacy, and laboratory were not significantly related to overall satisfaction.
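The relation between the standardized and unstandardized coefficients quoted above can be illustrated numerically; for a single predictor, β = B · sd(x)/sd(y). The data below are simulated, not the study’s:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(loc=5.0, scale=2.0, size=1000)  # e.g. a component-experience score
y = 0.48 * x + rng.normal(size=1000)           # e.g. overall satisfaction

b = np.polyfit(x, y, 1)[0]                 # unstandardized B: change in y per unit x
beta = b * x.std(ddof=1) / y.std(ddof=1)   # standardized beta

# Equivalent route: regress after z-scoring both variables.
zx = (x - x.mean()) / x.std(ddof=1)
zy = (y - y.mean()) / y.std(ddof=1)
beta_z = np.polyfit(zx, zy, 1)[0]

print(abs(beta - beta_z) < 1e-8)  # True: the two computations agree
```

Standardized betas put predictors on a common (standard deviation) footing for ranking drivers, while unstandardized B coefficients keep the original units and so support statements like “a 1-point increase in provider satisfaction is associated with a 0.48-point gain.”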


Discussion

In this study of 489 participants receiving outpatient HIV primary care, patients’ evaluation of their provider correlated most strongly with their overall satisfaction and accounted for almost half of the explained variance. Access and availability evaluations, such as clinic hours and ease of calling the clinic, also correlated with overall satisfaction, but less strongly. Patients generally gave high overall satisfaction ratings.

The upwardly skewed distribution of patient satisfaction responses is not surprising. Applied customer satisfaction research shows that most people are very satisfied with most products and services they buy [36]. It is the change in the proportion of responses falling into the uppermost response categories (e.g. “top box” scores) over time that provides the most insight into an organization’s performance trajectory [37], [38]. Our findings of high levels of satisfaction are also consistent with other patient satisfaction studies [4], [39], [40]. Interestingly, certain component experiences, like wait time and parking, did not correlate with overall satisfaction. Although patients gave low ratings for wait time and parking, dissatisfaction with those components did not translate into lower levels of overall satisfaction. Our results suggest that as long as patients have positive experiences with their provider, they tend to overlook shortcomings in certain non-interpersonal aspects of care. As such, if clinics want to improve the overall patient care experience, they need to develop strategies to improve patients’ evaluation of their providers.

The facility environment and ease of calling the clinic constructs were also associated with overall satisfaction. The facility construct included questions about the clinic’s concern for patient privacy, clinic hours, noise, and cleanliness. Patients with HIV infection experience actual and perceived HIV-related stigma; thus, privacy concerns may influence their overall satisfaction [41]. Experiences included in the facility and ease of calling the clinic constructs represent modifiable components and, while not major drivers, could serve as easy targets for improvement.

The staff construct (front desk staff and nurse) was associated with overall satisfaction, but the association did not reach statistical significance. Many studies in the patient satisfaction literature report an association between satisfaction with nursing care and overall satisfaction [42]–[45]. Most of these studies took place in the hospital setting, where nursing care occurs continuously. In contrast, patients in the outpatient clinic setting tend to have brief encounters with nurses relative to providers, which may explain our findings.

A limited number of cross-sectional studies suggest that patients’ evaluation of their provider is important because it impacts subsequent patient behavior, like treatment adherence and intention to return to the provider [46]–[48]. However, it is unclear which aspect of provider care most strongly drives patients’ evaluation. Interpersonal dimensions such as provider warmth, empathy, trust, and communication skills have been associated with more favorable patient evaluations [4], [39], [40]. However, studies consistently show that these factors explain only a small fraction of the variance in overall satisfaction scores. Organizational factors beyond a provider’s control may affect patients’ evaluation of their providers. For example, adequate time allotted for clinic visits, continuity of care with the same provider, and minimal wait time between requests for an appointment and the actual appointment date have been associated with favorable patient evaluations [49]–[51]. Additionally, satisfaction with a treatment regimen (e.g. complexity, discomfort, convenience) may also affect how patients rate the provider experience [52]. In-depth qualitative and longitudinal quantitative studies are needed to better understand which provider attributes correlate most highly with patients’ perception of their provider. In addition, longitudinal studies would allow for patient evaluations at multiple points in time and inform how patients’ evaluations of their providers impact subsequent behavior and, ultimately, clinical outcomes.

This study has several methodological strengths. Our study included a predominantly low-income, ethnic-minority population generally not well represented in patient satisfaction studies. Cognitive interviews ensured that survey items reflected all aspects of the clinic experience salient to patients. The study population was systematically sampled, the participation rate was high, and the item non-response rate low. Finally, when possible, we used multi-item constructs to minimize the risk of measurement error.

This study also has certain limitations. The correlational nature of our data precludes causal inferences. The mere participation in a satisfaction survey may inflate respondents’ ratings of satisfaction, a behavior noted in the marketing literature as “the question-behavior effect” [53], [54]. In addition, participants were enrolled in care at the VA and a public clinic, and the findings may not generalize to patients in other settings. Lastly, the rates of depression, excessive alcohol use, and illegal or prescription drug abuse may represent conservative estimates due to the use of single-item screening questions.


Conclusions

This study quantifies the relative importance of each component experience in shaping patients’ evaluation of their overall satisfaction with care. The findings provide insight into the component experiences organizations should focus on to most effectively manage patient experiences.

In our study, satisfaction with one’s HIV primary care provider most strongly predicted overall satisfaction with care received in clinic. The patient-provider relationship exceeds other component experiences of care in its association with overall satisfaction. Our study suggests that focusing on improving patients’ satisfaction with their provider yields the highest return on investment, and that interventions to improve overall patient satisfaction should target this dimension of care.


This project was supported in part by the facilities and resources of the Houston Veterans Affairs Health Services Research and Development Center of Excellence, Michael E. DeBakey Veterans Affairs Medical Center and the Harris County Hospital District, Houston, Texas, United States of America. The views expressed in this article are those of the authors and do not necessarily represent the views of the Department of Veterans Affairs.

Author Contributions

Conceived and designed the experiments: BND RAW MCR TPG. Analyzed the data: BND RAW. Wrote the paper: BND. Interpreted the data: BND RAW TPG. Critical revision of the manuscript for important intellectual content: BND RAW MCR TPG.


References

  1. Giordano LA, Elliott MN, Goldstein E, Lehrman WG, Spencer PA (2010) Development, implementation, and public reporting of the HCAHPS survey. Med Care Res Rev 67(1): 27–37. doi: 10.1177/1077558709341065
  2. (2010) Patient Protection and Affordable Care Act, Pub. L. No. 111–148, §2702, 124 Stat. 119, 318–319.
  3. Mittal V, Frennea C (2010) Customer satisfaction: A strategic review and guidelines for managers. MSI Fast Forward Series, Marketing Science Institute, Cambridge, MA.
  4. Crow R, Gage H, Hampson S, Hart J, Kimber A (2002) The measurement of satisfaction with healthcare: Implications for practice from a systematic review of the literature. Health Technol Assess 6(32): 1–244.
  5. Bartlett EE, Grayson M, Barker R, Levine DM, Golden A, et al. (1984) The effects of physician communications skills on patient satisfaction, recall, and adherence. J Chronic Dis 37(9–10): 755–64. doi: 10.1016/0021-9681(84)90044-4
  6. Hazzard A, Hutchinson SJ, Krawiecki N (1990) Factors related to adherence to medication regimens in pediatric seizure patients. J Pediatr Psychol 15(4): 543–55. doi: 10.1093/jpepsy/15.4.543
  7. Roberts KJ (2002) Physician-patient relationships, patient satisfaction, and antiretroviral medication adherence among HIV-infected adults attending a public health clinic. AIDS Patient Care STDS 16(1): 43–50. doi: 10.1089/108729102753429398
  8. Schneider J, Kaplan SH, Greenfield S, Li W, Wilson IB (2004) Better physician-patient relationships are associated with higher reported adherence to antiretroviral therapy in patients with HIV infection. J Gen Intern Med 19(11): 1096–103. doi: 10.1111/j.1525-1497.2004.30418.x
  9. Barbosa CD, Balp MM, Kulich K, Germain N, Rofail D (2012) A literature review to explore the link between treatment satisfaction and adherence, compliance, and persistence. Patient Prefer Adherence 6: 39–48. doi: 10.2147/ppa.s24752
  10. Wood E, Hogg RS, Yip B, Harrigan PR, O’Shaughnessy MV, et al. (2004) The impact of adherence on CD4 cell count responses among HIV-infected patients. J Acquir Immune Defic Syndr 35(3): 261–8. doi: 10.1097/00126334-200403010-00006
  11. Li JZ, Paredes R, Ribaudo HJ, Svarovskaia ES, Kozal MJ, et al. (2012) Relationship between minority nonnucleoside reverse transcriptase inhibitor resistance mutations, adherence, and the risk of virologic failure. AIDS 26(2): 185–92. doi: 10.1097/qad.0b013e32834e9d7d
  12. Marquis MS, Davies AR, Ware JE (1983) Patient satisfaction and change in medical care provider: A longitudinal study. Med Care 21(8): 821–9. doi: 10.1097/00005650-198308000-00006
  13. Safran DG, Montgomery JE, Chang H, Murphy J, Rogers WH (2001) Switching doctors: Predictors of voluntary disenrollment from a primary physician’s practice. J Fam Pract 50(2): 130–6.
  14. Jha AK, Orav EJ, Zheng J, Epstein AM (2008) Patients’ perception of hospital care in the United States. N Engl J Med 359(18): 1921–31. doi: 10.1056/nejmsa0804116
  15. Boulding W, Glickman SW, Manary MP, Schulman KA, Staelin R (2011) Relationship between patient satisfaction with inpatient care and hospital readmission within 30 days. Am J Manag Care 17(1): 41–8.
  16. Oliver RL (2010) Satisfaction: A behavioral perspective on the consumer. 2nd ed. Armonk: M. E. Sharpe.
  17. Westbrook RA (1981) Sources of consumer satisfaction with retail outlets. Journal of Retailing 57(3): 68–85.
  18. Smith PC, Kendall LM, Hulin CL (1969) The measurement of satisfaction in work and retirement. Chicago: Rand McNally.
  19. Andrews FM, Crandall R (1976) The validity of measures of self-reported well-being. Soc Indic Res 3: 1–19. doi: 10.1007/bf00286161
  20. Centers for Disease Control and Prevention (2008) HIV prevalence estimates–United States, 2006. MMWR Morb Mortal Wkly Rep 57(39): 1073–6.
  21. Safran DG, Karp M, Coltin K, Chang H, Li A, et al. (2006) Measuring patients’ experiences with individual primary care physicians: Results of a statewide demonstration project. J Gen Intern Med 21(1): 13–21. doi: 10.1111/j.1525-1497.2005.00311.x
  22. Mittal V, Ross WT, Baldasare PM (1998) Asymmetric impact of negative and positive attribute-level performance on overall satisfaction and repurchase intentions. J Marketing 62(1): 33–47. doi: 10.2307/1251801
  23. Watkins CL, Lightbody CE, Sutton CJ, Holcroft L, Jack CI, et al. (2007) Evaluation of a single-item screening tool for depression after stroke: A cohort study. Clin Rehabil 21(9): 846–52. doi: 10.1177/0269215507079846
  24. Smith PC, Schmidt SM, Allensworth-Davies D, Saitz R (2009) Primary care validation of a single-question alcohol screening test. J Gen Intern Med 24(7): 783–8. doi: 10.1007/s11606-009-0928-6
  25. Smith PC, Schmidt SM, Allensworth-Davies D, Saitz R (2010) A single-question screening test for drug use in primary care. Arch Intern Med 170(13): 1155–60. doi: 10.1001/archinternmed.2010.140
  26. Davies AR, Ware JE (1981) Measuring health perceptions in the health insurance experiment. Santa Monica, CA: RAND Corporation, R-2711-HHS.
  27. Hennessy CH, Moriarty DG, Zack MM, Scherr PA, Brackbill R (1994) Measuring health-related quality of life for public health surveillance. Public Health Rep 109(5): 665–72.
  28. 28. DeSalvo KB, Bloser N, Reynolds K, He J, Muntner P (2006) Mortality prediction with a single general self-rated health question. A meta-analysis. J Gen Intern Med 21(3): 267–75. doi: 10.1111/j.1525-1497.2005.00291.x
  29. Pearlin LI, Schooler C (1978) The structure of coping. J Health Soc Behav 19(1): 2–21. doi: 10.2307/2136319
  30. Pearlin LI, Lieberman MA, Menaghan EG, Mullan JT (1981) The stress process. J Health Soc Behav 22(4): 337–56. doi: 10.2307/2136676
  31. Huba GJ, Melchior LA, Staff of The Measurement Group, and HRSA/HAB’s SPNS Cooperative Agreement Steering Committee (1996) Module 64: Self-Efficacy Form. Culver City, California: The Measurement Group. Available: /module64.htm. Accessed 20 July 2012.
  32. CAHPS Clinician & Group Surveys. Accessed 24 July 2012.
  33. Gossey JT, Volk RJ (2008) Health literacy of patients served in the Harris County Hospital District. Houston: Baylor College of Medicine.
  34. Willis GB (2005) Cognitive interviewing: A tool for improving questionnaire design. Thousand Oaks: Sage Publications.
  35. Hair J, Black WC, Babin BJ, Anderson RE (2010) Multivariate data analysis. 7th ed. Englewood Cliffs: Prentice Hall.
  36. The American Customer Satisfaction Index. Accessed 20 July 2012.
  37. Tull DS, Hawkins DI (1993) Marketing research: Measurement & method: A text with cases. 6th ed. New York: MacMillan.
  38. Zikmund WG, Babin BJ (2012) Essentials of marketing research. 5th ed. Mason: South-Western College Publishing.
  39. Kane RL (2005) Chapter 8: Satisfaction with care. In: Understanding Health Care Outcomes Research. Sudbury: Jones & Bartlett Pub. 159–197.
  40. Sitzia J, Wood N (1997) Patient satisfaction: A review of issues and concepts. Soc Sci Med 45(12): 1829–43. doi: 10.1016/s0277-9536(97)00128-7
  41. Mahajan AP, Sayles JN, Patel VA, Remien RH, Sawires SR, et al. (2008) Stigma in the HIV/AIDS epidemic: A review of the literature and recommendations for the way forward. AIDS 22(Suppl 2): S67–79. doi: 10.1097/01.aids.0000327438.13291.62
  42. Abramowitz S, Cote AA, Berry E (1987) Analyzing patient satisfaction: A multianalytic approach. QRB Qual Rev Bull 13(4): 122–30.
  43. Cleary PD, Keroy L, Karapanos G, McMullen W (1989) Patient assessments of hospital care. QRB Qual Rev Bull 15(6): 172–9.
  44. Woodside AG, Frey LL, Daly RT (1989) Linking service quality, customer satisfaction, and behavioral intention. J Health Care Mark 9(4): 5–17.
  45. Pilpel D (1996) Hospitalized patients’ satisfaction with caregivers’ conduct and physical surroundings. J Gen Intern Med 11(5): 312–4. doi: 10.1007/bf02598274
  46. Beach MC, Keruly J, Moore RD (2006) Is the quality of the patient-provider relationship associated with better adherence and health outcomes for patients with HIV? J Gen Intern Med 21(6): 661–5. doi: 10.1111/j.1525-1497.2006.00399.x
  47. Garman AN, Garcia J, Hargreaves M (2004) Patient satisfaction as a predictor of return-to-provider behavior: Analysis and assessment of financial implications. Qual Manag Health Care 13(1): 75–80. doi: 10.1097/00019514-200401000-00007
  48. Safran DG, Taira DA, Rogers WH, Kosinski M, Ware JE, et al. (1998) Linking primary care performance to outcomes of care. J Fam Pract 47(3): 213–20. doi: 10.1097/00005650-199805000-00012
  49. Morrell DC, Evans ME, Morris RW, Roland MO (1986) The “five minute” consultation: Effect of time constraint on clinical content and patient satisfaction. Br Med J (Clin Res Ed) 292(6524): 870–3. doi: 10.1136/bmj.292.6524.870
  50. Baker R (1996) Characteristics of practices, general practitioners and patients related to levels of patients’ satisfaction with consultations. Br J Gen Pract 46(411): 601–5.
  51. Fan VS, Burman M, McDonell MB, Fihn SD (2005) Continuity of care and other determinants of patient satisfaction with primary care. J Gen Intern Med 20(3): 226–33. doi: 10.1111/j.1525-1497.2005.40135.x
  52. Weaver M, Patrick DL, Markson LE, Martin D, Frederic I, et al. (1997) Issues in the measurement of satisfaction with treatment. Am J Manag Care 3(4): 579–94.
  53. Dholakia UM (2010) Chapter 8: A critical review of question–behavior effect research. In: Malhotra NK. Review of Marketing Research. Vol. 7. Bingley: Emerald Group Publishing Limited. 145–197.
  54. Liu W, Gal D (2011) Bringing us together or driving us apart: The effect of soliciting consumer input on consumers’ propensity to transact with an organization. The Journal of Consumer Research 38(2): 242–259. doi: 10.1086/658884