Abstract
Consumer feedback is now widely promoted in Australia as a means of informing the planning, delivery and evaluation of mental health services [1–4]. The value of including the patient's perspective on the services provided is becoming increasingly recognized at a national and statewide level. The ‘measurement of patient satisfaction and patient experience of health services, particularly with respect to outcomes’ is one of the major activities outlined in the Quality Improvement and Enhancement Plan for Queensland Health [4]. Although there is no universally accepted method of assessing patient satisfaction, the use of self-completed questionnaires by patients is commonly employed.
The increasing use of satisfaction surveys in the mental health field has been driven by the move towards consumer participation, the need to provide data for quality assurance/accreditation purposes and as a measure of treatment outcome [2, 5]. While the rush to monitor patient satisfaction has resulted in an exponential increase in the number of survey instruments available, many of these are poorly designed and suffer from questionable validity and reliability [6, 7]. A recent review of satisfaction surveys in the US found that only 11% tested inter-item reliability and only 5% used factor analysis in their development [7].
The majority of instruments used in the mental health field have been developed through items generated by service providers [7]. In the absence of consumer input, many of these existing instruments suffer from low ‘content’ and ‘face’ validity by failing to capture aspects of satisfaction particularly relevant and important to consumers of mental health services [8–10]. Moreover, purely quantitative measures may fail to uncover serious dissatisfaction in specific areas. Studies of inpatient psychiatric services in the UK illustrate this: satisfaction was extremely positive when assessed using a global quantitative scale [11, 12], yet in semistructured interviews these same patients expressed significant dissatisfaction with many aspects of their treatment [12]. Critics urge greater consumer input in the development of instruments and the use of open-ended questions to capture the values and experiences of the patients themselves [5, 13].
The high level of patient satisfaction (typically 75–90% satisfied) reported by many of the surveys in current use is of concern [14]. This lack of variation in responses is likely to be an artefact of the scale design rather than a true reflection of satisfaction [5]. Many of the surveys reviewed use the ‘yes/no’ response format, which does not allow for the dispersion of responses at the positive end of the scale. Ware and Hays found that the ‘E5’ format (poor, fair, good, very good and excellent) provided greater response variability and superior predictive properties for a number of patient behaviours [15].
Finally, difficulties in conceptualizing satisfaction have given rise to longer survey instruments as researchers try to capture anything that might contribute to satisfaction [6]. Since inpatient care is now restricted to those individuals requiring stabilization during periods of acute crisis [16], self-report measures need to be brief, simply worded and easily administered. However, instruments developed in Australia tend to be rather long [17, 18], focus on community services [2] or target specific clinical conditions/groups [1, 3].
The current study was designed to advance our understanding of the factors underpinning patient satisfaction in the inpatient setting and to address many of the shortcomings of previous survey development. We describe the development and testing of a brief satisfaction measure for inpatients, the Inpatient Evaluation of Service Questionnaire (IESQ).
Method
Instrument development employed three separate but related phases. Phase I involved focus group discussions with 66 inpatients at three acute care units. The aim of the discussion groups (n = 8) was to generate a pool of items related to patient satisfaction with hospital stay. Discussion groups were discontinued when participants failed to raise new or additional items beyond those already identified in previous groups. The group discussions were conducted by the first author (TM) and were guided by open-ended questions such as, ‘What do you like most/least about your current stay in hospital?’ and ‘If you could change one thing to make your stay more pleasant what would it be?’ While this procedure generated an extensive pool of issues, these were summarized around core themes as outlined by Sim [19]. Three service aspects considered important in the literature [3, 20] but not raised in the group discussions were added. These included ‘the attention the staff gave to your concerns and worries’, ‘the standard of privacy in your ward’, and ‘the groups/activities provided by the staff’. This resulted in a total of 51 items.
In Phase II, a second sample of 72 patients from the same three acute units was asked to rate the 51 items in terms of importance (1 = ‘not at all important’ to 5 = ‘extremely important’) in contributing to their satisfaction with hospital stay. The items rated most important in determining satisfaction included ‘being respected by staff’ (mean = 4.35) and the ‘quality of service provided by the nursing staff’ (mean = 4.24). The ‘number of patients in the ward’ was rated least important (mean = 1.95). Twenty items with a mean importance score of ‘3’ or greater were retained. These 20 items were used to construct the current questionnaire. Additional questions were included to collect relevant demographic information, to evaluate behavioural intentions and to allow for freehand comments. The final 29 item questionnaire is structured as follows:
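The Phase II retention rule (keep items with a mean importance score of 3 or greater) can be sketched as follows. The item names and ratings below are illustrative only, not the study's data:

```python
import statistics

# Hypothetical importance ratings (1 = 'not at all important' to
# 5 = 'extremely important') for three candidate items; the study's
# actual 51 items and patient ratings are not reproduced here.
ratings = {
    "respected_by_staff": [5, 4, 5, 4, 4],
    "quality_of_nursing": [4, 5, 4, 4, 4],
    "number_of_patients_in_ward": [2, 1, 2, 3, 2],
}

# Retain items whose mean importance score is 3 or greater.
retained = [item for item, scores in ratings.items()
            if statistics.mean(scores) >= 3]

print(retained)  # the low-rated ward-size item is dropped
```

Applied to the study's 51 items, this rule yielded the 20 scaled items that form the core of the questionnaire.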
– 20 items concerning treatment and care, and the services offered by the hospital, rated using the ‘E5’ format (‘poor’, ‘fair’, ‘good’, ‘very good’, ‘excellent’);
– 1 item rating overall satisfaction with hospital stay (‘E5’ format);
– 2 items rating behavioural intentions (would advise a friend with similar problems to come to the hospital; would return to the hospital if they had similar problems);
– 2 open-ended questions that enabled patients to provide feedback on aspects of the hospital stay that they liked most and/or liked least; and
– 4 demographic items that have been found in the literature to influence satisfaction (age, gender, number of previous admissions, time in hospital since admission).
During Phase III the draft questionnaire was administered to 494 consecutive inpatients who were approaching discharge in acute (n = 3) and rehabilitation (n = 2) facilities. The rehabilitation facilities were included to assess the application of the instrument in this alternate inpatient environment. Three hundred and fifty-six (72%) completed surveys were returned.
During all three phases of the study, patients were excluded if their stay was less than 7 days. It was felt that exposure of less than 7 days would be too brief for patients to assess the inpatient environment and to make valid judgements about aspects of satisfaction [12]. Patients who were readmitted during the study period were not invited to participate in the study again.
Results
Initial analysis explored differences in satisfaction scores for patients across the service settings (acute and rehabilitation). Scores for each of the 20 scaled items were summed to provide an overall satisfaction score for each patient. While acute care patients (n = 195) were more satisfied overall (mean total score = 60.8 vs 55.6), the difference between the two service settings was not statistically significant (t = 1.02; df = 349; p > 0.05). Indeed, agreement between the mean item scores of the rehabilitation cohort and the total sample was significant (Pearson's r = 0.51). Consequently, the completed instruments from both settings (n = 356) were combined and analysed as a single data set from that point.
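The comparison above (summing the 20 scaled items per patient, then an independent-samples t-test across settings) can be sketched as below. The data are simulated, not the study's; sample sizes are chosen only so the degrees of freedom match the reported df = 349:

```python
import math
import random

random.seed(0)

# Simulated total scores: each patient's 20 item ratings (1 = 'poor'
# to 5 = 'excellent') are summed into one overall satisfaction score.
acute = [sum(random.randint(1, 5) for _ in range(20)) for _ in range(195)]
rehab = [sum(random.randint(1, 5) for _ in range(20)) for _ in range(156)]

def pooled_t(a, b):
    """Independent-samples t statistic with a pooled variance estimate."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb)), na + nb - 2

t, df = pooled_t(acute, rehab)
print(f"t = {t:.2f}, df = {df}")
```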
To examine the response variability of the IESQ, the 20 scaled items were subjected to a frequency analysis. The total scores for the items showed good dispersion of responses: 16.6% responded with an average rating of ‘poor’; 19.7% ‘fair’; 35.7% ‘good’; 14.7% ‘very good’; and 13.3% ‘excellent’. The single item rating ‘overall stay in this hospital’ correlated strongly with the total/summed score of the other 20 items (Pearson's r = 0.7828, p < 0.005).
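The correlation reported above between the single overall-stay item and the summed 20-item score is a standard Pearson product-moment coefficient, which can be computed as follows (the patient data here are made up for illustration):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy data: single overall-stay rating (1-5) and summed 20-item
# total score per patient (assumed values, not the study's data).
overall = [2, 3, 3, 4, 5, 1, 4]
total = [45, 58, 60, 75, 92, 30, 70]

print(round(pearson_r(overall, total), 3))
```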
Principal components factor extraction (with a factor loading cut-off of 0.40) with varimax rotation resulted in three factors with eigenvalues greater than one (9.30, 1.29 and 1.21). Considered together, the three factors accounted for 59% of the total variance. Factor I, accounting for 46.5% of the variance (Cronbach's alpha = 0.9316), describes a staff-patient alliance. In particular, the information and explanations given to patients by staff, the respect shown by staff, the availability of staff, the quality of service from nurses and the opportunity to be involved in decisions about treatment were important contributors to this alliance. The second factor (accounting for 6.5% of the variance with an alpha = 0.7830) focused on the treatment environment (cleanliness, privacy, food) and the activities provided for patients. The third factor (accounting for 6.0% of the variance with an alpha = 0.8607) describes a medical component and comprised items relating to the doctor's availability and quality of service, as well as explanations about treatment and the way treatment was perceived by the patient to meet their needs. A reliability analysis for the total scale suggests good internal consistency (Cronbach's alpha = 0.9511), with no deletion of items considered necessary or appropriate. The factor loadings are reported in Table 1.
Table 1. Factor loadings and variance explained by each factor
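The Cronbach's alpha values reported for each factor and for the full scale follow the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal sketch with made-up ratings (not the study's data):

```python
def cronbach_alpha(items):
    """items: list of per-item rating lists, one list per item, all
    covering the same respondents in the same order. Returns the
    standard Cronbach's alpha internal-consistency estimate."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Each respondent's total score across all items.
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum(var(it) for it in items) / var(totals))

# Toy data: three items rated 1-5 by five respondents (assumed values).
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 4, 2, 4, 3],
]
print(round(cronbach_alpha(items), 3))
```

Alpha rises when items covary strongly relative to their individual variances, which is why a dominant first factor such as the staff-patient alliance tends to show a high coefficient.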
Discussion
Ethical clearance for the present study was obtained from each of the hospitals involved. While informed consent was obtained in writing from those patients who participated in the focus group discussions (phase I) and the importance ratings (phase II), written consent was not obtained from the 356 patients who completed the draft survey (phase III). As completion of the survey was left to the discretion of individual patients who returned it anonymously, written consent from participants was deemed unnecessary.
The first two phases of this study were carried out at the same three acute psychiatric units. Thus, factors such as nature and duration of illness were likely to have remained constant during the 6 months of data collection. During phase III, we again invited patients from these same three acute units to participate in the completion of the draft questionnaire. In addition, we included patients from a rehabilitation service (n = 161) to assess the possibility of using the questionnaire in this environment. Although developed in the acute setting, the instrument seemed to perform equally well in the rehabilitation setting. A review of the freehand comments provided by the rehabilitation patients did not support the inclusion of additional items or other modifications to the instrument.
The procedure followed in trialing the draft instrument (i.e. distributing surveys to patients nearing discharge) is commonly used for monitoring satisfaction in inpatient units [12]. While attempts were made to ensure that all patients approaching discharge were invited to complete the draft survey, only 72% did so. Although non-participation may suggest a lack of satisfaction or complete satisfaction [3], the decision to participate will always rest with the patient. There will be a group of patients who choose not to participate and more expensive and invasive techniques such as focus group discussions or individual interviews could be used to solicit their views [12].
The domains covered in the IESQ compare favourably with other patient satisfaction surveys. Key dimensions reported in the literature include the social domain, which incorporates staff-patient relations [17, 21]; and the technical domain, which includes treatment and outcome items [22–24]. In keeping with previous Australian studies [2, 3], a single factor concerned with a staff-patient alliance accounted for almost 50% of the variance. It is clear that patients distinguish between the treatment provided and the treatment environment itself in that the second factor (which contained items related to treatment environment) explained much less of the variance (6.5%). Service providers should not underestimate the importance of treatment environment as dissatisfaction with this may contribute to behaviours such as aggression [25] and absconding [26].
In addition to the scaled items, the IESQ comprises two open-ended questions to elicit comments on aspects of the hospital stay that patients liked ‘most’ or liked ‘least’. These questions were included to allow for identification of particular aspects not covered by the scaled items, and to discover why, not just whether, inpatients were dissatisfied. While only 52% of those who responded chose to supply freehand comments, this additional information did provide important insights into satisfaction with specific aspects of the services on offer and was clearly valued by service providers.
The high levels of satisfaction (70–95%) found in earlier studies raise concerns about the sensitivity of satisfaction instruments [5]. We found that the ‘E5’ response format (‘poor’, ‘fair’, ‘good’, ‘very good’ and ‘excellent’) did produce good response variability across all response options, particularly at the positive end of the scale. This finding supports the argument that artificially high satisfaction ratings are likely to be a product of poor instrument design rather than a true absence of variation in patient views [27]. Moreover, it suggests that inpatients are capable of making judgements about satisfaction and discriminating between levels of satisfaction.
Our findings support previous studies in that younger patients were less satisfied and male patients were more satisfied. Other factors such as treating doctor, primary nurse and symptomatology may also have influenced satisfaction ratings [3, 12]. However, it was not possible to assess the impact of these variables on satisfaction as we chose not to collect identifying information. Further development of the instrument could explore the impact of mood on satisfaction by inviting patients to provide a self-rating of their mood as described by Eyers et al. [3].
In conclusion, the IESQ was developed to provide a brief, user-friendly instrument that overcomes some of the shortcomings of existing satisfaction measures. It covers a broad range of inpatient concerns, it is simply worded, easy to score, and is designed to be completed independently by the inpatient. While it assesses a number of satisfaction constructs, administration time is kept below 5 min. Ongoing assessment of the instrument would be warranted should it be used in settings other than acute/rehabilitation.
Acknowledgements
Thanks to the Schizophrenia Fellowship of New South Wales who provided financial support for the early stages of this project.
