Quality and Safety in Health Care Journal

Challenge of ensuring access to high-quality emergency surgical care for all

Emergency general surgery (EGS) encompasses a variety of common acute surgical conditions with high morbidity and mortality that often require timely delivery of resource-intensive care. In the UK, over 30 000 patients require an emergency laparotomy each year1 and a 2012 audit by the UK Emergency Laparotomy Network revealed a greater than 10-fold variation in mortality rates between hospitals.2 The wide variability in both processes of care and clinical outcomes makes EGS a prime target for quality improvement (QI) programmes, whereby promotion of evidence-based practices associated with better outcomes has the potential to impact thousands of lives.

The Enhanced Peri-Operative Care for High-risk patients (EPOCH) trial was designed to evaluate the impact of a national QI programme on survival after emergency abdominal surgery across 93 National Health Service (NHS) hospitals in the UK.1 In this trial, a care pathway consisting of 37 consensus-derived best...

REMS in pregnancy: system perfectly designed to get the results it gets

Reproductive drug safety has been a priority for patients and physicians even before the 1960s, when thalidomide—a drug commonly used to alleviate morning sickness—was tied to alarming cases of infants born with phocomelia.1 The Kefauver-Harris Amendment of 1962 prevented thalidomide approval in the USA.1 The legislation also led to immediate reforms in how drugs were approved, but not necessarily how they were prescribed.1 In the decades that followed, processes to regulate safe prescribing lagged.

The first reproductive drug safety initiatives were those for isotretinoin (Accutane) and thalidomide: the Accutane Pregnancy Prevention Program (1988), the System for Thalidomide Education and Prescribing Safety (1998) and the System to Manage Accutane-Related Teratogenicity (2002). In response to persistent gaps in these and other drug safety monitoring programmes, the US Food and Drug Administration (FDA) subsequently implemented the Risk Evaluation and Mitigation Strategy (REMS) programme in 2007.

Nothing soft about 'soft skills': core competencies in quality improvement and patient safety education and practice

Quality improvement and patient safety (QIPS) education programmes have proliferated in the past decade given the rising demand for healthcare professionals to develop the knowledge, skills and attitudes required to make improvements in healthcare.1–4 On the one hand, this proliferation is a positive sign of the institutionalisation of QIPS within our educational, practice, professional and regulatory spheres. On the other hand, while numerous QIPS education programmes are up and running, our understanding of key educational processes and how to optimise outcomes is still evolving. For instance, it remains unclear how to simultaneously optimise learning and project outcomes in quality improvement (QI) project-based learning or how to facilitate interprofessional learning in QIPS education.

In this issue of BMJ Quality and Safety, Myers and colleagues5 studied the influence of two postgraduate QIPS fellowship training programmes for physicians on graduates’ career outcomes...

Hospital-level evaluation of the effect of a national quality improvement programme: time-series analysis of registry data

Background and objectives

A clinical trial in 93 National Health Service hospitals evaluated a quality improvement programme for emergency abdominal surgery, designed to reduce mortality by improving the patient care pathway. Large variation was observed in implementation approaches, and the main trial result showed no mortality reduction. Our objective therefore was to evaluate whether trial participation led to care pathway implementation and to study the relationship between care pathway implementation and use of six recommended implementation strategies.

Methods

We performed a hospital-level time-series analysis using data from the Enhanced Peri-Operative Care for High-risk patients trial. Care pathway implementation was defined as achievement of >80% median reliability in 10 measured care processes. Mean monthly process performance was plotted on run charts. Process improvement was defined as an observed run chart signal, using probability-based ‘shift’ and ‘runs’ rules. A new median performance level was calculated after an observed signal.
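The probability-based 'shift' rule mentioned above is commonly operationalised as a fixed number of consecutive data points falling on the same side of the baseline median. As a minimal illustration (not the trial's actual analysis code, and with hypothetical data), a shift signal of six or more consecutive points can be detected like this:

```python
def shift_signal(values, median, run_length=6):
    """Detect a run-chart 'shift' signal: run_length or more consecutive
    monthly values all on the same side of the baseline median.
    Points exactly on the median neither break nor extend a run,
    per standard run chart conventions."""
    run = 0   # length of the current same-side run
    side = 0  # +1 above the median, -1 below
    for v in values:
        if v == median:
            continue
        s = 1 if v > median else -1
        run = run + 1 if s == side else 1
        side = s
        if run >= run_length:
            return True
    return False

# Hypothetical monthly process performance: jumps from ~50% to ~80%
monthly = [0.52, 0.48, 0.55, 0.50, 0.47, 0.53,
           0.81, 0.79, 0.84, 0.80, 0.82, 0.85]
baseline_median = 0.525
print(shift_signal(monthly, baseline_median))  # True: a sustained run above the median
```

After such a signal is observed, a new median would be calculated from the post-signal points, as the methods describe.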

Results

Of 93 participating hospitals, 80 provided sufficient data for analysis, generating 800 process measure charts from 20 305 patient admissions over 27 months. No hospital reliably implemented all 10 processes. Overall, only 279 of the 800 processes were improved (median 3 (IQR 2–5) per hospital) and 14/80 hospitals improved more than six processes. Mortality risk documented (57/80 (71%)), lactate measurement (42/80 (53%)) and cardiac output guided fluid therapy (32/80 (40%)) were most frequently improved. Consultant-led decision making (14/80 (18%)), consultant review before surgery (17/80 (21%)) and time to surgery (14/80 (18%)) were least frequently improved. In hospitals using ≥5 implementation strategies, 9/30 (30%) hospitals improved ≥6 care processes compared with 0/11 hospitals using ≤2 implementation strategies.

Conclusion

Only a small number of hospitals improved more than half of the measured care processes, more often when at least five of six implementation strategies were used. In a longer term project, this understanding may have allowed us to adapt the intervention to be effective in more hospitals.

Comparative effectiveness of risk mitigation strategies to prevent fetal exposure to mycophenolate

Background

In 2012, the US Food and Drug Administration approved a Risk Evaluation and Mitigation Strategy (REMS) programme including mandatory prescriber training and a patient/provider acknowledgement form to prevent fetal exposure to mycophenolate. Prior to the REMS, the teratogenic risk was solely mitigated via written information (black box warning, medication guide (MG period)). To date, there is no evidence on the effectiveness of the REMS.

Methods

We used a national private health insurance claims database to identify women aged 15–44 who filled ≥1 mycophenolate prescription. To compare fetal exposure during REMS with the MG period, we estimated the prevalence of pregnancy at treatment initiation in a pre/post comparison (analysis 1) and the rate of conception during treatment in a retrospective cohort study (analysis 2). Pregnancy episodes were measured based on diagnosis and procedure codes for pregnancy outcomes or prenatal screening. We used generalised estimating equation models with inverse probability of treatment weighting to calculate risk estimates.
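Inverse probability of treatment weighting, as used above, reweights each group by the inverse of its estimated probability of being in that group so the periods become comparable on measured covariates. A minimal numeric sketch (entirely hypothetical data and propensity scores, not the study's model, which used generalised estimating equations):

```python
import numpy as np

# Hypothetical data: each row is one mycophenolate treatment initiation.
rng = np.random.default_rng(0)
n = 200_000
ps = rng.uniform(0.2, 0.8, n)                   # assumed propensity of the REMS period
rems = (rng.uniform(size=n) < ps).astype(int)   # 1 = REMS period, 0 = MG period
# Assumed true pregnancy prevalence: 2 vs 4 per 1000 initiations
preg = (rng.uniform(size=n) < np.where(rems == 1, 0.002, 0.004)).astype(int)

# Inverse probability of treatment weights: 1/ps for REMS, 1/(1-ps) for MG
w = np.where(rems == 1, 1 / ps, 1 / (1 - ps))

def weighted_prev(outcome, weights):
    """Weighted prevalence of the outcome."""
    return np.sum(outcome * weights) / np.sum(weights)

prev_rems = weighted_prev(preg[rems == 1], w[rems == 1])
prev_mg = weighted_prev(preg[rems == 0], w[rems == 0])
print(f"prevalence per 1000: REMS={1000 * prev_rems:.1f}, "
      f"MG={1000 * prev_mg:.1f}, ratio={prev_rems / prev_mg:.2f}")
```

In practice the propensity score would be estimated from covariates with a logistic model, and confidence intervals would come from the GEE machinery rather than this point estimate alone.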

Results

The adjusted proportion of existing pregnancy per 1000 treatment initiations was 1.7 (95% CI 1.0 to 2.9) vs 4.1 (95% CI 3.2 to 5.4) during the REMS and MG periods, respectively. The adjusted prevalence ratio and prevalence difference were 0.42 (95% CI 0.24 to 0.74) and –2.4 (95% CI –3.8 to –1.0), respectively. In analysis 2, the adjusted rate of conception was 12.5 (95% CI 8.9 to 17.6) vs 12.9 (95% CI 9.9 to 16.9) per 1000 years of mycophenolate exposure time in the REMS versus MG periods. The adjusted risk ratio and risk difference were 0.97 (95% CI 0.63 to 1.49) and –0.4 (95% CI –5.9 to 5.0), respectively. Sensitivity analyses on the estimated conception date demonstrated robustness of our findings.

Conclusion

While the REMS programme resulted in fewer pregnancies at treatment initiation, it failed to prevent conception during treatment. Enhanced approaches to ensure effective contraception during treatment should be considered.

Demonstrating the value of postgraduate fellowships for physicians in quality improvement and patient safety

Background

Academic fellowships in quality improvement (QI) and patient safety (PS) have emerged as one strategy to fill a need for physicians who possess this expertise. The authors aimed to characterise the impact of two such programmes on the graduates and their value to the institutions in which they are housed.

Methods

In 2018, a qualitative study of two US QIPS postgraduate fellowship programmes was conducted. Graduates’ demographics and titles were collected from programme files, while perspectives of the graduates and their institutional mentors were collected through individual interviews and analysed using thematic analysis.

Results

Twenty-eight out of 31 graduates (90%) and 16 out of 17 (94%) mentors participated in the study across both institutions. At a median of 3 years (IQR 2–4) postgraduation, QIPS fellowship programme graduates’ effort distribution was: 50% clinical care (IQR 30–61.8), 48% QIPS administration (IQR 20–60), 28% QIPS research (IQR 17.5–50) and 15% education (IQR 7.1–30.4). Sixty-eight per cent of graduates were hired in the health system where they trained. Graduates described learning the requisite hard and soft skills to succeed in QIPS roles. Mentors described the impact of the programme on patient outcomes and increasing the acceptability of the field within academic medicine culture.

Conclusion

Graduates from two QIPS fellowship programmes and their mentors perceive programmatic benefits related to individual career goal attainment and institutional impact. The results and conceptual framework presented here may be useful to other academic medical centres seeking to develop fellowships for advanced physician training programmes in QIPS.

Deprescribing psychotropic medications in children: results of a national qualitative study

Background and Objective

Prescriptions for psychotropic medications to children have risen dramatically in recent years despite few regulatory approvals and growing concerns about side effects. Government policy and numerous programmes are attempting to curb this problem. However, the perspectives of practising clinicians have not been explored. Our objective was to characterise the perspectives and experiences of paediatric primary care clinicians and mental health specialists regarding overprescribing and deprescribing psychotropic medications in children.

Methods

We conducted 24 semistructured interviews with clinicians representing diverse geographic regions and practice settings in the USA. Interview questions focused on clinician perspectives surrounding overprescribing and experiences with deprescribing. We transcribed audio files verbatim and verified them for accuracy. We analysed transcripts using a grounded theory approach, identifying emergent themes and developing a conceptual model using axial coding.

Results

Analysis yielded themes within four domains: social and clinical contextual factors contributing to overprescribing, opportunities for deprescribing, and facilitators and barriers to deprescribing in paediatric outpatient settings. Most participants recognised the problem of overprescribing, and they described complex clinical and social contextual factors, as well as internal and external pressures, that contribute to overprescribing. Opportunities for deprescribing included identification of high-risk medications, routine reassessment of medication needs and recognition of the broader social needs of vulnerable children. Facilitators and barriers to deprescribing were both internal (eg, providing psychoeducation to families) and external (eg, parent and child preferences) to clinicians.

Conclusion

Our findings highlight a discrepancy between clinicians’ concerns about overprescribing and a lack of resources to support deprescribing in outpatient paediatric settings. To successfully initiate deprescribing, clinicians will need practical tools and organisational supports, as well as social resources for vulnerable families.

Cautionary study on the effects of pay for performance on quality of care: a pilot randomised controlled trial using standardised patients

Background

Due to the difficulty of studying incentives in practice, there is limited empirical evidence of the full impact of pay-for-performance (P4P) incentive systems.

Objective

To evaluate the impact of P4P in a controlled, simulated environment.

Design

We employed a simulation-based randomised controlled trial with three standardised patients to assess advanced practice providers’ performance. Each patient reflected one of the following: (A) indicated for P4P screenings, (B) too young for P4P screenings, or (C) indicated for P4P screenings, but screenings are unrelated to the reason for the visit. Indication was determined by the 2016 Centers for Medicare and Medicaid Services quality measures.

Intervention

The P4P group was paid $150 and received a bonus of $10 for meeting each of five outcome measures (breast cancer, colorectal cancer, pneumococcal, tobacco use and depression screenings) for each of the three cases (max $300). The control group received $200.

Setting

Learning resource centre.

Participants

35 advanced practice primary care providers (physician assistants and nurse practitioners) and 105 standardised patient encounters.

Measurements

Adherence to incentivised outcome measures, interpersonal communication skills, standards of care, and misuse.

Results

The Type A patient was more likely to receive indicated P4P screenings in the P4P group (3.82 out of 5 P4P vs 2.94 control, p=0.02) but received lower overall standards of care under P4P (31.88 P4P vs 37.06 control, p=0.027). The Type B patient was more likely to be prescribed screenings that were not indicated but were highlighted by P4P: breast cancer screening (47% P4P vs 0% control, p<0.01) and colorectal cancer screening (24% P4P vs 0% control, p=0.03). The P4P group over-reported completion of incentivised measures, resulting in overpayment (average of $9.02 per patient).

Limitations

A small sample size and limited variability in patient panel limit the generalisability of findings.

Conclusions

Our findings caution against the adoption of P4P by highlighting the unintended consequences of the incentive system.

Does team reflexivity impact teamwork and communication in interprofessional hospital-based healthcare teams? A systematic review and narrative synthesis

Background

Teamwork and communication are recognised as key contributors to safe and high-quality patient care. Interventions targeting process and relational aspects of care may therefore provide patient safety solutions that reflect the complex nature of healthcare. Team reflexivity is one such approach with the potential to support improvements in communication and teamwork, where reflexivity is defined as the ability to pay critical attention to individual and team practices with reference to social and contextual information.

Objective

To systematically review articles that describe the use of team reflexivity in interprofessional hospital-based healthcare teams.

Methods

Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, six electronic databases were searched to identify literature investigating the use of team reflexivity in interprofessional hospital-based healthcare teams.

The review includes articles investigating the use of team reflexivity to improve teamwork and communication in any naturally occurring hospital-based healthcare teams. Article eligibility was validated by two second reviewers on a 5% sample.

Results

Fifteen empirical articles were included in the review. Simulation training and video-reflexive ethnography (VRE) were the most commonly used forms of team reflexivity. Included articles focused on the use of reflexive interventions to improve teamwork and communication within interprofessional healthcare teams. Communication during interprofessional teamworking was the most prominent focus of improvement methods. The nature of this review only allows assessment of team reflexivity as an activity embedded within specific methods. Poorly defined methodological information relating to reflexivity in the reviewed studies made it difficult to draw conclusive evidence about the impact of reflexivity alone.

Conclusion

The reviewed literature suggests that VRE is well placed to provide more locally appropriate solutions to contributory patient safety factors, ranging from individual and social learning to improvements in practices and systems.

Trial registration number

CRD42017055602.

Learning from complaints in healthcare: a realist review of academic literature, policy evidence and front-line insights

Introduction

A global rise in patient complaints has been accompanied by growing research to effectively analyse complaints for safer, more patient-centric care. Most patients and families complain to improve the quality of healthcare, yet progress has been complicated by a system primarily designed for case-by-case complaint handling.

Aim

To understand how to effectively integrate patient-centric complaint handling with quality monitoring and improvement.

Method

Literature screening and patient codesign shaped the review’s aim in the first stage of this three-stage review. Ten sources were searched including academic databases and policy archives. In the second stage, 13 front-line experts were interviewed to develop initial practice-based programme theory. In the third stage, evidence identified in the first stage was appraised based on rigour and relevance, and selected to refine programme theory focusing on what works, why and under what circumstances.

Results

A total of 74 academic and 10 policy sources were included. The review identified 12 mechanisms spanning two pathways: patient-centric complaint handling and system-wide quality improvement. The complaint handling pathway includes (1) access to information; (2) collaboration with support and advocacy services; (3) staff attitude and signposting; (4) bespoke responding; and (5) public accountability. The improvement pathway includes (6) a reliable coding taxonomy; (7) standardised training and guidelines; (8) a centralised informatics system; (9) appropriate data sampling; (10) mixed-methods spotlight analysis; (11) board priorities and leadership; and (12) just culture.

Discussion

If healthcare settings are better supported to report, analyse and use complaints data in a standardised manner, complaints could impact on care quality in important ways. This review has established a range of evidence-based, short-term recommendations to achieve this.

Evaluating improvement interventions using routine data to support a learning health system: research design, data access, analysis and reporting

Introduction: learning health systems

Friedman and colleagues1 outline a vision of the learning health system founded on the sharing of data and achieved through alignment of information technology, advanced analytics and clinical expertise. The Institute of Medicine2 recognises the potential of the learning health system to generate new information automatically during the delivery of healthcare, offering continual opportunities to improve healthcare processes for the benefit of public health. The learning health system is promoted as a mechanism to accelerate the adoption of effective treatments into clinical practice, shortening the extended delay3 from publication of research findings to implementation. Furthermore, it harbours the ambition to deliver personalised medicine to each service user, rather than the systematic provision of identical care to groups of patients who share the same characteristics. Worldwide escalating costs in healthcare provision due to demographic changes, compounded by ongoing use...

Weekend effect: complex metric for a complex pathway

‘Time is tissue’ has become a mantra in emergency care: delays in treatment limit the opportunity to minimise damage caused by trauma and ischaemia, particularly for conditions like acute stroke or myocardial infarction where drugs and techniques can restore blood flow. The therapeutic time window may have widened for some forms of acute ischaemic stroke,1 2 but that does not mean that time is no longer critical. For ST-segment elevation myocardial infarction (STEMI), survival rates are higher if balloon angioplasty is performed earlier than 90 min from hospital presentation.3 4 Efforts to improve patient outcomes further by reducing time to treatment have taken a ‘whole pathway’ approach, from initial symptoms in the community to definitive intervention in the hospital. These have included raising public awareness,5 evaluating different routes of urgent referral to hospital6 and investigating the timing of...

Approach to making the availability heuristic less available

Introduction

Errors in judgement are often traceable to pitfalls of human reasoning. One pitfall is the availability heuristic, defined as a tendency to judge the likelihood of a condition by the ease with which examples spring to mind. This intuition is often a great approximation but can sometimes be mistaken because of fallible memories. People, for example, may mistakenly believe drowning causes fewer deaths than fires in the USA (actual deaths in 2017: drowning=3709 vs fires=2812)1 because they cannot easily recall many news stories about drowning. Calm water is boring to imagine whereas bright flames are dramatic images vividly recalled and frequently popularised. In turn, people can underestimate the risks lurking in lakes or rivers and neglect basic safety strategies. This may be an example where the availability heuristic could cause a fatal mistake.

Diagnostic errors can also stem from the availability heuristic and contribute to serious...

Improving cardiac surgical quality: lessons from the Japanese experience

In this issue of BMJ Quality & Safety, Yamamoto and colleagues1 describe an innovative national quality improvement intervention to identify and remediate low-performing Japanese cardiac surgery programmes. This is the most recent of numerous quality initiatives by Japanese cardiothoracic surgical leaders,2–11 and they deserve recognition for their ambitious and ongoing efforts. In 2000, emulating the Society of Thoracic Surgeons Database,12 they developed the Japan Cardiovascular Surgery Database (JCVSD), the foundation for all their subsequent quality activities. Because their board certification requires JCVSD participation,4 this assures that every board-certified cardiac surgeon, and presumably every CT programme, has access to rigorous, nationally benchmarked results. With their newest quality programme, these results are used to target low-performing centres. What can we learn from their experience, and...

What are we doing when we double check?

Double checking is often considered a useful strategy to detect and prevent medication errors, especially before the administration of high-risk drugs.1 2 From a safety research perspective, the effectiveness of double checking in preventing medication errors is limited by several factors,3 4 even if they are conducted independently5: a double check represents a barrier designed to catch errors before they reach the patient. If it is carried out by two people (compared with a technology-based check, like barcode scanning), the detection rate is limited because both people may be affected by the same disturbances in the environment, for example, noise, confusing drug labels or cognitive biases in information processing (eg, confirmation bias6 7). Double checks also may become a mindless routine over time,3 7 meaning that the checking persons rely on...

The relationship between off-hours admissions for primary percutaneous coronary intervention, door-to-balloon time and mortality for patients with ST-elevation myocardial infarction in England: a registry-based prospective national cohort study

Background

The degree to which elevated mortality associated with weekend or night-time hospital admissions reflects poorer quality of care (‘off-hours effect’) is a contentious issue. We examined if off-hours admissions for primary percutaneous coronary intervention (PPCI) were associated with higher adjusted mortality and estimated the extent to which potential differences in door-to-balloon (DTB) times—a key indicator of care quality for ST elevation myocardial infarction (STEMI) patients—could explain this association.

Methods

Nationwide registry-based prospective observational study using Myocardial Ischemia National Audit Project data in England. We examined how off-hours admissions and DTB times were associated with our primary outcome measure, 30-day mortality, using hierarchical logistic regression models that adjusted for STEMI patient risk factors. In-hospital mortality was assessed as a secondary outcome.

Results

From 76 648 records of patients undergoing PPCI between January 2007 and December 2012, we included 42 677 admissions in our analysis. Fifty-six per cent of admissions for PPCI occurred during off-hours. PPCI admissions during off-hours were associated with a higher likelihood of adjusted 30-day mortality (OR 1.13; 95% CI 1.01 to 1.25). The median DTB time was longer for off-hours admissions (45 min; IQR 30–68) than regular hours (38 min; IQR 27–58; p<0.001). After adjusting for DTB time, the difference in adjusted 30-day mortality between regular and off-hours admissions for PPCI was attenuated and no longer statistically significant (OR 1.08; CI 0.97 to 1.20).

Conclusion

Higher adjusted mortality associated with off-hours admissions for PPCI could be partly explained by differences in DTB times. Further investigations to understand the off-hours effect should focus on conditions likely to be sensitive to the rapid availability of services, where timeliness of care is a significant determinant of outcomes.

Immunising physicians against availability bias in diagnostic reasoning: a randomised controlled experiment

Background

Diagnostic errors have often been attributed to biases in physicians’ reasoning. Interventions to ‘immunise’ physicians against bias have focused on improving reasoning processes and have largely failed.

Objective

To investigate the effect of increasing physicians’ relevant knowledge on their susceptibility to availability bias.

Design, settings and participants

Three-phase multicentre randomised experiment with second-year internal medicine residents from eight teaching hospitals in Brazil.

Interventions

Immunisation: Physicians diagnosed one of two sets of vignettes (either diseases associated with chronic diarrhoea or with jaundice) and compared/contrasted alternative diagnoses with feedback. Biasing phase (1 week later): Physicians were biased towards either inflammatory bowel disease or viral hepatitis. Diagnostic performance test: All physicians diagnosed three vignettes resembling inflammatory bowel disease, three resembling hepatitis (however, all with different diagnoses). Physicians who increased their knowledge of either chronic diarrhoea or jaundice 1 week earlier were expected to resist the bias attempt.

Main outcome measurements

Diagnostic accuracy, measured by test score (range 0–1), computed for subjected-to-bias and not-subjected-to-bias vignettes diagnosed by immunised and not-immunised physicians.

Results

Ninety-one residents participated in the experiment. Diagnostic accuracy differed on subjected-to-bias vignettes, with immunised physicians performing better than non-immunised physicians (0.40 vs 0.24; difference in accuracy 0.16 (95% CI 0.05 to 0.27); p=0.004), but not on not-subjected-to-bias vignettes (0.36 vs 0.41; difference –0.05 (95% CI –0.17 to 0.08); p=0.45). Bias only hampered non-immunised physicians, who performed worse on subjected-to-bias than not-subjected-to-bias vignettes (difference –0.17 (95% CI –0.28 to –0.05); p=0.005); immunised physicians’ accuracy did not differ (p=0.56).

Conclusions

An intervention directed at increasing knowledge of clinical findings that discriminate between similar-looking diseases decreased physicians’ susceptibility to availability bias, reducing diagnostic errors, in a simulated setting. Future research needs to examine the degree to which the intervention benefits other disease clusters and performance in clinical practice.

Trial registration number

68745917.1.1001.0068.

Quality improvement in cardiovascular surgery: results of a surgical quality improvement programme using a nationwide clinical database and database-driven site visits in Japan

Background

In 2015, an academic-led surgical quality improvement (QI) programme was initiated in Japan to use database information entered from 2013 to 2014 to identify institutions needing improvement, to which cardiovascular surgery experts were sent for site visits. Here, post hoc analyses were used to estimate the effectiveness of the QI programme in reducing surgical mortality (30-day and in-hospital mortality).

Methods

Patients were selected from the Japan Cardiovascular Surgery Database, which includes almost all cardiovascular surgeries in Japan, if they underwent isolated coronary artery bypass graft (CABG), valve or thoracic aortic surgery from 2013 to 2016. Difference-in-difference methods based on a generalised estimating equation logistic regression model were used for pre-post comparison after adjustment for patient-level expected surgical mortality.
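The difference-in-difference idea behind the model above can be illustrated with simple arithmetic on the log-odds scale: the pre-to-post change at site-visit hospitals minus the change at the other hospitals. The sketch below uses the crude CABG mortality proportions reported in the Results and assumes, purely for illustration, that non-site-visit mortality was unchanged post programme (the abstract does not report that figure); it is not the study's adjusted GEE model.

```python
import math

# Crude CABG mortality proportions from the abstract (site-visit hospitals:
# 9.0% pre, 3.6% post). The non-site-visit post value is an assumption.
pre_visit, post_visit = 0.090, 0.036
pre_other, post_other = 0.027, 0.027  # assumed unchanged (hypothetical)

def log_odds(p):
    return math.log(p / (1 - p))

# Difference-in-difference on the log-odds scale; exponentiating it
# gives a crude DiD odds ratio analogous to the reported estimator.
did = ((log_odds(post_visit) - log_odds(pre_visit))
       - (log_odds(post_other) - log_odds(pre_other)))
print(f"crude DiD odds ratio: {math.exp(did):.2f}")
```

The crude value differs from the reported adjusted estimate (0.29 for CABG) because the study's model additionally adjusts for patient-level expected surgical mortality.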

Results

In total, 238 778 patients (10 172 deaths) from 590 hospitals, including 3556 patients seen at 10 hospitals with site visits, were included from January 2013 to December 2016. Preprogramme, the crude surgical mortality for site visit and non-site visit institutions was 9.0% and 2.7%, respectively, for CABG surgery, 10.7% and 4.0%, respectively, for valve surgery and 20.7% and 7.5%, respectively, for aortic surgery. Postprogramme, moderate improvement was observed at site visit hospitals (3.6%, 9.6% and 18.8%, respectively). A difference-in-difference estimator showed significant improvement in CABG (0.29 (95% CI 0.15 to 0.54), p<0.001) and valve surgery (0.74 (0.55 to 1.00); p=0.047). Improvement was observed within 1 year for CABG surgery but was delayed for valve and aortic surgery. During the programme, institutions did not refrain from surgery.

Conclusions

Combining traditional site visits with modern database methodologies effectively improved surgical mortality in Japan. These universal methods could be applied via a similar approach to contribute to achieving QI in surgery for many other procedures worldwide.

Impact of structured interdisciplinary bedside rounding on patient outcomes at a large academic health centre

Background

Effective communication between healthcare providers and patients and their family members is an integral part of daily care and discharge planning for hospitalised patients. Several studies suggest that team-based care is associated with improved length of stay (LOS), but the data on readmissions are conflicting. Our study evaluated the impact of structured interdisciplinary bedside rounding (SIBR) on outcomes related to readmissions and LOS.

Methods

The SIBR team consisted of a physician and/or advanced practice provider, bedside nurse, pharmacist, social worker and bridge nurse navigator. Outcomes were compared in patients admitted to a hospital medicine unit using SIBR (n=1451) and a similar control unit (n=770) during the period of October 2016 to September 2017. Multivariable negative binomial regression analysis was used to compare LOS and logistic regression analysis was used to calculate 30-day and 7-day readmission in patients admitted to SIBR and control units, adjusting for covariates.

Results

Patients admitted to SIBR and control units were generally similar (p≥0.05) with respect to demographic and clinical characteristics. Unadjusted readmission rates in SIBR patients were lower than in control patients at both 30 days (16.6% vs 20.3%, p=0.03) and 7 days (6.3% vs 9.0%, p=0.02) after discharge, while LOS was similar. After adjusting for covariates, SIBR was not significantly related to the odds of 30-day readmission (OR 0.81, p=0.07), but the odds of 7-day readmission were lower (OR 0.70, p=0.03); LOS was similar in both groups (p=0.58).

Conclusion

SIBR did not reduce LOS or 30-day readmissions but had a significant impact on 7-day readmissions.

On selecting quality indicators: preferences of patients with breast and colon cancers regarding hospital quality indicators

Background

There is an increasing number of quality indicators being reported publicly with the aim of improving transparency regarding hospital care quality. However, they are rarely used by patients. Knowledge of patients’ preferences regarding quality may help to optimise the information presented to them.

Objective

To measure the preferences of patients with breast and colon cancers regarding publicly reported quality indicators of Dutch hospital care.

Methods

From the existing set of clinical quality indicators, participants of patient group discussions first assessed an indicator’s suitability as choice information and then identified the most relevant ones. We used the final selection as attributes in two discrete choice experiments (DCEs). Questionnaires included choice vignettes as well as a direct ranking exercise, and were distributed among patient communities. Data were analysed using mixed logit models.

Results

Based on the patient group discussions, 6 of 52 indicators (breast cancer) and 5 of 21 indicators (colon cancer) were selected as attributes. The questionnaire was completed by 84 (breast cancer) and 145 respondents (colon cancer). In the patient group discussions and in the DCEs, respondents valued outcome indicators as most important: those reflecting tumour residual (breast cancer) and failure to rescue (colon cancer). Probability analyses revealed a larger range in percentage change of choice probabilities for breast cancer (10.9%–69.9%) relative to colon cancer (7.9%–20.9%). Subgroup analyses showed few differences in preferences across ages and educational levels. DCE findings partly matched with those of direct ranking.

Conclusion

Study findings show that patients focused on a subset of indicators when making their choice of hospital and that they valued outcome indicators the most. In addition, patients with breast cancer were more responsive to quality information than patients with colon cancer.
