In this issue of BMJ Quality & Safety, Schnipper et al evaluate the implementation of a multifaceted medication reconciliation intervention at six hospitals using the MARQUIS medication reconciliation implementation toolkit.1 The planned intervention included the following elements: hiring new staff or reallocating existing staff to obtain medication histories, performing both admission and discharge medication reconciliation, improving access to preadmission medication sources, introducing policy, training staff on obtaining medication histories and patient counselling, implementing a gold standard medication reconciliation process including targeting of high-risk patients, improving healthcare information technology and utilising social marketing and community engagement. The study had many methodological strengths, including independent observers for outcome verification, clinical assessment of medication discrepancies, pragmatic implementation in both community and teaching hospitals, mentored implementation and a large randomly selected patient sample with controls and temporal trending. The main result was that potentially harmful discrepancies did not decrease over time beyond baseline...
Alerts have become a routine part of our daily lives—from the apps on our phones to an increasing number of ‘wearables’ (eg, fitness trackers) and household devices. Within healthcare, frontline clinicians have become all too familiar with a barrage of alerts and alarms from electronic medical records and medical devices.
Somewhat less familiar to most clinicians, however, are the alerts received by institutions from regulators and other regional or national bodies monitoring healthcare performance. After the Bristol inquiry in 2001 in the UK,1 research showed that, given the available data, Bristol could have been detected as an outlier and that it was not simply a matter of the low volume of cases.2 3 Had the cumulative excess mortality been monitored using these routinely collected data, then an alarm could have been given for Bristol after the publication of the 1991 Cardiac Surgical Register and...
Unintentional discrepancies across care settings are a common form of medication error and can contribute to patient harm. Medication reconciliation can reduce discrepancies; however, effective implementation in real-world settings is challenging.
Methods
We conducted a pragmatic quality improvement (QI) study at five US hospitals, two of which included concurrent controls. The intervention consisted of local implementation of medication reconciliation best practices, utilising an evidence-based toolkit with 11 intervention components. Trained QI mentors conducted monthly site phone calls and two site visits during the intervention, which lasted from December 2011 through June 2014. The primary outcome was the number of potentially harmful unintentional medication discrepancies per patient; the secondary outcome was total discrepancies regardless of potential for harm. Time series analysis used multivariable Poisson regression.
Results
Across five sites, 1648 patients were sampled: 613 during baseline and 1035 during the implementation period. Overall, potentially harmful discrepancies did not decrease over time beyond baseline temporal trends, adjusted incidence rate ratio (IRR) 0.97 per month (95% CI 0.86 to 1.08), p=0.53. The intervention was associated with a reduction in total medication discrepancies, IRR 0.92 per month (95% CI 0.87 to 0.97), p=0.002. Of the four sites that implemented interventions, three had reductions in potentially harmful discrepancies. The fourth site, which implemented interventions and installed a new electronic health record (EHR), saw an increase in discrepancies, as did the fifth site, which did not implement any interventions but also installed a new EHR.
Conclusions
Mentored implementation of a multifaceted medication reconciliation QI initiative was associated with a reduction in total, but not potentially harmful, medication discrepancies. The effect of EHR implementation on medication discrepancies warrants further study.
Trial registration number
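The per-month incidence rate ratios reported above act multiplicatively, so even a modest monthly IRR compounds over the study period. A minimal illustrative sketch (the helper function and 12-month horizon are our own, not the study's):

```python
# Illustrative only: compounding a per-month incidence rate ratio (IRR).
# The 0.92 figure is the total-discrepancy IRR reported in the abstract;
# the helper function and the 12-month horizon are hypothetical.

def cumulative_irr(irr_per_month, months):
    """Expected rate relative to baseline after `months` at a constant IRR."""
    return irr_per_month ** months

print(round(cumulative_irr(0.92, 12), 2))  # → 0.37, i.e. ~37% of baseline
```

This is why a seemingly small IRR of 0.92 per month corresponds to a substantial cumulative reduction if sustained.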
To investigate the association between alerts from a national hospital mortality surveillance system and subsequent trends in relative risk of mortality.
Background
There is increasing interest in performance monitoring in the NHS. Since 2007, Imperial College London has generated monthly mortality alerts, based on statistical process control charts and using routinely collected hospital administrative data, for all English acute NHS hospital trusts. The impact of this system has not yet been studied.
Methods
We investigated alerts sent to acute National Health Service hospital trusts in England in 2011–2013. We examined risk-adjusted mortality (relative risk) for all monitored diagnosis and procedure groups at a hospital trust level for 12 months prior to an alert and 23 months post alert. We used an interrupted time series design with a 9-month lag to estimate a trend prior to a mortality alert and the change in trend after, using generalised estimating equations.
Results
On average there was a 5% monthly increase in relative risk of mortality during the 12 months prior to an alert (95% CI 4% to 5%). Mortality risk fell, on average by 61% (95% CI 56% to 65%), during the 9-month period immediately following an alert, then levelled to a slow decline, reaching on average the level of expected mortality within 18 months of the alert.
Conclusions
Our results suggest an association between an alert notification and a reduction in the risk of mortality, although with less lag time than expected. It is difficult to determine any causal association. A proportion of alerts may be triggered by random variation alone and subsequent falls could simply reflect regression to the mean. Findings could also indicate that some hospitals are monitoring their own mortality statistics or other performance information, taking action prior to alert notification.
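The interrupted time series design described above can be sketched as a segmented regression: a pre-alert trend plus level and slope changes at the alert. The sketch below fits such a model by ordinary least squares on simulated data; the study itself used generalised estimating equations with a 9-month lag, and all numbers here are invented for illustration:

```python
import numpy as np

# Simulated log relative risk: rising ~5%/month before the alert,
# falling afterwards. All data are invented for illustration.
months = np.arange(-12, 24).astype(float)   # 12 months pre-alert, 24 post
post = (months >= 0).astype(float)          # post-alert indicator

# Design matrix: intercept, pre-alert trend, level change, trend change
X = np.column_stack([np.ones_like(months), months, post, post * months])

rng = np.random.default_rng(0)
y = 0.05 * months * (1 - post) - 0.03 * months * post
y += rng.normal(0, 0.01, months.size)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta[1] ≈ 0.05 (pre-alert slope); beta[3] ≈ -0.08 (slope change at alert)
print(beta.round(3))
```

The interaction term `post * months` is what captures the change in trend after the alert, the quantity of interest in this design.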
To provide a description of the Imperial College Mortality Surveillance System and subsequent investigations by the Care Quality Commission (CQC) in National Health Service (NHS) hospitals receiving mortality alerts.
Background
The mortality surveillance system has generated monthly mortality alerts since 2007, on 122 individual diagnosis and surgical procedure groups, using routinely collected hospital administrative data for all English acute NHS hospital trusts. The CQC, the English national regulator, is notified of each alert. This study describes the findings of CQC investigations of alerting trusts.
Methods
We carried out (1) a descriptive analysis of alerts (2007–2016) and (2) an audit of CQC investigations in a subset of alerts (2011–2013).
Results
Between April 2007 and October 2016, 860 alerts were generated and 76% (654 alerts) were sent to trusts. Alert volumes varied over time (range: 40–101). Septicaemia (except in labour) was the most commonly alerting group (11.5% of alerts sent). We reviewed CQC communications in a subset of 204 alerts from 96 trusts. The CQC investigated 75% (154/204) of alerts. In 90% (140/154) of these pursued alerts, trusts returned evidence of local case note reviews. These reviews found areas of care that could be improved in 69% (106/154) of alerts. In 25% (38/154) of alerts, trusts considered that identified failings in care could have impacted on patient outcomes. The CQC investigations resulted in full trust action plans in 77% (118/154) of all pursued alerts.
Conclusion
The mortality surveillance system has generated a large number of alerts since 2007. Quality of care problems were found in 69% of alerts with CQC investigations, and one in four trusts reported that failings in care may have had an impact on patient outcomes. Identifying whether mortality alerts are the most efficient means to highlight areas of substandard care will require further investigation.
Central line associated pneumothorax (CLAP) could be a good quality of care indicator because it is objectively measured, clearly undesirable and possibly avoidable. We measured the incidence and trends of CLAP using radiograph report text search with manual review and compared them with measures using routinely collected health administrative data.
Methods
For each hospitalisation to a tertiary care teaching hospital between 2002 and 2015, we searched all chest radiography reports for a central line with a sensitive computer algorithm. Screen-positive reports were manually reviewed to confirm central lines. The index and subsequent chest radiography reports were screened for pneumothorax, followed by manual confirmation. Diagnostic and procedural codes were used to identify CLAP in administrative data.
Results
Of 685 044 hospitalisations, 10 819 (1.6%) involved central line insertion, with CLAP occurring 181 times (1.7%). CLAP risk did not change over time. Codes for CLAP were inaccurate (sensitivity 13.8%, positive predictive value 6.6%). However, overall code-based CLAP risk (1.8%) was almost identical to actual values, possibly because patient strata with inflated CLAP risk were balanced by more common strata with underestimated CLAP risk. Code-based methods inflated central line incidence 2.2 times and erroneously concluded that CLAP risk decreased significantly over time.
Conclusions
Using valid methods, CLAP incidence was similar to that reported in the literature but has not changed over time. Although administrative database codes for CLAP were very inaccurate, they generated CLAP risks very similar to actual values because of offsetting errors. In contrast to trends from radiograph report text search with manual review, CLAP trends decreased significantly when measured using administrative data. Hospital CLAP risk should not be measured using administrative data.
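The reported code accuracy can be reproduced arithmetically from the abstract's figures: with 181 true CLAP cases, a sensitivity of 13.8% implies about 25 true positives, and a PPV of 6.6% then implies roughly 379 code-flagged events overall. A back-of-envelope sketch (the counts 25 and 354 are back-calculated, not taken directly from the paper):

```python
# Back-of-envelope check of the reported sensitivity and PPV for CLAP codes.
# tp and fp are reconstructed from the abstract's percentages, not reported.

def sensitivity(tp, fn):
    return tp / (tp + fn)

def ppv(tp, fp):
    return tp / (tp + fp)

true_cases = 181          # chart-review (reference standard) CLAP cases
tp = 25                   # ≈ 13.8% of 181 true cases flagged by codes
fn = true_cases - tp
fp = 354                  # implies 25 + 354 = 379 code-flagged events in total

print(round(sensitivity(tp, fn), 3))  # → 0.138
print(round(ppv(tp, fp), 3))          # → 0.066
```

The near-identical overall risk despite these poor accuracy figures is exactly the "offsetting errors" phenomenon the authors describe: false positives in some strata roughly cancel false negatives in others.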
To quantify the association between patient self-management capability measured using the Patient Activation Measure (PAM) and healthcare utilisation across a whole health economy.
Results
12 270 PAM questionnaires were returned from 9348 patients. In the adjusted analyses, compared with the least activated group, highly activated patients (level 4) had the lowest rate of contact with a general practitioner (rate ratio: 0.82, 95% CI 0.79 to 0.86), emergency department attendances (rate ratio: 0.68, 95% CI 0.60 to 0.78), emergency hospital admissions (rate ratio: 0.62, 95% CI 0.51 to 0.75) and outpatient attendances (rate ratio: 0.81, 95% CI 0.74 to 0.88). These patients also had the lowest relative rate (compared with the least activated) of ‘did not attends’ at the general practitioner (rate ratio: 0.77, 95% CI 0.68 to 0.87), ‘did not attends’ at hospital outpatient appointments (rate ratio: 0.72, 95% CI 0.61 to 0.86) and self-referred attendance at emergency departments for conditions classified as minor severity (rate ratio: 0.67, 95% CI 0.55 to 0.82), a significantly shorter average length of stay for overnight elective admissions (rate ratio: 0.59, 95% CI 0.37 to 0.94), and a lower likelihood of 30-day emergency readmission (rate ratio: 0.68, 95% CI 0.39 to 1.17), though this did not reach significance.
Conclusions
Self-management capability is associated with lower healthcare utilisation and less wasteful use across primary and secondary care.
Several countries have national policies and programmes requiring hospitals to use quality and safety (QS) indicators. To present an overview of these indicators, hospital-wide QS (HWQS) dashboards are designed. There is little evidence on how these dashboards are developed. We retrospectively studied the challenges faced in developing these dashboards in Dutch hospitals.
Methods
24 focus group interviews were conducted: 12 with hospital managers (n=25; 39.7%) and 12 with support staff (n=38; 60.3%) in 12 of the largest Dutch hospitals. Open and axial coding were applied consecutively to analyse the data collected.
Results
A heuristic tool for the general development process for HWQS dashboards, containing five phases, was identified. In phase 1, hospitals make inventories to determine the available data and focus too much on quantitative data relevant for accountability. In phase 2, hospitals develop dashboard content by translating data into meaningful indicators for different users, which is not easy due to differing demands. In phase 3, hospitals search for layouts that depict the dashboard content suited to users with different cognitive abilities and analytical skills. In phase 4, hospitals try to integrate dashboards into organisational structures to ensure that data are systematically reviewed and acted on. In phase 5, hospitals want to improve the flexibility of their dashboards to make them adaptable to differing circumstances.
Conclusion
The literature on dashboards addresses the technical and content aspects of dashboards, but overlooks the organisational development process. This study shows how both technical and organisational aspects are relevant in the development process.
Identifying mechanisms to improve provider compliance with quality metrics is a common goal across medical disciplines. Nudge interventions are minimally invasive strategies that can influence behavioural changes and are increasingly used within healthcare settings. We hypothesised that nudge interventions may improve provider compliance with lung-protective ventilation (LPV) strategies during general anaesthesia.
Methods
We developed an audit and feedback dashboard that included information on both provider-level and department-level compliance with LPV strategies in two academic hospitals, two non-academic hospitals and two academic surgery centres affiliated with a single healthcare system. Dashboards were emailed to providers four times over the course of the 9-month study. Additionally, the default setting on anaesthesia machines for tidal volume was decreased from 700 mL to 400 mL. Data on surgical cases performed between 1 September 2016 and 31 May 2017 were examined for compliance with LPV. The impact of the interventions was assessed via pairwise logistic regression analysis corrected for multiple comparisons.
Results
A total of 14 793 anaesthesia records were analysed. Absolute compliance rates increased from 59.3% preintervention to 87.8% postintervention. Introduction of attending physician dashboards resulted in a 41% increase in the odds of compliance (OR 1.41, 95% CI 1.17 to 1.69, p=0.002). Subsequently, the addition of advanced practice provider and resident dashboards led to an additional 93% increase in the odds of compliance (OR 1.93, 95% CI 1.52 to 2.46, p<0.001). Lastly, modifying ventilator defaults led to a 276% increase in the odds of compliance (OR 3.76, 95% CI 3.1 to 4.57, p<0.001).
Conclusion
Audit and feedback tools, in conjunction with default changes, improve provider compliance with LPV strategies.
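The odds ratios above act on the odds scale, not directly on compliance percentages. A sketch of the conversion, using the before/after compliance rates from the abstract (the overall odds ratio computed here is our own arithmetic, not a figure reported by the study):

```python
# Converting between compliance probabilities and odds, the scale on which
# the abstract's odds ratios operate. Only 0.593 and 0.878 come from the
# abstract; the derived overall odds ratio is illustrative arithmetic.

def to_odds(p):
    return p / (1 - p)

baseline = 0.593   # preintervention compliance
final = 0.878      # postintervention compliance

overall_or = to_odds(final) / to_odds(baseline)
print(round(overall_or, 2))  # → 4.94
```

This illustrates why odds-scale effects look larger than the change in raw percentages: moving from 59.3% to 87.8% compliance corresponds to nearly a fivefold increase in the odds of compliance.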
In 2009, the National Patient Safety Foundation’s Lucian Leape Institute (LLI) published a paper identifying five areas of healthcare that require system-level attention and action to advance patient safety. The authors argued that to truly transform the safety of healthcare, there was a need to address medical education reform; care integration; restoring joy and meaning in work and ensuring the safety of the healthcare workforce; consumer engagement in healthcare; and transparency across the continuum of care. In the ensuing years, the LLI convened a series of expert roundtables to address each concept, look at obstacles to implementation, assess potential for improvement, identify potential implementation partners and issue recommendations for action. Reports of these activities were published between 2010 and 2015. While all five areas have seen encouraging developments, multiple challenges remain. In this paper, the current members of the LLI (now based at the Institute for Healthcare Improvement) assess progress made in the USA since 2009 and identify ongoing challenges.