Quality and Safety in Health Care Journal

Composite measures of healthcare quality: sensible in theory, problematic in practice

All healthcare systems show variation in the quality of care provided, whether that means access to primary care services,1 ambulance response times,2 Accident & Emergency waiting times3 or treatment processes and outcomes.4–6 Monitoring this variation in quality can serve multiple purposes: informing patients about where best to seek care;7 allowing clinicians to compare their performance with that of their peers and thus identify targets for local-level quality improvement efforts; and supporting the development of national policy. What all these uses have in common, though, is trust that the data reliably reflect healthcare quality, which is sometimes a questionable assumption.

In BMJ Quality and Safety, Hofstede et al8 have addressed a common situation where providers (such as hospitals, general practices or community teams) are ranked according to their performance on a quality indicator....

Research paradigm that tackles the complexity of in situ care: video reflexivity

This issue of BMJ Quality & Safety presents a study conducted at the University of Michigan to evaluate ‘video reflexivity’ (VR, also referred to as VRE or ‘video-reflexive ethnography’) as a means for intervening in how physicians and nurses work together.1 The study found ‘increased reflection in both nurse and physician participants’, an outcome also reported (among other things) in related studies from the UK, Australia, New Zealand and the USA.2–6 ‘Increased reflection’ may not set the hearts and minds of quality and safety experts on fire. And yet this finding is significant.

Consider that healthcare improvement initiatives, patient safety research and system-wide implementation programmes have to come to terms with the implications of rising care complexity. This rise in complexity is due to increasing multimorbidity, mobility and migration, ageing, public assertiveness, technological advances, staff turnover, mounting information,...

Reducing hospital admissions for adverse drug events through coordinated pharmacist care: learning from Hawaii without a field trip

Adverse drug events among older adults are common and serious. Approximately 9% of all hospital admissions for older adults are attributable to adverse drug reactions.1 Moreover, up to one in five adults experience an adverse drug reaction during hospitalisation,2 3 and approximately 15%–50% of hospitalised older adults will suffer an adverse drug event within 30 days of returning home (with most of these events resulting from medications that were started in the hospital).4–6 If our goal is primum non nocere (‘first, do no harm’), we have substantial opportunities for improvement.

A variety of interventions have been attempted to stem this tide of medication-induced harm, with variable success, and no clear path for hitting the sweet spot of meaningfully improving clinical outcomes related to medication use in a manner that is clinically scalable and cost-effective.7–12

Ranking hospitals: do we gain reliability by using composite rather than individual indicators?

Background

Despite widespread use of quality indicators, it remains unclear to what extent they can reliably distinguish hospitals on true differences in performance. Rankability measures what part of variation in performance reflects ‘true’ hospital differences in outcomes versus random noise.

Objective

This study sought to assess whether combining data into composites or including data from multiple years improves the reliability of ranking quality indicators for hospital care.

Methods

Using the Dutch National Medical Registration (2007–2012) for stroke, colorectal carcinoma, heart failure, acute myocardial infarction and total hip arthroplasty (THA)/total knee arthroplasty (TKA) in osteoarthritis (OA), we calculated the rankability for in-hospital mortality, 30-day acute readmission and prolonged length of stay (LOS) for single years and 3-year periods, and for a dichotomous and an ordinal composite measure in which mortality, readmission and prolonged LOS were combined. Rankability, defined as (between-hospital variation / (between-hospital + within-hospital variation)) × 100%, is classified as low (<50%), moderate (50%–75%) or high (>75%).
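The rankability definition above lends itself to a short numerical sketch. The following is a minimal illustration with invented variance components (`tau2` for between-hospital variation and `s2` for within-hospital variation are names of our choosing, not values or identifiers from the study):

```python
# Minimal sketch of the rankability calculation, with illustrative numbers
# (tau2/s2 are hypothetical variance components, not values from the study).

def rankability(tau2, s2):
    """Rankability (%) = between / (between + within) variation x 100."""
    return tau2 / (tau2 + s2) * 100.0

def classify(r):
    """Apply the abstract's cut-offs: low (<50%), moderate (50%-75%), high (>75%)."""
    if r < 50:
        return "low"
    if r <= 75:
        return "moderate"
    return "high"

r = rankability(tau2=0.09, s2=0.06)  # e.g. true variation 0.09, noise 0.06
print(round(r, 1), classify(r))      # 60.0 moderate
```

The intuition matches the abstract's findings: when random noise (`s2`) dominates, as for single-year mortality in small diagnosis groups, rankability falls below 50% and rankings mostly reflect chance.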

Results

Admissions from 555 053 patients treated in 95 hospitals were included. The rankability for mortality was generally low or moderate, varying from less than 1% for patients with OA undergoing THA/TKA in 2011 to 71% for stroke in 2010. Rankability for acute readmission was low, except for acute myocardial infarction in 2009 (51%) and 2012 (62%). Rankability for prolonged LOS was at least moderate. Combining multiple years improved rankability, but it still remained low in eight cases for both mortality and acute readmission. When the individual indicators were combined into the dichotomous composite, all diagnoses had at least moderate rankability (range: 51%–96%). For the ordinal composite, only heart failure had low rankability (46% in 2008) (range: 46%–95%).

Conclusion

Combining multiple years or combining individual indicators into composite measures results in more reliable ranking of hospitals, particularly compared with mortality and acute readmission in single years, thereby improving the ability to detect true hospital differences. The composite measures provide more information and more reliable rankings than combining multiple years of individual indicators.

Community-acquired and hospital-acquired medication harm among older inpatients and impact of a state-wide medication management intervention

Background

We previously reported reduction in the rate of hospitalisations with medication harm among older adults with our ‘Pharm2Pharm’ intervention, a pharmacist-led care transition and care coordination model focused on best practices in medication management. The objectives of the current study are to determine the extent to which medication harm among older inpatients is ‘community acquired’ versus ‘hospital acquired’ and to assess the effectiveness of the Pharm2Pharm model with each type.

Methods

After a 3-year baseline, six non-federal general acute care hospitals with 50 or more beds in Hawaii implemented Pharm2Pharm sequentially. The other five such hospitals served as the comparison group. We measured frequencies and quarterly rates of admissions among those aged 65 and older with ‘community-acquired’ (International Classification of Diseases-coded as present on admission) and ‘hospital-acquired’ (coded as not present on admission) medication harm per 1000 admissions from 2010 to 2014.

Results

There were 189 078 total admissions from 2010 through 2014, 7% of which had one or more medication harm codes. There were 16 225 medication harm codes, 70% of which were community-acquired, among these 13 795 admissions. The varied times when the intervention was implemented across hospitals were associated with a significant reduction in the rate of admissions with community-acquired medication harm compared with non-intervention hospitals (p=0.001), and specifically harm by anticoagulants (p<0.0001) and by medications in therapeutic use (p<0.001). The hospital-acquired medication harm rate did not change. The rate of admissions with community-acquired medication harm was reduced by 4.28 admissions per 1000 admissions per quarter in the Pharm2Pharm hospitals relative to the comparison hospitals.

Conclusion

The Pharm2Pharm model is an effective way to address the growing problem of community-acquired medication harm among high-risk, chronically ill patients. This model demonstrates the importance of deploying specially trained pharmacists in the hospital and in the community to systematically identify and resolve drug therapy problems.

Information management goals and process failures during home visits for middle-aged and older adults receiving skilled home healthcare services after hospital discharge: a multisite, qualitative study

Background

Middle-aged and older adults requiring skilled home healthcare (‘home health’) services following hospital discharge are at high risk of experiencing suboptimal outcomes. Information management (IM) needed to organise and communicate care plans is critical to ensure safety. Little is known about IM during this transition.

Objectives

(1) Describe the current IM process (activity goals, subactivities, information required, information sources/targets and modes of communication) from home health providers’ perspectives and (2) Identify IM-related process failures.

Methods

Multisite qualitative study. We performed semistructured interviews and direct observations with 33 home health administrative staff, 46 home health providers, 60 middle-aged and older adults, and 40 informal caregivers during the preadmission process and initial home visit. Data were analysed to generate themes and information flow diagrams.

Results

We identified four IM goals during the preadmission process: prepare referral document and inform agency; verify insurance; contact adult; and review case to schedule visit. We identified four IM goals during the initial home visit: assess appropriateness and obtain consent; manage expectations; ensure safety; and develop contingency plans. We identified IM-related process failures associated with each goal: home health providers and adults with too much information (information overload); home health providers without complete information (information underload); home health coordinators needing information from many places (information scatter); adults’ and informal caregivers’ mismatched expectations regarding home health services (information conflict); and home health providers encountering inaccurate information (erroneous information).

Conclusions

IM for hospital-to-home health transitions is complex, yet key for patient safety. Organisational infrastructure is needed to support IM. Future clinical workflows and health information technology should be designed to mitigate IM-related process failures to facilitate safer hospital-to-home health transitions.

Public reporting of antipsychotic prescribing in nursing homes: population-based interrupted time series analyses

Background

Although sometimes appropriate, antipsychotic medications are associated with increased risk of significant adverse events. In 2014, a series of newspaper articles describing high prescribing rates in nursing homes in Ontario, Canada, garnered substantial interest. Subsequently, an online public reporting initiative with home-level data was launched. We examined the impact of these public reporting interventions on antipsychotic prescribing in nursing homes.

Methods

Time series analysis of all nursing home residents in Ontario, Canada, between 1 October 2013 and 31 March 2016. The primary outcome was the proportion of residents prescribed antipsychotics each month. Balance measures were prescriptions for common alternative sedating agents (benzodiazepines and/or trazodone). We used segmented regression to assess the effects on prescription trends of the newspaper articles and the online home-level public reporting initiative.

Results

We included 120 009 nursing home resident admissions across 636 nursing homes. Following the newspaper articles, the proportion of residents prescribed an antipsychotic decreased by 1.28% (95% CI 1.08% to 1.48%) and continued to decrease at a rate of 0.2% per month (95% CI 0.16% to 0.24%). The online public reporting initiative did not alter this trend. Over 3 years, there was a net absolute reduction in antipsychotic prescribing of 6.0% (95% CI 5.1% to 6.9%). Trends for benzodiazepine prescribing did not change as substantially during the period of observation. Trazodone use has been gradually increasing, but its use did not change abruptly at the time of the mass media report or the public reporting initiative.

Interpretation

The rapid impact of mass media on prescribing suggests both an opportunity to use this approach to invoke change and a warning to ensure that such reporting occurs responsibly.

Value of hospital resources for effective pressure injury prevention: a cost-effectiveness analysis

Objective

Hospital-acquired pressure injuries are localised skin injuries that cause significant mortality and are costly. Nursing best practices prevent pressure injuries but involve time-consuming, complex tasks that lack payment incentives. The Braden Scale is an evidence-based stratification tool that nurses use daily to assess pressure-injury risk. Our objective was to analyse the cost-utility of performing repeated risk assessment for pressure-injury prevention in all patients or only in high-risk groups.

Design

Cost-utility analysis using Markov modelling from US societal and healthcare sector perspectives within a 1-year time horizon.

Setting

Patient-level longitudinal data on 34 787 encounters from an academic hospital electronic health record (EHR) between 2011 and 2014, including daily Braden scores. Supervised machine learning simulated age-adjusted transition probabilities between risk levels and pressure injuries.

Participants

Hospitalised adults with Braden scores classified into five risk levels: very high risk (6–9), high risk (10–11), moderate risk (12–14), at-risk (15–18), minimal risk (19–23).

Interventions

Standard care, repeated risk assessment in all risk levels, or repeated risk assessment only in high-risk strata, based on machine-learning simulations.

Main outcome measures

Costs (2016 $US) of pressure-injury treatment and prevention, and quality-adjusted life years (QALYs) related to pressure injuries were weighted by transition probabilities to calculate the incremental cost-effectiveness ratio (ICER) at $100 000/QALY willingness-to-pay. Univariate and probabilistic sensitivity analyses tested model uncertainty.
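The decision rule behind the ICER can be sketched in a few lines. All inputs below are invented for illustration, chosen only so the result lands near the abstract's $2000/QALY figure; they are not the study's costs or QALYs.

```python
# Hypothetical sketch of the ICER decision rule from the outcome measures:
# ICER = (cost_new - cost_std) / (QALY_new - QALY_std), judged against a
# willingness-to-pay threshold of $100 000/QALY. All inputs are invented.

WTP = 100_000  # $/QALY willingness-to-pay threshold from the abstract

def icer(cost_new, cost_std, qaly_new, qaly_std):
    """Incremental cost-effectiveness ratio in $/QALY."""
    return (cost_new - cost_std) / (qaly_new - qaly_std)

# Illustrative inputs: the new strategy costs $50 more and gains 0.025 QALYs
value = icer(cost_new=1_050.0, cost_std=1_000.0, qaly_new=0.825, qaly_std=0.800)
print(round(value), value <= WTP)  # 2000 True
```

An intervention is conventionally judged cost-effective when its ICER falls below the willingness-to-pay threshold; a strategy that is both cheaper and more effective "dominates" its comparator, as risk-stratified follow-up did relative to standard care in the Results.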

Results

Simulating prevention for all patients yielded greater QALYs at higher cost from societal and healthcare sector perspectives, equating to ICERs of $2000/QALY and $2142/QALY, respectively. Risk-stratified follow-up in patients with Braden scores <15 dominated standard care. Prevention for all patients was cost-effective in >99% of probabilistic simulations.

Conclusion

Our analysis using EHR data maintains that pressure-injury prevention for all inpatients is cost-effective. Hospitals should invest in nursing compliance with international prevention guidelines.

Work-life balance behaviours cluster in work settings and relate to burnout and safety culture: a cross-sectional survey analysis

Background

Healthcare is approaching a tipping point as burnout and dissatisfaction with work-life integration (WLI) in healthcare workers continue to increase. A scale evaluating common behaviours as actionable examples of WLI was introduced to measure work-life balance.

Objectives

(1) Explore differences in WLI behaviours by role, specialty and other respondent demographics in a large healthcare system. (2) Evaluate the psychometric properties of the work-life climate scale, and the extent to which it acts like a climate, or group-level norm when used at the work setting level. (3) Explore associations between work-life climate and other healthcare climates including teamwork, safety and burnout.

Methods

Cross-sectional survey study completed in 2016 of US healthcare workers within a large academic healthcare system.

Results

10 627 of 13 040 eligible healthcare workers across 440 work settings within seven entities of a large healthcare system (81% response rate) completed the routine safety culture survey. The overall work-life climate scale internal consistency was α=0.830. WLI varied significantly among healthcare worker role, length of time in specialty and work setting. Random effects analyses of variance for the work-life climate scale revealed significant between-work setting and within-work setting variance and intraclass correlations reflected clustering at the work setting level. T-tests of top versus bottom WLI quartile work settings revealed that positive work-life climate was associated with better teamwork and safety climates, as well as lower personal burnout and burnout climate (p<0.001).

Conclusion

Problems with WLI are common in healthcare workers and differ significantly based on position and time in specialty. Although typically thought of as an individual difference variable, WLI appears to operate as a climate, and is consistently associated with better safety culture norms.

Application of electronic trigger tools to identify targets for improving diagnostic safety

Progress in reducing diagnostic errors remains slow partly due to poorly defined methods to identify errors, high-risk situations, and adverse events. Electronic trigger (e-trigger) tools, which mine vast amounts of patient data to identify signals indicative of a likely error or adverse event, offer a promising method to efficiently identify errors. The increasing amounts of longitudinal electronic data and maturing data warehousing techniques and infrastructure offer an unprecedented opportunity to implement new types of e-trigger tools that use algorithms to identify risks and events related to the diagnostic process. We present a knowledge discovery framework, the Safer Dx Trigger Tools Framework, that enables health systems to develop and implement e-trigger tools to identify and measure diagnostic errors using comprehensive electronic health record (EHR) data. Safer Dx e-trigger tools detect potential diagnostic events, allowing health systems to monitor event rates, study contributory factors and identify targets for improving diagnostic safety. In addition to promoting organisational learning, some e-triggers can monitor data prospectively and help identify patients at high risk for a future adverse event, enabling clinicians, patients or safety personnel to take preventive actions proactively. Successful application of electronic algorithms requires health systems to invest in clinical informaticists, information technology professionals, patient safety professionals and clinicians, all of whom work closely together to overcome development and implementation challenges. We outline key future research, including advances in natural language processing and machine learning, needed to improve the effectiveness of e-triggers. Integrating diagnostic safety e-triggers in institutional patient safety strategies can accelerate progress in reducing preventable harm from diagnostic errors.

Formative evaluation of the video reflexive ethnography method, as applied to the physician-nurse dyad

Background

Despite decades of research and interventions, poor communication between physicians and nurses continues to be a primary contributor to adverse events in the hospital setting and a major challenge to improving patient safety. The lack of progress suggests that it is time to consider alternative approaches with greater potential to identify and improve communication than those used to date. We conducted a formative evaluation to assess the feasibility, acceptability and utility of using video reflexive ethnography (VRE) to examine, and potentially improve, communication between nurses and physicians.

Methods

We begin with a brief description of the institutional review board approval process and recruitment activities, then explain how we conducted the formative evaluation by describing (1) the VRE process itself; (2) our assessment of the exposure to the VRE process; and (3) challenges encountered and lessons learnt as a result of the process, along with suggestions for change.

Results

Our formative evaluation demonstrates that it is feasible and acceptable to video-record communication between physicians and nurses during patient care rounds across many units at a large, academic medical centre. The lessons that we learnt helped to identify procedural changes for future projects. We also discuss the broader application of this methodology as a possible strategy for improving other important quality and safety practices in healthcare settings.

Conclusions

The VRE process did generate increased reflection in both nurse and physician participants. Moreover, VRE has utility in assessing communication and, based on the comments of our participants, can serve as an intervention to possibly improve communication, with implications for patient safety.

External validity is also an ethical consideration in cluster-randomised trials of policy changes

Hemming et al (‘Ethical Implications of Excessive Cluster Sizes in Cluster Randomized Trials’, 20 February 2018) cite the FIRST Trial as an example of a ‘higher risk’ cluster-randomised trial in which large cluster sizes pose unjustifiable excess risk.1 The authors state, ‘[t]he obvious way to reduce the cluster size in this study is to reduce the duration of the trial.’

We believe this to be an inappropriate recommendation stemming from an inaccurate appraisal of the FIRST Trial.

The FIRST Trial was designed to inform a potential policy change in US resident duty hours. In the Statistical Analysis Plan (SAP), which was made available at www.nejm.org, we clearly and prospectively stated, ‘[t]his study is a trial-based evaluation of potential policy effects on patient safety and resident wellbeing... this study is intended to inform real-world policy decision-making with respect to resident duty hours regulation.’2 The...

External validity is also an ethical consideration in cluster-randomised trials of policy changes: the authors reply

We would like to thank the authors of the FIRST trial for responding to our paper on ‘Ethical implications of excessive cluster sizes in cluster randomised trials’.1 We are pleased that our paper has generated this interest. The science and methodology of trial design are constantly evolving and will evolve faster, to the future benefit of science, if we can openly reflect on how things have been done in the past and how we might do things differently in the future. We used the FIRST trial as a case study in our analysis as we believed it demonstrated potential for a more efficient design.

The FIRST authors make some valid points about pragmatic trials and large cluster sizes, namely that there are multiple factors to consider when designing trials. However, generalisability refers to the replication of the study results across different populations. Very large trials in...

Patient participation in inpatient ward rounds on acute inpatient medical wards: a descriptive study

Background

Meaningful partnering with patients is advocated to enhance care delivery. Little is known about how this is operationalised at the point of care during hospital ward rounds, where decision-making concerning patient care frequently occurs.

Objective

Describe participation of patients, with differing preferences for participation, during ward rounds in acute medical inpatient services.

Methods

Naturalistic, multimethod design. Data were collected using surveys and observations of ward rounds at two hospitals in Melbourne, Australia. Using convenience sampling, a stratified sample of acute general medical patients was recruited. Prior to observation and interview, patients’ responses to the Control Preference Scale were used to stratify them into three groups representing diverse participation preferences: active control, where the patient makes decisions; shared control, where the patient prefers to make decisions jointly with clinicians; and passive control, where the patient prefers clinicians to make decisions.

Results

Of the 52 patients observed over 133 ward rounds, 30.8% (n=16) reported an active control preference for participation in decision-making during ward rounds, 25% (n=13) expressed a shared control preference and 44.2% (n=23) expressed a passive control preference. Patients’ participation was observed in 75% (n=85) of ward rounds, but few rounds (18%, n=20) involved patient contribution to decisions about their care. Clinicians prompted patient participation in 54% of rounds, and in 15% patients initiated their own participation. Thematic analysis of qualitative observation and patient interview data revealed two themes, ‘supporting patient capability’ and ‘clinician-led opportunity’, that contributed to patient participation or non-participation in ward rounds.

Conclusions

Participation in ward rounds was similar for patients irrespective of control preference. This study demonstrates the need to better understand clinician roles in supporting strategies that promote patient participation in day-to-day hospital care.

Virtual outpatient clinic as an alternative to an actual clinic visit after surgical discharge: a randomised controlled trial

Background

It is standard practice to review all patients at a follow-up clinic after discharge, but demands on all health services outweigh resources, and unnecessary review appointments may delay or deny access to patients with greater needs.

Aims

This randomised trial aimed to establish whether a virtual outpatient clinic (VOPC) was an acceptable alternative to an actual outpatient clinic (OPC) attendance for a broad range of general surgical patients following a hospital admission.

Patients and methods

All patients admitted under one general surgical service over the study period were assessed. If eligible for inclusion, the rationale, randomisation and follow-up methods were explained, consent was sought and patients were randomised to receive either a VOPC or an OPC appointment.

Results

Two hundred and nine patients consented to study inclusion, of whom 98/107 (91.6%) in the VOPC group and 83/102 (81.4%) in the OPC group were successfully contacted. Only 6 patients in the OPC group and 10 in the VOPC group reported ongoing issues. A further follow-up indicated that 78 of 82 (95%) VOPC patients were very happy with their overall experience, compared with 34/61 (56%) in the actual OPC group (p<0.001). A significant proportion of both cohorts—68/82 (83%) in the VOPC group and 41/61 (67%) in the OPC group (p=0.029)—preferred a VOPC appointment as their future follow-up of choice.

Conclusions

The majority of patients discharged from a surgical service could be better followed up by a virtual clinic, with a significant proportion of patients reporting a preference for, and greater satisfaction with, such a service.

Standardisation of perioperative urinary catheter use to reduce postsurgical urinary tract infection: an interrupted time series study

Background

Prevention of healthcare-associated urinary tract infection (UTI) has been the focus of a national effort, yet appropriate indications for insertion and removal of urinary catheters (UC) among surgical patients remain poorly defined.

Methods

We developed and implemented a standardised approach to perioperative UC use to reduce postsurgical UTI including standard criteria for catheter insertion, training of staff to insert UC using sterile technique and standardised removal in the operating room and surgical unit using a nurse-initiated medical directive. We performed an interrupted time series analysis up to 2 years following intervention. The primary outcome was the proportion of patients who developed postsurgical UTI within 30 days as measured by the American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP). Process measures included monthly UC insertions, removals in the operating room and UC days per patient-days on surgical units.

Results

At baseline, 22.5% of patients were catheterised for surgery, none had their catheter removed in the operating room and catheter-days per patient-days were 17.4% on surgical units. Following implementation of the intervention, monthly catheter removal in the operating room immediately increased (range 12.2%–30.0%), while monthly UC insertion decreased more slowly before being sustained below baseline for 12 months (range 8.4%–15.6%). Monthly catheter-days per patient-days decreased to 8.3% immediately following the intervention, with a sustained shift below the mean in the final 8 months. Postsurgical UTI decreased from 2.5% (95% CI 2.0% to 3.1%) to 1.4% (95% CI 1.1% to 1.9%; p=0.002) during the intervention period.

Conclusions

Standardised perioperative UC practices resulted in measurable improvement in postsurgical UTI. These appropriateness criteria for perioperative UC use among a broad range of surgical services could inform best practices for hospitals participating in ACS NSQIP.

Speaking up about patient safety concerns: the influence of safety management approaches and climate on nurses’ willingness to speak up

Background

Speaking up is important for patient safety, but healthcare professionals often hesitate to voice their concerns. Direct supervisors have an important role in influencing speaking up. However, good insight into the relationship between managers’ behaviour and employees’ perceptions about whether speaking up is safe and worthwhile is still lacking.

Aim

To explore the relationships between control-based and commitment-based safety management, climate for safety, psychological safety and nurses’ willingness to speak up.

Methods

We conducted a cross-sectional survey study, resulting in a sample of 980 nurses and 93 nurse managers working in Dutch clinical hospital wards. To test our hypotheses, hierarchical regression analyses (at ward level) and multilevel regression analyses were conducted.

Results

Significant positive associations were found between nurses’ perceptions of control-based safety management and climate for safety (β=0.74; p<0.001), and between the perceived levels of commitment-based management and team psychological safety (β=0.36; p<0.01). Furthermore, team psychological safety was found to be positively related to nurses’ speaking-up attitudes (B=0.24; t=2.04; p<0.05). The relationship between nurse-rated commitment-based safety management and nurses’ willingness to speak up was fully mediated by team psychological safety.

Conclusion

Results provide initial support that nurses who perceive higher levels of commitment-based safety management feel safer to take interpersonal risks and are more willing to speak up about patient safety concerns. Furthermore, nurses’ perceptions of control-based safety management are found to be positively related to a climate for safety, although no association was found with speaking up. Both control-based and commitment-based management approaches seem to be relevant for managing patient safety, but when it comes to encouraging speaking up, a commitment-based safety management approach seems to be most valuable.

Rate of avoidable deaths in a Norwegian hospital trust as judged by retrospective chart review

Background

The proportion of avoidable hospital deaths is challenging to estimate, but has great implications for quality improvement and health policy. Many studies and monitoring tools are based on selected high-risk populations, which may overestimate the proportion. Mandatory reporting systems, however, under-report it. We hypothesised that a review of an unselected sample of hospital deaths would provide an estimate of avoidability in between the estimates from these methods.

Methods

A retrospective case record review of an unselected population of 1000 consecutive non-psychiatric hospital deaths in a Norwegian hospital trust was conducted. Reviewers evaluated to what degree each death could have been avoided, and identified problems in care.

Results

We found 42 (4.2%) of deaths to be at least probably avoidable (more than 50% chance of avoidability). Life expectancy was shortened by at least 1 year among 34 of the 42 patients with an avoidable death. Patients whose death was found to be avoidable were less functionally dependent compared with patients in the non-avoidable death group. The surgical department had the greatest proportion of such deaths. Very few of the avoidable deaths were reported to the hospital’s report system.

Conclusions

Avoidable hospital deaths occur less frequently than estimated by the national monitoring tool, but much more frequently than reported through mandatory reporting systems. Regular reviews of an unselected sample of hospital deaths are likely to provide a better estimate of the proportion of avoidable deaths than the current methods.

Michigan Appropriate Perioperative (MAP) criteria for urinary catheter use in common general and orthopaedic surgeries: results obtained using the RAND/UCLA Appropriateness Method

Background

Indwelling urinary catheters are commonly used for patients undergoing general and orthopaedic surgery. Despite infectious and non-infectious harms of urinary catheters, there is limited guidance available to surgery teams regarding appropriate perioperative catheter use.

Objective

Using the RAND Corporation/University of California Los Angeles (RAND/UCLA) Appropriateness Method, we assessed the appropriateness of indwelling urinary catheter placement and different timings of catheter removal for routine general and orthopaedic surgery procedures.

Methods

Two multidisciplinary panels consisting of 13 and 11 members (physicians and nurses) for general and orthopaedic surgery, respectively, reviewed the available literature regarding the impact of different perioperative catheter use strategies. Using a standardised, multiround rating process, the panels independently rated clinical scenarios (91 general surgery, 36 orthopaedic surgery) for urinary catheter placement and postoperative duration of use as appropriate (ie, benefits outweigh risks), inappropriate or of uncertain appropriateness.

Results

Appropriateness of catheter use varied by procedure, accounting for procedure-specific risks as well as expected procedure time and intravenous fluids. Procedural appropriateness ratings for catheters were summarised for clinical use into three groups: (1) can perform surgery without catheter; (2) use intraoperatively only, ideally remove before leaving the operating room; and (3) use intraoperatively and keep catheter until postoperative days 1–4. Specific recommendations were provided by procedure, with postoperative day 1 being appropriate for catheter removal for first voiding trial for many procedures.

Conclusion

We defined the appropriateness of indwelling urinary catheter use during and after common general and orthopaedic surgical procedures. These ratings may help reduce catheter-associated complications for patients undergoing these procedures.

Addressing the challenges of knowledge co-production in quality improvement: learning from the implementation of the researcher-in-residence model

The concept of knowledge co-production is used in health services research to describe partnerships (which can involve researchers, practitioners, managers, commissioners or service users) with the purpose of creating, sharing and negotiating different knowledge types used to make improvements in health services. Several knowledge co-production models have been proposed to date, some involving intermediary roles. This paper explores one such model, researchers-in-residence (also known as ‘embedded researchers’).

In this model, researchers work inside healthcare organisations, operating as staff members while also maintaining an affiliation with academic institutions. As part of the local team, researchers negotiate the meaning and use of research-based knowledge to co-produce knowledge, which is sensitive to the local context. Even though this model is spreading and appears to have potential for using co-produced knowledge to make changes in practice, a number of challenges with its use are emerging. These include challenges experienced by the researchers in embedding themselves within the practice environment, preserving a clear focus within their host organisations and maintaining academic professional identity.

In this paper, we provide an exploration of these challenges by examining three independent case studies implemented in the UK, each of which attempted to co-produce relevant research projects to improve the quality of care. We explore how these challenges played out in practice and the strategies used by the researchers-in-residence to address them. By describing and analysing these strategies, we hope that participatory approaches to knowledge co-production can be used more effectively in the future.