Resource Page
Infant Breathing System Recall: AirLife/Vyaire Removes Infant Heated Wire Circuits Due to Risk for Inadvertent Adapter Disconnection During Ventilation
Broselow Pediatric Emergency Rainbow Tape Recall: AirLife Removes Certain Broselow Pediatric Emergency Rainbow Tapes due to Misprinted Information
Cybersecurity Vulnerabilities with Certain Patient Monitors from Contec and Epsimed: FDA Safety Communication
Early Alert: Infusion Pump Software Issue from Baxter
Extended-Release Stimulants for ADHD: FDA Drug Safety Communication - FDA Requires Expanded Labeling about Weight Loss Risk in Patients Younger than 6 Years
Early Alert: Blood Pump Controller Issue from Abiomed
Sandoz Inc. Issues Voluntary Nationwide Recall of One Lot of Cefazolin for Injection Due to Product Mispackaging
mRNA COVID-19 Vaccines: FDA Safety Communication - FDA Approves Required Updated Warning in Labeling Regarding Myocarditis and Pericarditis Following Vaccination
Angiographic Catheter Recall: Cook Removes Beacon Tip Angiographic Catheters due to Tip Separation
Early Alert: Esophageal pH Monitoring Capsule Issue from Medtronic
Import Alerts for Certain Olympus Medical Devices Manufactured in Japan - Letter to Health Care Providers
Anesthesia Delivery Systems Recall: GE HealthCare Issues Correction for Certain Carestations due to Risk of Ineffective Ventilation When Used in Volume Control Ventilation (VCV) Mode
Medical Procedure Kits Correction: Medline Industries, LP Issues Correction for Medline Procedure Kits Containing Medtronic Aortic Root Cannula due to Potential Excess Material in Male Luers
Resuscitation System Recall: ZOLL Circulation, Inc. Recalls AutoPulse NXT Resuscitation System Due to a Failure Code That May Stop Compressions or Deliver Inadequate CPR
Transderm Scōp (Scopolamine Transdermal System): Drug Safety Communication - FDA Adds Warning About Serious Risk of Heat-Related Complications with Antinausea Patch
Understanding the evidence for artificial intelligence in healthcare
Scientific studies of artificial intelligence (AI) solutions in healthcare have been the subject of intense criticism—both in research publications and in the media.1–3 Early validations of predictive algorithms are criticised for not having meaningful clinical impact, and AI tools that make mistakes or fail to show immediate improvement in health outcomes are heralded as the first snowflakes in the next AI winter (a period of decreased interest in AI research and development). Scientific evidence is the language of trust in healthcare, and peer-reviewed studies evaluating AI solutions are key to fostering adoption. There are over two dozen reporting guidelines for AI in medicine,4 and many other consensus statements and standards that offer recommendations for the publication of research about medical AI.5 Despite such guidance, the average frontline clinician still struggles to interpret the results of an AI study...
Workforce well-being is workforce readiness: it is time to advance from describing the problem to solving it
‘We need bold, fundamental change that gets at the roots of the burnout crisis.’ – US Surgeon General Vivek H. Murthy, MD, MBA.
Well-being was brought into clearer focus during the COVID-19 pandemic, during which the prevalence of healthcare worker (HCW) emotional exhaustion increased from 27%1 to 39%.2 Currently, there is no coordinated effort to ensure HCW well-being interventions meet minimum standards of feasibility, accessibility and methodological rigour. In this issue of BMJ Quality and Safety, Melvin et al assessed perceptions of physician well-being programmes by interviewing physicians and people involved in these programmes.3 As is often the case with any real-world application of science, there are substantial gaps between the programmes as intended and the programmes in practice. The authors conclude that the ‘persistence of poor well-being outcomes suggests that current support initiatives are suboptimal’.
The key is understanding what is suboptimal...
We will take some team resilience, please: Evidence-based recommendations for supporting diagnostic teamwork
In this issue of BMJ Quality and Safety, Black and colleagues present a qualitative study of healthcare teams working to uncover diagnoses in patients experiencing non-specific cancer symptoms.1 The study highlights the critical role teams play in either supporting or derailing diagnostic pathways. Overall, Black et al1 present unique insights into the challenges clinical teams face when caring for patients with non-specific symptoms.
Unfortunately, we know that diagnostic processes such as those studied by Black et al are frequently unsafe. Diagnostic errors are ‘the single largest source of deaths across all (healthcare) settings,’ with cancer-related diagnostic error rates estimated at around 11.1%.2 A key challenge to making diagnoses in patients with non-specific symptoms is the presence of uncertainty throughout the diagnostic process.
As Black et al point out,1 uncertainty in the diagnostic process is felt by both patients and clinicians. It...
Large-scale observational study of AI-based patient and surgical material verification system in ophthalmology: real-world evaluation in 37 529 cases
Surgical errors in ophthalmology can have devastating consequences. We developed an artificial intelligence (AI)-based surgical safety system to prevent errors in patient identification, surgical laterality and intraocular lens (IOL) selection. This study aimed to evaluate its effectiveness in real-world ophthalmic surgical settings.
Methods: In this retrospective observational before-and-after implementation study, we analysed 37 529 ophthalmic surgeries (18 767 pre-implementation, 18 762 post-implementation) performed at Tsukazaki Hospital, Japan, between 1 March 2019 and 31 March 2024. The AI system, integrated with the WHO surgical safety checklist, was implemented for patient identification, surgical laterality verification and IOL authentication.
Results: Post implementation, five medical errors (0.027%) occurred, four of them in non-authenticated cases (where the AI system was not fully implemented or properly used), compared with one error (0.0053%) pre-implementation (p=0.125). Of the four non-authenticated errors, two were laterality errors during the initial implementation period and two were IOL implantation errors involving unlearned IOLs (7.3% of cases) due to delayed AI updates. The AI system identified 30 near misses (0.16%) post implementation vs 9 (0.048%) pre-implementation (p=0.00067); surgical laterality errors/near misses occurred at 0.039% (7/18 762) and IOL recognition errors/near misses at 0.29% (28/9713). The system achieved >99% implementation after 3 months. Authentication performance metrics showed high efficiency: facial recognition (1.13 attempts, 11.8 s), surgical laterality (1.05 attempts, 3.10 s) and IOL recognition (1.15 attempts, 8.57 s). Cost–benefit analysis revealed potential benefits ranging from US$181 946.94 to US$2 769 129.12 under conservative and intermediate scenarios, respectively.
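The headline comparisons above are simple two-proportion contrasts between the pre- and post-implementation cohorts. As a minimal sketch only, assuming Python with scipy and a two-sided Fisher's exact test (the excerpt does not name the test the authors used, so the computed p-values may not match the reported ones exactly), the rates can be recomputed from the counts given in the abstract:

    # Minimal sketch: recompute the pre/post rates reported above and run a
    # two-sided Fisher's exact test on each 2x2 table. The test choice is an
    # assumption; the abstract does not state which test the authors used.
    from scipy.stats import fisher_exact

    PRE_N, POST_N = 18_767, 18_762  # surgeries before / after implementation

    def compare(pre_events: int, post_events: int, label: str) -> None:
        """Print per-cohort rates and a two-sided Fisher's exact p-value."""
        table = [[post_events, POST_N - post_events],
                 [pre_events, PRE_N - pre_events]]
        _, p = fisher_exact(table, alternative="two-sided")
        print(f"{label}: pre {pre_events / PRE_N:.4%}, "
              f"post {post_events / POST_N:.4%}, p={p:.5f}")

    compare(pre_events=1, post_events=5, label="medical errors")
    compare(pre_events=9, post_events=30, label="near misses detected")

With so few error events, the result is sensitive to the choice of exact test and to sidedness, which may explain any divergence from the reported p=0.125 for the error comparison.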
Conclusions: The AI-based surgical safety system significantly increased near miss detection and showed potential economic benefits. However, errors in non-authenticated cases underscore the importance of consistent system use and integration with existing safety protocols. These findings emphasise that while AI can enhance surgical safety, its effectiveness depends on proper implementation and continuous refinement.