Resource Page

Extended-Release Stimulants for ADHD: FDA Drug Safety Communication - FDA Requires Expanded Labeling about Weight Loss Risk in Patients Younger than 6 Years

FDA MedWatch -

The FDA is revising the labeling of all extended-release stimulants indicated to treat attention-deficit/hyperactivity disorder (ADHD) - including certain formulations of amphetamine and methylphenidate - to warn about the risk of weight loss and other adverse reactions (side effects) in patients younger than 6 years...

Transderm Scōp (Scopolamine Transdermal System): Drug Safety Communication - FDA Adds Warning About Serious Risk of Heat-Related Complications with Antinausea Patch

FDA MedWatch -

The FDA is warning that the antinausea patch Transderm Scōp (scopolamine transdermal system) can increase body temperature and cause heat-related complications, resulting in hospitalization or even death in some cases. Most cases occurred in children 17 years and younger and in adults 60 years and older...

Understanding the evidence for artificial intelligence in healthcare

Quality and Safety in Health Care Journal -

Scientific studies of artificial intelligence (AI) solutions in healthcare have been the subject of intense criticism—both in research publications and in the media.1–3 Early validations of predictive algorithms are criticised for not having meaningful clinical impact, and AI tools that make mistakes or fail to show immediate improvement in health outcomes are heralded as the first snowflakes in the next AI winter (a period of decreased interest in AI research and development). Scientific evidence is the language of trust in healthcare, and peer-reviewed studies evaluating AI solutions are key to fostering adoption. There are over two dozen reporting guidelines for AI in medicine,4 and many other consensus statements and standards that offer recommendations for the publication of research about medical AI.5 Despite such guidance, the average frontline clinician still struggles to interpret the results of an AI study to...

Workforce well-being is workforce readiness: it is time to advance from describing the problem to solving it

Quality and Safety in Health Care Journal -

‘We need bold, fundamental change that gets at the roots of the burnout crisis.’ - US Surgeon General Vivek H. Murthy, MD, MBA.

Well-being was brought into clearer focus during the COVID-19 pandemic, during which the prevalence of healthcare worker (HCW) emotional exhaustion increased from 27%1 to 39%.2 Currently, there is not a coordinated effort to ensure HCW well-being interventions meet minimum standards of feasibility, accessibility and methodological rigour. In this issue of BMJ Quality and Safety, Melvin et al assessed perceptions of physician well-being programmes by interviewing physicians and people involved in these programmes.3 As is often the case with any real-world application of science, there are substantial gaps between the programmes as intended and the programmes in practice. The authors conclude that the ‘persistence of poor well-being outcomes suggests that current support initiatives are suboptimal’.

The key is understanding what is suboptimal....

We will take some team resilience, please: Evidence-based recommendations for supporting diagnostic teamwork

Quality and Safety in Health Care Journal -

In this issue of BMJ Quality and Safety, Black and colleagues present a qualitative study of healthcare teams working to uncover diagnoses in patients experiencing non-specific cancer symptoms.1 The study highlights how critical teams are in either supporting or derailing diagnostic pathways. Overall, Black et al1 present unique insights that highlight the challenges clinical teams face when caring for patients with non-specific symptoms.

Unfortunately, we know that diagnostic processes such as those studied by Black et al are frequently unsafe. Diagnostic errors are ‘the single largest source of deaths across all (healthcare) settings,’ with cancer-related diagnostic error rates estimated at around 11.1%.2 A key challenge to making diagnoses in patients with non-specific symptoms is the presence of uncertainty throughout the diagnostic process.

As Black et al point out,1 uncertainty in the diagnostic process is felt by both patients and clinicians. It...

Large-scale observational study of AI-based patient and surgical material verification system in ophthalmology: real-world evaluation in 37 529 cases

Quality and Safety in Health Care Journal -

Background

Surgical errors in ophthalmology can have devastating consequences. We developed an artificial intelligence (AI)-based surgical safety system to prevent errors in patient identification, surgical laterality and intraocular lens (IOL) selection. This study aimed to evaluate its effectiveness in real-world ophthalmic surgical settings.

Methods

In this retrospective observational before-and-after implementation study, we analysed 37 529 ophthalmic surgeries (18 767 pre-implementation, 18 762 post implementation) performed at Tsukazaki Hospital, Japan, between 1 March 2019 and 31 March 2024. The AI system, integrated with the WHO surgical safety checklist, was implemented for patient identification, surgical laterality verification and IOL authentication.
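The abstract does not describe the system's internals, but the verification flow it names can be sketched as three checks against the planned case. Below is a minimal, hypothetical sketch in Python; names such as SurgicalPlan, verify_case and the recognised/scanned inputs are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a pre-incision verification step: the system checks
# patient identity, surgical laterality and the planned IOL model against the
# surgical plan, and reports any mismatches as a near miss before proceeding.
from dataclasses import dataclass

@dataclass
class SurgicalPlan:
    patient_id: str
    laterality: str   # "left" or "right"
    iol_model: str    # planned intraocular lens model

def verify_case(plan: SurgicalPlan,
                recognised_patient_id: str,
                observed_laterality: str,
                scanned_iol_model: str) -> list[str]:
    """Return a list of mismatches; an empty list means all checks passed."""
    mismatches = []
    if recognised_patient_id != plan.patient_id:
        mismatches.append("patient identity mismatch")
    if observed_laterality.lower() != plan.laterality.lower():
        mismatches.append("surgical laterality mismatch")
    if scanned_iol_model != plan.iol_model:
        mismatches.append("IOL model mismatch")
    return mismatches

# Example: a laterality mismatch is flagged before incision.
plan = SurgicalPlan(patient_id="P-001", laterality="right", iol_model="MODEL-A")
print(verify_case(plan, "P-001", "left", "MODEL-A"))  # ['surgical laterality mismatch']
```

In the study, the recognised identity, laterality and IOL label would come from the system's facial and image recognition components; here they are passed in as plain strings purely to show the checking logic.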

Results

Post implementation, five medical errors (0.027%) occurred, four of them in non-authenticated cases (where the AI system was not fully implemented or properly used), compared with one (0.0053%) pre-implementation (p=0.125). Of the four non-authenticated errors, two were laterality errors during the initial implementation period and two were IOL implantation errors involving unlearned IOLs (7.3% of cases) due to delayed AI updates. The AI system identified 30 near misses (0.16%) post implementation versus 9 (0.048%) pre-implementation (p=0.00067); surgical laterality errors/near misses occurred at 0.039% (7/18 762) and IOL recognition events at 0.29% (28/9713). The system achieved >99% implementation after 3 months. Authentication performance metrics showed high efficiency: facial recognition (1.13 attempts, 11.8 s), surgical laterality (1.05 attempts, 3.10 s) and IOL recognition (1.15 attempts, 8.57 s). Cost–benefit analysis revealed potential benefits ranging from US$181 946.94 to US$2 769 129.12 in the conservative and intermediate scenarios, respectively.
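As a quick check on the reported rate arithmetic, the near-miss percentages can be reproduced from the stated counts and case denominators. The sketch below assumes a two-sided Fisher's exact test for the pre/post comparison; the abstract does not name the test actually used.

```python
# Minimal sketch reproducing the near-miss rates from the abstract's counts,
# assuming the reported case totals as denominators and a two-sided Fisher's
# exact test for the comparison (an assumption, not stated in the abstract).
from scipy.stats import fisher_exact

PRE_CASES, POST_CASES = 18767, 18762          # surgeries before / after implementation
pre_near_misses, post_near_misses = 9, 30     # near misses detected in each period

# Rates as percentages (should land near the abstract's 0.048% and 0.16%)
pre_rate = 100 * pre_near_misses / PRE_CASES
post_rate = 100 * post_near_misses / POST_CASES

# 2x2 table: [near miss, no near miss] for the post and pre periods
table = [
    [post_near_misses, POST_CASES - post_near_misses],
    [pre_near_misses, PRE_CASES - pre_near_misses],
]
_, p_value = fisher_exact(table, alternative="two-sided")

print(f"pre-implementation near-miss rate:  {pre_rate:.3f}%")
print(f"post-implementation near-miss rate: {post_rate:.2f}%")
print(f"two-sided Fisher's exact p-value:   {p_value:.5f}")
```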

Conclusions

The AI-based surgical safety system significantly increased near miss detection and showed potential economic benefits. However, errors in non-authenticated cases underscore the importance of consistent system use and integration with existing safety protocols. These findings emphasise that while AI can enhance surgical safety, its effectiveness depends on proper implementation and continuous refinement.
