Authors: Rob Hirst, Liz Farah, Chris Connolly, Becky Maxwell, Andy Neill, Dave McCreary / Codes: IP3, ResC3, RP5, SLO10, SLO5 / Published: 08/08/2024

Authors

- Andy Neill

- Dave McCreary

Clinical question

Is physician gestalt better at predicting sepsis than the established scoring tools?

Title

Knack, S. K. S. et al. Early Physician Gestalt Versus Usual Screening Tools for the Prediction of Sepsis in Critically Ill Emergency Patients. Ann. Emerg. Med. (2024) doi:10.1016/j.annemergmed.2024.02.009. 

Background

- sepsis is common, something you may or may not have noticed. There has been an explosion of "awareness" for sepsis that seems to have resulted in young people with pharyngitis rising to the top of the triage pile and getting IV antibiotics within seconds of arrival...

- there have been a number of tools to help early identification of sepsis, with things like SIRS, NEWS, and the slightly odd qSOFA that came out with the revised sepsis definitions a few years ago
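For anyone who hasn't memorised it, qSOFA is simple enough to sketch in a few lines. The criteria below are the standard published ones (one point each for respiratory rate ≥ 22/min, systolic BP ≤ 100 mmHg, and altered mentation, i.e. GCS < 15; a score of 2 or more flags higher risk); the function name and example values are just illustrative:

```python
def qsofa(resp_rate: int, systolic_bp: int, gcs: int) -> int:
    """Quick SOFA: one point each for RR >= 22/min, SBP <= 100 mmHg,
    and altered mentation (GCS < 15)."""
    return int(resp_rate >= 22) + int(systolic_bp <= 100) + int(gcs < 15)

# A score of 2 or more flags a patient at higher risk of poor outcome.
print(qsofa(resp_rate=24, systolic_bp=95, gcs=15))  # 2
```

Three bedside variables, no bloods: the appeal is obvious, which is exactly why it's worth asking whether it adds anything over a clinician's first impression.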

- but is any of this better than physician impression, or, to use that much-loved German word, "gestalt"?

Methods

- single-centre study, mostly authors from Hennepin, with Brian Driver (of bougie study fame) among the authors

- prospective data

- enrolled people presenting to their resus area. This is a bit of a fatal flaw for me, as there is clearly a reason someone thought they should go to resus, and whatever that something is, it is not being studied here. The people I really want to know about are the ones who get missed by the usual routes (e.g. triage), end up in a chair in minors, and turn out to have sepsis. They do acknowledge this in their limitations

- once there, the doctors completed a visual analogue scale (VAS) for suspicion of sepsis, and lots of data were collected for the sepsis scores.

- primary outcome was a discharge diagnosis of sepsis. Now, in the UK and Ireland I would be concerned about this, as the poor people who have to code our admissions are often working with very little data to guide them, and I doubt the accuracy of this. But in the US there is specific payment for accurately diagnosing and treating sepsis, and that incentive is incredibly important to the hospital, so I suspect the discharge diagnosis is at least moderately accurate. It is hardly a gold standard though.

- also, because this now seems compulsory, they added a machine learning model to look at lots of data and see if that was better than the sepsis scores. This seems like a bit of an add-on, as it doesn't even make it into the title of the paper.

Results

- 2500 pts

- 11% ended up with discharge diagnosis of sepsis

- the vast majority of the VAS assessments were done by attendings

- unsurprisingly the higher the VAS the more likely antibiotics were given.

- when they only used data available at 15 minutes, the VAS outperformed everything. Again, this is hardly surprising, as that would take in clinical history and context, something that would not be picked up by a score.

- they report an area under the ROC curve of ~0.9 for physician gestalt, which suggests very usable sensitivity and specificity. This applies when the VAS was >50%. The VAS was consistently better than the sepsis scores or our new machine overlords, the machine learning model.
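If AUROC feels abstract: it is the probability that a randomly chosen patient who did have sepsis received a higher score than a randomly chosen patient who did not (ties counting as half). A minimal sketch of that pairwise-comparison definition, using entirely made-up VAS values rather than anything from the paper:

```python
def auroc(scores, labels):
    """Area under the ROC curve, computed as the probability that a
    randomly chosen positive case outscores a randomly chosen negative
    case (ties count as half a win)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical VAS suspicion scores (0-100) and sepsis outcomes (1 = sepsis):
vas    = [90, 75, 60, 40, 30, 20, 10, 5]
sepsis = [ 1,  1,  0,  1,  0,  0,  0, 0]
print(auroc(vas, sepsis))  # 14 of 15 positive/negative pairs ranked correctly
```

An AUROC of 0.5 is a coin flip and 1.0 is perfect discrimination, so ~0.9 for a single bedside impression at 15 minutes is genuinely impressive.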

Thoughts

- in this sick cohort (they were all in resus), VAS gestalt by mostly fully trained EPs outperformed pretty much all the scores in predicting a diagnosis of sepsis. It also outperformed a machine learning algorithm

- unclear how good it would be if applied to a truly undifferentiated cohort, but it does not seem that completing a qSOFA on someone you already think has sepsis and is sick adds anything.