esoleyman a day ago | next |

I don’t like relative risk and relative risk reduction because they tend to overstate the effectiveness of an intervention.

In this case, the absolute risks of death in the GIM pre-intervention and post-intervention periods are 0.0215 (2.15%) and 0.0146 (1.46%), giving an absolute risk reduction of 0.0069 (0.69%).

While the relative risk reduction is 26% across the pre- and post-intervention periods, the absolute risk reduction is only 0.69%, with an NNT (number needed to treat) of 156. That means roughly 1 patient in 156 was helped by this intervention.
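For anyone who wants to check the figures, the arithmetic is a few lines of Python. (With the two-decimal rates the NNT comes out near 145 rather than 156, which presumably reflects rounding in the quoted rates, and the unadjusted RRR is a bit above the paper's adjusted 26%.)

```python
# Sanity-checking the quoted figures: pre- vs. post-intervention mortality in GIM.
pre, post = 0.0215, 0.0146

arr = pre - post   # absolute risk reduction
rr = post / pre    # relative risk
rrr = 1 - rr       # relative risk reduction
nnt = 1 / arr      # number needed to treat

print(f"ARR: {arr*100:.2f}%")   # 0.69%
print(f"RRR: {rrr*100:.0f}%")   # ~32% unadjusted (the paper's 26% is adjusted)
print(f"NNT: {nnt:.0f}")        # ~145 with these rounded rates
```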

In addition, they had 2 false alarms for each true alarm, which suggests that interventions were performed on patients who did not require them — more tests, more medications, and possibly increased risk from those interventions.

This suggests that the CHARTwatch ML/AI is not actually helping all that much clinically.

vessenes a day ago | root | parent | next |

I like this analysis, although I come to a different conclusion: if AI can give nursing staff an early warning, telling them 'look closer', and it's right over 1/3 of the time, that seems great. Right now in a 30-bed unit, nurses have to keep track of 30 sets of data. With this, they could focus in on 3 sets when an alarm goes off. I believe these systems will get better over time as well. But, as a patient, I'd 100% take a ward with early AI warning and a 66% false positive rate over one with no such tech. Wouldn't you?

_aavaa_ a day ago | root | parent | next |

I would not. High false alarm rates are a problem in all sorts of industries when it comes to warnings and alerts. Too many alerts, or too many false-positive alerts, cause operators (or nurses, in this example) to start ignoring the warnings.

tcmart14 a day ago | root | parent | next |

This is the real problem. In a perfect world, everyone pays attention to alarms with the same attentiveness all the time. But that just isn't reality. Before going into building software, I was in the Navy, and after that did work as a chemical system tech. In the Navy, I worked in JP-5 pumprooms. In both environments we had alarms, and in both environments we learned which ones were nuisance alarms and which weren't, or just took alarms with a grain of salt and therefore never paid proper attention to them.

That is always the issue with alarms. You have a fine line to walk. Too many alarms and people become complacent and learn to ignore alarms. Too few alarms and you don't draw the attention that is needed.

the__alchemist 5 hours ago | root | parent | prev | next |

More data with appropriate confidence intervals can always be leveraged for good. I hear this application often in medical systems, and recognize the practical impact. The problem is incorrect use of this knowledge (e.g. to overtreat), not the knowledge itself.

_aavaa_ 4 hours ago | root | parent |

No, the problem is information overload. Even without these errors nurses are often overburdened with work and paperwork. Adding another alarm, with a >50% false positive rate is going to make that situation worse. And the nurses will start ignoring the unreliable warning.

the__alchemist 4 hours ago | root | parent |

I suspect we are on the same page. My point is about using information like that described in the article to improve the system. I do not think an on/off "alarm" is the way to do it. The key is to use ideas from signal processing theory (e.g. how a Kalman filter updates) to provide input into what medical action to take. The reaction against more diagnostics etc. comes from how they are applied, like a brute-force alarm, leading to worse outcomes through, for example, unnecessary surgeries.

The reduction I am arguing against is: "Historically, extra information and diagnostics that have an error margin results in worse outcomes because we misapply it; therefore don't build these systems."
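The Kalman update mentioned above is just a variance-weighted average: the running estimate is nudged toward each new reading in proportion to how trustworthy that reading is, so you get a graded confidence signal rather than a binary alarm. A toy scalar version, with all numbers invented for illustration:

```python
def kalman_update(est, est_var, reading, reading_var):
    """Fuse one noisy reading into the current estimate (scalar Kalman update)."""
    gain = est_var / (est_var + reading_var)  # how much to trust the new reading
    new_est = est + gain * (reading - est)    # move part-way toward the reading
    new_var = (1 - gain) * est_var            # uncertainty shrinks after each update
    return new_est, new_var

# A vital-sign estimate of 80 (variance 4) meets a noisy reading of 90 (variance 16):
est, var = kalman_update(80.0, 4.0, 90.0, 16.0)
print(est, var)  # the estimate shifts to 82.0 and variance drops toward 3.2,
                 # instead of a raw threshold tripping an alarm on the 90
```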

rscho a day ago | root | parent | prev | next |

No, many people working in clinical units wouldn't, because of what might happen on false alarms. As GP said: more meds, more interventions. It's not at all clear whether such systems would help with current workflows and current technology. One of the most famous books about medicine says that good medicine is doing nothing as much as possible. That's still very true in 2024, and probably will be for a long time yet.

hammock a day ago | root | parent | prev | next |

I like this analysis, although I come to a different conclusion: if AI can allow nurses to manage 10x as many beds (30 vs 3), a hospital can now let go 90% of its nursing staff. Wouldn’t you?

namaria 3 hours ago | root | parent | next |

Coming to the conclusion that cutting 90% of nursing staff is possible and desirable is an astonishingly disconnected take

netsharc 5 hours ago | root | parent | prev | next |

Luckily most hospitals in the world seem to be short-staffed, and the population of sick people is growing (because people are living longer).

hammock 4 hours ago | root | parent |

Generally speaking, they aren't short-staffed because there aren't enough nurses, but because they can't or won't pay them enough. Those same hospitals hire large numbers of travel nurses to supplement their “short staff” at pay rates double or triple those of a local nurse.

And the nurses who want decent pay and can do travel nursing, do travel nursing.

fsckboy a day ago | root | parent | prev | next |

>1 patient in 156 was helped by this intervention

the headline says we're talking about death: does that mean 1 life was saved for every 156 patients?

>In addition, they had 2 false alarms for each true alarm and ... and possibly increased risk from said interventions

but wouldn't this study have captured any deaths from those interventions, so the 1 out of 156 life-savings was net?

rscho a day ago | root | parent |

Would you accept serious non-lethal complications from a false alarm to (maybe) save a room neighbour you've never met? That wouldn't be captured.

fsckboy 18 hours ago | root | parent |

an individual would probably not make that choice, but the population could easily, the insurance company might, religious leaders might, etc.

this study was measuring deaths and what you are suggesting would be outside this study, but it could be measured also.

swyx a day ago | root | parent | prev | next |

this was excellent and necessary context for fluff pieces like the OP. how can we automate this kind of analysis?

esoleyman a day ago | root | parent | next |

You can't automate it. You have to look at the data and charts to figure out the specifics you want and then you plug and chug. I haven't looked deeply at this though but whenever researchers use relative risk and it shows a profound effect, I always calculate the absolute risk to make sure that the intervention is effective.

Many researchers go to relative risk because it shows better results!

netruk44 a day ago | root | parent | prev | next |

I know everyone hates "I asked ChatGPT" comments but...I feel it's relevant here.

It came to roughly the same conclusion as the gp comment when provided with the study PDF.

https://chatgpt.com/share/66eb09e3-7a74-8008-afa8-3b60161d24...

(Though obviously this approach still requires you to go and look at the PDF yourself to make sure it isn't making anything up)

staticman2 a day ago | root | parent |

I think that ChatGPT result is a Rorschach test; it wrote things like "The percentage reduction could be exaggerated based..."

"Could" is doing a lot of work in letting you interpret what it's saying however you like.

qsort a day ago | prev | next |

"The deterioration prediction model was a time-aware multivariate adaptive regression spline (MARS) model"

https://doi.org/10.1503/cmaj.240132

tantalor a day ago | root | parent | next |

Thanks for posting this! Much better source than the CBC article.

I found this interesting:

> 1 truly alerted patient for every 2 falsely alerted patients was deemed an acceptable number of false alarms

shadowgovt a day ago | root | parent |

Interesting, and I think it makes sense.

In an ideal world, the nurse to patient ratio would be high enough that patients could be seen on regular rotation frequently. I've never been in a hospital where this was the case. So a system that can correctly prioritize resources for critical cases even if it's pulling resources away from non-critical cases will probably result in a net improved outcome.

rscho a day ago | root | parent |

With such a false positive rate, I expect the staff on site to find ways to negate the effect pretty quickly.

Tostino a day ago | root | parent | next |

I don't know... if every third time I was alerted there was some relatively serious issue, versus how often you just stumble upon a serious issue when doing rounds, I'd think that would be a pretty good alert rate compared to the norm. But then again, I'm not in healthcare.

rscho a day ago | root | parent |

Essentially, it depends on workplace integration, i.e. how much effort it takes to discover the alert trigger. From personal experience, I'd say the upper limit of inconvenience is 'click the alert' on mobile, and 'move mouse on alert label to see tooltip' on desktop. Anything more will be quickly discarded, especially if it involves a popup or opens a new window.

inglor_cz a day ago | root | parent | prev |

2 in 3 isn't such a terrible false positive rate.

If your home alarm caught one real burglar for every three occasions it triggered, I bet you wouldn't develop alarm fatigue. I certainly wouldn't.

rscho a day ago | root | parent |

Do you get burglars multiple times a day? I bet not...

inglor_cz a day ago | root | parent |

Yeah, but potential death of a patient is on a similar level of seriousness.

rscho a day ago | root | parent |

Most of those alarms will warn about trivial everyday results, that may (with rather low probability) cause death down the line. My bet is they'd get mostly ignored very quickly.

lukeinator42 a day ago | root | parent | next |

I'm also shocked at how readable this wikipedia article is relative to most articles about statistical methods.

IshKebab a day ago | root | parent |

Wow, you're right. I mean, it's all maths articles on Wikipedia, not just the statistics ones.

I think there are two causes of Wikipedia maths articles' general awfulness:

1. They're probably written by people who just learned about the topic and want to show off their superior knowledge rather than explain the concept.

2. The people writing them think an article is supposed to be a precise mathematical definition of the concept, rather than an easy-to-understand introduction. It's like they're writing a formal model instead of a tutorial.

Often the Mathworld articles are a lot better than Wikipedia, when they exist at least.

0cf8612b2e1e a day ago | root | parent | prev | next |

Fun bit of trivia (though depressing) from the wiki

  The term "MARS" is trademarked and licensed to Salford Systems. In order to avoid trademark infringements, many open-source implementations of MARS are called "Earth".

jmward01 a day ago | prev | next |

The question is will this lead to better care or a reduction in resources? Technology allows companies to become 'just good enough'. Any better than 'just good enough' and resources are withdrawn. If there is a 26% improvement in x and x was 'just good enough' before then the only 'rational' move by administration is to reduce other resources until x hits 'just good enough' again. That being said, I think the improvements are coming so rapidly in healthcare that we have a real chance of causing the entire system to shift into a new dynamic so maybe we will actually capture some of these gains for patients.

jjmarr a day ago | root | parent | next |

This takes place in Canada. There are no for-profit hospital complexes like in the USA. All of our major hospitals are non-profits, reimbursed by the single-payer healthcare system and by philanthropists getting stuff named after them. The profit motive isn't as significant a factor here.

That being said, I'm fine with a reduction of resources if additional resources don't increase the quality of my care. In Canada, doctors don't really like to prescribe antibiotics for minor infections.

Americans find this bizarre, but for a minor infection antibiotics are going to screw up your stomach bacteria and long-term health to maybe treat a disease that your body can easily handle on its own.

There's no magic value that comes from allocating resources to a problem. Oftentimes spending money has zero or negative impact beyond virtue-signalling that you care about the problem.

llm_nerd a day ago | root | parent | next |

Canadian hospitals have largely the same cost-cutting and "efficiency" measures as their US equivalents. Departments have budgets that they have to fight for, fiefdoms compete for scraps, and there is an enormous and perpetually growing admin/executive side that is taking more and more of the budget. Couple this with governments such as Ontario's that "starve the beast", so to speak, forcing hospitals to squeeze further.

I don't think we should ever take any sort of superior position on this. The same motivations and outcomes occur.

Having said that, efficiency is good, especially with an aging population that will require more and more care. Resources are limited, so applying them in the most effective, efficient way possible is always a win.

jjmarr a day ago | root | parent |

American healthcare spends about 80% more than Canadian healthcare on a per-capita basis, for worse or equal outcomes.[1]

Our system has major problems, but we spend less money and have a healthier population. That definitionally means we're more efficient.

> The same motivations and outcomes occur.

Our hospitals don't have shareholders that capture excess revenue as profit. Efficiency gains in a non-profit hospital typically get reinvested into the mission of providing healthcare. Efficiency gains in a for-profit hospital often go to the owners.

"Efficiency" is also measured differently in a non-profit context. A business measures monetary return on investment. A non-profit organization measures the monetary cost of achieving its mission.

Many for-profit hospitals in the United States offer free mental health clinics. These clinics have been accused of baiting patients into saying something suicidal as a tactic to involuntarily commit said patients.[2] Because appeals of an emergency mental health order are difficult, this is an extremely efficient way of making money (the hospital gets to bill the patient for their stay).

I don't believe this could happen in Canada. The goal is to get people out of the hospital because there aren't enough beds.

[1] https://en.wikipedia.org/wiki/Comparison_of_the_healthcare_s...

[2] https://www.buzzfeednews.com/article/rosalindadams/intake

ywvcbk 10 hours ago | root | parent | next |

> Our hospitals don't have shareholders that capture excess revenue as profit

Aren’t most hospitals in the US technically non-profit, though?

llm_nerd a day ago | root | parent | prev |

I will always choose properly funded universal healthcare over the US model, and my disagreement was with the claim that somehow the Canadian system wouldn't yield a reduction in resources because of some unique quality of universal/non-profit healthcare. Of course resources would be rebalanced if some part of healthcare could be done with less, and if the administration could cut budgets because a model lets them hit the same benchmarks with less, they absolutely, unequivocally will. And then they'll give themselves a fat bonus.

As to the mental health holds, here in Canada we have a problem with social workers encouraging difficult cases to consider medically assisted suicide, which is pretty disgusting. We have people dying on waiting lists. We have people having to go to the US to get basic imaging of probable cancer cases.

Universal healthcare is superior -- again assuming proper funding, which jurisdictions like Ontario are far, far short of -- but in the current state of the Canadian system, I would never imagine bragging about it online.

jmward01 a day ago | root | parent | prev |

I totally agree. The tie to whole patient outcome is stronger in that system. Still not perfect, but a lot more direct for sure. It may be an odd thing to say, but because of that there is an argument that the Canadian system is closer to a true free market healthcare system with the patient as the consumer than the US system.

doe_eyes a day ago | root | parent | prev | next |

I think you're missing an important part of the equation: it's outcome quality per amount paid. If you could have gotten 20% better results but it would mean tripling the costs of healthcare because we'd need to hire a lot more staff, perhaps we felt that was a bad deal.

If you can get 20% by paying... what, presumably <5% more for some ML tool that double-checks stuff and flags risky stuff... perhaps it's something we want to do.

jmward01 a day ago | root | parent | next |

No, my argument isn't that this wouldn't be used; it's that using it creates an overage in quality of care above 'good enough' for the same or similar cost. That will result in the most expensive resources being reduced until quality of care is back to 'good enough' at lower cost. It isn't a stretch to imagine that a tool like this would lead to a reduction in nursing staff, since it makes rounds more effective, so you no longer need as many people to do the same quality of job.

doe_eyes a day ago | root | parent |

But I think that's a wrong way to look at it. Or rather, it posits that we're at a point we truly consider good enough independent of cost.

It's entirely possible that we want better healthcare outcomes - all the historical trends point to that - but that we're more or less out of ideas how to get there on the cheap. This might be a new possibility.

In your model, why do we get improved, costlier insulin if the old thing was good enough? Because we actually want to pay more if it works better, and it doesn't mean we cut something else to make up for it. You just pay more in taxes in a subsidized model, or pay more at the pharmacy with private healthcare. There's a drug manufacturer profit motive in there, but it holds true in the added-cost ML scenario too.

jmward01 a day ago | root | parent |

I can agree that good enough is not tied to cost and that is likely unfortunate for the patient. It is instead tied to profit, for the company. If increasing the standard of care leads to more profit a rational company will do that. If it means lowering then they will do that. Unfortunately there aren't many actual direct ties between patient outcome and profit and often when they do exist they are negative for the patient. The classic example of this is the question of is it more profitable to cure or to manage a disease? I'd love it if whole life outcome was actually tied to profit in a way that was beneficial to the patient. That would mean a free market driven by the patient as the consumer could exist. But healthcare systems, especially in the US, generally aren't structured that way.

So, to answer your question about 'why do we get improved, costlier insulin if the old thing was good enough' it is because the healthcare system will make more money on it. If they take a % then they are incentivized to use a more expensive version and they can justify it with the word 'better' even if the person is actually worse off as their financial situation deteriorates and they and their families are forced to cut quality of life everywhere else. They put their line for good enough at the point that makes the most value for them, not the point that is best for the patient.

brudgers a day ago | root | parent | prev |

it's outcome quality per amount paid

Outcome is not one thing. The patient wants better health. The provider has an interest in profits. The government has an interest in optics…well anyone using “AI” does.

rkangel a day ago | root | parent | prev | next |

This question has an implicit assumption that you're talking about a US-style health system and the incentives that exist in a system of that structure.

This is exactly why a structure like the UK NHS which is going for "what's the most healthcare I can get for the country with a fixed pot of money" is a better setup.

For instance, in the UK the female contraceptive pill is free to whoever wants it. Because that is a whole lot cheaper than extra (unwanted) pregnancies. Similarly the NHS has spent money on reducing smoking because that's cheaper than dealing with the health effects.

theonemind a day ago | root | parent | next |

The early death of smokers tends to save a long, expensive period of end-of-life care. I believe smoking deaths reduce health care costs, ironically enough.

stackskipton a day ago | root | parent |

It does, there is even a study on it. https://pubmed.ncbi.nlm.nih.gov/9321534/

Smokers also help keep pension/social security costs down, since they pay into the system but don't collect from it, or do so for a much shorter period.

danielbln a day ago | root | parent |

That study is almost 30 years old; has there been more recent research? I also wonder if externalities like trauma to friends/family are factored in. I could imagine there are some transitive effects.

AStonesThrow a day ago | root | parent | prev |

> the female contraceptive pill is free to whoever wants it. Because that is a whole lot cheaper than extra (unwanted) pregnancies.

Abundant contraception encourages and promotes promiscuity

> the NHS has spent money on reducing smoking because that's cheaper than dealing with the health effects.

Reducing tobacco usage makes more room for nicotine OTC and vaping to replace it. Among other stimulants.

poincaredisk 19 hours ago | root | parent | next |

What do you recommend? Just letting people die of cancer and ignoring teenage pregnancies?

I don't think data supports your claim that tobacco use was merely redirected to other forms of nicotine. But even if it did, that's a success since they're less harmful.

ywvcbk 10 hours ago | root | parent | prev |

> Reducing tobacco usage makes more room for nicotine OTC and vaping to replace it. Among other stimulants.

And? Nicotine itself is not particularly dangerous and might even be neuroprotective if consumed in moderation. Vaping as a consumption method might be problematic, of course, but I don't think there is any research showing it to be even remotely as harmful.

TimPC a day ago | root | parent | prev | next |

I think in this case it's unlikely because I don't think the problems the tool solves correspond 1-1 with reduced staffing or other resources. The tool mostly seems to provide ongoing diagnosis at a level of detail the clinical team doesn't have regular bandwidth for (they might make one diagnosis of the patient per time they visit the patient rather than on an ongoing basis). It doesn't really reduce the amount of time staff can spend with patients. They can't get rid of doctor diagnosis entirely so they can't really reduce time per patient in any effective way.

wesselbindt a day ago | root | parent | prev | next |

Starving the beast is an ongoing program, the budget will be cut (or fixed, hence silently cut through inflation) either way. My hope is that improvements like this will stave off the harmful effects of the budget cuts.

formerly_proven a day ago | root | parent |

You realistically can’t starve the beast that is healthcare. The costs will go up disproportionately, and they do, in basically every advanced economy: https://en.m.wikipedia.org/wiki/Baumol_effect

wesselbindt a day ago | root | parent | next |

While I agree that you shouldn't, and that the end goal (privatized health care) is at the same time more costly and less efficient, that doesn't mean people can't or don't.

The Baumol effect you link to only shows that wage demands from health care workers go up in proportion to the wages of other workers. This means (roughly speaking), that reducing the health care budget will reduce the effectiveness of your health care system, because you're able to afford fewer people (I think this is the point you're making, please correct me if I'm wrong).

But that's entirely the point of starving the beast! By cutting funding to some federal department, that department becomes less effective, which makes people think that the government is incapable of running said department, and makes them open to the idea of privatizing the department. Et voila, you've opened up a whole new market that can be exploited for profits! The holy grail is opening up a market with inelastic demand such as health care, where people, no matter what you charge, will be forced to buy your product. This program has been incredibly successful in the US, which can be seen by comparing their health care system to that of other wealthy nations.

lotsofpulp a day ago | root | parent | prev |

You can reduce the spend per person by replacing more qualified workers with less qualified (cheaper) workers, and adding friction to the process of obtaining healthcare.

Increasing prior authorizations, increasing paperwork complexity, increasing hold times on the phone, obfuscation for who is responsible for what, constantly changing coverage so people have to change providers, and otherwise discourage them from seeking care.

MichaelZuo a day ago | root | parent | prev | next |

Why are you putting ‘just good enough’ in quotation marks?

Even declaring that is the case doesn’t change that it’s still clearly a personal judgement depending on the individual.

MSFT_Edging a day ago | root | parent |

It's not a personal judgement to say that private equity buying up hospitals has shifted the priorities of those hospitals from care to profit.

"Depending on the individual" here means: depending on whether you're a shareholder or the patient dying on the cot.

zooq_ai a day ago | root | parent | prev |

This scenario exists only in progressives' and HNers' heads. Companies make money, and capitalism works, by offering more services, not fewer. Are there companies that do short-term thinking? Yes. But overall, our standard of living and quality of services has always improved.

jmward01 a day ago | root | parent | next |

That was a rational capitalist argument. If a company has an opportunity to make money, they will. Any better than 'good enough' isn't rational and the people running that company should be fired. In the long term the entire industry will slowly adopt this and the standard of care may rise slightly as these gains are used for competitive advantage instead of pure profit but that will take a while at best and relies on a true free market, which healthcare definitely isn't.

SteveNuts a day ago | root | parent | next |

> relies on a true free market, which healthcare definitely isn't.

I think this is the part that people miss the most. When a purchasing decision is made based on something like "who has the best quality shoes in price range X", competition can occur.

When the buying decision is "will I live or die", there's not really any choice being made. Couple that with the complete lack of transparency about how much a given procedure will cost, and you've strayed so far from a free market that it's not even recognizable.

I mean, the hospital can't even remotely accurately tell you how much something will cost before you actually get a bill...

9dev a day ago | root | parent |

It’s not only about living or dying. Care work is fundamentally about treating humans with dignity and respect, and just shouldn’t be regarded as a free market playfield.

SteveNuts a day ago | root | parent |

Right, my point was more that healthcare isn't optional.

Folks think that removing the "profit motive" will somehow cripple the whole system, or hospitals will try to save any penny they can (spoiler alert: they already do)

throw10920 16 hours ago | root | parent | prev | next |

> If a company has an opportunity to make money, they will. Any better than 'good enough' isn't rational and the people running that company should be fired.

...where "good enough" is relative to a particular level of quality and price point, of course. Otherwise there wouldn't be different markets for rich and poor people. And this mechanic helps avoid a "collapse into mediocrity" that you'd otherwise get if all goods and services were offered at a single price point.

The real problem is what you identified at the end, that healthcare isn't anything like a free market. There's no buyer mobility, no transparency as to the level of the service you're getting - heck, you don't even know how much you're going to pay in advance, unlike almost every other industry.

Dalewyn a day ago | root | parent | prev |

>Any better than 'good enough' isn't rational and the people running that company should be fired.

This is kind of the reason the Japanese economy is stagnant and continues to fail in winning global marketshare. Businesses that are too good will fail or at least not compete with businesses that settle for being good enough.

formerly_proven a day ago | root | parent | prev |

IME anything that looks vaguely like a cost center often has something vaguely resembling an acceptable service/quality level and people typically aim to achieve that with the lowest cost. It’s not at all uncommon to cut budgets/headcount when that goal is exceeded noticeably.

tantalor a day ago | prev | next |

Unclear what "AI" brings to the table here. It sounds like traditional automation and monitoring could do the job. There's no mention of how the model works, or what kind of training is involved.

> white blood cell count was "really, really high"

You don't need AI for this.

I wish they would provide a more compelling example.

jncfhnb a day ago | root | parent | next |

It’s a regression model. You don’t “need” AI for anything. But using ML to identify thresholds for decision making is extremely useful.

I don’t like calling everything AI, but I’m even more irritated by people that don’t understand the value of simple ML models for low hanging fruit decisions like the one shown here

rscho a day ago | root | parent | prev |

I agree. That's a much more compelling use of statistics than the shitty neural nets and whatnot we are usually served in healthcare.

jampekka a day ago | root | parent | prev | next |

It is based on a relatively traditional time series regression method. "AI" is just the usual spin.

Unbeliever69 a day ago | root | parent |

In AI applications, especially those involving predictive modeling, MARS can be used to improve the accuracy of predictions. For example, MARS models are used in time series forecasting, financial predictions, environmental modeling, and other domains where relationships between inputs and outputs are complex and non-linear. By adding time-awareness, the model can handle time-based data more effectively.
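For a flavour of what that means in practice: MARS builds its fit out of "hinge" basis functions max(0, x - t), so the model is piecewise linear with knots at the t values. Here's a minimal numpy sketch with a hand-picked knot at 50 and made-up coefficients; real implementations (the open-source "Earth" packages, for instance) search for knots and terms automatically, and a time-aware variant adds more machinery on top.

```python
import numpy as np

def hinge(x, knot):
    """MARS basis function: zero below the knot, linear above it."""
    return np.maximum(0.0, x - knot)

# Toy 'risk vs. lab value' relationship: flat below the knot, rising above it.
x = np.linspace(0.0, 100.0, 201)
y = 0.5 + 0.03 * hinge(x, 50.0)

# Once MARS has chosen its knots, fitting is just least squares
# on the basis columns [1, hinge(x, 50)]:
X = np.column_stack([np.ones_like(x), hinge(x, 50.0)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)  # recovers the intercept (0.5) and the slope above the knot (0.03)
```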

delichon a day ago | root | parent | prev | next |

It's the difference between "give the programmer this medical report and have them parse out the white blood cell count" versus s/programmer/AI/. And the same every time the report changes in any way.

I've been that programmer more times than I can count. I'm much happier about being able to work on better problems instead than I am worried about AI taking away my rice bowl.

snapcaster a day ago | root | parent | prev | next |

I think there is some element of "technology laundering" here that I saw during the blockchain hype. Even if plain ol' monitoring and automation could solve your problem no executives want to back that. If you say it's adding AI, blockchain, etc. they get to feel like a visionary so they'll fund your project

byteknight a day ago | root | parent | prev | next |

I am beyond tired of "It made a decision based on an if-statement, that's AI!"

kenjackson a day ago | root | parent | next |

Most modern AI does even less. It simply flows values through a graph. No decision is ever made. The consumer of the network interprets the result and makes a decision.

patapong a day ago | root | parent | prev | next |

I am tired of people redefining AI to exclude fully viable and useful technologies in favour of the latest hype. AI should be a functional concept, not defined by technological choices.

charlie0 a day ago | root | parent | prev | next |

You do have to wonder though, if traditional automation could do the job, why wasn't that done already?

contagiousflow a day ago | root | parent | next |

I think the real question is why this is being reported on at all. There are always medical advancements; this one got chosen as a news story because "AI" in the headline gets clicks.

digging a day ago | root | parent |

This isn't just a small advancement, though. It's a simple tool, which isn't restricted to medical specialists, with a huge impact.

If a study found that letting cats roam hospital hallways reduced unexpected deaths by 26%, I think that would be reported, too.

contagiousflow a day ago | root | parent |

It's a 26% decrease in relative terms, but looking at the study shows that it is a 0.5% decrease in absolute terms (1.6% vs 2.1%). A 0.5% decrease is great and should be applauded, but I think the article framing of this being a breakthrough is misleading and even goes against the conclusions of the very paper it is reporting on.

charlie0 a day ago | root | parent |

Dang, the journos fooled me again with their creative headlines. You're right, this is basically a nothing-burger to generate clicks.

snarf21 7 hours ago | root | parent | prev | next |

Because healthcare (and banking, and ...) is horribly behind on tech. We have life-saving devices in hospitals still running Windows 95 as an OS. Also, the main problem in healthcare is misaligned incentives. As said elsewhere in this thread, this kind of tech will get adopted when it enables cost reductions larger than its costs.

rscho a day ago | root | parent | prev | next |

Because tech people don't understand how healthcare systems work, and reciprocally healthcare workers have neither the education nor the time to understand new tech. The result is what you get today: people from both sides shouting at deaf ears on the internet. Also, the usual corporate culture issues.

charlie0 a day ago | root | parent |

Hot take: If tech people who are used to working with complex systems can't understand it, maybe it's time to replace the whole thing. The healthcare system doesn't make sense at all and is that way because of regulation and a bunch of other crap we need to get rid of/refactor.

rscho a day ago | root | parent |

One thing tech people absolutely don't understand is how much 2024 medicine is know-how and not science. And that's not for lack of trying to make it science. There are certainly things that could be improved, even through trivial stats. But for the most part, our information retrieval capabilities are so bad that the ability to actually walk the corridors and see the patient IRL is not something current state-of-the-art AI can compensate for.

charlie0 21 hours ago | root | parent |

I wasn't referring to marginal gains through the use of AI or automation, I'm referring to re-building everything from scratch so that things are actually efficient and effective. ie, see what Tesla did to the car industry and SpaceX to the space industry. We need something like that for health.

apwell23 a day ago | root | parent | prev | next |

Because it's not useful?

"A difference-in-differences comparison between GIM and subspecialty units demonstrated no statistically significant difference in outcomes"

nisten a day ago | root | parent | prev | next |

Machine learning is extremely good at recognising patterns, and I'd much rather trust an LLM's spotting accuracy for an early warning system than the regex code of hospital IT workers.

artfulmink a day ago | root | parent | next |

Machine learning is indeed extremely good at pattern recognition, but I wouldn't trust an LLM to reliably identify patterns, especially in a medical context. As other commenters have said, this article is evidence of classical methods continuing to be useful.

thenaturalist a day ago | root | parent | prev | next |

This sentence contains two diametrically opposed hypotheses.

Putting "LLMs" and "accuracy" in one sentence, in the context of quantifying thresholds, is stunning.

LLMs don't have a concept of numerical accuracy.

criley2 a day ago | root | parent | prev |

This doesn't make sense on many levels. "Hospital IT" does not code the hospital EHR systems, just like the airport doesn't code flight management systems.

These are life-long software engineers, just like others reading this comment, using the best tools at their disposal to engineer lifesaving software. They're not using "regex" to develop algorithms for monitoring patients (???), and frankly that suggestion is so wild that one has to assume you don't know anything about algorithm design at all.

An LLM literally hallucinates incorrect answers by design and struggles to get extremely basic math and spelling correct.

You're welcome to put your literal life in the hands of a hallucinating English generator, but when it comes to healthcare, I want a "0% LLM" policy. LLMs will be the cheap things that offer substandard care to poor people, while the wealthy and elite enjoy personalized and human-centered care.

potato3732842 a day ago | root | parent | prev |

Knowing what I know about workplace dynamics in hospitals I'm gonna go out on a limb and say that the "new hotness" factor of the term "AI" probably does a lot of heavy lifting here when it comes to getting buy in from management and users.

Forgoing a decade of income to get some letters beside your name selects for people who don't take orders from Clippy unless you market it well.

bearjaws a day ago | prev | next |

This is a great example of "classic AI" being more than good enough.

Using AI to find patterns in patients and intervene was something I worked on in my last job in Specialty Pharma. There are many red flags on patients long before they even start treatment; sadly, income is one of the largest red flags here in the States.

We were able to perform interventions earlier and improve outcomes with a simple regression model that tried to determine the number of missed doses.

gyutff a day ago | prev | next |

In my experience the best thing to have in a hospital is an advocate.

If a loved one is in the hospital, stay with them as long as the hospital will allow you to.

geocrasher a day ago | root | parent | next |

^^^^^^^^^^^ THIS ^^^^^^^^^^^^

Medical professionals, mostly nurses, are spread extremely thin. They are so busy and/or jaded that they often neglect to show any compassion or empathy until they see somebody else doing it. Having a family member nearby also keeps them accountable.

I have seen it personally too many times.

resource_waste a day ago | root | parent | prev |

It's incredible that this is needed.

Medicine isn't science, and that's frightening.

The weirdest thing I've experienced as a patient is that Physicians will urge you against second opinions or having multiple doctors.

Hope telemedicine becomes more mainstream, I'd like to avoid US physicians as much as possible.

rscho a day ago | root | parent | next |

Medicine isn't science because science is not as advanced as many would think. The lack of workplace integration is also a big factor.

I don't think we discourage second opinions, except maybe in some for-profit structures. The bad idea is to have multiple people making decisions in parallel. I'm not in the US, though.

Regarding advocacy, I don't think it's so crazy. It's very good to have a valid interlocutor when the patient is diminished. Also, hospitals are big systems with limited personalization. If someone's there to call out the system when it's trying to shoehorn too hard, it's also very good.

ndarray a day ago | root | parent | prev | next |

Enjoy the privilege of seeing multiple doctors as long as you still can. With steady cost reduction (AI, automation, less effort per patient) and an increase in medical authoritarianism ("expert said so"), that privilege is on thin ice. In the UK it's already normal to have a single area-designated doctor you're allowed to go to, and that doctor is also a gatekeeper who refers you to specialists. Hope he likes you!

Beyond that, AI diagnosis would likely require an extensive medical online profile of you. Such e-med profiles obviously already exist in various countries, as opt-out features. In the name of cost reduction through automation, I'll go ahead and call it: these profiles will become mandatory over the next ten years. Either way, good luck getting a second opinion once a false diagnosis has ended up in your file, or once AI continuously misidentifies a pattern present there.

chaosist a day ago | root | parent | prev |

I was semi-retired two years ago and decided to do a LPN program to work part time, do something physical, something that felt like a moral win and good for society.

I would have had no problem intellectually getting through the program but quit after the first night in a hospital.

Anyone sitting at a desk cannot understand how tough and miserable a nursing job is. Everyone is basically miserable and stressed out. The work is completely thankless, disgusting and dangerous, with personal liability on the line if you make a mistake. Everything that we take for granted in an office setting just doesn't apply in a medical setting.

I eventually just went back to a bullshit project management job, for more money than a nurse of course. This is obviously part of the problem.

It is easy to complain about the system when it is someone else who has to help grandma to the bathroom. There is no easy solution for any of this given the demographics. It is basically a disaster.

larsiusprime a day ago | prev | next |

We have the term “GOFAI” to distinguish “modern” AI from the older stuff (big bag of if statements, behavior trees, etc.), but do we need a new term now to distinguish pre-LLM/diffusion models (neural networks and tree-based models)? Everyone thinks “ChatGPT” when they hear AI now, but surely this is something more like XGBoost or a neural network under the hood.

tensor a day ago | root | parent | next |

No, we don't use GOFAI, we call it machine learning. LLMs are a subset of the field, and if you want to refer to them just use the term LLM. We don't need new terms when we already have easy to use precise language.

Marketing will abuse any term they get their hands on, and certainly "AI" has been abused, but in the field it is usually the umbrella term for all areas of research into making "intelligent" behaviour. Be it expert systems, logic systems, machine learning, statistical machine learning, or otherwise.

marcosdumay a day ago | root | parent | prev | next |

Why all that need to distinguish it?

If you want the details, call it a regression model. If not, why insist on communicating the details?

swyx a day ago | root | parent | prev |

can you elaborate how XGBoost could be used for a time series type thing like Chartwatch seems to be doing?

SketchySeaBeast a day ago | prev | next |

Important to note that the timing of this means that it's dedicated, specific AI, not "throw a wrapper and a specific prompt in front of ChatGPT" AI. Of course it's all muddied now.

Tobani a day ago | root | parent |

Test results are already reported from testing equipment with a value and an expected range (to account for a specific machine/reagent's calibration). Notifying when a result is out of range hardly seems like AI, but it certainly might be marketed as such.

Maybe there is some nuance for things like a patient in for liver issues, where their liver enzymes are expected to be abnormal, but identifying when they are abnormal for them.

SketchySeaBeast a day ago | root | parent |

Yeah, I'm not sure how this qualifies as AI outside of marketing, but wanted to get ahead of the people whose opinions would be biased by the current en vogue LLMs.

delichon a day ago | prev | next |

There's a thriller plot hidden in here where the medicos ask an AI to reduce unexpected deaths so it manipulates both predictions and deaths to optimize the statistic. When they can manipulate the world we'll have to treat prompts as if they were wish fulfillment demands to a hostile djinn.

renonce a day ago | prev | next |

> That warning showed the patient's white blood cell count was "really, really high," recalled Bell, the clinical nurse educator for the hospital's general medicine program.

I’m not sure how an alarm for “high white cell count” should have had so much impact. Here in China once the doctor prescribes a finger blood test, we sample finger blood after lining up for 15 minutes, and the result is available within 30 minutes. The patient prints the results from a kiosk and any patient who cares enough about their own health will see the exceptionally high white cell count and request an urgent appointment with the doctor for diagnosis right away. Even in normal cases we usually have the doctor see the report within two hours. Why wait several hours?

> While the nursing team usually checked blood work around noon, the technology flagged incoming results several hours beforehand.

> But in health care, he stressed, these tools have immense potential to combat the staff shortages plaguing Canada's health-care system by supplementing traditional bedside care.

This sounds like the deaths prevented by this tech are caused by delays and staff shortage and what this tech does is to prioritize patients with serious issues? While I appreciate using new tools to cut deaths, it looks like the elephant in the room is staff shortage?

pknerd a day ago | prev | next |

Don't judge me. I am not an ML expert. I am just wondering how this is an AI or ML thing. Isn't it just matching the WBC count against a threshold: if it's above or below the range, generate an alert? Can any ML person tell me how this system actually works?

kenjackson a day ago | root | parent |

I don't know the details, but I suspect it's a bit more. It probably takes as input all of the factors over a time series and then determines, based on these inputs over time, that there is a higher likelihood of Y. When that likelihood reaches some threshold, it sends an alert to the nurse. It's almost certainly not as simple as "temperature at 105 -> alert" (although a temp of 105 would certainly signal a problem).
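In caricature, something like the sketch below. The weights, normal values, and threshold are entirely made up, and the real model is a regression spline rather than a weighted sum, but it shows why this is more than a single "temperature at 105 -> alert" rule:

```python
from collections import deque

# Hypothetical weights: how much each vital's relative deviation from normal
# contributes to the risk score. Entirely made up for illustration.
WEIGHTS = {"temp_c": 0.5, "heart_rate": 0.3, "wbc": 0.2}
NORMALS = {"temp_c": 37.0, "heart_rate": 75.0, "wbc": 7.0}
ALERT_THRESHOLD = 2.0

def risk_score(vitals):
    """Weighted sum of each vital's relative deviation from its normal value."""
    return sum(w * abs(vitals[k] - NORMALS[k]) / NORMALS[k] * 10
               for k, w in WEIGHTS.items())

def monitor(readings, window=3):
    """Alert when the average risk over the last `window` readings crosses the threshold."""
    recent = deque(maxlen=window)
    alerts = []
    for i, vitals in enumerate(readings):
        recent.append(risk_score(vitals))
        if sum(recent) / len(recent) > ALERT_THRESHOLD:
            alerts.append(i)
    return alerts

# A deteriorating patient: the alert fires only after sustained abnormality,
# not on the first bad reading, because the score is averaged over the window.
readings = [
    {"temp_c": 37.0, "heart_rate": 75.0, "wbc": 7.0},    # normal
    {"temp_c": 37.2, "heart_rate": 80.0, "wbc": 8.0},    # mildly off
    {"temp_c": 39.0, "heart_rate": 110.0, "wbc": 18.0},  # bad
    {"temp_c": 39.0, "heart_rate": 110.0, "wbc": 18.0},  # still bad -> alert
]
print(monitor(readings))
```

Swap the hand-tuned weighted sum for a fitted model and you get the general shape of what the paper describes.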

fabiospampinato a day ago | prev | next |

Closing hospitals would cut deaths in hospitals by 100%.

Like, I'm not sure what this measure means; it's not like 26% of the people who would have died in the hospital were made immortal or something.

magicmicah85 a day ago | prev | next |

Stuff like this is not exactly new, but it's great that it's achieving the desired outcomes. The company I work for developed a sepsis alert back in 2010 that helped inform clinicians of possible sepsis in patients by analyzing lab results. Lots of success stories, but of course false positives too. Tools like this are very useful when they are one of many factors driving a clinician's decision and not the only reason.

JumpCrisscross a day ago | prev | next |

“While the nursing team usually checked blood work around noon, the technology flagged incoming results several hours beforehand”

So the blood was collected and labs done but it wasn’t scheduled to be reviewed until later?

Seems like a win-win. For those saying you don’t need AI, the alternative would be either across-the-board thresholds for flags for each line item (too many false positives) or manually setting it for each patient (too intensive).
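For concreteness, the across-the-board alternative is basically a static lookup table like this (the ranges below are illustrative only; real reference ranges vary by lab, age, and sex):

```python
# Hypothetical across-the-board reference ranges, as a lab panel might
# define them. Illustrative values only.
REFERENCE_RANGES = {
    "wbc":        (4.0, 11.0),   # x10^9/L
    "hemoglobin": (120, 170),    # g/L
    "creatinine": (60, 110),     # umol/L
}

def flag_results(results):
    """Return the line items that fall outside their across-the-board range."""
    flags = {}
    for item, value in results.items():
        low, high = REFERENCE_RANGES[item]
        if not (low <= value <= high):
            flags[item] = "high" if value > high else "low"
    return flags

print(flag_results({"wbc": 18.5, "hemoglobin": 135, "creatinine": 95}))
# {'wbc': 'high'}
```

The same fixed cutoffs for every patient is exactly what produces the false-positive flood; the per-patient alternative means tuning that table by hand for each admission.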

rscho a day ago | root | parent |

Across the board thresholds is exactly what we usually have. I'm not so sure about the false positives being so high. I expect most of the effect to result from additional nurse whipping (sorry, 'targeted warning').

JumpCrisscross a day ago | root | parent |

> Across the board thresholds is exactly what we usually have. I'm not so sure about the false positives being so high

The article would have been stronger with those numbers. But I'm not convinced that an across-the-board high-WBC threshold for an average ER visitor would have been sensitive enough to trigger an alarm. The prior knowledge that it's a cat bite is important.

daft_pink a day ago | prev | next |

This is really awesome. As someone who has entered an emergency room in severe pain, I'm shocked at how long it takes to see a physician. I hope this system can monitor people waiting to be admitted as well.

Oarch a day ago | prev | next |

The real weasel word here is "unexpected". If the AI is going around terminating patients and this counts as expected behaviour... technically correct!

inglor_cz a day ago | root | parent |

I don't think it is a weasel word. It is just a qualification.

Nobody really expects AI to save terminal cancer patients or 90-y.o. cardiacs. Unexpected deaths, on the other hand, are really nasty, both for the next of kin and the doctors themselves. If an apparently viable patient suddenly drops dead, everyone asks what went wrong.

Reducing such deaths by one fourth is a good job.

loeg a day ago | prev | next |

You have two levers for reducing unexpected deaths, right? Hopefully this didn't increase the number of expected deaths to substitute.

apwell23 a day ago | prev | next |

" A difference-in-differences comparison between GIM and subspecialty units demonstrated no statistically significant difference in outcomes"

botanical a day ago | prev | next |

I really dislike how AI is used for everything. To me, AI means a dumb LLM spewing out half-truth and whole lies.

But in this instance, it's machine learning in the form of regression analysis: a multivariate adaptive regression spline (MARS).

AStonesThrow 11 hours ago | prev | next |

I am thankful that we can expect fewer unexpected events now. Today I was saved from death at least five times because an automated traffic signal alerted me to machines hurtling dangerously in the wrong direction. I was able to push commands that halted my conveyance until the risk of death had plummeted.

I am also reminded of Dilbert's PHB decreeing that all future unplanned outages must be announced at least 48 hours in advance.

resource_waste a day ago | prev | next |

These studies are the only way AI will be implemented in medicine.

This stuff will not happen because it's good technology that can save lives. Rather, it will happen because of public pressure from AI performing better at saving lives than humans.

The anecdotes of 'oh, it was wrong that one time' will pale in comparison to the successes. Maybe insurance companies will be the winners and become our advocates. I've already seen medical professionals use 'that one time it was wrong' as a way to ignore technology.

wolfi1 a day ago | prev | next |

Imagine how much they could reduce deaths if even the barest safety rules regarding hygiene were met.

lambdadelirium a day ago | prev |

"While the nursing team usually checked blood work around noon, the technology flagged incoming results several hours beforehand"

So they're understaffed and could just look into the results more often. Oh wow, what a use of computational power.

tensor a day ago | root | parent | prev | next |

Continuous monitoring will always be better than manual checks. Also this is not an LLM and uses less power than your email software.

falcor84 a day ago | root | parent | prev | next |

That is literally what AI means - the ability of a computer to perform a task that would otherwise require human reasoning.

syndicatedjelly a day ago | root | parent | next |

What is an example of a computer that doesn't employ AI, by this definition?

falcor84 a day ago | root | parent |

The way I see it, AI is about the tasks a system handles, rather than the computer itself. I would say that AI encompasses the set of tasks where the computer system is in some way better than its user not just by having access to more computational resources, but by actually "reasoning" better. As a simple example, I'd argue that a basic spell-checker which works via dictionary lookup doesn't employ AI, but an extensive modern grammar checker does, as it can "reason" about the English language better than I (and most people) can.

Another way of thinking about it is that non-AI systems must always perform a task correctly, or we'd say that they have a bug. Conversely, an AI system performs tasks in situations where there is some measure of uncertainty or subjectivity, and they might arrive at a way of performing the task that is suboptimal, or even entirely inappropriate, without being buggy - for these systems we'd say that they did their best given the circumstances.

In the case of this hospital study, if they had used a simple "beep if measure goes above X" system, that wouldn't have been AI, but they used an ML model which integrates many interdependent factors over time [0] and while it has a significant ratio of false positive triggers (and as such is often wrong), it applies what would absolutely count as "reasoning" in trained human nurses.

[0] "The deterioration prediction model was a time-aware multivariate adaptive regression spline (MARS) model (Appendix, Sections 1–4). The model is made time-aware by incorporating risk score predictions from earlier in the encounter, the change in risk score since the previous assessment, and summaries of changes in the risk score over time." https://www.cmaj.ca/content/196/30/E1027

unsupp0rted a day ago | root | parent | prev | next |

In Canada they're dangerously understaffed. And the staff are burnt out and lack qualities like attention to detail, common sense and basic human empathy. Or at least I hope it's because they're burnt out, and not because the hospital regularly hires amoral robots, which is also a possibility.

Der_Einzige a day ago | root | parent |

Either you go with America and get bankrupted by medical care if you don’t have excellent insurance, or you go to Canada or Europe where the average doctor is paid 1/3rd as much and there are significant waiting periods for non immediately necessary procedures. Heads I lose, tails you win.

People wonder why folks hate doctors or get “white coat” syndrome. Same shit from dentists wondering why everyone hates them.

diggan a day ago | root | parent | next |

> Europe where the average doctor is paid 1/3rd as much and there are significant waiting periods for non immediately necessary procedures

I'm not sure what exactly you evaluated this on (personal experience, I suppose?), but this hasn't been true for me in Spain with either public healthcare or private. I don't remember it being like that in Sweden (public healthcare) either, and I'm sure there are plenty of other European countries where the waiting time isn't significant and you also get great care.

Some countries seems to just have figured out how to make healthcare costs manageable, with great care, well educated doctors/nurses and also relatively low waiting times. I'd probably still say they're underpaid, because they're literally saving people's lives, but I guess that's true for everywhere, even the US.

BeetleB a day ago | root | parent | prev | next |

Canadian doctors earn decently well:

https://www.dr-bill.ca/blog/career-advice/doctor-salary-us-v...

Sure, there's the exchange rate, but it's still quite good. The disparity for tech workers is much greater.

cmrdporcupine a day ago | root | parent |

I think doctors here have concerns more about regulation/paperwork and the overhead that comes with it than about total compensation. Family doctors, anyway.

That and the schools simply won't graduate enough of them. Doctor shortage is a serious problem. But so is nurse shortage post-COVID.

System here seems to be in crisis. Combination of many factors.

But all my experiences in the last few years have been... very positive? Excellent recent care for my teen at McMaster Children's Hospital. Family doctor 5 minute drive away, can get appointments quite quickly. So, yeah, it's regional and situation dependent.

rscho a day ago | root | parent | prev |

The English NHS is not Europe in general. Some western European nations still maintain relatively good quality care without ruining themselves, although admittedly this is getting less and less common.

taeric a day ago | root | parent | prev | next |

I mean... this is a perfectly legitimate use of computational power? What is the downside?

I suppose there is a risk they will downsize more. But this is like thinking cameras were bad because they reduced the number of security guards needed to secure an area. No?

rscho a day ago | root | parent |

Well, calling this AI seems like a long shot. What seems causal here is 'warn early', and indeed I'm sure it would work even better if you outputted a warning displayed full screen on the nurse's phone. It's quite possible you could have the same effect with trivial thresholds instead of a stat model. Still, I'd say it's indeed a good use of computers in general to produce targeted warnings.

taeric a day ago | root | parent |

Oh, fair. To an extent, at least. If they had said these were ML processed samples, would you balk as hard at it?

That is, I'm willing to chalk up use of "AI" as a descriptor being an editorial choice. Agreed that it isn't impressive just because it is AI, but it does still seem to be a good use of computational power.

rscho a day ago | root | parent |

Why 'balking hard'? Just saying that this is trivial use of statistics, but for once it's intelligent use. Still, if the false positive rate is too high, the effect won't last long.

taeric a day ago | root | parent |

Apologies, I thought you were the GP.

I took your tone to be a bit of push back on this being a good use of compute.

freilanzer a day ago | root | parent | prev | next |

> So they're understaffed and could just look into the results more, oh wow what an use of computational power.

How is this not a good use of compute?

readthenotes1 a day ago | root | parent | prev |

We could view the patient as a process under control, with all the sensors we have, and simply apply process control technology to that without waiting for a human to interpret the data many hours after it's relevant.
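The simplest version of that idea is a textbook Shewhart-style control rule. To be clear, this is a toy illustration of process control, not something any hospital monitoring system actually runs as-is:

```python
import statistics

def three_sigma_alerts(baseline, stream):
    """Classic Shewhart-style control rule: alert when a new reading falls
    more than three standard deviations from the patient's own baseline."""
    mean = statistics.mean(baseline)
    sd = statistics.stdev(baseline)
    return [i for i, x in enumerate(stream)
            if abs(x - mean) > 3 * sd]

# Toy example: a heart-rate baseline around 72 bpm, then a sudden excursion.
baseline = [70, 72, 71, 73, 72, 74, 71, 72]
stream = [73, 72, 95, 74]
print(three_sigma_alerts(baseline, stream))
```

The appeal is that the limits come from the patient's own history rather than population-wide ranges; the catch, as noted elsewhere in the thread, is that noisy hospital measurements make the baseline itself unreliable.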

rscho a day ago | root | parent |

Visit a hospital and see how measurement samples are taken. Your idea might be applicable with 10x the budget.

readthenotes1 a day ago | root | parent |

I've been there. They take the physical samples to minimize patient sleep.

Otherwise, you're hooked up to monitoring equipment...

rscho a day ago | root | parent |

... that beeps and boops nonstop. Yes, but on top of that the error rate in measurements is staggering. It's difficult even to get reliable vital signs. Also, much of the stuff we're measuring is of unclear purpose. We don't have a solid understanding of what the measurements actually mean for the patient.

readthenotes1 3 hours ago | root | parent |

I talked with some doctors. They said they already have alert fatigue, mostly from so many false alarms built into the systems they already have.