Let’s imagine the possibilities and theoretically demo the results based on current knowledge:
1. Yes, AI made the process fast and the patient did not die unnecessarily.
2. Same, but the patient died well.
3. Same, but the patient died.
4. Same as 1, 2, or 3, but AI made things slower.
Demo:
Pharmacy: Patient requires amoxicillin for a painful infection of the ear while allergic to penicillin:
AI: Sure! You will find penicillin in Aisle 23, box number 5.
Pharmacy: the patient needs amoxicillin actually.
AI: Sure! The patient must have an allergic reaction to more commonly used anti-inflammatory medications.
Pharmacy: actually amoxicillin is more of an antibiotic, where can I find it?
AI: Sure! While you are correct that amoxicillin is an antibiotic, it is a well-studied result that after an infection, inflammation is reduced. You can find the inflammation throughout the body, including the region where the infection is located.
Pharmacy: amoxicillin location!
AI: Sure! Amoxicillin was invented in Beecham Research Laboratories.
What baffles me is why would you use an LLM when what you need is a digital inventory manager. Not bashing your argument’s merits. On the contrary, I think it depicts very well how people will shove AI-marketed shit on already-solved problems and make everyone’s lives worse because it’s ✨modern✨.
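To the "digital inventory manager" point: the lookup the demo above asks for is a solved problem that needs no language model. A minimal sketch, with drug names and locations invented purely for illustration:

```python
# Hypothetical sketch of a deterministic inventory lookup -- the
# "already-solved" alternative to asking an LLM where a drug is.
# All drug names and locations below are made up for illustration.

inventory = {
    "amoxicillin": {"aisle": 12, "box": 3},
    "penicillin": {"aisle": 23, "box": 5},
}

def locate(drug: str) -> str:
    """Return the shelf location for a drug, or a not-stocked notice."""
    entry = inventory.get(drug.lower())
    if entry is None:
        return f"{drug}: not stocked"
    return f"{drug}: aisle {entry['aisle']}, box {entry['box']}"

print(locate("amoxicillin"))  # amoxicillin: aisle 12, box 3
```

Same question, same answer, every time, and no detour through Beecham Research Laboratories.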
It’s the same crap like with blockchain.
People have no idea how sophisticated modern IT systems already are, and if you glue fancy words on solved problems, people will cheer you for being super innovative.
Ugh, blockchain. During the pandemic, I had absolutely no work to do so my boss asked me to make a presentation for him to present on the merits of blockchain. When my response was that it’s overhyped bullshit, he was not thrilled.
I made the requested presentation, but it made me feel dirty, so I alt-texted every slide's graphics to include the counterpoint to the bullshit benefits being presented.
My company tried to jump onto the bandwagon in 2018 or so, but it fizzled out very quickly. Fortunately.
It’s not if you actually know what it is and what it’s for… A trustless public ledger.
My question is: is it being used for inventory management? Or is it being used to feed in the entire patient file to make sure the pharmacist doesn't make a mistake, double-checking for conflicts in prescription interactions and the like?
Should it be relied on as the only check? No. Is it nice to have another set of eyes on every task? Probably? Could this be solved by hiring more pharmacy techs and by an education system that actually develops the workforce's technical skills instead of chasing profit margins for investors? Yes.
Idk. Just sounds like shitty companies being shitty companies all the way down.
Further to this, to err is human, so why would you start to rely on something that's confidently incorrect so often?
It’s only a matter of time before this misleads someone terribly.
Actual pharmacist here, working in pharmacy IT.
Unlike other industries, pharmacy is not particularly thrilled about or interested in AI. In fact, my hospital explicitly blocks access to all LLMs.
I was actually kind of hoping to see what Microsoft is claiming here, and just walked away from this post more confused.
I think it’s in reference to this: https://news.microsoft.com/source/asia/features/taiwan-hospital-deploys-ai-copilots-to-lighten-workloads-for-doctors-nurses-and-pharmacists/
Looks like the benefit/headline comes from use of the entire software suite that provides access to a patient’s chart/medical history including checks for interactions/allergies. Most of that has nothing to do with AI but since it has a feature that generates a summary via a language model the whole thing is marketed as an AI Copilot.
Good thing, you don’t want medical advice from an LLM
You’re not doing much better taking medical advice from a doctor either, seeing how often they’re wrong.
That’s fair, but they tend to be more right than an LLM :P
I was trying to find an article I read about a year ago, about an experiment where an AI assisted a doctor by suggesting questions and possible diagnoses for the doctor to look into.
IIRC the result was both faster and more accurate diagnoses. Too bad I can’t find it again now :(
Is “pharmacists seeing more patients” really a measure of something good? I’m a non-native English speaker so cut me some slack, but all I can imagine is longer queues in the pharmacy and more tired pharmacists (and people who now need to wait in the queue).
“Pharmacists seeing more patients” implies that the queue moves quicker.
A pharmacist only has so much time in their shift, so being able to use that time more effectively (see more people) would be a good thing.

That’s a noble goal, but does adding more people help the (long-term only, please) effectiveness? At what point does it start hindering it?
I would assume that someone like a pharmacist has to be focused all the time; the stakes are high…
Do we have precise data about how the physiological state of a pharmacist changes through the shift? Do we know whether or not the pauses between people – which we might or might not have considered wasted time – are actually essential for their ability to stay focused and reliable? (Is the answer the same for all of them?) Or maybe they could actually still use part of that time in a productive way, right? Also, why is there a lack of people in the first place?
Focusing solely on adding more people to the equation seems to neglect factors like this. This tells me that whoever this factoid is trying to impress is not someone who I would want to trust with managing a pharmacy (or anything except maybe some production line) in the first place.
I read it as AI somehow making more people sick therefore more of them needing to go see pharmacists, therefore pharmacists seeing more patients
That’s a more realistic take. I for one would want the pharmacist to get AI help, that’s fine. But not start taking double the patients. There’s a people interaction aspect to this too. It’s health care not care for animals to get them ready for tomorrow’s dinner. But seriously don’t eat animals, they got feelings too.
I’ve mostly found that smart alerts just overreact to everything and result in alarm fatigue, but one of the better features EPIC implemented was actually letting clinicians (like nurses and doctors) rate the alerts and comment on why the alert was or wasn’t helpful, so we can actually help train the algorithm even for facility-specific policies.
So for instance one thing I rated that actually turned out really well was we were getting suicide watch alerts on pretty much all our patients and told we needed to get a suicide sitter order because their CSSRS scores were high (depression screening “quiz”). I work in inpatient psychiatry. Not only are half my patients suicidal but a) I already know and b) our environment is specifically designed to manage what would be low-moderate suicide risk on other units by making most of the implements restricted or completely unavailable. So I rated that alert poorly every time I saw it (which was every time I opened each patient’s chart for the first time that shift then every 4 hours after; it was infuriating) and specified that that particular warning needed to not show for our specific unit. After the next update I never saw it again!
So AI and other “smart” clinical tools can work, but they need frequent and high quality input from the people actually using them (and the quality is important, most of my coworkers didn’t even know the feature existed, let alone that they would need to coherently comment a reason for their input to be actionable).
Listening to employees when making decisions, what a concept! It’s a shame many places don’t do that.
Even if this were true, did the pharmacists get a raise? Are they making more money? Or are they just seeing more patients (doing the extra emotional and mental labor that entails) and paying less attention to each one while Safeway and Walgreens pocket any increased revenue?
If anything, their tech hours got reduced.
I’ll take the non-AI using pharmacist for the win. Thank you very much.
Expert Systems are great for pharmacies, not the bullshit generators currently labeled as “AI.”
Suggestion: BS from MS about AI helping a pharmacist filling Rx
They are using AI to help the pharmacist decrypt the doctor’s writing.
That’s actually not a terrible idea.
The pic being blurred and all, I thought it was going to be some dad joke around “pharmacist can see more patients”.
“Generate me 4096 images of pharmacy patients!”
I second the comment about this being a reason to reduce technician hours. I worked at the busiest store in my district for the last 15 years of my career. We went from 3 pharmacists with several hours of overlap on weekdays, down to 2 pharmacists with no overlap. Tech hours were once high enough to have 5 technicians on between 10 and 6, down to only having 5 total on staff. We went from a 24-hour location down to being open only 11.5 hours a day. We were one block up from a Walgreens and one block down from a RiteAid that both ended up closing, and we got most of their customers, who walked over. We had 2 major exoduses of staff and lost a good number of long-time patients in the enshittification.
Even in a world where some new AI model could improve pharmacist throughput, it doesn’t compare to the skeleton crewing of corporate pharmacy bottom-line-go-up.
And because their LLM generated advice to people is bound to kill some of them, they can ‘see’ even more of them!
I am sure that AI leads to them seeing more people that need help
This post could have been titled “BS from MS about AI helping an MD”
My disappointment is immeasurable and my day is ruined.