https://www.washingtonpost.com/technology/2024/12/25/ai-health-care-medical-doctors/
The harm of generative AI — notorious for “hallucinations” — producing bad information is often difficult to see, but in medicine the danger is stark. One study found that out of 382 test medical questions, ChatGPT gave an “inappropriate” answer on 20 percent. A doctor using the AI to draft communications could inadvertently pass along bad advice.
Was at a lecture on this last week.
AI is not yet ready for primetime.
Like a slacking student, when it doesn't know something, it makes up facts and references. Someone who doesn't know the literature would be fooled into thinking its "imagination" is reliable information.
At this point, the consensus is that AI is not yet ready for prime time. For some, it's a time saver: telling the AI to generate wound-care instructions takes less time to review than typing them up oneself.
I tried an AI clinical scribe in my practice to record clinic notes while I just spoke to the patient. It worked well for counselling sessions, but it was limited for physical complaints: if I didn't recite the results of the physical exam in medicalese, it didn't know how to record them.
The time will come when it's ready, but we're not there now.