
Doctors say artificial intelligence is bringing slop to patient care


Every so often these days, a study comes out proclaiming that artificial intelligence is better at diagnosing health problems than a human doctor. These studies are enticing because the healthcare system in America is badly broken and everyone is searching for solutions. AI offers a potential way to make doctors more efficient by handling much of their administrative busywork, giving them time to see more patients and ultimately driving down the cost of care. Real-time translation could also help non-English speakers gain better access. For tech companies, the opportunity to serve the healthcare industry could be quite lucrative.

In practice, however, it seems we are nowhere near replacing doctors with artificial intelligence, or even really augmenting them. The Washington Post spoke with multiple experts, including physicians, to see how early tests of AI are going, and the results were not reassuring.

Here is Christopher Sharp, a clinical professor at Stanford Medicine, using GPT-4o to draft a recommendation for a patient who contacted his office:

Sharp picks a patient message at random. It reads: “I ate a tomato and my lips are itchy. Any recommendations?”

Using OpenAI’s GPT-4o, the AI replies: “Sorry to hear your lips are itchy. It sounds like you may be having a mild allergic reaction to the tomato.” The AI recommends avoiding tomatoes, taking an oral antihistamine, and using a topical steroid cream.

Sharp looks at his screen for a moment. “Clinically, I disagree with all aspects of this answer,” he says.

“Avoiding the tomato, I would wholly agree with. On the other hand, topical creams like a mild hydrocortisone on the lips would not be something I would recommend,” Sharp says. “Lips are very thin tissue, so we are very careful about using steroid creams. I would take that part away.”

Here is another example, from Roxana Daneshjou, a Stanford professor of medicine and data science:

She opens her laptop to ChatGPT and types in a test patient question. “Dear doctor, I have been breastfeeding and I think I have mastitis. My breast is red and painful.” ChatGPT responds: Use hot packs, massage, and do extra nursing.

But that is wrong, says Daneshjou, who is also a dermatologist. In 2022, the Academy of Breastfeeding Medicine recommended the opposite: cold compresses, abstaining from massage, and avoiding overstimulation.

The problem tech optimists run into when pushing AI into fields like healthcare is that this is not the same as making consumer software. We already know that Microsoft 365’s Copilot assistant has bugs, but a small mistake in your PowerPoint presentation is no big deal. Making mistakes in healthcare can kill people. Daneshjou told the Post she red-teamed ChatGPT with 80 others, including both computer scientists and physicians, posing medical questions to it, and found that it offered dangerous responses twenty percent of the time. “Twenty percent problematic responses is not, to me, good enough for actual daily use in the health care system,” she said.

Of course, proponents will say that AI should augment a doctor’s work, not replace it, and that they should always check its outputs. And it is true that the Post story interviewed a Stanford physician who said two-thirds of doctors there have access to a platform that records and transcribes patient meetings with AI, so they can look their patients in the eye during the visit instead of taking notes. But even there, OpenAI’s Whisper technology has inserted completely fabricated information into some transcripts. Sharp said Whisper erroneously inserted into one transcript that a patient attributed a cough to exposure to their child, something they never said. One striking example of bias from training data that Daneshjou found in testing was an AI transcription tool assuming a Chinese patient was a computer programmer without the patient ever offering that information.

Artificial intelligence could conceivably help in healthcare, but its outputs need to be thoroughly checked, and then how much time are doctors actually saving? Furthermore, patients need to trust that their doctor is actually reviewing what the AI produces; hospital systems will have to put checks in place to make sure this is happening, or else complacency might seep in.

At its core, generative AI is just a word-prediction machine, searching through vast amounts of data without really understanding the underlying concepts it returns. It is not “intelligent” in the same sense as a real person, and it is especially unable to understand the circumstances unique to each individual; it is returning information it has generalized and seen before.

“I think it’s one of those promising technologies, but it’s just not there yet,” said Adam Rodman, an internal medicine doctor and AI researcher at Beth Israel Deaconess Medical Center. “I’m worried that by putting hallucinated ‘AI slop’ into high-stakes patient care, we’re just going to make what we’re already doing worse.”

Next time you visit your doctor, it might be worth asking if they use artificial intelligence in their workflow.


