Is AI good or bad for healthcare?

  • August 30, 2024
  • Steve Rogerson

Steve Rogerson finds that Sarah Worthy from DoorSpace has some serious worries about how artificial intelligence is being used in healthcare.

Sarah Worthy, CEO of DoorSpace.

An area that excites me about the growth of IoT and artificial intelligence (AI) is healthcare. Suddenly we have far more data about our bodies: we wear devices on our wrists and elsewhere that track a wealth of information about our activity while monitoring our vital functions. And now doctors are starting to use AI to help them diagnose problems using these data. This is all good. Correct?

I did think so until I chatted this week with Sarah Worthy, CEO of DoorSpace (doorspaceinc.com), a Texas-based company that specialises in employee relationship management systems. She has a very different view and even regards AI in healthcare as potentially evil.

Evil? Now that is a big jump, so let me explain. But before I do, I should stress that Sarah is very much looking at this from an American perspective, where healthcare and finance are linked in ways very different from other parts of the world, and there lies the problem. She sees the way AI and machine learning are being used in healthcare as more about making profit than keeping people healthy.

“We are seeing AI that was designed for financial applications being used for the clinical side,” she told me this week. “It is putting money first. Profit is all the technology focuses on, forgetting people.”

These systems, she said, were about getting the expensive patients out of the system more quickly and getting a larger number of patients through the door so profits increased.

“We have a real problem in the USA in our healthcare system,” she said. “It is using AI for financial gain and it is causing patients harm. Insurers are using AI to put pressure to deny patient care when the doctor is recommending it. People who need healthcare can’t get it because AI is making the decisions for them. AI can be incredibly dangerous.”

But surely AI can be a good thing if it is used as a tool to help doctors diagnose problems. You feed in the symptoms, and AI suggests possible causes, ones maybe the doctor had not considered. Sarah said no. She quoted a survey of 150 cases in which AI got the diagnosis wrong half the time. Another but: we are not talking about AI getting it right or wrong, but about it making suggestions to the doctor.

Here again Sarah is not convinced. She brought up the old medical adage that when doctors hear hoofbeats they should think horses, not zebras. In other words, look for the simplest explanation for the symptoms rather than for more exotic diseases.

Now, I get that argument if we are talking about a doctor working without AI. And we have all experienced this. The doctor will look for the more common reasons for a problem, and that makes sense. However, in doing that, they can miss, sometimes until too late, the rare cases when it is something unusual. AI can help here by putting the unusual cause on the table along with other reasons, so at least that is in the mix. In this case I think Sarah has it wrong. I want my doctor to consider all possibilities when I am ill, and AI can help doctors do that without it taking up a vast amount of time.

I agree AI is not perfect at the moment when it comes to diagnosis, and we are a long way from the holographic doctor in Star Trek. But surely an intelligent health system that can make diagnoses more accurately than human doctors is a reasonable goal, and we are never going to get there unless we let doctors use the technology as a tool today so it can improve over time.

I do agree with Sarah when she says an AI that puts profits before health is not what is needed, and when used in that way could be described, maybe, as evil. I also agree with her that using AI to train doctors as a replacement for having an experienced mentor is not a good plan, and she said that was happening in some hospitals.

“We are not ready for it,” she said. “We need to slow down a little. Healthcare is not a place where you can break things. We need to slow down and make sure we know what we are doing.”

She said it was better to let AI handle the work that doesn’t directly impact patient care, and even here she said it was important that hospitals had proper data handling capabilities so the AI was analysing accurate information, including the records of the people working in different departments. An example would be if two departments were carrying out similar procedures and one was having worse outcomes than the other. An AI with all the correct data, including about the people, might spot, say, that the levels of experience were different, and that could be addressed.
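To make that concrete, here is a minimal sketch of the kind of comparison she describes, written in Python with pandas. The data, column names and departments are all invented for illustration; this is not anything DoorSpace has built, just the rough shape of the analysis.

```python
# Hypothetical sketch: compare outcomes and staff experience across two
# departments performing similar procedures. All data here are made up.
import pandas as pd

# Synthetic outcome records: one row per procedure (1 = adverse outcome)
outcomes = pd.DataFrame({
    "department": ["A"] * 4 + ["B"] * 4,
    "complication": [0, 0, 1, 0, 1, 1, 0, 1],
})

# Synthetic staffing records: years of experience per clinician
staff = pd.DataFrame({
    "department": ["A", "A", "B", "B"],
    "years_experience": [12, 9, 3, 2],
})

# Put complication rate and average experience side by side per department
summary = (
    outcomes.groupby("department")["complication"].mean()
    .rename("complication_rate")
    .to_frame()
    .join(staff.groupby("department")["years_experience"].mean()
          .rename("avg_experience"))
)
print(summary)
# If the department with the higher complication rate also has markedly
# lower average experience, that is a lead worth investigating, not proof.
```

The point of such a join is simply that the people data have to be in the same, accurate data stream as the clinical data before any pattern like this can be spotted.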

She also said AI could help doctors sift through the vast numbers of medical devices and drugs the FDA approves each year to find the ones that were relevant to their specialities.

“Physicians spend hours of their pyjama time studying these,” she said. “AI would be fantastic for that.”

Also, even though electronic health records have improved data handling immensely, there are still problems with a patient’s data being split between different systems. AI, she said, needed a single data stream that brought all those data together.

“You can’t feed AI pieces of paper,” she said. “You need data management in place. We need to get organised and work out what we need to solve, and this should be away from the financial goals. The goal of healthcare is not to make money, and any AI that disagrees is failing all of us. We need to demand companies build technology that serves us and not the stock price.”

Few would disagree with that, but my worry here is that if we start throwing around words such as “evil” when talking about any technology, we can lose sight of why we developed the technology in the first place. No technology is evil; it is up to us how we use it, and I really do look forward to seeing AI bring great advances in patient care.