WellSky CEO Bill Miller: Exercise Caution, Responsibility with AI in Hospice

Many expect AI to revolutionize health care, speeding access to care, improving diagnosis and prognosis, enhancing efficiency and achieving other benefits. However, providers need to see through the hype and ask the hard questions.

This is according to Bill Miller, CEO of the health care tech company WellSky. Kansas-based WellSky provides software and analytics to more than 20,000 client sites, including hospital systems, blood banks, cell therapy labs, home health and hospice and other post-acute providers, government agencies and human services organizations.

As AI has proliferated in health care, the technology has spurred concerns that it could displace workers or introduce forms of bias into algorithms and processes. Rather than throwing the baby out with the bathwater, providers need to ask the right questions about how to mitigate the risks and maximize the returns, according to Miller.


Hospice News sat down with Miller to discuss current perspectives on AI, its potential benefits and possible risks.

WellSky CEO Bill Miller

Are people expecting more of AI than the technology is currently capable of doing effectively?

I’ve lived through IBM Watson and other sorts of technological breakthroughs that were supposed to change the way we did diagnosis and change the way we build [electronic medical records]. I think most people who have pulled away from those trends would say, “There was good, incremental improvement, but there was not this sea change of analytics, or sea change of the way that, fundamentally, the health care system was architected digitally or otherwise.”


AI has a ton of promise. Like the beginning of most hype cycles, imaginations have run wild, and some startups have done some really creative things that may or may not be scalable in the long term. There’s an adoption curve that has to be addressed in post-acute care that’s different from other industries in general, let alone in the specific other types of health care.

Where we’re exercising responsibility and caution is when we start thinking about AI jumping into the diagnosis game, or somehow replacing the caregiver. We think of it more as how you could enhance the caregiver, keep the human in the loop. If we can help caregivers arrive at better outcomes for their patients by using AI tools to assist them, then we’ll do that.

That also explains why we picked Google (NASDAQ: GOOG) as our partner, because I think Google shares the same transparency approach to how these large language models are built. It doesn’t mean they haven’t made mistakes. It doesn’t mean that we are immune from any mistakes, but we have liked their philosophical approach.

Do you have any updates on how WellSky’s work with Google is proceeding, or the next steps for that relationship?

While, obviously, Google is gigantic, I think we’re one of their bigger partners in health care, particularly in the post-acute space, so it’s a part of the market they’re learning a lot about.

We’re a good partner in teaching the context of the end markets and what’s happening in the post-acute space. And on the other hand, you know, they develop code and tools at breathtaking speed, and I think they’ve upped our game and given us powerful tools. They’ve given us the ability in labs and otherwise to collaborate and build things together, which has collectively sped both of us up.

They’ve done a good job economically of keeping down the cost of these tools. Because that is the one thing nobody talks about: What are these tools going to cost, and what is going to be their return?

It’s still really early. I would say that the hype around AI has started to soften a little bit anyway, and health care will probably have the same reputation of being the last bastion of innovation. But I think we need to be careful. There is a modicum of caution and responsibility that has to be in all of our thinking.

There’s been a lot of discussion, as you’ve noted, about the benefits of AI, or at least the potential benefits of AI. Are there also some risks associated with relying on AI?

At the end of the day, these are large language models built by humans who have trained these tools to read these libraries. There have already been examples of unintended bias in some of those models.

I think in many instances bias is unintended, but it is a risk. I do think there’s a lot of promise for efficiency, and there are some people who interpret that as taking their jobs away. And we do hear that a little bit. But our industry is sitting on a workforce shortage like it has never seen. So all of our clients are excited about the idea of using AI to speed up processes, but some are fearful that it is displacing workers.

Right now, we can use all the help we can get to make the workers we have as efficient as possible. But on the diagnosis side and the decision support side, that’s where I think we’ll find the most areas where mistakes could be made.

You’ll have the hype for a while, and we’ll probably go into a phase where we’re not adopting things as fast as we all thought, because we’re afraid of something that happened, or there was a bad actor or a bad story, because of something on the bleeding edge that was meant to do something really great, but maybe didn’t take into account the literally millions of variables it takes to understand and give care to someone dying.

When you talk about bias in this context, what does that mean? What kind of bias?

There have been episodes where algorithms excluded certain races; the numbers just didn’t come out. There’s bias that can seep into the way things are constructed in some of these models. It could be based on race. It could be based on particular illnesses. It could be based on geography. It could be based on family history.

These data sources have to be the sources of truth for the models to operate on, and sometimes the veracity of the data sources is the thing you have to question the most, let alone the programmers who build the algorithms.

What are some of the most important things that hospice and palliative care providers should keep in mind as they implement systems like these?

I actually think they’re doing it. They may not know what they should expect to get in return, but they know enough to ask the questions and make sure that someone is embracing the technology and showing a roadmap of how it’s going to make things cost less, be more efficient and deliver better care.

I’m glad clients are asking the questions, because it’s showing a level of involvement and progression. They are asking how they can see more patients in a day, whether scheduling could be automated, if they can speed up referrals, how they can identify which patients need hospice care faster. These are all things we are working on.

They do need to ask these questions about how it’s going to change the way they think about their technology partner, and what that technology partner is going to do to harness this to make life better for patients and families as well as caregivers, making their jobs less taxing and mundane and more rewarding.
