AI in healthcare is one of the most researched areas today, particularly on the clinical side of healthcare. Sean Cassidy is the Co-Founder and CEO of Lucem Health. Having worked in digital health for the last twenty years, he joins me today to talk about identifying chronic diseases. Tune in to hear how AI and machine learning are creating efficiencies for different forms of healthcare data, and how changes and challenges are being addressed to improve the process. Going beyond workflow support, we discuss considerations to bear in mind when integrating AI into healthcare systems and how to meaningfully measure efficacy in a clinical context. Sean shares some hard-earned wisdom about leading an AI startup, reveals his big vision for the future of Lucem Health, and much more.


Key Points:
  • Introducing guest Sean Cassidy, who co-founded Lucem Health.
  • Defining digital health through an overview of Sean’s history in this industry.
  • The founding idea behind Lucem Health.
  • Different forms of healthcare data and how AI and machine learning can support them.
  • Navigating changes in external variables and patient circumstances.
  • The downstream diagnosis process and why patients are rarely re-assessed.
  • How Lucem Health’s approach lets doctors continue to practice as they always have.
  • Considerations to bear in mind with the clinical adoption of AI beyond workflow.
  • How efficacy is measured in a clinical context.
  • Advice for leaders in AI startups.
  • A vision for the future of Lucem Health.

Quotes:

“We are focused on early disease detection almost exclusively, and so that is using AI and machine learning algorithms to, at any point in time, evaluate the risk that a patient may have a certain disease.” — Sean Cassidy

“Workflow is really important, but there are also other considerations that matter in terms of AI being more widely adopted in clinical settings and healthcare.” — Sean Cassidy

“We are always evaluating and trying to get a deep understanding of whether what we said was going to happen with respect to the performance of the solution is actually manifesting itself in the real world.” — Sean Cassidy


Links:

Sean Cassidy on LinkedIn
Lucem Health


Resources for Computer Vision Teams:

LinkedIn – Connect with Heather.
Computer Vision Insights Newsletter – A biweekly newsletter to help bring the latest machine learning and computer vision research to applications in people and planetary health.
Computer Vision Strategy Session – Not sure how to advance your computer vision project? Get unstuck with a clear set of next steps. Schedule a 1 hour strategy session now to advance your project.
Foundation Model Assessment – Foundation models are popping up everywhere – do you need one for your proprietary image dataset? Get a clear perspective on whether you can benefit from a domain-specific foundation model.


Transcript:

[INTRODUCTION]

[00:00:03] HC: Welcome to Impact AI, brought to you by Pixel Scientia Labs. I’m your host, Heather Couture. On this podcast, I interview innovators and entrepreneurs about building a mission-driven, machine-learning-powered company. If you like what you hear, please subscribe to my newsletter to be notified about new episodes. Plus, follow the latest research in computer vision for people and planetary health. You can sign up at pixelscientia.com/newsletter.

[INTERVIEW]

[0:00:33.8] HC: Today I’m joined by guest Sean Cassidy, CEO and Co-Founder of Lucem Health, to talk about identifying chronic disease. Sean, welcome to the show.

[0:00:42.6] SC: Thanks Heather, it’s great to be with you today.

[0:00:44.5] HC: Sean, could you share a bit about your background and how that led you to create Lucem Health?

[0:00:48.5] SC: Yeah, I’m an old enterprise software guy. I’ve been doing this probably for longer than I care to admit. I came to healthcare kind of by happenstance in 2004. So, I’ve been working in what we typically refer to as digital health for the last 20 years. The work that I’ve done in digital health has been in data integration, data management, analytics, and, more recently, predictive analytics and AI.

And so, I got the opportunity, basically through my network, to have some conversations with Mayo Clinic back in late 2020 or early 2021. They were thinking about investing in a new idea, which we can get to, and, purely with some good luck I suppose, I got the opportunity to start the company with them.

[0:01:32.8] HC: Great. So, what does Lucem Health do and why is this important for healthcare?

[0:01:36.8] SC: Yeah, so, the founding idea for the company was fairly straightforward. There is a tremendous amount of research and development happening in AI and machine learning in healthcare, particularly on the clinical side of healthcare. We’ve had tools, AI-ish, machine learning-ish tools, for the administrative side of healthcare for a while: revenue cycle management, supply chain management, scheduling, that kind of thing.

But AI technology in the clinical realm is a little bit newer. The challenge that AI innovators typically have, those that work with or are affiliated with healthcare providers, is that they can do the research and they can access the data, but they can’t get the power and potential of these technologies into the clinical workflow. That’s always been difficult for innovators that operate either tangentially or directly within an academic context.

So, the idea behind Lucem Health, and I’m going to give you a quick metaphor or analogy here in a second to hopefully make it real for the listeners, is that we help solve that bench-to-bedside problem. We help AI innovators get their engines, these very, very powerful algorithms, into clinical workflow. We’re car builders, so we build a car around their engine, and that’s the thing that actually delivers the real value, both clinically and financially, at the point of care.

[0:03:00.5] HC: And what role does machine learning play in this technology?

[0:03:03.5] SC: Well, it’s at the very center of everything that we do. So, again, if we’re solving the bench-to-bedside problem for AI and machine learning, then these engines that we’re dropping into the cars that we build are, by definition, machine learning algorithms, and so we have optimized our technology infrastructure to support the specific and tangible needs of machine learning in the real world.

So again, just to make that real for everyone, what do we mean by the car? You know, what does the engine need in order to be able to provide real-world value? Well, we need to connect with the data infrastructure at a healthcare provider organization that wants to use these solutions and data is different at every provider, even if everybody is on Epic and increasingly these days, almost everybody is on Epic.

We have to take that data and we have to optimize and curate it specifically for the needs of the AI models. They all operate the same way, in the sense that they take input data and they produce some sort of output value. We have to be able to take the output value and interpret it so that it is human-readable and human-understandable. We have to orchestrate the execution of everything that I just described.

And then crucially, and most importantly, we have to figure out how to get those insights into clinical practice, into clinical workflow, in a way that is accessible to users, so right time, right place, right context, right stakeholder. Now, that’s all done within the car-making facility but ML, to come back to your question, is right at the center of all of our solutions.
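[Editor's note: To make the "car around the engine" idea more concrete for technically minded listeners, here is a minimal Python sketch of that orchestration pattern: curate provider data into the inputs a model expects, run the partner's model, interpret the output into something human-readable, and route it toward the workflow. Every name and the scoring logic below are hypothetical illustrations, not Lucem Health's actual platform or any partner's model.]

```python
# Illustrative sketch only: a simplified "car around the engine" pipeline.
# All names (curate_record, RiskModel, deliver_to_worklist) are hypothetical.

from dataclasses import dataclass


@dataclass
class RiskResult:
    patient_id: str
    raw_score: float      # the model's output value
    interpretation: str   # human-readable summary for clinicians


def curate_record(raw: dict) -> dict:
    """Normalize provider-specific fields into the inputs the model expects."""
    return {
        "age": int(raw["age"]),
        "a1c": float(raw.get("hba1c") or raw.get("a1c") or 0.0),
        "bmi": float(raw.get("bmi") or 0.0),
    }


class RiskModel:
    """Stand-in for a partner's trained early-detection model."""

    def predict(self, features: dict) -> float:
        # Toy scoring logic for illustration only.
        score = 0.02 * features["age"] + 0.1 * features["a1c"] + 0.01 * features["bmi"]
        return min(score / 3.0, 1.0)


def interpret(patient_id: str, score: float) -> RiskResult:
    label = "elevated risk - consider diagnostic follow-up" if score >= 0.5 else "low risk"
    return RiskResult(patient_id, score, label)


def deliver_to_worklist(result: RiskResult) -> None:
    """Surface the insight where care teams already work (e.g., an outreach list)."""
    print(f"{result.patient_id}: {result.raw_score:.2f} -> {result.interpretation}")


def run_pipeline(raw_records: list[dict], model: RiskModel) -> None:
    for raw in raw_records:
        features = curate_record(raw)                   # optimize/curate provider data
        score = model.predict(features)                 # execute the "engine"
        result = interpret(raw["patient_id"], score)    # make the output human-readable
        deliver_to_worklist(result)                     # route into clinical workflow


if __name__ == "__main__":
    run_pipeline([{"patient_id": "p001", "age": 62, "hba1c": 6.1, "bmi": 31}], RiskModel())
```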

[0:04:42.5] HC: So, you’re developing some of the ML solutions yourselves or just providing the infrastructure to make them possible?

[0:04:49.4] SC: That’s a great question. So, you know, we are working with AI innovators and ML innovators, and we provide the capabilities that I described to them. We are not, at least today, doing data science or creating algorithms. There is a lot of very promising stuff out there; it just needs the car. You know, they’re promising engines that just need a car, or cars, wrapped around them. That’s the value that we provide to our partners and ultimately the value that we provide, hopefully, to the healthcare delivery system.

[0:05:22.3] HC: Let’s talk more about the data that makes all this possible. What does healthcare data look like?

[0:05:28.1] SC: Yeah, well, you know, there’s a couple of ways to answer that question. It is increasingly voluminous, it is increasingly diverse in terms of its contents, there are new data modalities being put online all the time. So, there’s more and more information flowing, and as I said before, it can be heterogeneous, it can be different from site to site, it can be dirty and unoptimized and so a lot of times, it requires like I said, curation.

It requires a little bit of cleansing and cleanup in order to make it useful in the real world. We work with structured data from electronic medical record or EHR technology, like Epic, Cerner, and others. We work with signals data like ECGs, and we also work with image data, the kind of thing you get from an MRI or an X-ray or a CT scan. So, we work with a very broad tableau of data.

But we still have to go through the process to make sure that that data is fit for purpose in terms of the needs of the AI engine that’s running at the center of everything that we do.

[0:06:38.8] HC: What would you say are the top one or two challenges in working with these different types of data and getting them in the right format and usable for machine learning?

[0:06:47.4] SC: I think it’s having efficient, productive processes and technology that allow us to react to variability very quickly without incurring a whole lot of expense. One of our value propositions for our solutions is that they are simple and straightforward to implement at providers. We use standard integration techniques, so we use FHIR, for example, and HL7 and other standards, but we are going to encounter variability and differences.

We’re going to encounter, you know, individual values and individual fields that don’t necessarily make sense for the needs of the AI engines that we’re trying to orchestrate the execution of, and so we have built technology infrastructure to make it really simple for our teams to see and understand the variability and to either push indications back to the providers of the data, to their providers and their IT teams, or to handle the variability inherently within our solution, within our platform.
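[Editor's note: As a rough illustration of the validation step Sean describes, the sketch below checks incoming field values against what a model expects and separates what can be handled in-platform from what should be pushed back to the provider's IT team. The field names and acceptable ranges are assumptions made up for the example.]

```python
# Minimal sketch: flag values that don't fit the AI engine's expectations.
# Field names and ranges are illustrative assumptions, not a real specification.

EXPECTED_RANGES = {
    "hba1c": (3.0, 20.0),    # percent
    "egfr": (1.0, 200.0),    # mL/min/1.73m^2
}


def validate_record(record: dict) -> tuple[dict, list[str]]:
    """Return a cleaned record plus a list of issues to push back to the data source."""
    cleaned, issues = {}, []
    for field, (low, high) in EXPECTED_RANGES.items():
        value = record.get(field)
        if value is None:
            issues.append(f"{field}: missing")                  # escalate to provider IT
            continue
        try:
            value = float(value)
        except (TypeError, ValueError):
            issues.append(f"{field}: non-numeric value {value!r}")
            continue
        if not (low <= value <= high):
            issues.append(f"{field}: {value} outside expected range {low}-{high}")
            continue
        cleaned[field] = value                                   # handled within the platform
    return cleaned, issues


cleaned, issues = validate_record({"hba1c": "6.4", "egfr": -5})
print(cleaned)   # {'hba1c': 6.4}
print(issues)    # ['egfr: -5.0 outside expected range 1.0-200.0']
```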

[0:07:48.5] HC: So, one of the variables that changes is patient characteristics; they can change over time, whether it’s COVID hitting or, you know, some other external factor. It could be scanners from different manufacturers coming into a facility, different things like that. How do you support those changes so that the machine learning models can continue to perform well over time, or flag changes to indicate that the models might need to change?

[0:08:14.4] SC: Yeah. Without getting too esoteric in detail, Heather, you know, we have a particular deployment style that we use and a particular focus area for our solutions. We are focused on early disease detection almost exclusively, and so that is using AI and machine learning algorithms to, at any point in time, evaluate the risk that a patient may have a certain disease like diabetes or CKD, or be at higher risk of stroke or cardiovascular disease.

And so, the longitudinal data about patients does matter at that point in time and is often an input into our solutions. Once we’ve identified the risk for a patient, we don’t often go back and rerun the algorithms against those patients because there’s a downstream diagnostic process that occurs.

Now, I suppose there is potential that time can pass, and where a patient was not flagged as being at risk for something in, say, 2023, we get to 2024 and, because of the change in their data, they may be at risk at that point. But again, we are evaluating the data as it exists at that point rather than necessarily dealing with the state changes between those two periods of time.

Your question about drift is important, so when we think about the efficacy of our solutions and the efficacy of the underlying engines, the AI and machine learning models, we are very careful to evaluate their efficacy, value, and impact across time and to be very aware when we’re seeing some potential deterioration in performance. So, we have some specific techniques, which are proprietary, that we use to do that.

But the shortest answer to your question is that it’s something that all of us who are in the business of delivering machine learning into a clinical setting need to be concerned with.
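[Editor's note: The drift monitoring Sean alludes to is proprietary, but a minimal sketch of the general idea might look like the following: track a performance metric in each monitoring window, here positive predictive value against downstream diagnostic confirmation, and flag windows that fall meaningfully below the validation baseline. The metric choice, tolerance, and numbers are illustrative assumptions.]

```python
# Illustrative only: a simple drift check comparing a rolling metric to a baseline.

def positive_predictive_value(flags: list[bool], confirmed: list[bool]) -> float:
    """Fraction of flagged patients whose downstream diagnostic workup confirmed disease."""
    flagged = [c for f, c in zip(flags, confirmed) if f]
    return sum(flagged) / len(flagged) if flagged else 0.0


def check_drift(baseline_ppv: float, window_ppv: float, tolerance: float = 0.10) -> bool:
    """Flag drift when the window's PPV falls more than `tolerance` below baseline."""
    return (baseline_ppv - window_ppv) > tolerance


baseline = 0.42  # e.g., PPV observed during validation (made-up number)
window = positive_predictive_value(
    flags=[True, True, False, True, True],
    confirmed=[True, False, False, False, False],
)
if check_drift(baseline, window):
    print(f"Potential drift: window PPV {window:.2f} vs baseline {baseline:.2f}")
```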

[0:10:20.6] HC: One of the key challenges with applying AI, probably for any application but perhaps especially medical ones, is making sure you’re solving a relevant problem and deploying it in a way that’s helpful. So, with your team and the machine learning technologies you work with, are there ways that you help make sure it fits in with the clinical workflow and provides the right kind of assistance to doctors and patients?

[0:10:45.2] SC: Yeah, that’s a great question. That is a particular obsession of ours. There is a school of thought, which is quite pervasive in the industry that these new AI machine learning-generated insights should be served to clinicians via the EMR, and what clinicians tell us is that they are highly fatigued by all the alerts that they’re getting through their EMRs, all the emails that they’re getting in their inboxes, many of which are just noise to them.

And the last thing they want is more flashing lights and more pop-ups and more things interrupting their ability to deliver care to patients. So, we have taken a particular approach that actually allows doctors to continue to practice medicine the way they always have. By surfacing patients’ risks rather than trying to go inside the exam room with them, we can help prioritize patients who are at higher risk.

And we can then have those patients come into the exam room and, as I said before, clinicians practice medicine the way they always have. So, when we think about the end-to-end value and efficacy of any solution, we take into account where the rubber meets the road in terms of the workflow. I’ve been saying this quite a lot recently and I’ll say it again because your listeners might find it interesting.

There are a tremendous number of really innovative ideas and algorithms out there that, on the surface, appear to be able to deliver an enormous amount of value and potential in terms of better care delivery, better diagnostic yield, and so on. Unfortunately, when we look at a lot of these technologies, we cannot figure out ways to get those insights cleanly and efficiently into the workflow.

Or, we can’t figure out how to show a healthcare provider that deploying a solution with it at the center actually makes sense for them, both from a clinical perspective and from a financial perspective. So, yes, workflow is really important, but there are also other considerations that matter in terms of, how do I put it, AI being more widely adopted in clinical settings and healthcare.
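[Editor's note: One way to picture the "prioritize patients rather than interrupt the exam room" pattern is a simple risk-ranked outreach worklist instead of EMR pop-ups. The threshold, limit, and data shape below are illustrative assumptions, not a description of Lucem Health's product.]

```python
# Sketch: rank patients by risk score into an outreach worklist rather than firing alerts.

def build_worklist(scores: dict[str, float], threshold: float = 0.5, limit: int = 3) -> list[str]:
    """Return the highest-risk patients above a threshold, ordered for outreach."""
    at_risk = [(pid, s) for pid, s in scores.items() if s >= threshold]
    at_risk.sort(key=lambda item: item[1], reverse=True)
    return [pid for pid, _ in at_risk[:limit]]


print(build_worklist({"p01": 0.82, "p02": 0.34, "p03": 0.61, "p04": 0.55, "p05": 0.91}))
# ['p05', 'p01', 'p03']
```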

[0:12:59.9] HC: Thinking more broadly about what you are trying to accomplish at Lucem Health, how do you measure the impact of your technology?

[0:13:05.7] SC: So, for us, it’s really quite straightforward. If you think about it, if the basic pattern that we’re deploying is to try to surface the risk that patients may have, or be at risk for, a particular disease or condition, then by having an after-the-fact measurement infrastructure, which we do, we can evaluate against what our research shows and what our published papers and other things show in other settings.

We can evaluate the performance of the solutions at any particular site or any particular provider organization against those benchmarks, and if we find that there’s a differential, it could be that the solution needs to be tweaked. As I said before, it could be that the AI model needs to be tweaked, or it could be that there’s a process, cultural, or other issue at the provider. Another way to put it is that once we deploy our solutions, we’re not done.

We are always evaluating and trying to get a deep understanding of whether what we said was going to happen with respect to the performance of the solution is actually manifesting itself in the real world and if it’s not, we intervene, figure out why, and take steps to improve it.
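[Editor's note: A minimal sketch of that after-the-fact measurement loop, assuming made-up metric names and benchmark values, might compare each site's observed performance to the published benchmark and surface any meaningful differential for investigation.]

```python
# Sketch: compare site-level observed metrics against expected benchmarks.
# Metric names, benchmark values, and tolerance are illustrative assumptions.

BENCHMARKS = {"sensitivity": 0.80, "ppv": 0.40}


def compare_to_benchmark(site_metrics: dict[str, float], tolerance: float = 0.05) -> dict[str, float]:
    """Return metrics where the site underperforms the benchmark by more than `tolerance`."""
    gaps = {}
    for metric, expected in BENCHMARKS.items():
        observed = site_metrics.get(metric)
        if observed is not None and (expected - observed) > tolerance:
            gaps[metric] = round(expected - observed, 3)
    return gaps


gaps = compare_to_benchmark({"sensitivity": 0.71, "ppv": 0.42})
print(gaps)  # {'sensitivity': 0.09} -> investigate the model, the process, or site factors
```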

[0:14:18.8] HC: Is there any advice you could offer to leaders of AI-powered startups?

[0:14:23.0] SC: Yeah, and I kind of alluded to it before, Heather. It is really important that you don’t fall too much in love with your data science, your algorithm, your technology. Don’t fall in love with pure accuracy, pure area under the curve, the sort of, you know, typical measurements of the efficacy of an algorithm. You need to think about, you need to be focused on, “How does this actually get implemented in the real world?”

“And can I figure out how to get somebody to pay for it? How can I show that there’s enough clinical and economic value that I can capture hearts and minds?” Let’s say you’re deploying at providers, providers who are besieged with marketing materials and sales calls for digital health solutions, are overwhelmed by the amount of hype that is being presented to them around AI, and, quite honestly, are just confused about where they should be focusing their time and attention.

You have to be able to show something that is unique and differentiated and valuable, otherwise, no one will hear you. This is going to sound a little self-serving and a little bit like a commercial, but that’s what we do. We help our partners figure those things out so that the tools they’ve invested so much passion, time, and energy in can actually get into clinical workflows and deliver good in the world.

[0:15:43.7] HC: And finally, where do you see the impact of Lucem Health in three to five years?

[0:15:46.8] SC: Well, we have a set of solutions in the market today that we are extremely proud of and that are getting a very good and positive reception. We are going to continue to launch more and more of these things. Our vision for the company is that we will be deploying a broad set of early disease detection and treatment acceleration solutions around virtually any high-cost, high-prevalence disease.

Everything from heart disease to cancers to diabetes and so on, and that we would ultimately have the ability to take patient data that’s commonly available, run it through a whole series of these tools, surface risks, and deliver that information to providers in a way that’s really accessible, so that they can deliver much more comprehensive and proactive care for their patients and improve their lives.

So, it is a fairly bold vision but we feel like we’re on a trajectory to be able to achieve it in something like three to five years.

[0:16:42.9] HC: This has been great, Sean. I appreciate your insights today. Where can people find out more about you online?

[0:16:48.1] SC: Please join us at lucemhealth.com. We’re publishing a lot of material these days, blogs and white papers and other things that you might find interesting, about the state of AI in healthcare, about early disease detection, and about some of the challenges that we’ve talked about today. So, come find us. If you’re interested in talking more, there are the usual info blocks there that you can click. We’ll respond quickly and we’d love to speak with you.

[0:17:11.0] HC: Perfect. Thanks for joining me today.

[0:17:13.2] SC: Thanks, Heather, I appreciate your time.

[0:17:14.7] HC: All right everyone, thanks for listening. I’m Heather Couture and I hope you join me again next time for Impact AI.

[END OF INTERVIEW]

[0:17:24.8] HC: Thank you for listening to Impact AI. If you enjoyed this episode, please subscribe and share with a friend. And if you’d like to learn more about computer vision applications for people and planetary health, you can sign up for my newsletter at pixelscientia.com/newsletter.

[END]