In the words of Dave DeCaprio, “We need to move from a reactive healthcare system to a proactive one.” Dave is the CTO and Co-founder of ClosedLoop, a data science platform for healthcare that is using predictive AI to make this crucial shift.

In this episode, we learn about the problem he identified in the healthcare system that he felt he was uniquely set up to solve given his background, and how ClosedLoop is working to solve it. Dave shares use cases for ClosedLoop’s predictive models and the challenges he’s encountered in applying predictive modeling to health data. We find out why model interpretability is so important and learn about the role of human mediation in ClosedLoop’s applications. Dave explains the ways in which biases manifest in the world of health data and how ClosedLoop measures and mitigates bias. To find out how ClosedLoop measures its models over time, as well as the impact of its technology, tune in! Dave closes with some astute advice for other leaders of AI-powered startups and his vision for the near-future impact of ClosedLoop.

Key Points:
  • Dave DeCaprio's background and how it led to the creation of ClosedLoop.
  • The healthcare problem he felt he was uniquely set up to solve.
  • What ClosedLoop does and why it’s important for healthcare.
  • Use cases for ClosedLoop’s predictive models.
  • The challenges of working with health data and applying predictive modeling to it.
  • Why model interpretability matters in ClosedLoop’s applications.
  • Human intelligence mediation in the interpretation process.
  • How ClosedLoop won the CMS AI health outcomes challenge.
  • Examples of how bias manifests in models trained with health data; how to measure and mitigate bias.
  • How ClosedLoop monitors its models over time and how COVID affected its accuracy.
  • The way ClosedLoop measures the impact of its technology.
  • Dave’s advice for other leaders of AI-powered startups.
  • His vision for the near-future impact of ClosedLoop.

“There were a lot of things I felt were broken about healthcare that I couldn’t do anything about, but I kept coming back to this idea of using all the right data to make the right decisions and getting the right treatment to the right patient at the right time.” — Dave DeCaprio

“[We] put ClosedLoop together to basically tackle this data science and AI in healthcare challenge.” — Dave DeCaprio

“Where prediction in AI plays a huge role in healthcare is moving from a reactive to a proactive system.” — Dave DeCaprio

“Healthcare data is very complex. There are tens of thousands of diagnosis codes, there are hundreds of thousands of drug codes, and they’re constantly changing.” — Dave DeCaprio

“In almost all cases, the output of our model is mediated with human intelligence in order to actually make a decision about a patient’s care.” — Dave DeCaprio

“The most powerful measures of the impact are the stories we get from our customers.” — Dave DeCaprio

“If you want to build a robust company that’s going to be successful year after year and be able to grow and really tackle these problems, you eventually have to show tangible demonstrable benefits.” — Dave DeCaprio




[00:00:03] HC: Welcome to Impact AI, brought to you by Pixel Scientia Labs. I’m your host, Heather Couture. On this podcast, I interview innovators and entrepreneurs about building a mission-driven, machine-learning-powered company. If you like what you hear, please subscribe to my newsletter to be notified about new episodes. Plus, follow the latest research in computer vision for people in planetary health. You can sign up at


[0:00:33.4] HC: Today, I’m joined by guest, Dave DeCaprio, CTO and Co-founder of ClosedLoop, to talk about data science for healthcare. Dave, welcome to the show.

[0:00:42.0] DD: Thanks, happy to be here.

[0:00:44.1] HC: Dave, could you share a bit about your background and how that led you to create ClosedLoop?

[0:00:47.7] DD: Sure, I started my career in enterprise software doing operations research and logistics kinds of things at a company called i2 Technologies, way back in the late 90s, and that was fun until the whole Internet boom ended. At the end of that run, at the start of 2000, I was in Boston, trying to figure out what I was going to do next, and got an opportunity to work on the Human Genome Project.

I was kind of looking around at pharma and got an opportunity to go to MIT to work on the Human Genome Project. I knew nothing about biology or health or anything at the time, but they said they were trying to scale up the project and needed people who knew software. I knew software, so I was able to get that job, and it was great.

I really loved being in the scientific environment, and I really fell in love with the genomics research I was doing at the time and the potential impact it was going to have on human health. To be part of the Human Genome Project, something that’s done once in the history of humanity, was really exciting. That gave me the bug for healthcare and life sciences, and I’ve been stuck with it since.

I was not an academic, so after a few years on the research project, I moved into pharma and drug discovery. I did that for a few years, still in Boston. I really enjoyed it, but I didn’t have the tolerance for failure that you need to have to work in drug discovery. It is a very, very tough industry, and there was a day in, like, 2009 when I was sitting there thinking, “Is the problem with healthcare that we don’t have enough pills?”

“Is that really the issue?” I started thinking that drug discovery wasn’t actually the problem the healthcare system was facing. Not to disparage pharmaceutical researchers in any way – they create life-saving drugs with huge impact – but I felt like it wasn’t the problem I was uniquely set up to solve. So I thought about what things I could address and landed on healthcare delivery: maybe we aren’t really using all the right data to make all the right decisions all the time.

There were other problems with healthcare that I think were structural – the incentives were off and things like that – and I’m not a policy maker. I’m not in Congress, I’m not the president. So there were a lot of things I felt were broken about healthcare that I couldn’t do anything about, but I kept coming back to this idea of using all the right data to make the right decisions and getting the right treatment to the right patient at the right time.

It was something I felt I could make an impact in, and then in 2012, when the Affordable Care Act, Obamacare, came out, I got an opportunity to work with some health insurance companies on these kinds of problems. That’s really when I fell in love with where I am now, which is AI and predictive modeling in healthcare delivery – looking at actual real patient records and the real ways patients are treated, and trying to make predictions to help improve care.

So I did that for a while at various places. In 2017, I founded ClosedLoop with my co-founder, Andrew Eye. We had known each other for about 20 years, so we knew we wanted to start a company together, and finally the opportunity came where it was the right company at the right time, and we put ClosedLoop together to basically tackle this data science and AI in healthcare challenge.

[0:04:11.5] HC: Great, so, what all does ClosedLoop do and why is this important for healthcare?

[0:04:17.2] DD: We are a data science platform for healthcare. We bring in healthcare data of all kinds: medical and prescription claims records from insurance companies, electronic medical records from hospitals, primary care doctors, and other physicians, lab data, and then other, newer forms of data. We can bring in all of that healthcare data, and then we have a couple of things.

One is a feature platform that turns all of that raw data into useful derived features, and the other is an automated machine learning platform that helps us build predictive models on top of those features quickly and easily. Our customers are data science teams at healthcare organizations that are trying to use data to make these predictions, or business owners at healthcare organizations who use our internal data science team and our platform on their behalf.

The reason this is important is because, as I said, using the right data to make the right predictions about what’s going to happen to people is hugely important. We need to move from a reactive healthcare system to a proactive one. Instead of waiting for bad things to happen and then trying to fix them once the damage is already done, if you can spot things before they happen – spot the trends that are leading to a bad outcome – it is much easier to fix the problem earlier.

An ounce of prevention is worth a pound of cure, and that’s really where prediction and AI play a huge role in healthcare: moving from a reactive to a proactive system.

[0:05:55.3] HC: So with machine learning and these predictive models, what type of things are you trying to predict?

[0:06:01.7] DD: Yeah, there are many use cases. A lot of what we do is what’s called population health. Again, instead of waiting for a person to make an appointment and come in to the doctor, we proactively scan populations to look for opportunities to improve patients’ care. One specific example of that is predicting unplanned hospital admissions.

Basically, we look at the patterns of what’s going on with people – maybe they have stopped taking a medication they should be taking, or they’ve had a declining set of lab values – and all of that looks like a pattern that eventually leads to an expensive hospital stay.

Having the provider proactively contact that patient, reach out, and get them into some kind of care management program earlier to prevent that hospital stay is one use case. Another area we look at is end-of-life care, which is a big one. When you ask people how they want to die, everyone says they want to die peacefully at home, surrounded by loved ones.

That is often not the way life ends. People end up in a hospital, hooked up to machines, so we use models that help predict mortality to guide end-of-life care decisions and help people make more appropriate healthcare choices at the end of their life. Another area we look at is chronic disease progression.

So, figuring out when it’s time to move somebody’s diabetes treatment to the next level up. There are many more applications like that – a variety of different disease progressions and adverse outcomes that we can predict.

[0:07:40.4] HC: Do you go out looking for applications to apply predictive models to, or are clients coming to you with their intended use for you to solve?

[0:07:50.6] DD: Yeah, a little of both, I think. We have some of the standard ones I mentioned. We know those have real impacts for healthcare providers today, for really any risk takers in healthcare, so when we go to an organization and work with somebody, we’re usually saying, “Hey look, these are the models we’ve seen are straightforward to implement. We can get highly accurate results, and they are very actionable.”

So we’ll usually guide our customers to take a look at those, but we’re also asking, “Hey, what other problems do you have where this technology might be useful?” There are a lot of less clinical, more operational kinds of use cases. Typically, health insurance companies want to know who is likely to churn, who is likely to change insurance, those kinds of things, and predictive modeling is useful in those cases also.

[0:08:42.9] HC: What kinds of challenges do you encounter in working with health data and applying predictive modeling to it?

[0:08:49.3] DD: Yeah, there are several. I like to tell people, if you’re in healthcare, you’d better be excited about the mission, because it is not an easy space to be in. It’s obviously a regulated industry, it’s highly complex, and so there are a lot of challenges. I’d say the first is security and just access to the data.

Everything we work with is protected health information that’s covered by HIPAA, so we have to be very careful about working with healthcare data, and the customers we work with require a high degree of trust just in order to share that data. That’s the first hurdle. The second hurdle is really the heterogeneity of the data.

Because the data is often very segmented across different systems, there’s a data integration problem, and healthcare data is very complex. There are tens of thousands of diagnosis codes, there are hundreds of thousands of drug codes, and they’re constantly changing.

Every time you go to the pharmacy to get a prescription filled, there’s a particular number associated with exactly what you got, and those get rotated, so you have to be on top of all that data and all those data changes, which are very specific to healthcare, and know how to work with that data. Even in natural language, clinical text from doctors looks very different than regular text.

MRI images look very different than normal images – they’re actually 3D volumes, not regular 2D images. So almost every data modality in healthcare has some level of complexity beyond the typical Internet-scale data you find elsewhere.

[0:10:23.1] HC: So I guess that means, first of all, you need to understand the nuances of those types of data, but also you need to understand what the best types of solutions are, because they might be different than for standard text data or for photographs. You might need to apply something slightly different. Is that right?

[0:10:40.8] DD: Yes, and it’s interesting – as the AI techniques have advanced, I think things have gotten easier to specialize for different data types. It’s now more of a tractable problem, but it’s definitely something where you need to be thinking, “Okay, how am I going to customize these techniques to work within healthcare?”

[0:10:59.4] HC: What about model interpretability? Is this important for the types of applications you’re working with?

[0:11:04.0] DD: Yeah, it’s absolutely essential for everything we do. We are influencing serious healthcare decisions, and so in almost all cases, the output of our model is mediated with human intelligence in order to actually make a decision about a patient’s care.

Nobody is just completely dictating their care based on what the computer says today. We’re getting a result to a doctor who’s making a recommendation – basically pointing out something that might be likely to happen – but the doctor needs to be able to understand that prediction and why it was made the way it was, because in many cases, they need to know exactly what data went into it.

The doctor may have access to data that the model didn’t have access to, which may give them additional context, so interpretability is very important. We build on the same interpretability techniques that are standard in the industry – core things like Shapley values, which we use very heavily – but going from the raw mathematical mechanics to what’s actually interpretable and usable, and how you present all of that in a way a clinician can understand, is an area where we’ve done quite a bit of work.
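As an aside for readers unfamiliar with Shapley values: the core idea Dave references can be sketched in a few lines. This is a generic illustration, not ClosedLoop’s implementation – the toy linear “risk model,” its weights, and the baseline patient below are all made up. Each feature’s attribution is its average marginal contribution over all subsets of the other features, with “missing” features filled in from a reference value:

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley values for one prediction: each feature's
    contribution, averaged over all feature subsets, with absent
    features replaced by a baseline (reference) value."""
    n = len(x)
    features = range(n)

    def value(subset):
        # Input where features in `subset` take the patient's values
        # and all other features fall back to the baseline.
        z = [x[i] if i in subset else baseline[i] for i in features]
        return predict(z)

    phi = [0.0] * n
    for i in features:
        others = [j for j in features if j != i]
        for k in range(n):
            for s in combinations(others, k):
                s = set(s)
                # Standard Shapley weight |S|! (n - |S| - 1)! / n!
                weight = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                phi[i] += weight * (value(s | {i}) - value(s))
    return phi

# Hypothetical toy "risk model": a weighted sum of three features.
weights = [0.5, 0.3, 0.2]
model = lambda z: sum(w * v for w, v in zip(weights, z))

patient = [2.0, 1.0, 4.0]
reference = [0.0, 0.0, 0.0]
contributions = shapley_values(model, patient, reference)
print(contributions)  # for a linear model: [1.0, 0.3, 0.8]
```

For a linear model these reduce to weight times deviation from baseline, and the attributions always sum to the difference between the patient’s score and the baseline score – that additivity is what makes per-patient explanations like “these three factors drove the risk score” possible. In practice, libraries such as shap approximate this efficiently rather than enumerating subsets.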

[0:12:22.5] HC: So then how do you figure out how to present it so that it’s useful for that end user?

[0:12:27.2] DD: Yeah, try a lot. That’s basically the only technique we’ve been able to find: we did, and still do, a lot of experimentation with how we present results, in order to get the best response. We were part of a contest a few years ago.

There was the CMS AI Health Outcomes Challenge, which was basically a 1.6-million-dollar X Prize sponsored by CMS, the federal agency that runs Medicare. They sponsored this prize to create AI that physicians trust. It was a two-year competition, and over 300 teams entered.

IBM, Merck, the Mayo Clinic, Deloitte – a lot of big names were after this thing, and this little company in Austin called ClosedLoop AI won the whole thing. A lot of what we did in that was actually just user testing with nurses, care managers, and physicians: showing them what we had, showing them predictions, saying, “Take a look at this patient’s case, take a look at the output of this. Is this helping you?”

“What about this makes sense to you, and what doesn’t? What do you feel here is actionable, and what isn’t?” Then we’d take that feedback, do another iteration, and try again. We did about 20 iterations of our interface for that challenge.

[0:13:44.3] HC: Yeah, machine learning overall is very iterative, but that’s a very good point about the outputs and understanding what form those outputs and that interpretability should take. You need the feedback, and you need to repeat it with multiple end users and so on in order to get it right.

[0:13:59.9] DD: Yeah, I think you’ve seen that with the recent updates from OpenAI and ChatGPT. Those language models have been super useful for a long time, but actually training them with a human in the loop on “Hey, what output does a human actually find useful here?” is really what allowed them to turn the corner. We certainly don’t have that level of automation in the iterative testing we do, but I don’t see a way around it: if you build things that are meant to be interpretable to humans, eventually you have to ask humans if they can interpret them.

[0:14:31.1] HC: So, bias is another topic that’s talked about a lot in AI right now. How might bias manifest in models trained with health data?

[0:14:40.4] DD: Yeah, there are many, many different ways. There are several really huge examples where bias can be a giant problem in healthcare. The first is just the underlying data generation. There are vastly different uses of the healthcare system across socioeconomic groups. As a huge example, much of the data we have comes from the billing and insurance process, and uninsured people basically don’t have electronic health records in a form that could go into machine learning models.

So the models are inherently biased right at the start towards people who are insured, and people who aren’t insured tend to be of poorer socioeconomic status. You’ve got very basic biases built in. There are other examples, too. There was a prominent case a couple of years ago where a model from Optum was trying to predict who needed additional care management resources, and they were using cost to predict that.

They were looking at future healthcare costs and trying to determine who is likely to be expensive in the future, so that they could give those people more resources now. It turns out this approach was biased: there are racial differences if you look historically at the cost of treating certain conditions. For white individuals, treating, say, diabetes or chronic kidney disease is often much more expensive than treating the same disease in Black Americans.

So when you predict cost, you build in those same structural differences, and the effect was that the model thought white people were sicker than they really were, which then had the effect of devoting more resources to people who didn’t actually need them. That was a very prominent case of racial bias being built into a model. There are many more subtle cases where this can also happen, but it’s a huge issue.

In order to mitigate it – first of all, it is a very challenging problem, so I think you need to tackle it from several angles. One of the keys is that you have to be routinely looking for it and measuring it, trying to understand where you have bias and where you don’t. So one key thing is: measure it. Another is that we have some fairness techniques and mathematical tools we’ve developed to identify some of these problems, like the one I discussed in the Optum situation.

We allow people to look at multiple different outcomes for their model and make sure the model isn’t biased because of any single decision they made. If you can look at several different metrics – several different potential outcomes – and your model is consistent across all of them, that helps you reduce the potential for bias that might be present in one particular data source or one particular outcome.
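The multiple-outcomes check Dave describes can be illustrated with a small sketch: evaluate the same risk scores against two candidate outcome definitions – here a cost-based label versus a health-based label – separately per group, and look for divergence. This is a hypothetical illustration with made-up records and a made-up threshold, not ClosedLoop’s actual fairness tooling:

```python
# Hypothetical patient records: group, model risk score, and two
# candidate outcome labels (cost-based vs. health-based).
patients = [
    # (group, score, high_cost, many_chronic_conditions)
    ("A", 0.9, 1, 1), ("A", 0.8, 1, 1), ("A", 0.3, 0, 0), ("A", 0.2, 0, 0),
    ("B", 0.7, 1, 1), ("B", 0.4, 0, 1), ("B", 0.3, 0, 1), ("B", 0.1, 0, 0),
]

def capture_rate(records, outcome_index, threshold=0.5):
    """Among patients who actually had the outcome, what fraction
    did the model flag as high risk (i.e., recall)?"""
    positives = [r for r in records if r[outcome_index] == 1]
    if not positives:
        return None
    flagged = [r for r in positives if r[1] >= threshold]
    return len(flagged) / len(positives)

results = {}
for group in ("A", "B"):
    records = [r for r in patients if r[0] == group]
    # Recall against the cost-based label vs. the health-based label.
    results[group] = (capture_rate(records, 2), capture_rate(records, 3))
    print(group, results[group])
```

In this toy data, the model captures high-cost patients equally well in both groups, but misses most of group B’s patients with many chronic conditions – the signature of the Optum-style proxy problem, where a cost outcome hides a health disparity. Checking consistency across several outcome definitions like this is one way to surface a bias introduced by a single labeling decision.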

[0:17:34.5] HC: Once your models are deployed, how do you monitor them? How do you ensure that they continue to perform well over time as different things change, as you know, COVID hits or something else changes in healthcare?

[0:17:49.0] DD: Yeah, COVID was brutal. COVID was brutal for everyone for many reasons, but in particular, if you were running predictive models in healthcare during that time, it was very, very tricky, because every health utilization pattern changed essentially overnight to something we had never seen before. Partly what you needed to do there was understand when you are outside the bounds of your model and in a situation where you shouldn’t be using that model anymore.

Specifically, part of what ClosedLoop does is focus a lot on ML ops – making sure that when people put healthcare models in place, those models run very reliably and predictably. We have QC checks throughout the process. We have versioning within the platform on the entire end-to-end pipeline, which I think is very important.

Starting from your raw data, through all the transformations and ETL you do, then any feature engineering that happens, the actual model building, and then the generation of reports and any post-processing – we take that whole thing in our ML ops pipeline, version the entire pipeline, and run multiple parallel instances of it, so you can reliably know that what you are running in production stays consistent.

Then, when you want to make an update, you can run that entire process in parallel to test things side by side. You also need to monitor: you need QC checks along the way, and you need to monitor for feature drift and for the distribution of your predictions, looking for changes there, and ultimately for accuracy. In healthcare, a lot of the things we do have long timeframes.

We are trying to predict whether someone is going to have a hospitalization in the next year. We can’t use accuracy as a way to determine when the model is outdated, because it takes a year to determine whether a given prediction was accurate. So it is very important to do this upfront monitoring, and that’s what the ML ops capability in our platform is: monitoring for feature drift.

Monitoring for data drift, monitoring for shifts in the outcome distribution – you can’t tell whether something is accurate within a two-month span, but you can tell that there’s been a shift in the distribution: are we predicting everybody to have gotten healthier, or everybody to have gotten sicker, in the last two months? Generally, you can assume that when something like that happens, there is an underlying cause in the data, and it’s an issue with your model.
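One common way to operationalize the kind of score-distribution monitoring Dave describes is the population stability index (PSI) over model-score buckets. A minimal sketch with made-up scores – the bucket edges and the 0.2 alert threshold are common rules of thumb, not necessarily what ClosedLoop uses:

```python
from math import log

def psi(expected, actual, bins=None):
    """Population Stability Index between a baseline score
    distribution and a recent one. A common rule of thumb is that
    PSI > 0.2 signals a shift worth investigating."""
    if bins is None:
        bins = [0.2, 0.4, 0.6, 0.8]  # cut points on [0, 1] risk scores

    def bucket_fractions(values):
        counts = [0] * (len(bins) + 1)
        for v in values:
            counts[sum(v >= b for b in bins)] += 1
        # Small floor avoids log(0) when a bucket is empty.
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = bucket_fractions(expected), bucket_fractions(actual)
    return sum((ai - ei) * log(ai / ei) for ei, ai in zip(e, a))

# Baseline month of risk scores vs. two later months (made-up data).
baseline = [0.1, 0.15, 0.3, 0.35, 0.5, 0.55, 0.7, 0.9]
stable   = [0.12, 0.18, 0.28, 0.33, 0.52, 0.57, 0.68, 0.88]
shifted  = [0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95]  # everyone scores sicker

print(psi(baseline, stable))   # small: distribution unchanged
print(psi(baseline, shifted))  # large: investigate the pipeline or the world
```

The point of a statistic like this is exactly what Dave describes: it flags an overnight shift like COVID’s within days, long before any one-year hospitalization label comes back to tell you accuracy has degraded.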

That is one of the reasons COVID was such a big deal: all of the distributions essentially shifted overnight. At the same time, we had to figure out, “Are the models still useful?” A lot of our models are used for ranking and sorting – we need to determine who are the most important patients to reach out to – and in many cases with COVID, we found that even when the models were less accurate overall, their ranking was still appropriate, still made sense, and they were still useful.

That takes a bunch of analysis to convince yourself is the case, but that is where we ended up.

[0:20:51.0] HC: Thinking more broadly about what you’re working on at ClosedLoop, how do you measure the impact of your technology?

[0:20:57.3] DD: One life at a time is always how we’d ideally like to do it. I think the most powerful measures of impact are the stories we get from our customers. They are not the most quantitative, but they are the most powerful. When we were first starting out, we were working with a Medicaid population, trying to find people who needed extra help – social workers looking for candidates for social needs interventions in that Medicaid population.

One of the care managers at one of our customers told us that our model had pulled out a 14-year-old boy, highlighting a potential suicide risk and flagging him as a potentially high-risk patient. He did not show up in any of their traditional processes – he was someone who would have completely slipped through the cracks. He had been in foster care and been bounced around.

There was nothing overt in his medical record that signaled this was going to happen, but this was someone who was in a lot of trouble, and our model spotted him, and they were able to get him the help he needed. There are a lot of ways we measure impact, but that is the most visceral for me, and the reason I get up every morning and do this. When you look at those stories in aggregate, what they add up to is better outcomes for people.

Overall, people’s disease trajectories are moving to a better path, and ultimately that turns into a lower total cost of care for those patients, which turns into a financial reward for either the provider or the insurance company, depending on how it’s all set up. So, over time and with a full population, we are able to measure a tangible financial impact. In the nearer term, we can measure improved outcomes at a population level.

There are a lot of quality metrics and measurement that go on for hospitals and payers in the healthcare system to determine what their quality is and whether their patients are hitting certain targets, and we can see movement in those targets based on the programs they’ve put in place and try to track those.

[0:22:58.7] HC: Is there any advice you could offer to other leaders of AI-powered startups?

[0:23:03.6] DD: This last question, of understanding how you’re going to measure impact, is, I think, the most important. At the start, a fancy new AI technology can get a lot of people excited very easily. They envision the power of something and you capture their imagination, but if you want to build a robust company that’s going to be successful year after year, be able to grow, and really tackle these problems, you eventually have to show tangible, demonstrable benefits.

You have to show that using your AI is actually saving money, improving health, moving some meaningful metric, be it financial or otherwise, for your customers. If you are not doing that, eventually some new technology will come in, people will get excited about the new thing, and they’ll move on. So I would say, if you are doing something interesting and cool, don’t hide the fact that it is interesting and cool.

But realize that you need to turn that excitement into tangible results and you have some time to do that. People are willing to be a little bit patient with this technology but they’re not willing to wait forever and so that’s extremely important.

[0:24:10.0] HC: Finally, where do you see the impact of ClosedLoop in three to five years?

[0:24:14.7] DD: There are so many different ways, but really, one of the things we’re looking at, and where we’d like to have our biggest impact, is helping the healthcare system learn faster. If you look at the way the healthcare system learns today, it’s through clinical trials, and only about one to two percent of patients are enrolled in a clinical trial. That’s really the structure by which the healthcare system learns.

So the other 98% of the healthcare that occurs every day is not really contributing to the body of knowledge that helps us figure out how to better care for patients, and we have to be doing that. The biggest hope I have for ClosedLoop over the next few years is that we’re able to help our customers use data to make better decisions and then use the outcomes of those decisions to measure what worked and what didn’t.

To try different things, different experiments in how to care for patients. We can predict the future, but our customers still have to change it. We can predict that a person is likely headed for a hospital admission, but you still have to do something that tangibly changes that person’s likelihood of admission. We know some things that work, but there are many more experiments we need to try, and we need to be doing those experiments faster.

That’s one of the areas where ClosedLoop can help: accelerating the overall rate of learning within the healthcare system. Even if we change it from learning from one to two percent of healthcare interactions to learning from 10 to 20% of those interactions, that’s a tenfold increase, and I think that would be huge.

[0:25:51.2] HC: Dave, this has been great. I love the approach to data science for healthcare that you’ve taken at ClosedLoop. I expect the insights you shared will be valuable to other AI companies. Where can people find out more about you online?

[0:26:02.8] DD: Sure, we are also on Twitter, and our website is the best place to get started, or feel free to contact me.

[0:26:11.5] HC: Perfect. Thanks for joining me today.

[0:26:13.6] DD: All right.

[0:26:14.3] HC: All right everyone, thanks for listening. I’m Heather Couture and I hope you join me again next time for Impact AI.


[0:26:25.4] HC: Thank you for listening to Impact AI. If you enjoyed this episode, please subscribe and share it with a friend and if you’d like to learn more about computer vision applications for people in planetary health, you can sign up for my newsletter at