To succeed at an AI startup, you have to be able to show your work and its value. During this episode, I am joined by Brigham Hyde, Co-Founder and CEO of Atropos Health, to talk about his company’s platform for generating real-world evidence for healthcare. He is an entrepreneur, operator, and investor who is deeply immersed in the potential of data and AI. Join us as he shares his journey to creating Atropos Health, why he believes AI is important for healthcare, and the potential it holds to bridge the evidence gap. We discuss how the lack of diversity in healthcare data impacts patient outcomes and explore some of the methods Atropos uses to get the most out of machine learning. We also cover the data-gathering process, how each setup is validated and adapted, and how he measures the impact of his technology. In closing, he shares advice for other leaders of AI-powered startups and offers his vision for the future impact of Atropos.
Key Points:
- Welcoming Brigham Hyde, co-founder and CEO of Atropos Health.
- His journey to creating Atropos Health after working in other medical AI arenas.
- Why AI is important for healthcare: the evidence gap.
- Atropos’s perspective on the role of real-world evidence.
- How the lack of diversity in healthcare data sets impacts patient outcomes.
- Methods Atropos uses to leverage machine learning and remove bias so that patient populations are fairly represented.
- The data-gathering process.
- How the setup is validated and adapted according to need.
- Measuring the impact of the technology.
- Advice for other leaders of AI-powered startups.
- Where Brigham foresees the impact of Atropos in three to five years.
Quotes:
“At Atropos, we focus on the automation and generation of high-quality real-world evidence to support clinical decision-making with the dream of creating personalized evidence for everyone.” — Brigham Hyde
“We see the role of real-world evidence and observational research as a great way to supplement that gap.” — Brigham Hyde
“It's our ability to create that evidence, transparently show you the populations that are being used and the bias that is involved, and the techniques to remove that bias that are the key.” — Brigham Hyde
“You've got to be able to show how what you're doing works, that it's not biased, and that it's applicable to the health system you're working with, and it's got to be done in extremely high quality.” — Brigham Hyde
Links:
Brigham Hyde on LinkedIn
Brigham Hyde on X
Atropos Health
Atropos Health on LinkedIn
Atropos Health on X
LinkedIn – Connect with Heather.
Computer Vision Insights Newsletter – A biweekly newsletter to help bring the latest machine learning and computer vision research to applications in people and planetary health.
Computer Vision Strategy Session – Not sure how to advance your computer vision project? Get unstuck with a clear set of next steps. Schedule a 1 hour strategy session now to advance your project.
[INTRODUCTION]
[0:00:03] HC: Welcome to Impact AI, brought to you by Pixel Scientia Labs. I’m your host, Heather Couture. On this podcast, I interview innovators and entrepreneurs about building a mission-driven machine learning-powered company. If you like what you hear, please subscribe to my newsletter to be notified about new episodes. Plus, follow the latest research in computer vision for people and planetary health. You can sign up at pixelscientia.com/newsletter.
[INTERVIEW]
[0:00:34] HC: Today, I’m joined by guest, Brigham Hyde, Co-Founder and CEO of Atropos Health, to talk about real-world evidence for healthcare. Brigham, welcome to the show.
[0:00:43] BH: Thanks for having me, Heather.
[0:00:44] HC: Brigham, could you share a bit about your background and how that led you to create Atropos Health?
[0:00:48] BH: Yeah, sure. I’m a Co-Founder and CEO at Atropos Health. I helped spin the business out of Stanford University about four years ago with my co-founders, Nigam Shah, who’s the Chief Data Scientist at Stanford Health Care, and Saurabh Gombar, our Chief Medical Officer. I’ve been building businesses in health tech and health data for the last 15 years, at companies like EVERSANA, ConcertAI, Decision Resources Group, and others. I’m a PhD by training, in clinical pharmacology, and started on the bench as a clinical researcher, but I’ve been chasing the potential of data and AI ever since as an entrepreneur, operator, and investor.
[0:01:26] HC: What does Atropos do and why is this important for health care?
[0:01:29] BH: Yeah. At Atropos, we focus on the automation and generation of high-quality real-world evidence to support clinical decision-making with the dream of creating personalized evidence for everyone. The reason that we chase that dream is that one of the big problems in healthcare, we feel, is called the evidence gap. There’s not enough evidence for care. The stat that we use in the US is that only about 14% of everyday medical decisions have any high-quality evidence behind them. That’s because we simply don’t have enough clinical trials or enough studies to support those everyday decisions.
While I think we all want to run more clinical trials, and we certainly applaud that, the reality is we’re not going to run a trial on every n-of-1 patient. Even the trials we do run often exclude most patients: those with comorbidities, underrepresented groups, women, children, the elderly, and certain demographic populations. We see the role of real-world evidence and observational research as a great way to supplement that gap, and our big contribution to that is shortening the time it takes to do those studies and making it possible for even a clinician at the point of care to order a study with just a few sentences of text and get a publication-grade answer back in minutes. That speed and automation increase allows us to try and fill that evidence gap.
[0:02:53] HC: How does the lack of diversity in healthcare data sets impact patient outcomes, even aside from AI?
[0:02:59] BH: Well, in data sets, I would start by saying there’s a lack of diversity in clinical trials, and there’s a bunch of reasons for that, some worse than others. But really, we design trials with a lot of inclusion and exclusion criteria, and those systematically exclude certain populations; women, children, the elderly, and other demographic populations with less access are systematically underrepresented. The evidence base we do have, which is already not enough, really rests on a small set of populations.
That’s one of the great benefits of real-world data, or healthcare data: we actually can see data on all these different populations, and then what we do is run studies on that data to produce evidence that represents those groups.
[0:03:48] HC: Going back, again, to when you don’t have that diversity in the clinical trials, what happens when you build AI models based on these biased data sets? What are the consequences there?
[0:03:57] BH: Well, I think that’s the point: you need to run them on the other populations. I think the key to that throughout is just transparency. I mean, where’s the acknowledgement of bias in the current evidence base? It’s really not there. Take the trials we do run; the colloquial joke is that most cancer trials are run on white male triathletes at Memorial Sloan Kettering. That’s not everybody. When we approve drugs on that basis at the FDA, and I do think for regulatory approvals that’s probably right, what are we doing in the real world to show that those trials work on those other populations? It’s our ability to create that evidence, transparently show you the populations that are being used and the bias that is involved, and the techniques to remove that bias that are the key.
[0:04:42] HC: How do you use machine learning in solving this?
[0:04:44] BH: Our core technology is not magic AI. It’s hardcore automation of the production of observational research and causal inference studies, which is, I guess, on some branch of the tree of statistics, speaking of AI, but it’s really the core foundational statistical programming involved in most observational research or clinical trials. Really, you’re holding steady all variables about the patient population and controlling for one, usually a treatment difference, like drug A versus drug B.
We do leverage machine learning a bit in our statistical programming. We do something called high-dimensional propensity score matching. What this is doing is attempting to remove bias and confounders that can exist in real-world populations. Simply put, if you’re comparing drug A to drug B, you want to make sure the population of folks on drug A aren’t all young and healthy while the people on the other drug are all old and sick. You want to make sure it’s a balanced, apples-to-apples comparison. That’s just one of the ways we remove bias and leverage machine learning, using lasso feature selection to decide how we’re going to ensure that those patient populations are similar.
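To make the general technique concrete, here is a minimal sketch of propensity score matching with lasso-based feature selection in Python with pandas and scikit-learn. The DataFrame, column names, and 1:1 nearest-neighbor matching strategy are illustrative assumptions, not a description of Atropos Health’s actual pipeline.

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def match_cohorts(df: pd.DataFrame, treatment_col: str, covariate_cols: list) -> pd.DataFrame:
    """Return a 1:1 matched cohort balancing treated (drug A) vs. control (drug B) patients."""
    X = df[covariate_cols].to_numpy()
    t = df[treatment_col].to_numpy()  # 1 = drug A, 0 = drug B

    # An L1 (lasso) penalty performs feature selection over high-dimensional covariates
    # while estimating each patient's propensity to receive drug A.
    model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    model.fit(X, t)
    propensity = model.predict_proba(X)[:, 1]  # P(treated | covariates)

    treated = df[t == 1].copy()
    control = df[t == 0].copy()

    # Match each treated patient to the control with the closest propensity score
    # (matching with replacement, so a control may be reused).
    nn = NearestNeighbors(n_neighbors=1).fit(propensity[t == 0].reshape(-1, 1))
    _, idx = nn.kneighbors(propensity[t == 1].reshape(-1, 1))
    matched_controls = control.iloc[idx.ravel()]

    return pd.concat([treated, matched_controls])

In practice, the matched cohort produced by a function like this would then feed into an outcome or survival model to estimate the treatment effect on a balanced, apples-to-apples population.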
[0:05:56] HC: How do you gather the healthcare data in order to do this?
[0:06:00] BH: Our technology is cloud-based and federated. What that means is we install wherever the data is. We work with the health system, install our technology in their cloud, their secure environment, on the de-identified patient data, and run those studies locally. No data ever leaves; it stays where it is. We just produce the studies to answer the questions out of it.
[0:06:22] HC: How do you go about validating that this setup works and that the models you create actually improve things?
[0:06:28] BH: Well, we follow through on what actions were taken with the results. We interview the doctors. We also track what policy and care decisions were made. We published a study maybe a year ago talking about the ROI this creates for healthcare systems, not only in terms of advancing research and improving patient care, but also improving outcomes and the cost of care.
[0:06:47] HC: There’s a lot going on related to AI and regulation in different countries. How might new AI regulations influence healthcare applications?
[0:06:56] BH: I think there’s some version of it coming. I’m involved in a group called CHAI, which is the Coalition for Health AI, and I sit on the generative AI sub-committee there, and my co-founder, Nigam, is on the board. That’s a non-regulatory group, but they’re trying to set forth some of the standards that can be applied. My personal belief is that we probably have a pretty good foundation in the FDA 21st Century Cures Act that could be extended for some of the use cases here. I’m also careful to say that every health system should apply their own local discipline to this: how they decide what AI is safe to deploy and how they test that it’s working.
I think, ultimately, though, post-election, regardless of outcome, there will be some level of regulation in the space, which in some ways, I think, is a good thing. I mean, if all of us building in the space had something to tack to, and it was a reasonable standard, I think people would move towards it. Ultimately, what that’s going to create is a flight to quality and transparency. You’ve got to be able to show how what you’re doing works, that it’s not biased, and that it’s applicable to the health system you’re working with, and it’s got to be done in extremely high quality. That’s something that we’ve always held as a tenet as a company.
[0:08:09] HC: You mentioned impact a little bit already, but how do you measure the impact of your technology?
[0:08:13] BH: Yeah, I think it starts with users. Doctors are already overwhelmed. I’m married to a physician, so I know what it looks like to have a spouse doing her notes every night at the dinner table. We focus first on coming up with a user experience that is built for them, one that doesn’t add burden but instead removes it and gives them a great, understandable experience.
I think, secondly, you want to look at what this new evidence leads to. Do we make a better decision for a patient? Does that better decision become something they should factor into policy? Do we change policy around care? Ultimately, what you’re looking at is whether we’re actually improving outcomes over time and optimizing costs in the system.
[0:08:59] HC: Is there any advice you could offer to other leaders of AI-powered startups?
[0:09:02] BH: No. I mean, I think we’re just headed into this wave of quality and transparency, and people need to focus on that. I think the good news there is that when these new tools come to the fore in the popular perception, ChatGPT and everything else, the initial wave is about figuring out where to leverage them, and then the very second wave is how to do that really effectively and at high quality. We’ve always had an approach of publishing everything that we do. We have over a decade of published, peer-reviewed research behind what we do, and many of our everyday studies go on to be published. That belief that you have to show your work and be able to demonstrate value, I think that’s the way to go. I think those that do that are going to be successful.
[0:09:47] HC: Finally, where do you see the impact of Atropos in three to five years?
[0:09:51] BH: I would like for us to keep running this never-ending journey. We want everybody in the world, every clinician, every patient-clinician interaction, to be informed by personalized evidence. That’s a big goal, but we’re patient. For me, I look at the impact we’re having in terms of the users and in terms of decision-making, and we’d like to just continue down that path, including expanding this globally, because in many ways, the evidence gap I described is even larger overseas. There haven’t classically been as many studies on those populations. I just think evidence is truth. We should seek the truth, and everybody should have access to that.
[0:10:30] HC: This has been great, Brigham. I appreciate your insights today. Where can people find out more about you online?
[0:10:35] BH: Yeah, atroposhealth.com. We’re also very active on LinkedIn and Twitter. We publish our research there. We highlight users and launch new tools all the time, including a great application we just launched called ChatRWD. Come check us out.
[0:10:49] HC: Perfect. Thanks for joining me today.
[0:10:51] BH: Thank you.
[0:10:52] HC: All right, everyone. Thanks for listening. I’m Heather Couture, and I hope you join me again next time for Impact AI.
[END OF INTERVIEW]
[0:11:01] HC: Thank you for listening to Impact AI. If you enjoyed this episode, please subscribe and share with a friend. If you’d like to learn more about computer vision applications for people and planetary health, you can sign up for my newsletter at pixelscientia.com/newsletter.
[END]