What will it take to bring affordable, accessible, and timely healthcare to all? Curai, an AI-powered virtual clinic, is on a mission to do just that by leveraging AI to enhance the efficiency of licensed physicians through text-based virtual primary care. In today’s episode, I sit down with Anitha Kannan, head of AI and founding member of Curai, to talk about the transformative potential of virtual primary care and its role in scaling healthcare access.

In our conversation, Anitha delves into the technical aspects of using large language models for patient data processing, the challenges of training models with clinical data, and the strategies Curai employs to ensure high-quality care. We also discuss the innovative ways Curai integrates AI into healthcare, the significance of multidisciplinary teams, and Anitha’s vision for the future of virtual care. Tune in for an insightful conversation on scaling healthcare through virtual primary care and learn how Curai is making a real impact!


Key Points:
  • Some background on Anitha Kannan, and how she joined Curai.
  • An overview of Curai’s services as a virtual healthcare practice.
  • How they provide affordable and timely healthcare access through AI-enhanced systems.
  • Machine learning’s role in history taking, information gathering, and summarization.
  • How AI streamlines the workflow for physicians.
  • Their use of large language models to process patient data.
  • Training model challenges: ensuring clinical correctness and handling data omission issues.
  • Best practices they’ve developed for validating models and the importance of evaluation.
  • Fundamental differences between their work and how other LLMs, like ChatGPT, are trained.
  • Their strategy for balancing long-term research aspirations with short-term product development.
  • An overview of their multidisciplinary teams and how this contributes to their success.
  • Anitha’s hopes for the future of Curai; particularly through partnerships with healthcare organizations.

Quotes:

“Our mission is to provide the best health care to everyone.” — Anitha Kannan

“Today, [Curai runs] a text-based virtual primary care practice. We have our licensed physicians, who are experts in their fields. Then we supercharge them and bring about a lot of efficiencies by leveraging AI.” — Anitha Kannan

“It's very easy to build 80% of a good product with AI today, but I think to get it to 100%, [and] to get it to scale, to be useful in [the] real world — evaluation is the number one thing.” — Anitha Kannan

“At Curai, the AI team is composed of clinical experts, subject matter experts, researchers, and machine learning engineers. Every project, long-term or short-term, has a mix of these types of expertise in it. This allows us to work through the problem much more effectively.” — Anitha Kannan


Links:

Anitha Kannan on LinkedIn
Anitha Kannan on X
Curai Health


Resources for Computer Vision Teams:

LinkedIn – Connect with Heather.
Computer Vision Insights Newsletter – A biweekly newsletter to help bring the latest machine learning and computer vision research to applications in people and planetary health.
Computer Vision Strategy Session – Not sure how to advance your computer vision project? Get unstuck with a clear set of next steps. Schedule a 1 hour strategy session now to advance your project.
Foundation Model Assessment – Foundation models are popping up everywhere – do you need one for your proprietary image dataset? Get a clear perspective on whether you can benefit from a domain-specific foundation model.


Transcript:

[INTRODUCTION]

[0:00:03] HC: Welcome to Impact AI, brought to you by Pixel Scientia Labs. I’m your host, Heather Couture. On this podcast, I interview innovators and entrepreneurs about building a mission-driven machine learning-powered company. If you like what you hear, please subscribe to my newsletter to be notified about new episodes. Plus, follow the latest research in computer vision for people and planetary health. You can sign up at pixelscientia.com/newsletter.

[INTERVIEW]

[0:00:33] HC: Today, I’m joined by guest Anitha Kannan, head of AI and founding member of Curai to talk about virtual health care. Anitha, welcome to the show.

[0:00:41] AK: Thank you, Heather. It’s a pleasure to be here.

[0:00:43] HC: Anitha, could you share a bit about your background and how that led you to Curai?

[0:00:47] AK: Yeah. Maybe for people who don’t know about Curai much, we are a virtual care practice and our mission is to provide the best health care to everyone. I got the opportunity to join Curai as part of the founding team, and when you think about joining as part of a founding team, you always think about what is the mission of the company, where are we going with this, and whether this is something that is relatable to you, because that’s super important when you are starting afresh.

I have lived on three continents now, and in multiple countries across those continents. Time and again, we hit this roadblock when it comes to health care access. Either care is not affordable, or, even if you have a really good, publicly provided health care system like in Canada, or the UK, or parts of India, you realize that timely access to care is an even more important factor that comes into play. Often, people are either not served at the right time, or they just cannot be served at all because of the cost.

When these two come together in the living conditions of people, it’s just magical. I found that out caring for my father. When you look back and think about how we can solve these two issues, it always comes down to this: you’ve got to really scale your physicians. At the same time, you also have to rethink how health care systems should be set up, right? Not just put some bandages on it, but really rethink it. Part of the puzzle comes from bringing technology into health care systems and fundamentally rethinking how care should be done.

The founders of Curai, Neal and Xavier, painted a picture where we are investing in this over a very long-term horizon and really reimagining what health care could look like. So, that was the number one reason for me to jump in, right? The second part of it comes from my technical background. I’ve been in machine learning since before it was known to maybe 1,000 people. It was very early on in the field. I got my PhD at the University of Toronto.

Then I spent the next 10 years working at Microsoft Research and Facebook AI Research in the field of machine learning. Taking that and using it to really impact people in a very fundamental way just made sense for me. It’s the mission, then what we can do with AI as part of the health care system, and then my background. We also knew that we’d be bringing in a lot of people with the same mission orientation to the company to really solve this problem. That’s a long way of answering the question, but I think at the heart of every founding member and founding team, you end up having to have that drive, and maybe everyone does.

[0:03:38] HC: It’s definitely important to care about what you work on. What does Curai do?

[0:03:43] AK: Today, we run a text-based virtual primary care practice. We have our licensed physicians, who are experts in their fields. Then we supercharge them and bring about a lot of efficiencies by leveraging AI. For instance, if you come onto Curai now, you’ll realize that our AI basically does a lot of what’s called history taking, which is gathering a lot of information from the patient about the reason for the visit, summarizing that, showing it to the patient, and making sure they can add more information or edit things.

Then we escalate them up to the licensed physicians for their state. We operate all over the US, and as you know, doctors in the US are licensed by state and need to be licensed where the patient [Inaudible 0:04:28]. Then those doctors take over and continue either working up the patient on their issue, talking about their diagnosis and the treatment options available, and then finishing the visit. While the doctor is working with the patient, the AI also supports them by providing additional information they may need, and when the whole encounter is done, the charting and everything is done while the doctor finishes up with the patient. Then the AI further follows up with the patient.

You can see that across the entire flow, there’s a tight integration of AI for efficiency, but the decision-making, and what is right for the patient, always lies in the hands of our licensed physicians, who are well trained in text-based virtual practice. We work directly with consumers, but we also work with partners. There are many, but the one that I’m really excited about is Tufts Medicine, where we work closely with them. Patients who are at Tufts Medicine can first reach out to Curai, and then, if they need to be seen by a doctor in person, that’s made possible.

We also work with SCAN’s Healthcare in Action for people who don’t have a home, and about 10% of the population in the US is like that. We work with Amazon. Basically, we work with healthcare organizations and health plans to ensure that we can provide affordable access to care to a lot more people.

[0:06:01] HC: Can you tell me more about the machine learning part of this? What pieces of it do you use machine learning for, and how do you set up these models?

[0:06:09] AK: Yeah. As I was saying about how we work at Curai, we look at the entire blueprint of a primary care practice and look at what AI can be really good at, and where it can legally do so, for instance, gathering information from patients. We build these models and help bring a lot of efficiencies into the entire ecosystem. To do this, you need to work very closely with the physicians, with the clinical experts, and you’re building models that are clinically sound, well evaluated, and so on. We do that for conversational agents, where the AI can talk with the patients for information gathering. The AI also helps the doctor in getting more information from the medical history, charting, and so on.

[0:06:59] HC: Are these all based on large language models?

[0:07:02] AK: I mean, when you’re building text-based systems, the biggest breakthroughs in the last couple of years have been in using large language models, because they provide the world knowledge. You start with that as a base foundation, if you will, but when you’re building systems on top of them, you still have to be very thoughtful about how to do it correctly, correctly in the sense of clinical correctness, because even if you’re just gathering information from patients, you still want that to be correct and to be what is really needed. There’s a lot of evaluation and a lot of effort put into really building these systems effectively.

[0:07:41] HC: What types of data are you working with?

[0:07:43] AK: We are a text-based service. So, all our data are patient data, de-identified and anonymized, that we use in our systems, relating to medical history and the current health issue the patient may have. Then we also bring in the clinical guidelines and protocols that are available, and so on, to make sure that everything can still be well cited and actually leverages the clinical expertise that the community has built over a very, very long time.

[0:08:14] HC: What kinds of challenges do you encounter in working with and training models based off of patient data?

[0:08:19] AK: I think the key here is to think about how we are building these systems and whether we are evaluating them correctly, because machine learning in the last five years is very different from how people used to do machine learning before, where you could collect a fixed data set, make some assumptions about that data set, and then test on a data set that has the same properties, if you will, or assumptions.

Where we are moving to is models that have a lot more emergent capabilities and scale. The actual data distribution is a lot bigger and noisier than in the traditional way of looking at machine learning, but with that comes the power of being able to scale to really new applications. What we have done at Curai is spend a lot of time just thinking about how to evaluate these systems very effectively.

Just to give you a little bit of an example here, when you get access to GPT-4, there are a lot of papers that talk about the goodness of GPT-4. I think our very first paper talked about how we can improve systems that you build on top of GPT-4. We call this DERA, where you have two agents working closely in a conversational style: one is the researcher, and the other, the decider, is the one who’s actually doing the job.

The researcher looks at the decider’s output and gives feedback on its decisions, and the decider actually improves the output over time. This kind of looking at where things are failing, and making sure that you have the right guardrails and the right approaches to tackling the problem, is important. We look into the evaluation of these systems as well, and I can talk about that, because existing metrics fail at evaluating the clinical correctness of these methods.
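
To give a flavor of that two-agent pattern, here is a minimal Python sketch of a researcher/decider refinement loop. It is purely illustrative rather than Curai’s implementation, and the ask_llm helper is a hypothetical stand-in for a call to whatever chat model you have access to.

    # Illustrative sketch of a two-agent (researcher / decider) refinement loop.
    # ask_llm is a hypothetical stand-in for a call to any chat-completion model.

    def ask_llm(prompt: str) -> str:
        raise NotImplementedError("plug in your own model client here")

    def refine_summary(patient_dialogue: str, rounds: int = 3) -> str:
        # The decider produces an initial summary of the encounter.
        summary = ask_llm(f"Summarize this patient dialogue:\n{patient_dialogue}")
        for _ in range(rounds):
            # The researcher critiques the summary against the source dialogue.
            feedback = ask_llm(
                "You are reviewing a clinical summary for errors or omissions.\n"
                f"Dialogue:\n{patient_dialogue}\n\nSummary:\n{summary}\n"
                "List any problems, or reply DONE if there are none."
            )
            if feedback.strip() == "DONE":
                break
            # The decider revises the summary using the researcher's feedback.
            summary = ask_llm(
                "Revise this summary to address the feedback.\n"
                f"Summary:\n{summary}\n\nFeedback:\n{feedback}"
            )
        return summary

The point of the split is simply that critique and revision are separate roles, so failures surface as explicit feedback instead of silent edits.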

We spend a lot of time researching those as well: what the right evaluation metric is, how we design these metrics, and how we combine them with human evaluation. Our recent work, called MED-OMIT, looks at omission. We talk about hallucination a lot in these models, and we have written a couple of papers on that, but this particular one, which we published recently, looks at omission: when you do summarization, what kinds of things can these models omit, and how can we probe for that?

Then we can come back with an evaluation metric that says how many omissions happened and whether those omissions actually play a role in the downstream tasks, so that we can quantify the omissions. We publish a lot on these things, and that also helps get the scientific community thinking about these problems.
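
The omission idea can be pictured with a small probing harness like the sketch below. This is only an illustration of the general approach, not the MED-OMIT method itself, and the extract_facts and fact_is_covered helpers are assumed to be supplied elsewhere, for example by an LLM prompt or a clinician-derived rule set.

    # Sketch of an omission metric for summaries: extract key facts from the
    # source encounter, then check which ones the summary fails to mention.
    # extract_facts and fact_is_covered are assumed helpers (LLM- or rule-based).

    from typing import Callable, List

    def omission_rate(
        source: str,
        summary: str,
        extract_facts: Callable[[str], List[str]],
        fact_is_covered: Callable[[str, str], bool],
    ) -> float:
        facts = extract_facts(source)  # clinically relevant facts in the source
        if not facts:
            return 0.0
        omitted = [f for f in facts if not fact_is_covered(f, summary)]
        return len(omitted) / len(facts)  # fraction of source facts missing

A downstream-weighted variant could weight each omitted fact by how much it would change the differential diagnosis, which is closer in spirit to what is described above.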

[0:11:12] HC: Are there any best practices you’ve developed on how to validate models and the things that you’ve learned along the way?

[0:11:18] AK: Again, going back to this, evaluation is the number one thing. It’s very easy to build 80% of a good product with AI today, but to get it to 100%, to get it to scale, to be useful in the real world, one of the big challenges people encounter is evaluation. Over the years, we have built a lot of skill in it. For instance, with these large language models, or the models we build on top of these language models and, more broadly, foundation models, we do different types of evaluation over the course of time.

We start with simulation-based evaluations, then retrospectively look at our data, then prospectively try things out in our clinic with the providers to get their feedback. While we do these kinds of evaluations, we also design metrics for what we should look for in the generated outputs. Then we combine that with a team of expert clinicians we have, whose primary job is to evaluate these systems, and we try to be comprehensive across the automated metrics and the human evaluations, so that we can surface things that don’t work much more easily than just randomly sampling and evaluating, which is also something we do.
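
One simple way to picture how the automated metrics and human review fit together is a triage harness along these lines. It is a sketch with an assumed output format, caller-supplied metric functions, and an arbitrary threshold, not a description of Curai’s pipeline.

    # Illustrative triage of generated outputs: automated metrics flag likely
    # failures for clinician review, alongside a small random sample.
    import random
    from typing import Callable, Dict, List

    def select_for_human_review(
        outputs: List[Dict],                          # each item: {"id": ..., "text": ...}
        metrics: Dict[str, Callable[[str], float]],   # metric name -> scorer in [0, 1]
        threshold: float = 0.7,
        random_sample: int = 10,
    ) -> List[Dict]:
        flagged = []
        for item in outputs:
            scores = {name: fn(item["text"]) for name, fn in metrics.items()}
            if min(scores.values(), default=1.0) < threshold:  # any weak score triggers review
                flagged.append({**item, "scores": scores})
        flagged_ids = {f["id"] for f in flagged}
        # Also include a random sample of "passing" outputs so reviewers see those too.
        remainder = [o for o in outputs if o["id"] not in flagged_ids]
        flagged.extend(random.sample(remainder, min(random_sample, len(remainder))))
        return flagged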

[0:12:32] HC: When you’re training these models, are there fundamental differences between what you’re doing versus how a generic LLM, like ChatGPT, is trained?

[0:12:41] AK: There are definitely elements of differentiation that need to happen, because you are looking at clinical data. A pre-trained model, or even a fine-tuned one, is not really capturing the clinical process that goes into coming up with a diagnosis. For instance, if I said someone had a headache, whether you land on a migraine for that patient, or a tension headache, or a brain tumor depends on the different pathways a doctor could have taken to get all the information that is needed to reach that diagnosis.

That level of clinical reasoning is something the current LLMs do not have. In some sense, they have a longer context window, but they are still very myopic in terms of clinical reasoning. So, a lot of our own effort has been on thinking about that, and how we can impart some of these clinical processes into these models in such a way that we can glean a lot more information, even if it is just for information gathering from the patients.

[0:13:43] HC: How does your team plan and develop a new machine learning product or feature? In particular, what are some of the early things you do in the process?

[0:13:50] AK: Yeah. So, we are somewhat long-term thinkers. We start with what our true north star should look like for some of these things, where this AI system can go, and work backwards from that. This allows us to think about what type of hurdles we will face and how we can de-risk the key components. A lot of our research focuses on de-risking key components. It’s actually even more interesting with LLMs and the rapid pace at which these models are being developed, because you want to be spending time where the most ROI is, even when you’re de-risking.

We look at where the research is going, and we look at where we want to get to. Then we look at which problems would be solved by the next version of the foundation models coming in the next three to six months, and what is going to stay the same. Then we make a call on which ones to de-risk. As we start de-risking, it opens up how it affects the product as well.

A good example of that is that models are starting to look like they have a long context window, but if you look at the literature and the kinds of models being developed, they’re still very autoregressive, which is good in some ways for writing. But you also know that this is not going to help you really understand the long context. You can already see that Stanford has a couple of papers showing how the models can understand the beginning and the end, but not so much the middle of the context, if you will. We notice that all the time as well.

When you have really long medical histories, and when you have patients with a long record with us, it becomes very important to recognize that this is not going to be a solved problem at that scale any time soon. So, you really have to start thinking about what the right amount of information is, and how you understand the patient records in such a way that they are amenable to the amount of input these models are good at taking. Then you open up a research question in that space.
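
One rough way to cope with that, sketched below purely as an assumption rather than Curai’s actual method, is to rank chunks of the patient record by relevance to the current reason for visit and pass only the top-scoring chunks to the model. A real system would likely use embeddings or clinical structure rather than the crude word overlap shown here.

    # Sketch: keep a long patient record within the model's useful context by
    # ranking record chunks against the current reason for visit and keeping
    # only the most relevant ones. The "budget" here is a crude word count.

    from typing import List

    def select_relevant_history(
        record_chunks: List[str],     # e.g. one chunk per past encounter note
        reason_for_visit: str,
        max_words: int = 2000,
    ) -> List[str]:
        query_terms = set(reason_for_visit.lower().split())

        def overlap(chunk: str) -> int:
            return len(query_terms & set(chunk.lower().split()))

        ranked = sorted(record_chunks, key=overlap, reverse=True)
        selected, used = [], 0
        for chunk in ranked:
            words = len(chunk.split())
            if used + words > max_words:
                continue
            selected.append(chunk)
            used += words
        return selected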

Similarly for the evaluation metrics: which evaluation metrics will continue to hold for a very long time, and can we design those? To us, starting from that longer-term, one-year kind of horizon, working backwards from it, and then de-risking it so that it still aligns with the product, is one of the important steps we take at Curai. You can see that from our publications as well. We’re just a seven-year-old company in the startup space, and we have about 23 published papers and two more in the works, which are also well cited across labs that do health care research. That’s exactly how we approach this: have long-term thinking, and de-risk things as we go.

[0:16:32] HC: Yeah. That long-term thinking and the research that you’ve published can be a challenging thing to balance with the shorter-term focus of most startups. Do you have any insights to share that might be helpful to other companies that are trying to balance short-term product development versus longer-term research questions?

[0:16:51] AK: Yeah. I think a big thing I would tell every company in the AI space, and particularly in the AI healthcare space, is to make sure to have this balance, because it’s very easy to get into the mode of saying, let’s just do the short-term work, because that will help us move forward. I think what is going to sustain us, really solve the problems, and, in our case, get us to our mission is to have that long-term thinking connected with the short term.

What we have found to be very, very helpful is a very similar strategy, right? Even in the product, if you’re building something, always talk about what the north star for that particular feature or product would look like. You will always notice that there’s a short-term thing you could do, and a lot of people just do the short-term thing and leave it, but then you have this north star that you’re probably not able to get to. Our approach has been to draw the north star for what the product feature could look like, work backwards from that, and then say, “Okay, what can you do in the next three or four months, and what can you do in the next eight months?”

We hire researchers onto the team who can spend their time on the eight-month work, and so on. Often, what happens is that the incentives don’t get aligned, right? Having the team actually want to do that eight-month work, and be appreciated for it in some sense, is what I would say is an important feature, because not everything has to have short-term product impact. As soon as you start to uncover these bigger things, they will start to pay off eventually.

That’s what has helped us to move rapidly from our own approaches to features in the product, to a complete revamp with large language models in 2023. I believe we got early access in 2022. I think you need that balance, if you will, and should never forsake that balance.

[0:18:49] HC: Is there any advice you could offer to other leaders of AI-powered startups?

[0:18:53] AK: I do have advice I can share, things that have worked for us that others may find very, very helpful. One of them is having very multidisciplinary teams. The traditional way machine learning was done, where researchers could be on a separate team and work on a research problem on their own, no longer works very well, because the field is moving so fast. You’re going to get the biggest benefit of the work when you have researchers who are willing to sit on a problem for a longer time, and subject matter experts who can help by giving insights, and even sometimes help shape that research agenda, a lot of the time actually.

You also want people who can just hack things up and give you the first feel of something, typically the strong engineers you have on the team. If you can combine all of them and have teams that are very cohesive, able to understand each other’s strengths, play to those strengths, and learn from the others, that team usually succeeds. At Curai, the AI team is really composed of clinical experts, subject matter experts, researchers, and machine learning engineers.

Every project, long-term or short-term, has a mix of these types of expertise in it. This allows us to work through the problem much more effectively and get to go/no-go decisions very, very quickly on some of our efforts.

[0:20:22] HC: Finally, where do you see the impact of Curai in three to five years?

[0:20:26] AK: We’re starting to see some of that already, right now, with our partnerships across healthcare systems, and also in delivering personalized care for people. I’ll just go back to the introduction, which is really about the timeliness of care and the affordability of care. Those are the most important things, combined with the best quality of care. I think we have doctors who are really, really good. We can supercharge these doctors and really open up very effective access to care for people. That’s going to be the number one mission.

I think getting closer and closer to our mission is going to be the thing for us. I’m super optimistic that we are on the right track for it. I also encourage more people who are looking for what they can do to think about healthcare as another big place where they can make breakthroughs, because when I talk to people, often many of them think that it’s a very slow-moving industry where you can’t really make a difference. But I really think that it is possible, and it is important, that we make a big difference in healthcare to provide the care that everyone needs. I’m very, very optimistic that we’re heading in the right direction, and we should be able to serve more and more people.

[0:21:42] HC: This has been great. Anitha, I appreciate your insights today. I think this will be valuable to many listeners. Where can people find out more about you online?

[0:21:49] AK: I’m on LinkedIn. If you look for Anitha Kannan at Curai, you should be able to find me. I’m on Twitter too, but I hardly write on Twitter. LinkedIn is the right place, I think.

[0:21:59] HC: Perfect. Thanks for joining me today.

[0:22:01] AK: Thank you.

[0:22:01] HC: All right, everyone. Thanks for listening. I’m Heather Couture, and I hope you join me again next time for Impact AI.

[OUTRO]

[0:22:11] HC: Thank you for listening to Impact AI. If you enjoyed this episode, please subscribe and share with a friend. If you’d like to learn more about computer vision applications for people and planetary health, you can sign up for my newsletter at pixelscientia.com/newsletter.

[END]