What if AI could improve the outcomes of clinical trials by making them more efficient and reducing the number of patients receiving placebos? Well, today’s guest, Charles Fisher, is here to tell us all about how his company, Unlearn AI, is creating digital twins to do just that! In this conversation, you’ll hear all about Charles’ academic background, what made him decide to create Unlearn AI, what the company does, and how they work within clinical trials. We delve into the problems they focus on and the data they collect before Charles tells us about their zero-trust solution. We even discuss Charles’ opinions on how domain knowledge should be used in machine learning. Finally, our guest shares advice for leaders of AI-powered startups. To hear all this and even find out what to expect from Unlearn in the near future, tune in now!


Key Points:
  • A rundown of Charles Fisher’s background and what led him to create Unlearn AI.
  • What Unlearn does, what digital twins are, and why they’re important.
  • How clinical trials work and how they are used within Unlearn.
  • The kinds of data they use and how they tackle these clinical trials using machine learning.
  • What a zero-trust solution is and how Unlearn guarantees that their results are accurate.
  • Charles shares his thoughts on the role of domain expertise in machine learning.
  • His advice for any leaders of AI-powered startups.
  • What we can expect from Unlearn in the next three to five years.

Quotes:

“[Unlearn is] typically working on running clinical trials where we might be able to reduce the number of patients who get the placebo by somewhere like 50%.” — Charles Fisher

“[Unlearn] can prove that these studies produce the right answer, even though they leverage these AI algorithms.” — Charles Fisher

“It's very difficult to find examples where you can actually have a zero-trust application of AI. I actually don't know of another one besides [Unlearn’s].” — Charles Fisher


Links:

Charles Fisher on LinkedIn
Charles Fisher on X
Unlearn AI


Resources for Computer Vision Teams:

LinkedIn – Connect with Heather.
Computer Vision Insights Newsletter – A biweekly newsletter to help bring the latest machine learning and computer vision research to applications in people and planetary health.
Computer Vision Strategy Session – Not sure how to advance your computer vision project? Get unstuck with a clear set of next steps. Schedule a 1-hour strategy session now to advance your project.
Foundation Model Assessment – Foundation models are popping up everywhere – do you need one for your proprietary image dataset? Get a clear perspective on whether you can benefit from a domain-specific foundation model.


Transcript:

[INTRODUCTION]

[0:00:03] HC: Welcome to Impact AI, brought to you by Pixel Scientia Labs. I’m your host, Heather Couture. On this podcast, I interview innovators and entrepreneurs about building a mission-driven, machine-learning-powered company. If you like what you hear, please subscribe to my newsletter to be notified about new episodes. Plus, follow the latest research in computer vision for people and planetary health. You can sign up at pixelscientia.com/newsletter.

[EPISODE]

[0:00:33] HC: Today, I’m joined by guest Charles Fisher, Founder and CEO of Unlearn AI, to talk about clinical trials. Charles, welcome to the show.

[0:00:42] CF: Yes, thanks for having me.

[0:00:42] HC: Charles, could you share a bit about your background and how that led you to create Unlearn?

[0:00:45] CF: Sure. I am a biophysicist by training. I did my PhD in biophysics at Harvard. I had every intention of becoming a professor. I did two postdocs in physics after grad school, which typically paints a pretty clear picture that I wanted to stay in academia. But eventually, I found my way into industry, working as a machine learning researcher at Pfizer at a time when, I would say, they were dipping their toe into thinking about machine learning for drug development.

I ended up moving out to San Francisco, working at a company called Leap Motion as a machine learning engineer, and that’s where I met my co-founders, Jon and Aaron. The three of us are all PhDs in physics and math, and AI researchers, and we decided to start a company in 2017. Our original thesis was that generative modeling was going to be the future of AI. But rather than focusing on generative models for text or images, a lot of the stuff that has consumer applications, we were thinking about how we could use generative modeling to solve problems in science.

So, instead of trying to build mechanistic models of things, could we learn them? What’s the most interesting kind of system that we might want to simulate? We thought that was the human body. That was the origin of the company, and it was actually twofold. It wasn’t only about how we could apply our training in ML research to problems in medicine. I think the reverse was also really important at the beginning: the types of machine learning problems that we were interested in were really unexplored in this vertical. So, we thought that by focusing on machine learning for medicine, we would actually discover interesting new machine learning algorithms.

[0:02:37] HC: What does Unlearn do and why is it important?

[0:02:39] CF: Well, at a very, very high level, we are interested in solving AI for medicine. That’s obviously important. I think the challenge is that, in addition to being obviously important, it’s obviously really hard, for a number of different reasons.

One reason is that the complexity of the human body is, I think, overwhelming. A typical human body has about 37 trillion cells in it, right? Each of those 37 trillion cells is expressing something like 20,000 different proteins. If you try to think about how you would create a reductionist, piece-by-piece model of the entire human body, describing every protein being expressed in every single one of those 37 trillion cells, it just becomes obviously impossible.

So, if you can’t build mechanistic models of human health, you can think about it in the opposite direction: can we build top-down models of human health instead? That’s really what we focus on. How can we learn generative models of human health that allow us to forecast what will happen to a person’s health in the future, in a variety of different scenarios? We would call that a person’s digital twin.

Let’s imagine I, Charles Fisher, want to know what’s going to happen to my cholesterol, or my risk of developing certain types of cancer, or my cognitive function, or all of these different things in the future, depending on the types of interventions or things that I do today. Those are the types of questions that, at a really high level, we want to be able to answer. The place where we are applying these technologies today is within clinical trials.

Maybe I’ll pause there and see if you have any other questions about that high-level mission before talking about the actual today use case.

[0:04:29] HC: Let’s talk a bit about clinical trials then. How do they work for those in the audience who aren’t familiar?

[0:04:35] CF: In clinical trials, what we really want to know is how effective a new drug is in comparison to either nothing, meaning what would happen if you don’t get that drug, or some currently existing treatment. Is it better or worse, safer or less safe? It’s always a comparative question. In order to answer it, ideally, you would be able to run two different experiments simultaneously.

Let’s imagine I want to know how some brand-new experimental drug affects me, say, how it affects my cognitive function. What I would want to do is take that drug and observe what happens to my cognitive function. Does it get better? Does it get worse? I would then want to get in a time machine and go back to the point in time at which I took that drug. Instead of taking it, I’ll take a placebo, a dummy treatment, and I will observe what happens. The difference between those two, what we call potential outcomes, tells you how effective the treatment is.

The problem, something people call the fundamental problem of causal inference, is that you can only ever observe one of those outcomes. I can either give a person the drug, or I can give them the placebo. I cannot do both of those things simultaneously. So, what we effectively do within clinical trials is use a patient’s digital twin to forecast what would happen to them if they got a placebo. That allows us to work with pharmaceutical companies and biotechnology companies who are developing new drugs to run clinical trials where you don’t need to give as many people a placebo. If you could do this perfectly, if you could have models that were perfectly accurate and made no mistakes, you could effectively run clinical trials where nobody gets a placebo.
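For readers who want the formalism: this is the textbook potential-outcomes setup from causal inference. The notation below is the standard convention, not something defined in the episode.

```latex
% Potential outcomes for patient i:
%   Y_i(1): outcome if patient i receives the experimental drug
%   Y_i(0): outcome if patient i receives the placebo
\text{individual treatment effect:}\quad \tau_i = Y_i(1) - Y_i(0)
% Only one of the two can ever be observed, so randomized trials
% estimate the average effect across groups instead:
\hat{\tau} = \frac{1}{n_1}\sum_{i:\,T_i=1} Y_i \;-\; \frac{1}{n_0}\sum_{i:\,T_i=0} Y_i
% A digital twin supplies a forecast of Y_i(0), the placebo outcome
% a treated patient never gets to experience.
```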

Today, that’s not typically the case. As everyone knows, machine learning models always make various kinds of mistakes. So, we are typically working on running clinical trials where we might be able to reduce the number of patients who get the placebo by somewhere like, let’s say, 50%. That is a huge impact, both because clinical trials cost hundreds of millions of dollars and because even individual studies often take three to five years. By reducing the number of people that you need, you can dramatically decrease the time and cost it takes to complete a clinical trial.

In addition, we do it in a way where you’re only reducing the number of people who need to get a placebo. So, if you’re a patient and you want to volunteer for a trial, that actually increases the likelihood that you’re going to be assigned to get the experimental treatment, which is usually the reason people participate in trials in the first place.

[0:07:11] HC: How do you tackle this with machine learning? What type of data do you use? How do you set up the problem?

[0:07:17] CF: One of the challenges in working within the medical space, really broadly, is that the data that anyone wishes they had, we don’t have. It’s true for every question in medicine. You think, “I really wish I had this dataset,” and it just never exists.

So, the first thing that we have to do is build a really high-quality dataset on which we’re able to train these models, which we call digital twin generators. To train these digital twin generators, we want to have rich, longitudinal data about individual patients. What I mean by that is, think about what happens when a patient participates in a clinical trial. They will typically enroll in the study, and then they’ll have to go into, usually, an academic medical center, let’s say every month for a year or a year and a half. How often you have to go into your clinical site, and for how long, depends on the disease.

But let’s imagine we’re looking at something like a neurological indication where you’re going to go in every month or two months for a year or a year and a half. That’s quite a lot of data about that patient compared to what you’d get just from your electronic medical record. Each time you go into this academic medical center, you’re going to get a barrage of assessments. For each individual disease, say, Alzheimer’s, you might get a number of different tests of your cognitive function. You might get an MRI, you might get a PET scan, you might get blood tests. So, every month or two months, or whatever it is, all of these data get collected.

Our goal, when we talk about creating a patient’s digital twin, is to enable us to answer any question that a clinician might have about that particular patient. We need to be able to generate data that have that same type of richness. That means that our goal is basically multivariate time series modeling. How can we learn these multivariate time series models from these types of data?
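As a rough illustration of that framing (a sketch with invented assessment names and values, not Unlearn’s actual schema), longitudinal trial data can be arranged as one multivariate time series per patient, with one row per clinic visit:

```python
import pandas as pd

# Hypothetical long-format clinical trial records: one row per patient visit.
# Column names and numbers are invented for illustration.
records = pd.DataFrame({
    "patient_id": ["A01", "A01", "A01", "B07", "B07"],
    "visit_month": [0, 2, 4, 0, 2],
    "cognitive_score": [27.0, 26.5, 25.0, 29.0, 28.5],
    "hippocampal_volume_ml": [3.1, 3.0, 2.9, 3.4, 3.4],
    "csf_biomarker": [0.82, 0.85, 0.90, 0.60, 0.61],
})

# Pivot to one multivariate series per patient: shape (n_visits, n_assessments).
# This is the kind of object a digital twin generator would learn to model
# and extrapolate forward in time.
series = {
    pid: grp.sort_values("visit_month")
            .set_index("visit_month")
            .drop(columns="patient_id")
    for pid, grp in records.groupby("patient_id")
}
print(series["A01"])
```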

The challenge on the data side is that individual clinical trials are small. We might say we want to build a big dataset of clinical trial data, but an individual clinical trial might typically have only a couple hundred patients. We want data from tens of thousands of patients in a given disease to be able to really train a good model. So, we have to aggregate data from a huge number of different clinical trials to be able to do that.

I think everybody who works on machine learning in medicine comes out to somewhere around 80% of their actual effort being spent on data cleaning, and probably only about 20% on the machine learning side. I think that’s likely true for us. Right now, we’ve built up a dataset of about one million patients, a little bit more than that, all by aggregating a bunch of much smaller clinical trials, so that we can get the kind of rich data that we want to be able to learn from.

[0:10:25] HC: All of that enables you to build the digital twins for a specific clinical trial. Is that a separate model that you then apply to the digital twins you’ve already learned?

[0:10:35] CF: Yes. So, we call these digital twin generators. They’re trained on all of this historical data. We have this dataset of one million patients, all taken from previously completed clinical trials. Then, we’re going to train a generative model on those data. Right now, we train them one disease at a time. So, we would select all the patients with Alzheimer’s disease and train a model for Alzheimer’s disease patients. That model would be trained before we’re ever working in a clinical trial, because we’re just training them and updating them all the time.

Then, when we actually go to apply that model in a clinical trial, the model weights have to be pre-specified before the trial starts, because you need to make sure that you’re not training the model on data from the study, for various regulatory reasons. The weights are pre-specified, and then you’re basically just doing inference, or generation, during the actual study. If I participate in a clinical trial, I would enroll in the study, and at my very first visit, they would collect a whole bunch of data about me. That would then be input into one of these models, which would condition on it and forecast what my future would look like.
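A minimal sketch of that inference step, with entirely invented weights and a toy linear-Gaussian progression model standing in for the real digital twin generator (the episode doesn’t specify the architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pre-trained digital twin generator: weights were frozen
# and pre-specified *before* the trial began, for regulatory reasons.
# (A toy model; the real system is a learned generative model.)
TRANSITION = np.array([[0.98, 0.01],
                       [0.00, 0.99]])
NOISE_SCALE = np.array([0.3, 0.05])

def forecast_control_outcomes(baseline, n_visits, n_samples=100):
    """Sample placebo trajectories conditioned on a patient's first visit.

    baseline : (n_features,) measurements from the enrollment visit
    returns  : (n_samples, n_visits, n_features) simulated futures
    """
    samples = np.empty((n_samples, n_visits, baseline.size))
    for s in range(n_samples):
        x = baseline.copy()
        for t in range(n_visits):
            # one step of simulated disease progression under placebo
            x = TRANSITION @ x + rng.normal(0.0, NOISE_SCALE)
            samples[s, t] = x
    return samples

# Hypothetical first-visit data: cognitive score, hippocampal volume.
baseline = np.array([26.0, 3.2])
twins = forecast_control_outcomes(baseline, n_visits=12)
print("expected month-12 outcome under placebo:", twins.mean(axis=0)[-1])
```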

[0:11:46] HC: So, the future is whether this drug is a good candidate for them, or whether they’re likely to do well with it?

[0:11:52] CF: It’s actually what will happen to them if they don’t get that drug. Again, if you think about a clinical trial, you have these two different components. There’s the question of what happens if the person gets this experimental drug. But by definition, an experimental drug in a phase two study probably hasn’t really been tested in people before, except for safety in phase one. There are no data on how patients respond to that drug. If you wanted to predict how patients will respond to hypothetical drugs, that is so far beyond what we’re capable of doing in machine learning today. I think that’s a great question, and I hope that 20, 30 years from now, we’re able to solve it.

But today, to make this look more like a traditional machine learning problem, we always think about simulating what happens to a patient on a currently available treatment, because that’s what you have in the control group of clinical trials: patients receiving currently available treatments or placebos.

In that case, again, we can get data from tens of thousands of people who have taken that drug, and that’s what we’re learning from. So, it looks more like a normal machine learning problem. We say, “Hey, I’ve got data from 1,000 patients who have received this drug. I train a model. What will happen to the 1,001st patient?” That’s generally the way we think about framing the problem.

[0:13:14] HC: That makes sense. So, I was reading your blog on your website and came across a number of interesting things. One of them is the concept of a zero-trust solution. I was wondering if you could explain what this is, and maybe some examples of zero-trust and non-zero-trust applications that you have at Unlearn?

[0:13:32] CF: Sure. Maybe I’ll talk a little bit about what I mean by zero-trust. Let’s say that you want to use some AI system to solve a particular problem. My definition of a zero-trust system would be that you have some procedure that allows you to solve that problem where you can guarantee that you get the right answer, even if your AI system malfunctions, which is a very, very interesting problem.

Think about some examples of areas where you don’t have zero-trust systems, like self-driving cars, right? If you’re in a self-driving car, and you do not have your hands on the wheel, or you’re not paying attention, and the system malfunctions, you’re going to crash. So far, society’s solution has been to have a person behind the wheel, so that if the car malfunctions, you have somebody there to take over and prevent the crash.

I think you’ve seen a lot of things in medicine that are similar to that, where you have a human in the loop in addition to the algorithm. A typical use case for machine learning in medicine might be to create what we would call a clinical decision support system. Basically, this is a system that provides information to a doctor, but it’s ultimately the doctor who’s making the decision. These are areas where I think you have a degree of limited trust: because you have a human in the loop, even if your AI system isn’t really doing all that great, the idea is that the human is going to be able to solve the problem.

But what we have in clinical trials is really unique, where we can actually mathematically prove that the studies we work in, which incorporate data from a patient’s digital twin, have the same properties as clinical trials that don’t incorporate patients’ digital twins. To put it colloquially, we can prove that these studies produce the right answer, even though they leverage these AI algorithms.

So, if you have a model that is really, really good at creating patients’ digital twins, it can forecast their future clinical outcomes with extremely high accuracy, and you end up being able to run a study where you can reduce the number of patients that you need in the placebo group by a lot. The better the model gets, the smaller that placebo group can be. But you can take that in the reverse direction: the worse the model gets, the larger the placebo group needs to be. If you keep going backwards, you end up asking, “Well, what’s the worst-case scenario?” The worst-case scenario is that you just run a regular clinical trial. You end up in a situation where you have 50% of your patients on your experimental treatment and 50% of your patients on a placebo. You create digital twins of all these people, but it’s just noise, so it’s not useful.

But you still end up in the same exact place: the worst-case scenario is just the regular clinical trial you were going to run anyway. This actually has nothing to do with the underlying machine learning algorithms that we deploy at all. It has to do with the way that you incorporate the outputs from these models into the clinical trial. So, the idea is that we can develop robust clinical trial designs where you can leverage AI, but you can guarantee that the clinical trials still produce the right results, even if there was some sort of malfunction, or the AI wasn’t working as we expected it to.
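To make the worst-case argument concrete, here is a small simulation of the general idea of adjusting a randomized comparison with a prognostic model’s forecast. This is a sketch of the statistical principle only, not Unlearn’s actual qualified procedure; all numbers are invented:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n, true_effect = 400, -1.5

# Simulated trial: outcome depends on a latent prognosis, the treatment
# effect, and noise. Randomized treatment assignment is the key ingredient.
prognosis = rng.normal(0.0, 2.0, n)
treated = rng.integers(0, 2, n)
outcome = prognosis + true_effect * treated + rng.normal(0.0, 1.0, n)

def estimate_effect(twin_prediction):
    """ANCOVA-style analysis: regress the outcome on the treatment
    indicator plus the digital twin's predicted control outcome."""
    X = sm.add_constant(np.column_stack([treated, twin_prediction]))
    fit = sm.OLS(outcome, X).fit()
    return fit.params[1], fit.bse[1]  # effect estimate, standard error

good_twin = prognosis + rng.normal(0.0, 0.2, n)  # accurate forecast
junk_twin = rng.normal(0.0, 2.0, n)              # pure noise, a "malfunction"

for name, twin in [("good twin", good_twin), ("junk twin", junk_twin)]:
    est, se = estimate_effect(twin)
    print(f"{name}: estimate {est:.2f} (true {true_effect}), SE {se:.2f}")

# Both estimates are unbiased because treatment was randomized; only the
# standard error differs. The worst case is an ordinary clinical trial.
```

Randomization does the heavy lifting here: the estimate stays valid whether the twin is accurate or useless, and only the precision, and therefore the required size of the placebo group, changes.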

[0:17:23] HC: So, it’s a situation where it’s really a no-brainer to use AI. Worst case, nothing’s been harmed. And best case, it improves things because you can use a smaller placebo group.

[0:17:33] CF: That’s right. It should be a no-brainer, I think, because the science here is really pretty cut and dried. That’s exactly right. But it’s a very conservative industry, and one that is not familiar with machine learning, not familiar with AI, not even really broadly familiar with computational methods. So, getting people to buy into that has actually taken a long time. I think it’s challenging to get new technologies adopted into medical research broadly, and into drug and pharmaceutical development in particular.

[0:18:09] HC: When you’re trying to get this type of system in use for a clinical trial, do you explain the concept of a zero-trust solution and why it should be a no-brainer, obviously, in different words?

[0:18:21] CF: We try to. What has been effective for us is that we were able to get this approach reviewed and qualified by the European Medicines Agency. We’ve also had similar discussions with the FDA. That took a long time; it’s a couple of years of process to get something qualified like that. But ultimately, that was the most effective thing. We have academic papers that walk through mathematical proofs of these statistical properties. In my mind, the highest and most rigorous evidence you can have is a mathematical proof that shows you have the characteristic you say you have. But those things don’t necessarily resonate with our customers. Our customers, by and large, are medical doctors or biologists. They’re not mathematicians. They’re not necessarily interested in mathematical proofs.

So, in terms of getting people to buy in, what’s really been more effective for us has been working with the regulators directly.

[0:19:25] HC: You have to speak their language, and if they speak the regulatory language, I guess that’s the way to go.

[0:19:29] CF: Yes. And they don’t like math.

[0:19:34] HC: Well, as long as you can understand the customer base and target them appropriately, I think that’s the important thing here. As much as I’m fascinated by the concept of a zero-trust solution, because I hadn’t heard of it before, not everybody is.

[0:19:47] CF: Yes. I’ve been trying to coin this term and make it popular, but I don’t think I’ve ever heard anyone else use it. In part, that may be because it’s very difficult to find examples where you can actually have a zero-trust application of AI. I actually don’t know of another one besides ours. I’m sure if I thought about it, we could come up with examples. But within clinical trials, it comes out of randomization. The fact that you have random treatment assignment is what allows you to get a zero-trust application.

You could think about A/B testing and things like that as well, because an A/B test is just a clinical trial. Really, the statistics of those things are exactly the same. So, in any A/B testing, anything where you’re doing random treatment assignment and you’re leveraging predictions from AI or machine learning methods within the analysis of those data, you can usually come up with something that looks like what I would call a zero-trust system. But there aren’t a whole lot of applications outside of that, at least that I’ve thought about, that allow you to get to that level.

[0:20:51] HC: Well, because it’s rare, I still think this is a new and fascinating property.

So, another topic I want to dig into, which I pulled from an article on your blog, is the role of domain expertise in machine learning. I came across a quote of yours that I found particularly interesting and wanted to learn more about. You said, “I think it’s better to use domain knowledge to create synthetic examples for training machine learning models with unconstrained architectures than try to directly incorporate that knowledge into the architecture itself.” Could you elaborate on what this means and how you came to this conclusion?

[0:21:28] CF: Well, I think that for a long time, maybe not so much the last five years, but everything before that, feature engineering was really one of the main approaches to solving problems in machine learning. Researchers would attempt to think about what types of architectures or features they would expect to be good for predicting whatever outcome it is that they’re trying to predict.

For example, we often have a lot of information about the underlying biology of the diseases that we’re working on. Should we be reading biology textbooks and medical textbooks and neuroscience textbooks, taking that information, and crafting algorithms that are specific to specific diseases? I really strongly believe that this is the wrong approach. There are a couple of different reasons why I think that.

One is really just Richard Sutton’s ‘The Bitter Lesson’. Every time researchers try to take domain knowledge and input it into their algorithm in a way that they think makes sense, it may improve performance in the short term. But invariably, the bitter lesson is that those things have always lost to data-driven algorithms as you scale them up. It’s usually more effective to think about how to create a data-driven approach and how to scale it up, instead of how to incorporate domain knowledge into the architecture.

Similarly, there have been a ton of examples of people screwing this up in biology, over and over again. Even now, I see papers being published all the time by people who are really messing this up, because they seem to think that the knowledge we have about biology is correct. They look in papers and say, “This paper says that gene A regulates gene B. Let me put that into my machine learning architecture.” But this is wrong, because an enormous amount of the so-called facts that we have in the biological literature are just wrong. An enormous amount of what we think we understand about biology now is probably false.

So, you take all that information and put it into your algorithm, and it’s just not true. That also makes things really difficult if you later learn something new. How do you update your model with new information if you baked the old information into the architecture? It’s incredibly difficult. If you want to incorporate some special knowledge into a machine learning model, it’s a lot easier to just make examples of that thing and train the model on them. In the last five years, of course, this is pretty much exactly what everyone is trying to do. Look at self-driving cars and synthetic data, where training happens on simulated data. You don’t necessarily put in a whole bunch of rules that say, in this scenario, the car should do X, Y, or Z. You could do that, but it’s a lot easier to have a big neural decision engine, create those examples, and train the self-driving system on the simulator.

Same thing if you think about large language models. You could go in and write a whole bunch of rules about how the language model is supposed to respond to certain types of questions. But that’s not what people actually do. What they actually do is create synthetic examples, right? Then, you fine-tune these models on instruction-following datasets and things like that.

I think that’s just generally a much more scalable approach. Rather than thinking about how to make special architectures that incorporate things, it’s much better to make general architectures, and then think about designing your training set to get the desired behavior, rather than designing your architecture to get the desired behavior.

[0:25:38] HC: So really, using the domain knowledge for the data generation instead of for the modeling. But still, domain knowledge is very important.

[0:25:47] CF: Use domain knowledge to decide what goes in your data set, not what goes in your architecture. That’s what I’m arguing.
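A toy illustration of that principle (the gene names, numbers, and relation below are invented for the example, not taken from the episode):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical domain knowledge: "expression of gene A tends to drive
# expression of gene B." Rather than hard-wiring an A -> B edge into the
# architecture, encode the claim as synthetic training examples and let a
# generic, unconstrained model learn it from the data.
def synthesize_regulation_examples(n):
    gene_a = rng.uniform(0.0, 10.0, n)
    gene_b = 0.8 * gene_a + rng.normal(0.0, 0.5, n)  # hypothesized relation
    unrelated = rng.uniform(0.0, 10.0, (n, 3))       # genes with no effect
    features = np.column_stack([gene_a, unrelated])
    return features, gene_b

X, y = synthesize_regulation_examples(500)

# A generic model (here, plain least squares) trained on the synthetic
# examples picks up the relation. If the "fact" is later revised, you
# regenerate the examples; no architecture surgery required.
design = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
print("learned weight on gene A:", round(coef[0], 2))  # roughly 0.8
```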

[0:25:53] HC: Gotcha. Is there any advice you could offer to other leaders of AI-powered startups?

[0:25:58] CF: Advice for other leaders of AI-powered startups. That’s a really broad question. Advice on what?

[0:26:04] HC: Anything that you’ve learned along the way, whether it’s related to AI, related to running a business, the intersection thereof?

[0:26:12] CF: Yes. That’s a super broad question. I mean, I have tons and tons of thoughts about things that I’ve learned along the way about running businesses and starting companies, for sure. On ML and AI research, I don’t know if I necessarily have a lot of advice, because I do think that the approach depends a little bit on the problem you’re trying to solve.

But I guess if I had to pick one big thing that I learned starting and working at Unlearn, something different from what I thought at the beginning. I’ve been at this now for almost seven years. Coming from an academic background, having spent a little bit of time in industry, but always as an engineer, my initial thought when starting the company was that success was all about finding other really experienced people. Our initial interviews were really hard. Hours at the chalkboard proving theorems, that’s what interviews at Unlearn looked like, and we were really focused on trying to find people with certain types of skills, certain types of knowledge, that type of thing.

I no longer believe that that is the right approach to building a team at all. I think that’s a total mistake. My experience has been that the most important thing you can do in hiring for any role is to hire for attitude. Try to find people who have the right attitude for your company. That could be different, depending on what type of company you’re running. But I think hiring for attitude, hiring go-getters, hiring people who are mission-aligned and really want to work on your problem, those people should be able to pick up the skills. Probably the number one piece of advice that I would have is to think really, really hard, not about the types of knowledge that you want people around you to have, but about the types of attitudes you want people around you to have. Then, really focus on trying to hire people for that attitude.

[0:28:11] HC: Finally, where do you see the impact of Unlearn in three to five years?

[0:28:14] CF: In three to five years, we’re pretty much entirely focused on this problem in clinical trials, thinking about how we can expand. We started out working in neurology: Alzheimer’s disease, Parkinson’s, ALS. Now, we’re starting to think about how we can expand into other indications. So, within the next three to five years, I hope that we are effectively working in clinical trials in all disease areas.

Beyond three to five years, our goal is actually to move out of clinical trials at some point. We’re starting in clinical trials. Once a drug is actually approved, there are a bunch of questions that we think we can help address around what we would call comparative effectiveness, looking at which of the drugs on the market is actually performing the best. We want to be able to look at some of those questions, and then eventually move into personalized medicine, where your physician may be able to use your digital twin to test different things in the computer before they test them on you, which is the exact opposite of how it works today.

Today, you walk into the doctor’s office, and they just try drugs on you and see what works. We think that all of that trial and error should happen in a computer. But a lot of those questions are longer-term. The three-to-five-year clinical trial question is going to be hard enough.

[0:29:25] HC: This has been great, Charles. I appreciate your insights today. I think this will be valuable to many listeners. Where can people find out more about you online?

[0:29:33] CF: They can follow me on Twitter, @charleskfisher, or just check out unlearn.ai.

[0:29:39] HC: Perfect. Thanks for joining me today.

[0:29:41] CF: Yeah, thanks for having me.

[0:29:42] HC: All right, everyone. Thanks for listening. I’m Heather Couture, and I hope you join me again next time for Impact AI.

[OUTRO]

[0:29:51] HC: Thank you for listening to Impact AI. If you enjoyed this episode, please subscribe and share with a friend. And if you’d like to learn more about computer vision applications for people and planetary health, you can sign up for my newsletter at pixelscientia.com/newsletter.

[END]