AI seems to be taking the world by storm, and it is easy to use this new technology for either good or bad. Today I am joined by Sérgio Pereira, the VP of AI Research, Oncology Group, at Lunit, a company using AI for good by conquering cancer with machine learning.

You’ll hear about Sérgio’s professional background, Lunit’s mission, how they use AI for cancer screening and treatment planning, and so much more. Sérgio delves into how they read imaging before discussing the differences between supervised, self-supervised, and contrastive learning. Lunit has created an incredible dataset called OCELOT, and he tells us all about its benefits, how they published it, and why publishing a paper while ensuring that quality products are being produced is a challenge. Finally, Sérgio tells us his hopes for the future of Lunit.


Key Points:
  • A brief overview of Sérgio Pereira's background, and what led him to Lunit.
  • Lunit’s mission and why it’s important in fighting cancer.
  • How Lunit applies machine learning for cancer screening and treatment planning.
  • What most of the Lunit products are based on and how genetics come into play.
  • Why subjectivity becomes an issue when it comes to reading images in machine learning.
  • Sérgio explains what supervised, self-supervised, and contrastive learning are.
  • The lessons from machine learning work that can be applied to other types of imaging.
  • He tells us about Lunit’s dataset, OCELOT, and its benefits.
  • How Lunit publishes their datasets.
  • The challenges of getting a paper out while getting a product on the market.
  • Sérgio shares some important things AI-powered startups need to consider.
  • Where Sérgio sees the impact of Lunit in the near future.

Quotes:

“Our mission at Lunit is to conquer cancer through AI.” — Sérgio Pereira

“We don’t have many products at Lunit, that’s a fact, but the ones we have, we believe they are [the] best-in-class.” — Sérgio Pereira

“Mistakes in healthcare can have a very big impact, so we need to be able to show and demonstrate that our products work as we promised.” — Sérgio Pereira

“AI can be used for good and for bad. Let’s make sure we work on the good part.” — Sérgio Pereira


Links:

Sérgio Pereira on LinkedIn
Sérgio Pereira on Google Scholar
Sérgio Pereira on Twitter
Lunit Inc.
Paper: OCELOT: Overlapped Cell on Tissue Dataset for Histopathology
Dataset: OCELOT
Paper: Benchmarking Self-Supervised Learning on Diverse Pathology Datasets


Resources for Computer Vision Teams:

LinkedIn – Connect with Heather.
Computer Vision Insights Newsletter – A biweekly newsletter to help bring the latest machine learning and computer vision research to applications in people and planetary health.
Computer Vision Strategy Session – Not sure how to advance your computer vision project? Get unstuck with a clear set of next steps. Schedule a 1 hour strategy session now to advance your project.
Foundation Model Assessment – Foundation models are popping up everywhere – do you need one for your proprietary image dataset? Get a clear perspective on whether you can benefit from a domain-specific foundation model.


Transcript:

[INTRODUCTION]

[00:00:03] HC: Welcome to Impact AI, brought to you by Pixel Scientia Labs. I’m your host, Heather Couture. On this podcast, I interview innovators and entrepreneurs about building a mission-driven, machine-learning-powered company. If you like what you hear, please subscribe to my newsletter to be notified about new episodes. Plus, follow the latest research in computer vision for people and planetary health. You can sign up at pixelscientia.com/newsletter.

[INTERVIEW]

[0:00:33.3] HC: Today I’m joined by guest Sergio Pereira, the vice president of research in the oncology group at Lunit and we’re going to talk about AI for cancer care. Sergio, welcome to the show.

[0:00:44.1] SP: Well, thank you. Thank you for having me here and it’s my pleasure to be here.

[0:00:47.2] HC: Sergio, could you share a bit about your background and how that led you to Lunit?

[0:00:51.3] SP: Yeah, sure. So I have a background in biomedical engineering. I did my bachelor’s and master’s degrees at the University of Minho in Portugal, and during my master’s, I got in touch with medical imaging and machine learning. At the time, it was brain tissue segmentation in magnetic resonance imaging, or MRI, and it got me super interested in ML, or machine learning, and I saw the potential in the technology.

So I wanted to pursue a Ph.D. on this. I went for a Ph.D. in computer science, still in Portugal, and still in the medical imaging domain. My thesis was on brain tumor segmentation in MRI, this time using deep learning. This was around late 2013, 2014, so deep learning was already on the rise, but not as much in the medical imaging community, so this was quite early work.

Fortunately, it went fine. I got some good outcomes and I really liked my Ph.D., but then I wanted to try industry. So I moved to Lisbon and joined a corporate startup by Daimler Trucks called TBLX, and there I was working with vehicle data, truck data. I stayed there one year, and then, you know, the classic love story: my wife is Korean, I wanted to try some work abroad, things combined, and we decided to come to Korea. At that moment, Lunit was just a natural choice.

You know, it was coming back to my origins in medical imaging and deep learning, but I need to tell you that, in fact, I knew Lunit well before this. This was around the summer of 2019, and I had known of Lunit from maybe 2016 or so. I found them and I was quite impressed that they were this pioneer startup in the medical domain using deep learning. They were publishing very good papers at top-tier computer vision conferences, and sometimes I’d tell my wife for fun, “You know, maybe someday I can work for these folks,” and it happened. I joined Lunit in January 2020 as a research scientist in the oncology group, and one year later, I became the vice president of AI research of the same department, and that’s where I am now.

[0:03:01.8] HC: So what all does Lunit do and why is this important for fighting cancer?

[0:03:06.2] SP: Our mission at Lunit is to conquer cancer through AI. So we really want to save more lives of cancer patients and improve the quality of life of these patients, and we believe that we can do this by leveraging the potential of AI. It’s important for fighting cancer because we recognize two key moments for these patients. One is when the diagnosis happens, and the second is the treatment planning.

We know that early-stage cancer detection is very important. The earlier we detect cancer, the higher the chances of survival. So we have the cancer screening group, which works precisely on the detection of cancer from X-ray, mammography, or digital breast tomosynthesis images, and then there is the oncology group, where I am. It’s the second pillar at Lunit, and we focus more on treatment planning at the individual level, because each cancer is different, each patient is different. So we want to find the best treatment for each patient, and we also put a lot of focus on immunotherapy. Immunotherapy is a more recent treatment with a lot of potential. On the downside, it’s hard to find which patients can benefit from it. So we try to develop biomarkers for this, and we do this from histopathology images, both H&E staining and IHC staining. In summary, we try to accomplish our mission in two ways: early-stage cancer detection, and then effective treatment planning in oncology.

[0:04:34.7] HC: So you already mentioned that AI is a core part of this but how do you specifically apply machine learning for screening and treatment planning? How do you set that up?

[0:04:43.7] SP: Well, we use deep learning, I guess it’s no surprise here, and it’s really the core of our technology. We try to develop models that are at the cutting edge of machine learning, we try to learn from the literature, and it’s also part of our DNA to solve problems in our own way. So we come up with novel research or novel technology that we then apply to our products using machine learning.

So in cancer screening, as I said, it’s more about cancer detection from these images, X-ray, mammography. On the oncology side, we have this histopathology processing of the images. You know, these are very large images; sometimes they are whole-slide images, which can be a hundred thousand pixels by a hundred thousand pixels. So we need to divide these images into smaller patches, and then we do cell detection and tissue segmentation, and build biomarkers on top of this.
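The patch-based workflow Sérgio describes, splitting a giant whole-slide image into model-sized tiles, can be sketched roughly as follows. This is a minimal NumPy illustration, not Lunit’s actual pipeline; the function name, tile size, and the toy array standing in for a slide are all assumptions for the example.

```python
import numpy as np

def extract_patches(image, patch_size=512, stride=512):
    """Tile a large image into fixed-size patches.

    Returns a list of (row, col, patch) tuples. Edge regions that do
    not fill a complete patch are skipped in this simple sketch.
    """
    h, w = image.shape[:2]
    patches = []
    for r in range(0, h - patch_size + 1, stride):
        for c in range(0, w - patch_size + 1, stride):
            patches.append((r, c, image[r:r + patch_size, c:c + patch_size]))
    return patches

# A toy 2048x2048 "slide" stands in for a 100,000 x 100,000 whole-slide image.
slide = np.zeros((2048, 2048, 3), dtype=np.uint8)
tiles = extract_patches(slide, patch_size=512, stride=512)
print(len(tiles))  # 16 non-overlapping 512x512 patches
```

In practice, whole-slide images are read level-by-level with a dedicated library rather than loaded whole into memory, but the tiling logic is the same idea.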

Yeah, but always from the principle of applying machine learning and trying to develop best-in-class products. Actually, we are very proud of this. We put a lot of effort into collecting the best data, in large amounts, and also into the modeling side, as I said. We don’t have many products at Lunit, that’s a fact, but the ones we have, we believe they are best-in-class.

For example, our mammography solution in cancer screening is quite often considered the best in the world. Last year, we formed a strategic partnership with Guardant Health that was also based on an evaluation of our machine learning competency in the oncology domain.

So I think machine learning is really central for us. It’s a really core competence and I would say it’s the main trait of Lunit that is so focused on the machine learning part.

[0:06:32.7] HC: Are medical images central to what you’re working on at Lunit, or are you also working with other forms of data and applying machine learning to those?

[0:06:41.7] SP: Medical images are really core here, whether on the radiology side or the histopathology side, so we always start from the images. Then, more on the oncology side, to my knowledge, we also try to correlate or bring some genetics into the equation, but I’d say those are more parallel efforts. In fact, most of the products, if not all of them, are based on images, and then we have other directions that are more exploratory. I’d say medical imaging is, in fact, the core data we use at the moment.

[0:07:17.3] HC: One of the common challenges with annotating medical images is the annotators don’t always agree. Even for expert clinicians, there’s some subjectiveness in their labels. How much of a problem is this for machine learning?

[0:07:29.9] SP: Well, you touch on a very critical point, right? In medical imaging, this is recognized as a very significant issue, this inter-rater variability. The experts often don’t agree, and this is quite understandable. For example, if we talk about annotating cells, we can have hundreds or thousands of cells in an image, and they may look alike, so people may differ on which class a given cell belongs to.

For the segmentation of tissues, it’s the same situation. Where the boundary of a cancer area ends and normal tissue starts is very hard to define. Sometimes there is no boundary, actually; it comes down to the experience of the pathologist. So there’s a lot of variability. Another example: these whole-slide images are so large that no human can inspect the full image, so there is a sampling bias related to the areas they inspect, and for us as well, because we develop all our models based on patches.

So we need to tackle some of these issues because they can have an impact on the biomarkers we build, and the strategies we use for this are mostly executed during the data collection part. For example, we can collect annotations from multiple experts and then have some strategy to fuse them. We can put another layer on top, where we do quality control by a more senior expert to make sure that the annotations are of good quality, or just exclude the ones that are of very poor quality. In some situations, we may want to make the annotations more stable, so we can use some sort of AI-assisted annotation in those cases. In summary, we have observed that ensuring high data quality is far more effective than letting these problems in and trying to model them later on. In fact, last year we established a data-centric AI team in our department precisely to tackle these kinds of situations. As you said, they are very serious, but we have some approaches to mitigate them, as I mentioned.
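One simple strategy for fusing annotations from multiple experts, as mentioned above, is a per-pixel majority vote over their label maps. The sketch below is a generic illustration, not Lunit’s fusion method; the function name and the tiny 2x2 label maps are invented for the example.

```python
import numpy as np

def majority_vote(label_maps):
    """Fuse per-pixel class labels from several annotators by majority vote."""
    stacked = np.stack(label_maps)  # shape: (n_annotators, H, W)
    n_classes = int(stacked.max()) + 1
    # Count votes per class at each pixel, then take the winning class.
    votes = np.stack([(stacked == c).sum(axis=0) for c in range(n_classes)])
    return votes.argmax(axis=0)

# Three hypothetical annotators labeling the same 2x2 region (0 = normal, 1 = tumor).
a1 = np.array([[0, 1], [1, 1]])
a2 = np.array([[0, 0], [1, 1]])
a3 = np.array([[1, 0], [0, 1]])
fused = majority_vote([a1, a2, a3])
print(fused)  # [[0 0]
              #  [1 1]]
```

More elaborate schemes weight annotators by reliability (e.g. STAPLE-style estimation), but majority voting is the usual baseline.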

[0:09:38.5] HC: Have you found one of those approaches to be superior to the others, or is it really case-by-case, depending on which is easier to implement?

[0:09:46.6] SP: They kind of build on top of each other, right? For sure, having multiple annotators helps instead of relying on a single one, and on several levels. First, one person cannot annotate much data, right? So having several people annotating the same image is really important for us, and then on top of this, if we have a quality control step, that further pushes the performance. Sometimes we don’t even need many annotators; we can reduce the number of annotators if we have quality control.

AI-assisted annotation can be used, for example, when we do point annotation of cells; those annotations tend to be more consistent across annotators. So I cannot pinpoint one, except that having multiple people annotating the same image helps, but this comes with a cost. I can say that they build on top of each other and are incrementally effective.

[0:10:39.6] HC: Lunit has a couple of new papers being presented at CVPR in June this year. The first one is on self-supervised learning for whole slide images. For those not familiar with self-supervised learning, what is this approach?

[0:10:52.9] SP: Let’s quickly go over what supervised learning is first, right? In the case of supervised learning with images, let’s speak about our domain: we have an image and then we have the annotations, for example for tissue. So we know what the expected answer or prediction for this image is, and we feed many examples of human-annotated data to the model, and eventually, the model learns to do the task.

But this is expensive, it has a lot of noise and variability, as we talked about before, it takes a lot of time, and it’s hard in the medical domain. A more recent approach is self-supervised learning, which does not require human-annotated labels, or in other words, it can use unlabeled data to train a model.

In this case, what we need to come up with is a kind of synthesized task that is derived from the data itself. So it’s very clean, it comes from the data, under the assumption that if the model learns to do this task, then it should learn about the content, the context, or features of the image that will be useful later on, for example, when we fine-tune the model for cell detection, tissue segmentation, these kinds of things, or classification.

One of these tasks that is quite common these days is called contrastive learning. So imagine one image, and then we transform this image in some way, like we convert colors to grayscale or we rotate the image, and then we mix this transformed image in with other, different images.

And the task for the model is to identify which image corresponds to the original one, and in order to do this, the model needs to learn about the content, the context, as I said, good features. It turns out that these features are useful, and we can have a model that is trained with a lot of data. The implication of what I said is that if we use unlabeled data, we can have way, way more data for training.

In our case, for this paper for example, we used 30 million images for pre-training, but in fact, we needed to reduce the amount of data because it was too much. So we can have way, way more than this, right? So it’s very powerful. I think this summarizes self-supervised learning. To summarize further, it’s to learn from unlabeled data under the assumption that the features will be useful for whatever tasks we are interested in later on, like cell detection, tissue segmentation, or classification.
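The contrastive pretext task Sérgio describes, transform an image and ask the model to pick out the original among distractors, can be illustrated with a toy sketch. Here raw pixel similarity stands in for a learned encoder, and the array sizes, function names, and random "images" are all invented for the example; real contrastive methods train an encoder so this matching gets easier.

```python
import numpy as np

rng = np.random.default_rng(0)

def to_grayscale(img):
    """Transform: collapse RGB to grayscale, then tile back to 3 channels."""
    g = img.mean(axis=-1, keepdims=True)
    return np.repeat(g, 3, axis=-1)

def cosine(a, b):
    a, b = a.ravel(), b.ravel()
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

# A batch of random "images"; in practice these would be unlabeled patches.
images = rng.random((8, 16, 16, 3))
anchor_idx = 3
query = to_grayscale(images[anchor_idx])  # transformed view of one image

# The pretext task: score the transformed view against every candidate and
# pick the most similar. A trained encoder would replace raw pixels here.
scores = [cosine(query, img) for img in images]
best = int(np.argmax(scores))
print(best)  # the grayscale view is still most similar to its own original
```

Methods like SimCLR formalize this with learned embeddings and a contrastive (e.g. NT-Xent) loss over augmented pairs, but the matching intuition is the same.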

[0:13:19.1] HC: Are there any lessons from this work you’ve been doing and publishing in the CVPR paper that could be applied more broadly to other types of imagery?

[0:13:26.9] SP: Yes, I believe so. I need to give a disclaimer that the paper is very much applied to computational pathology, but I still think some lessons are useful or can be generalized to other domains. For example, one of the things we learned is that domain-aligned transformations help, this kind of thing where we modify the images while taking the domain into consideration. In our case, one of the examples was doing data augmentation by changing the staining of the images, or considering different zoom levels of the whole-slide images, and this improved the model in the sense that it learned better features. And what comes to mind, well, I’m no expert.

But I see some similarities, for example, with remote sensing imagery, in the sense that the images are so large. So there’s this hierarchical relationship, or zoom levels, so maybe some domain knowledge can be incorporated. I think that’s what I wanted to say: domain knowledge matters to improve further over just bringing a method from computer vision as-is. Then there are other lessons that we studied in the context of histopathology, but I believe they translate to other domains. For example, pre-training on pathology data is better than using a model that was pre-trained on a different domain, like natural images, commonly ImageNet, because the gap to the final task is much smaller. Another interesting observation was that self-supervised learning is a hot topic.

So there are many, many techniques for self-supervised learning, but when we look at the best ones, they perform more or less on par when we bring them to the real world, and this is relevant because someone that is just starting maybe doesn’t want to spend too much time trying them all.

We also studied classification and segmentation tasks. We observed that the size of the data matters: once we increased the amount of unlabeled data, performance improved. While this was observed in histopathology, I believe that other domains can benefit, as I said, especially not so much the bigger community that works with natural images, but the communities that work with more niche domains like computational pathology and medical imaging, which can benefit from some of the answers we provide.

At least, they can benefit from learning what they need to explore, more or less. So I think, in that sense, it also provides general lessons.

[0:15:53.9] HC: I definitely think you’re right there. The lesson is in taking a technique that’s been developed for the more common image datasets explored in computer vision and applying it to a very specific task: you need to bring in that domain expertise, you need to adapt it to that type of imagery, but once you do that, it can definitely advance your application. The different ways you do that are very much applicable to remote sensing and other types of imagery, so thank you for sharing all of that.

The other paper your team is presenting at CVPR is connected with a dataset you’ve released and are running a challenge on at the medical imaging conference MICCAI in the fall. From a business perspective, what do you hope to get out of publicly releasing the dataset and running the challenge?

[0:16:40.4] SP: Yeah, that’s a very good question, but let me just give some context on this dataset and the task it addresses. This is a very simple way of explaining it, all right? Cells can be aggregated to form tissues, tissues are aggregated to form organs, and organs are aggregated to form living beings, in a very simplified way.

What I want to say is that there is a hierarchical relationship between cells and tissues, and pathologists make use of this knowledge when they are observing an image. Typically, they start by looking at the zoomed-out whole-slide image, and they get a picture of the context and the architecture of the tissues. Then they zoom in, check the cells, and get an intuition of the classes of the cells. Sometimes they zoom out again, look at the tissues, and refine the intuition they have, and they do this zooming in and out to identify the cells or the tissues more correctly. So there is this relationship between cells and tissues, and while many papers talk about these relationships, what we couldn’t find was much literature that actually tries to tackle it.

There is some, but not much, or none to my knowledge, that has labeled data for cell detection and tissue segmentation at different zoom levels, so that we can learn these relationships in a supervised way. Even in terms of data, there is only one other dataset, but it’s smaller and not exactly the same as OCELOT. OCELOT is the name of our dataset. So it’s a complex problem, and our team members came up with this idea of releasing a portion of our dataset so that the community can also start on this complex yet important research direction, develop it further, and come up with innovative solutions. In this way, we can scale up the number of solutions that exist for this problem, and we close the loop later on by getting inspired by what the community is doing, so we can bring it back. We believe this will allow us to push our products further and improve the performance. So there is this benefit to the business.

On another side, we also want to give more visibility to Lunit in the computational pathology domain, bring visibility to the excellence of the research we are doing, and potentially attract talent from the computational pathology domain that comes with this knowledge. By the way, we are hiring in the oncology AI department, so listeners can check our careers website. So the benefits are twofold: one, we hope to get inspired in the future by the solutions the community might develop, the academic solutions, so it’s a win-win situation, I believe; and second, to potentially gain visibility and attract more talented people.

[0:19:42.7] HC: Lunit has published many other papers beyond the ones we touched on today. What benefits have you seen from publishing? Is it similar to the business perspective for doing the dataset and challenge or are there other aspects of publishing that have been beneficial for Lunit?

[0:19:58.5] SP: Some aspects are related, and we publish basically in two main domains, computer vision and the medical domain, which have somewhat different benefits. At Lunit, we conduct research in the scope of some product, so there are problems of the products that we need to solve, and as an outcome, we can have a paper. By solving these problems, we push the boundaries, the frontiers, of what the technology is at the time, and then we can publish this and also showcase that we are at the forefront, using state-of-the-art technology. We can show this outside of Lunit, so it is also a way to promote ourselves from Lunit’s perspective. Then, from the team members’ perspective, it is also a way for our researchers to grow because, in fact, publishing is recognized by the research community, right?

So they feel extra motivation to do this kind of work, and it helps them grow. Finally, it helps attract talent to Lunit, people that were impressed by these kinds of papers, right? That was my case, for example. I was impressed at the time that Lunit could publish at top-tier conferences, and that caught my interest. So this is for the computer vision domain. For the medical domain, it is a little bit different.

It is more about establishing evidence that our AI solutions really work in the real world and bring benefit to the patients, and showing that our AI solutions can be used in clinical trials or in new settings. Now AI is quite pervasive; I think it’s clear that AI will be used in the medical domain, but a few years ago, well, there was some resistance, right? So we need to be able to provide evidence. Finally, healthcare is a very critical domain. Mistakes in healthcare can have a very big impact, so we need to be able to show and demonstrate that our products work as we promised.

[0:21:55.4] HC: Is it challenging to get a paper out while simultaneously trying to get a product on the market?

[0:22:01.4] SP: Yeah, of course, because both need time, and the priority should be on the product. But what we figured out is that, fortunately, we have many problems with our products, in a positive way: interesting problems that we need to solve in a fundamental way. So the way we make it work is to align the problem we need to solve with the research problem, and we tackle both at the same time and solve them in our own way. This way, we can have novel solutions that we can publish while still achieving the goals of the product, and in many situations, going beyond the initial goals.

[0:22:40.5] HC: That’s great to hear. So Lunit is based in South Korea but markets its products around the world. How do regulatory processes affect the way you develop machine learning models?

[0:22:51.4] SP: Well, at the moment, I cannot say it affects the way we develop the models, our pipeline, or our procedure too much, because some of the things we need to obey are, in any case, good practices in machine learning. For example, when we go for regulation, we need to prove that we have data from multiple sources covering multiple situations, for example, different cancer types, people from different age ranges, these kinds of things. But in any case, we want to have this data so that we can prove that our models generalize. Another thing that is very strict is that we need to very well specify the training set, the validation set, and the test set, and have an external test set that is from a different source than the other ones. But again, it’s good to have because it allows us to test the generalization of our models.

Specifically in computational pathology, we need to go a little bit more into the details. When we split the data for training, tuning, et cetera, we need to do this at the patient level, because the models are developed using patches, and if we use patches from the same whole-slide image, then there is some leakage between training and test, for example, even though the patches are not the same.

They are correlated because they can be from nearby regions, but all of these are, as I said, good practices for machine learning development. If not needed, I prefer not to put too many restrictions on our researchers and to let their creativity flow, and if something is needed, then we work with our regulation team to figure out how we can solve the regulation problem. Of course, we need to obey the regulation, that’s obvious, right? But from my perspective, in practice, I didn’t see anything that was a blocker coming from the regulation side.
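The patient-level split described above, keeping every patch from a given patient (and thus slide) on the same side of the train/test divide, can be sketched like this. The record format, function name, and numbers are hypothetical, purely for illustration:

```python
import random

def patient_level_split(patches, test_fraction=0.2, seed=0):
    """Split patch records into train/test so that all patches from one
    patient land on the same side, avoiding leakage from spatially
    correlated patches of the same whole-slide image."""
    patients = sorted({p["patient_id"] for p in patches})
    rng = random.Random(seed)
    rng.shuffle(patients)
    n_test = max(1, int(len(patients) * test_fraction))
    test_ids = set(patients[:n_test])
    train = [p for p in patches if p["patient_id"] not in test_ids]
    test = [p for p in patches if p["patient_id"] in test_ids]
    return train, test

# Hypothetical patch records: 5 patients x 4 patches each.
patches = [{"patient_id": f"P{i}", "patch": (i, j)}
           for i in range(5) for j in range(4)]
train, test = patient_level_split(patches)
train_ids = {p["patient_id"] for p in train}
test_ids = {p["patient_id"] for p in test}
print(train_ids & test_ids)  # set() -- no patient appears on both sides
```

Libraries such as scikit-learn offer the same idea as group-aware splitters (e.g. `GroupShuffleSplit`), with the patient ID as the group key.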

[0:24:44.9] HC: Is there any advice you could offer to other leaders of AI-powered startups?

[0:24:50.2] SP: Yeah. I mean, I’m not sure if it’s advice, but there are things I really think are important for AI-powered companies, and I may be a bit biased toward the AI teams, which is what I manage. Number one for me is that the team should be one of the most, if not the most, valuable assets of the company. So make sure to hire talented people, but also people that are good people, and after that, promote a culture where people can thrive, be creative, be open-minded, be allowed to fail, and be allowed to speak their minds, because this will lead to teams that are more independent and often come up with better approaches and solutions than, for example, myself as the leader of the team, right? I just need to listen to them. This is really important.

For example, in our CVPR papers, of course, we needed to set the direction, as we talked about before, aligned with the product, something that could benefit us, but then the whole idea of the papers, the methodologies, the challenge, came from the team members in a bottom-up approach. You know, after a while, I just needed to attend meetings and point stuff out, like being reviewer number two. That’s it; they could do a lot by themselves, and I am very appreciative of my team members for that. Of course, there is a lot of listening. Listening to people is very important as well. So, team is one.

Another one I think is very important, and sometimes overlooked when we are talking about hot technologies like AI, or blockchain a while ago, is that people grab onto a technology and then use it as a hammer, trying to apply it to whatever task they can. But what I think is important is to align with the business, to make sure that what we are building, our projects, our products, will benefit someone. Someone will find it useful, and it will solve someone’s problems, not just random tasks. I think this is really important, and on the AI side, it’s particularly important because people are so excited about the new technology, right? But sometimes people may end up developing things that no one wants.

Actually, it’s funny, because I learned this very early in my master’s degree. I was working with a friend, and she developed a brain tumor segmentation model based on random forests, not deep learning at the time. The model worked pretty well, so we wanted to show it to a medical doctor, and she developed a graphical interface for viewing the MRI. MRI is a 3D image, so we can scroll up and down through the slices of the brain in this case.

So we went to the medical doctor and showed the results of the segmentation, and the doctor was like, “Yeah, you know, it’s cool. It works fine. It can be useful.” But at some moment, she noticed that my friend was using the mouse wheel to quickly scroll through the slices, and she was mind-blown by that. The reason is that her software was so old she needed to use page up and page down to go across the slices, and it was slow.

So we had something that was fancy, that was the best we could do at the time, but what really solved some of this person’s problems was something as simple as being able to quickly scroll through the slices. Of course, she liked the segmentation as well, but there were other problems that could impact her life. So this alignment with the end user is really important. I think that’s what I want to say. Then there are other things, like investing in data quality, the amount of data, and the MLOps infrastructure.

So that we can scale up our products later on, this is really important. Then, a little bit more cheesy, is to work on something that we truly believe in. We need to believe in the products to keep the motivation going, and fortunately, at Lunit, I can work on this application in cancer, which has a high impact in the world, not only for the patient himself or herself but for the surrounding people, the family, who also suffer, and this gives a lot of good energy.

So, in summary, my advice would be to invest in people, in the team, invest in high-quality data and scalable infrastructure, build something that really matters and is aligned with the business, and work on something that brings a good impact to the world. You know, AI can be used for good and for bad. Let’s make sure we work on the good part.

[0:29:19.6] HC: Speaking of impact, where do you see the impact of Lunit in three to five years?

[0:29:24.3] SP: We want to conquer cancer through AI, right? So that’s the, you know, this is a never-ending mission, but I hope we continue to be recognized for best-in-class products. In the case of radiology, our cancer screening group, I hope to see exponential growth of the deployments across the world, making those tools pervasive and part of the everyday tools for radiologists.

In the case of oncology, I see Lunit partnering with big players, for example, big pharma, developing our own biomarkers, starting to work on multi-modality approaches for these applications, being used as companion diagnostics for new drugs, and in this way, trying to unlock new doors for the treatment of cancer patients. Finally, on the research side, I hope that we can continue working on many interesting challenges, solve them in our way, in a fundamental way, come up with innovative solutions out of the AI research department, and grow the department with more talented people. I think that’s it. I would be happy to see these things happening.

[0:30:33.7] HC: This has been great. Sergio, your team at Lunit is doing some really important work to conquer cancer. I expect that the insights you shared will be valuable to other AI companies. Where can people find out more about you online?

[0:30:45.8] SP: LinkedIn, that’s the most straightforward way. Also Google Scholar, you can find my profile there; those are the main ones. Or on Lunit’s webpage, I think my profile is there somewhere on the website, but LinkedIn would be the easiest way for sure.

[0:31:01.4] HC: Lunit’s website is lunit.io, is that right?

[0:31:04.5] SP: Yeah, that’s right. That’s right.

[0:31:05.6] HC: Okay, I’ll link to all of that in the show notes. Thank you for joining me today.

[0:31:10.0] SP: No, my pleasure. Thank you for having me. It was fun and interesting and very nice questions.

[0:31:14.4] HC: All right everyone, thanks for listening. I’m Heather Couture and I hope you join me again next time for Impact AI.

[END OF INTERVIEW]

[0:31:24.2] HC: Thank you for listening to Impact AI. If you enjoyed this episode, please subscribe and share it with a friend, and if you’d like to learn more about computer vision applications for people and planetary health, you can sign up for my newsletter at pixelscientia.com/newsletter.

[END]