Today’s guest uses AI for the personalized treatment of early-stage lung cancers and other lung diseases that need localized treatment. Eva van Rikxoort, a scientist in medical image analysis and the CEO and Founder of Thirona, started studying AI 20 years ago. Her interest in lung imaging and the translation of it with the help of AI led her to found her company which develops medical image analysis software based on deep learning.

In this episode, she explains more about what Thirona does and the challenges they encounter when working with CT images. You’ll learn about the importance of online learning components for the future of AI applications for medical purposes and how the team at Thirona ensures that the technology it develops provides the right assistance to doctors, patients, and researchers. Tune in to find out more about the role of AI in the future of personalized medicine and lung disease treatments.

Key Points:
  • An introduction to Eva van Rikxoort, her experience in AI, and how she founded Thirona.
  • What Thirona does and why this is important for treating disease.  
  • How the company uses machine learning in multiple ways for different models to predict various outputs. 
  • The types of challenges Thirona encounters when working with CT images.
  • How they deal with situations where clinicians disagree or annotations are not reliable. 
  • How the regulatory process affects the way Thirona develops machine learning models.
  • The importance of online learning components for the future of AI applications for medical purposes. 
  • How Thirona ensures that the technology it develops provides the right assistance to doctors, patients, and researchers. 
  • Approaches to recruiting and onboarding that have been most successful for Eva’s team.
  • Eva’s advice to other leaders of AI-powered startups about trusting your gut and asking for help. 
  • How Eva foresees the impact of Thirona in the future in terms of personalized medicine and lung disease treatments.
Quotes:
“[At Thirona] we don't make software that's aimed at radiology, even though medical image analysis is very much a radiology thing, but we really focus on breakthrough treatments that are being developed both by pharma companies but also by biotech companies.” — Eva van Rikxoort

“If you look technically at AI, we could be learning every single day from what we do. I mean, we as [people] do, but our AI models learn in release cycles instead of on a daily basis.” — Eva van Rikxoort

“Next to making something that's highly innovative, you’re also opening a market to this innovation. It's a twofold thing.” — Eva van Rikxoort

“Don't be afraid to ask anyone for help. For example, opinion leaders, doctors, they are very, very happy to help. They really love the innovations in their field.” — Eva van Rikxoort

“I really think the impact of Thirona will be the personalized medicine, the personalized treatment of early-stage lung cancers or any other lung disease that needs localized treatment.” — Eva van Rikxoort

Links:

Transcript:

[INTRODUCTION]

[00:00:02] HC: Welcome to Impact AI, brought to you by Pixel Scientia Labs. I’m your host, Heather Couture. On this podcast, I interview innovators and entrepreneurs about building a mission-driven, machine-learning-powered company. If you like what you hear, please subscribe to my newsletter to be notified about new episodes. Plus, follow the latest research in computer vision for people and planetary health. You can sign up at pixelscientia.com/newsletter.

[INTERVIEW]

[00:00:32] HC: Today I’m joined by Eva van Rikxoort, CEO and Founder of Thirona, to talk about advancing treatments for lung diseases. Eva, welcome to the show.

[00:00:42] EvR: Thank you. Thank you for having me.

[00:00:43] HC: Eva, could you share a bit about your background and how that led you to create Thirona?

[00:00:47] EvR: Yeah, sure. Actually, my background started around 20 years ago when I started studying AI. I started AI at a time when, at least in the Netherlands, you were studying with just eight people, where now it’s, of course, a very popular subject. After studying, I became a scientist in medical image analysis. I started working as a scientist on lung imaging, CT imaging of the lungs, 3D images, and analyzing these to detect diseases. Although I really liked science, what I missed a lot in science was translation, translation from the applied science I was doing to any patient.

I moved to the US, actually to UCLA, after my Ph.D. in the Netherlands. There, I was actually a postdoc, but I very quickly started working for a spinoff that was doing medical image analysis of lungs for pharmaceutical companies, using all the knowledge that I had to help these companies run their clinical trials and determine, on the CT scans, what is the drug I’m developing actually doing? Because you can’t take out the lung to just look at what’s happening. You can only do a test of how much air you can blow out, for example. This imaging would really help them determine what is structurally changing in the lung when they gave a certain medication, what’s actually the mechanism of action of the medication.

I did that for a few years, but then I moved back to the Netherlands, went back to science, and again led a group on medical image analysis of lungs. I loved it, but again, I missed the translation.

Then, in this whole process of moving from a company back to academia, I still had the network. What I started doing in my evenings was using everything that we developed in science to still help people in my network determine the same things: what is actually happening with my patients? That all accumulated to a point where I said, “Hey, there are so many people interested in what we do in science, but we’re not translating it. So let’s start a company that actually translates everything that’s possible in medical image analysis for lungs into, in the end, patient care, and make an impact there.” Long story, but –

[00:02:50] HC: What does Thirona do? Why is this important for treating disease?

[00:02:53] EvR: Yeah. Thirona develops medical image analysis software, which nowadays is all based on deep learning and machine learning. This software helps determine which anatomies are on the scan, how normal they are, and what kind of disease the patient has.

We actually do that in a slightly different setting than most other AI companies. We don’t make software that’s aimed at radiology, even though medical image analysis is very much a radiology thing. Instead, we really focus on breakthrough treatments that are being developed both by pharma companies and by biotech companies, where, with a bronchoscope in the lungs, you can locally treat diseases.

We really focus on how you personalize imaging, how you make AI that helps determine which treatment will work for which patient. What we see in the world is that for lung diseases, with all the new treatments being developed, it’s not just one pill that works for everyone. You want to administer something locally where you have the disease, so as not to harm the healthy parts of the lung. You want to really determine, for each specific patient: where is the disease located, does the patient have the specific disease we’re looking for, and how do I dose the treatment? We’re really focused on enabling all the new treatments being developed to actually select the right patients and make it to the clinic. That’s the real impact we’re making: being an innovator in the treatment space.

[00:04:18] HC: It sounds like you’re using machine learning in multiple ways, with different types of models predicting different types of outputs. Is that accurate?

[00:04:27] EvR: Yeah, it has a shared core. If you look at what we actually developed, first we have our basic system that can take a CT scan, a 3D image of the lungs, and determine where the lungs are, where your airways and vasculature are, what you as a person look like. Then based on that, indeed, we have models that can help navigate to the disease, but also determine the severity of the disease, the staging of the disease, or the follow-up after you’ve been treated: what happened to your lungs? Yeah, we have different models for the different types of questions that our partners or the doctors have.

[00:05:01] HC: What kinds of challenges do you encounter in working with CT images?

[00:05:05] EvR: On the one hand, medical imaging is a lot easier than real-world imaging, because you always know what’s in front of you. You’re not determining whether it’s a tree or a bird; you know it’s lungs. But at the same time, if you look at CT scans, there’s a lot of variety. There are different scanners, but within a scanner, you have 20,000 different settings, if you will. Then there are the patients; some patients are larger than others, which influences the amount of noise. So you’re really dealing with the fact that things can look tremendously different on different scans of the same patient.

If you scan the same patient twice on the same day on two different machines, or with two different settings of one machine, you can get very different images. If you don’t take care of that, then your models give completely different answers about, for example, how much disease the patient has. So it’s the variability, and in combination with that, you’re often looking for very small responses. If you’re looking at a drug and whether a patient is responding to it, you’re looking for very small differences, and that noise and variability in your scans really harms your ability to detect them. I think that’s the biggest challenge you always have with CT scans.

[00:06:08] HC: How do you deal with those types of variations? Is it more through data cleaning and larger, more diverse data sets, or is it more on the algorithmic side?

[00:06:16] EvR: It’s, of course, a combination. The first thing is that you try to fix it at the source. If you put things out in clinics, of course, we try to advise people to be as consistent as possible in the scanning, but you also realize that in the real world, this is not always possible. So we start at the source, and then we come to our models and say we’re going to train them with the widest variety we can think of, the widest variety we have on hand, to make sure we’re robust.

In addition, we also make models that, before analyzing a scan, try to normalize the scan, so that they’re all in the same space, let’s say they always look similar. They may have been taken with different machines and different settings, but we first convert them all to look like one setting before analyzing. That works, but of course, by altering a scan, you might alter your signal. So we try to fix it at the source and by training with a variety of images, but we also have methods to correct it if needed.

[00:07:14] HC: Do you ever have to handle disagreements in annotations? For example, if clinicians disagree on the exact subtype of a disease a patient has, or other ways that your annotations or labels might not be totally reliable?

[00:07:30] EvR: Yes, I think that is a very common theme. I mean, there are things that are very obvious, but there are indeed things where, if you ask three humans, you get three different answers. I think that is difficult, because you have to decide what your AI is going to do. On the other hand, I think it is also one of the strengths of AI, right? Of course, you have to make the right decisions so that what you make is reliable, but once you’ve made it, it will always give you the same answer, no matter if it’s 8 in the morning or 2 am at night. It’s always the same. For example, a lot of what we do is clinical trials, so it’s very important that if you measure a signal, you measure it consistently in the same way, so you can measure the effect of a treatment. It’s a weakness of humans and a strength of AI.

Then, in dealing with variation in input, we sometimes also turn it around. First, you ask three or four or five humans to annotate something and you see a large variation. You think, “What do I do?” But if we turn it around, build a system based on, for example, the average of the humans or whatever we come up with, and then show all these humans the output of the system and ask them if it’s correct, you very often get a yes. Also, if you ask the same person to annotate something three times, leaving enough time in between so they don’t remember, you will get three different answers. It’s not that they think something else is wrong; it’s just that it’s not black or white. Yeah. It’s a difficult problem in itself.

[00:08:53] HC: Yeah, it’s subjective.

[00:08:55] EvR: Yeah.

[00:08:56] HC: You have to deal with that human variation, even from experts.

[00:08:59] EvR: Yeah. Indeed, in medical imaging, you can encounter that a lot. AI is also helping to standardize things more.

[00:09:06] HC: How does the regulatory process affect the way you develop machine learning models?

[00:09:11] EvR: At the base, not so much. Maybe it’s because we’ve been doing it for quite a while already, but we are able to develop how we want and then get it regulatory cleared. What it really is stopping us from is, for example, online live learning. Technically, it’s possible; technically, we can do it. But we don’t do it, because we don’t know how to get clearance for a model that is constantly learning. You’re clearing a model with certain performance measures that have been validated and cleared, and if we then start updating the model, we can’t get that cleared. We don’t manage it at the moment.

So we are working much more in cycles. If you look technically at AI, we could be learning every single day from what we do. I mean, we as people do, but our AI models learn in release cycles instead of on a daily basis. That is the difference. I don’t know how important it is, but it’s one of the cooler things about AI, that you can learn on the spot. That we don’t do, because of the regulatory process.

[00:10:13] HC: Do you think that online learning components are important to the future of AI applications for medical purposes? The regulatory process doesn’t work with that type of setup right now, but do you think it’s important for the future?

[00:10:26] EvR: Yeah, it’s a good question. Given the field we’re in, I think it could be important. What I could imagine is that there are, of course, things that happen very sporadically. If something is live and a case like that comes in, and the AI gets it wrong the first time, that would be normal, but once it’s corrected, it can immediately take in that knowledge. You’re building a live knowledge base and not, for example, forgetting this one case, let’s say.

I can imagine there’s a space for that, especially in more rare diseases. What I do think, and I think it will happen (the regulatory space moves relatively fast in some ways, though in most it doesn’t), is that the federated learning approach, which is of course becoming more and more popular, will combine with online learning. At some point, you start to be able to learn from data without using the actual data, so privacy is not at risk. Then we start to be able to prove that validation is also built into those systems, that you’re not harming the system, which is what it’s all about: that we’re not making it worse.

I think it will come, and it will be important in some areas. It maybe also helps if you have this kind of AI: of course, if you train it on 10,000 types of scans and then there’s a site with just the one type of scan you haven’t trained on, the model could be live-trained relatively quickly to also work on those scans. Then you don’t have to go back to the manufacturer and do a new release; it just learns online very quickly for those types of scans. In that sense, it could be a game changer for that variation, maybe.

[00:12:01] HC: Yeah. I guess with the current regulatory setup, you’re expected to anticipate those different scanners or different variations upfront and account for them when you collect your training data and when you validate, all of that.

[00:12:15] EvR: Yeah. When they update a scanner, they could introduce new settings that you haven’t trained on, because they didn’t exist yet. I can imagine that those things could be tackled faster in a live, online learning setting, because otherwise you have to do a new release and collect data, and collecting data is, of course, more difficult than if hospitals could update models without actually sharing the data. I think there are good use cases for that, and I also believe it will happen with all the federated learning that’s coming up. It’s actually already there, but I don’t know how clinically used it is. Yeah, it will happen.

[00:12:50] HC: How do you ensure that the technology your team develops will fit in with the clinical workflow and provide the right assistance to doctors and patients and researchers?

[00:12:59] EvR: Yeah. I think that’s a good and very important point. We have a large network of key opinion leaders who, of course, help us every time we make something new with how it fits into the workflow in the clinic. Next to that, we are also very much focused on our partners. Yes, we have clinical integrations, but also a lot of partnerships where we make clinical decision-making software for certain treatments. There, we really develop the outputs and the inputs together with the doctors who are going to use it.

It’s very often a back and forth that can take a few months. We make something, we test it, something is right, something’s wrong, we update it, and so on, up to the point where they know how to use it and they’re happy that it fits their workflows. Funnily enough, it might take an equal amount of time as making the models. But otherwise, you could have the best model in the world, and if people can’t use it or don’t know how, it doesn’t matter. It’s really made through that process and constant contact with physicians.

[00:13:57] HC: Yeah. It’s really important to have that constant feedback and that iteration as you speak of.

[00:14:02] EvR: Yeah.

[00:14:03] HC: One of the large challenges in AI companies right now is hiring for machine learning, because this type of skill set is in very high demand. What approaches to recruiting and onboarding have been most successful for your team?

[00:14:16] EvR: Yeah, it’s interesting. For us, actually, hiring the deep learning engineers in our company, the people who make the models, so the machine learning part, has not been difficult. We’re in a location where a lot of these people are in education, and the impact-driven nature of the company also attracts people. But we do have difficulty with our pure software development: beyond making the AI models, there’s the integration with the clinicians and how you present it.

For us, that’s not deep learning engineers but more software engineering, and that’s a very difficult part for us to recruit. I don’t know why, but I do know that, of course, many companies in all fields need software engineers. In that sense, it’s the market, and maybe the impact that we make has less of an influence, because they’re a little bit further from it. But there, recruiting is difficult. I don’t know the answer. If anyone does, we’re happy to hear it.

Yeah, we recruit worldwide. I think we currently have 15 nationalities, because we keep recruiting; it doesn’t matter where people are from, we recruit them. But I don’t know the answer; nothing has been really successful in that area. We keep searching. We do it ourselves, we use recruiters, we go to fairs to attract people. Juniors are okay, that’s fine, but seniors with experience are very, very difficult for us. I don’t know the answer.

[00:15:41] HC: That’s interesting, though, that you find that distinction between deep learning engineers and software engineers, because in most companies I’ve talked to, it’s the machine learning component that’s most challenging to find, though data engineers can be hard to recruit as well.

[00:15:54] EvR: I have the feeling it’s our location; we’re next to a large university with a large AI component. That is nice if we want juniors. Yeah, somehow I think the impact-drivenness makes it a little bit easier, but it could also be purely our location. I don’t know.

[00:16:10] HC: Yeah, I can see the local university component being important there. Perhaps the students coming out of there were aware of your company while they were in school, and it just makes sense to look locally when they’re looking for a full-time position. Is there any advice you could offer to other leaders of AI-powered startups?

[00:16:28] EvR: Yes. Of course, there are many areas of advice, but I think, if you’re leading any company, but especially an AI company: go with your gut.

It’s a tough field, right? Because it’s a field in which, of course, you have to make technology that’s very innovative, but you also have a market that is still getting used to the innovation. It’s not a market that’s already there. You’re not selling printers. Next to making something that’s highly innovative, you’re also opening a market to this innovation. It’s a twofold thing.

I think what has helped me a lot is, indeed, doing something that you really like. Then it doesn’t matter that it takes so much time. But also, don’t be afraid to ask anyone for help. For example, opinion leaders, doctors, they are very, very happy to help. They really love the innovations in their field. They maybe won’t offer help on their own, because they don’t know whether they can help you, but I’ve never asked anyone for help and had them say no. They always say yes, and then they start offering more and more help and advice. I think that’s something I’ve learned from having a company like this: don’t be afraid to ask the people in the field for help, because they really want to help you. If you don’t ask for help, you’re really lost in getting into hospitals and getting all the contracts you need.

[00:17:40] HC: Finally, where do you see the impact of Thirona in three to five years?

[00:17:45] EvR: World domination. But seriously, we are in the lung image analysis space, really centered around treatments. If you look at the lung treatment world, we’re starting, of course, to focus around lung cancer screening, but also the follow-up of that in the clinics, the localized treatments. I believe that AI will really take care of the personalized medicine part of it. It’s never a one-size-fits-all treatment, and a human can’t do this. Next to the fact that it can take them eight hours to annotate a scan, they also can’t see a difference of one millimeter, or follow an airway [inaudible 00:18:22] all the way out to the periphery of the lung. I really think the impact of Thirona will be the personalized medicine, the personalized treatment of early-stage lung cancers or any other lung disease that needs localized treatment.

[00:18:37] HC: This has been great, Eva. Your team at Thirona is doing some really interesting work in medical image analysis. I expect that the insights you’ve shared will be valuable to other AI companies. Where can people find out more about you online?

[00:18:49] EvR: Of course, we have our own website, thirona.eu, because we’re in the EU. And our LinkedIn channel; as always, the latest news is on there. That’s also a nice source of information.

[00:18:59] HC: Perfect. Thanks for joining me today.

[00:19:01] EvR: Thank you.

[00:19:02] HC: All right, everyone. Thanks for listening. I’m Heather Couture. I hope you join me again next time for Impact AI.

[OUTRO]

[00:19:12] HC: Thank you for listening to Impact AI. If you enjoyed this episode, please subscribe and share with a friend. If you’d like to learn more about computer vision applications for people and planetary health, you can sign up for my newsletter at pixelscientia.com/newsletter.

[END]


Resources for Computer Vision Teams:

LinkedIn – Connect with Heather.
Computer Vision Insights Newsletter – A biweekly newsletter to help bring the latest machine learning and computer vision research to applications in people and planetary health.
Computer Vision Strategy Session – Not sure how to advance your computer vision project? Get unstuck with a clear set of next steps. Schedule a 1-hour strategy session now to advance your project.
Foundation Model Assessment – Foundation models are popping up everywhere – do you need one for your proprietary image dataset? Get a clear perspective on whether you can benefit from a domain-specific foundation model.