If you are working in the life science research space and battling with image recognition issues, firstly, you are far from alone, and secondly, there is a solution! That solution comes in the form of KML Vision, an AI-powered start-up co-founded by today’s guest, Philipp Kainz. In this episode, Philipp explains how he became aware of the image analysis problem and the process that he and his team have gone through to develop machine learning models that provide a range of benefits to a diverse cohort of end users. There is still a large gap between what is technologically possible in a research or lab setting and what is actually out there for people to use; through their flagship product, IKOSA, Philipp is on a mission to change that. Listen to this episode to gain an understanding of how machine learning is being used to shape the future of life science research!


Key Points:
  • The motivation behind the founding of KML Vision.
  • The value that KML Vision’s cloud platform, IKOSA, brings to the life science research space.
  • The diversity of end users of IKOSA.
  • Benefits that IKOSA provides to its users.
  • Examples of some of the most common use cases for IKOSA.
  • The role that machine learning plays at KML Vision.
  • How KML Vision trains their models.
  • The challenges that the KML Vision team have run into when training their models.
  • KML Vision’s approach to developing the machine learning aspects of a new product or feature.
  • How KML Vision helps to solve the problem of reproducibility in the life science research space.
  • Valuable advice for leaders of AI-powered start-ups.

Quotes:

“We basically set out to help people overcome this barrier of using new technologies for image analysis.” — Philipp Kainz

“There is still a big gap between what is technologically possible in a research or lab setting and what is actually out there and what people can use. So, we are actually focusing on bridging that gap.” — Philipp Kainz

“Nobody [really] has time to go into the inner workings of deep learning. They want to use it like we use this smartphone today. This is where we want to be in three to five years.” — Philipp Kainz


Links:

Philipp Kainz on LinkedIn
KML Vision


Resources for Computer Vision Teams:

LinkedIn – Connect with Heather.
Computer Vision Insights Newsletter – A biweekly newsletter to help bring the latest machine learning and computer vision research to applications in people and planetary health.
Computer Vision Strategy Session – Not sure how to advance your computer vision project? Get unstuck with a clear set of next steps. Schedule a 1-hour strategy session now to advance your project.
Foundation Model Assessment – Foundation models are popping up everywhere – do you need one for your proprietary image dataset? Get a clear perspective on whether you can benefit from a domain-specific foundation model.


Transcript:

[INTRODUCTION]

[0:00:02] HC: Welcome to Impact AI, brought to you by Pixel Scientia Labs. I’m your host, Heather Couture. On this podcast, I interview innovators and entrepreneurs about building a mission-driven machine learning-powered company. If you like what you hear, please subscribe to my newsletter to be notified about new episodes. Plus, follow the latest research in computer vision for people and planetary health. You can sign up at pixelscientia.com/newsletter.

[INTERVIEW]

[0:00:32] HC: Today, I’m joined by guest Philipp Kainz, CEO and Founder of KML Vision, to talk about microscopy image analysis. Philipp, welcome to the show.

[0:00:41] PK: Thanks, Heather, for having me.

[0:00:42] HC: Philipp, could you share a bit about your background and how that led you to create KML Vision?

[0:00:47] PK: Yeah, absolutely. My background is in computer science. I did my PhD at the Medical University of Graz, where I was working on creating automated image analysis solutions, mainly for histology images. While doing that, I noticed that many different people were struggling with the sheer size and amount of image data that they are confronted with in their daily work. After finishing my PhD, I created KML Vision with my co-founder Michael. We basically set out to help people overcome this barrier of using new technologies for image analysis.

[0:01:29] HC: What are you doing at KML Vision these days? Why is it important for life science research?

[0:01:35] PK: Actually, when we started out, it was just the two of us, and we were thinking about how we could best support those researchers with their questions. Many of them do not have the ability to apply advanced image analysis technology to solve their use cases efficiently, so they struggle and put a lot of effort into optimizing people rather than the tools and technologies. We noticed, okay, that could be an interesting business for us, but in the end, we wanted to create something scalable.

In 2018, we started to implement our cloud platform, IKOSA. That’s basically our flagship product; it allows you to train your own AI solutions for image recognition online. It’s important for life science research because there is still a big gap between what is technologically possible in a research or lab setting and what is actually out there and what people can use. So, we are focusing on bridging that gap, giving the technology and the ability to people who do not have a background in computer science but are much more focused on getting results for their biological research questions.

[0:02:49] HC: Who are these end users? What is their level of experience? Clearly, they’re not very technical in the image analysis space, but what is their background and what are they looking to accomplish?

[0:02:59] PK: That really depends on the use case. We have seen everything from individual researchers doing a lot of screening applications, to histology and fluorescence imaging, up to transmission electron microscopy and, in general, ultrastructural analysis. Everybody has somewhat different access to resources and experience in working with such technology. I would say that it’s quite diverse.

Biologists who are really interested in getting certain structures quantified are, I think, the ones benefiting most from our technology. The level goes up to machine learning researchers, who basically want to accelerate the pace at which they can extract meaningful information from images, even if they are, for example, an expert located in a competence center or a core facility for imaging. They appreciate our solution because it’s super easy to use, creates results very fast, and is scalable.

[0:04:05] HC: What life science problems are they tackling? What are they trying to do with the analysis? What do they want to quantify and for what purpose?

[0:04:13] PK: Let’s split it into different areas. The first area would be everything related to tissue imaging, everything leaning toward digital pathology. The other would be the biotech sector, where there’s a lot of cell culture screening and so on. Also spheroids; the biotech industry is currently really hot on that topic. It depends on which area you’re in. Let’s first tackle histology. A very common use case is simply: can you count this type of cell in my tissue? I require that for a certain readout and to inform the next step in my research pipeline. That can extend to segmenting or measuring the thicknesses of different layers in histology images.

For example, when it comes to skin or other organs, this is relevant. Then there’s the biotech sector, where you have much higher volumes in order to get statistical relevance on a certain drug response or toxicity and so on. Those are two different scenarios, but in both cases, the quantification mainly relies on an accurate segmentation of tissue or other structures, followed by the extraction of quantitative shape or intensity parameters from those segmentations.
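To make the readout Philipp describes concrete, here is a minimal sketch of extracting shape and intensity parameters from a segmentation mask with scikit-image. The library choice and the specific properties are illustrative assumptions on the editor's part, not IKOSA’s actual pipeline.

# Sketch: per-object shape and intensity readouts from a segmentation.
# Assumes a binary mask and a grayscale intensity image of the same size;
# not KML Vision's implementation.
import numpy as np
from skimage import measure

def quantify(mask: np.ndarray, intensity: np.ndarray) -> list[dict]:
    labels = measure.label(mask)  # one integer label per connected object
    props = measure.regionprops(labels, intensity_image=intensity)
    return [
        {
            "area": p.area,                      # shape parameter
            "eccentricity": p.eccentricity,      # shape parameter
            "mean_intensity": p.intensity_mean,  # intensity parameter
        }
        for p in props
    ]

For a cell-counting use case, len(quantify(mask, image)) would already give the count, and the per-object parameters feed the statistical readout.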

[0:05:38] HC: On these image analysis problems of segmenting structures or counting individual objects, what role does machine learning play?

[0:05:46] PK: Yeah, a big one. It wouldn’t be possible, I would say, without machine learning. When we started, deep learning was just at its onset; before that, there was a huge community around all these decision tree-based methods and so on. When it comes to automating the recognition of very complex structures, the human is biased in assessing those structures. To create objectivity and reproducibility, we need to capture the consensus of different experts on certain subject matters in a machine, basically, in an algorithm. That is actually the key to creating this reproducibility, enabling researchers to be faster and to have a more quantitative basis and evidence-based readouts to continue their research.
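One simple way to think about capturing expert consensus in an algorithm is label fusion over several annotators’ masks. The majority-vote rule below is a common simplification and an assumption, not a description of KML Vision’s method.

# Sketch: fuse binary annotation masks from several experts into one
# consensus label by pixel-wise majority vote. Not KML Vision's method.
import numpy as np

def consensus_mask(expert_masks: list[np.ndarray]) -> np.ndarray:
    stack = np.stack(expert_masks).astype(np.uint8)  # (n_experts, H, W)
    votes = stack.sum(axis=0)                        # agreement count per pixel
    return votes * 2 > len(expert_masks)             # strict majority wins

A model trained on such fused labels then reproduces the panel’s consensus deterministically, which is one route to the objectivity Philipp mentions.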

[0:06:46] HC: Are you training models based on data provided by the end user or by imagery that you bring into the platform, initially, and train the models for the end user to be able to use?

[0:06:56] PK: It’s twofold; both are possible with our platform. We have a set of pre-trained, predefined applications that are very specific to individual use cases. Then there is the self-trainable version called IKOSA AI. There, users can basically upload their own data, annotate the structures that they are interested in, run a training, and create an application that they can then scale and apply to other images to create the readouts.

[0:07:27] HC: In the case of the first example there, where you’re training models and the users are applying it to many different types of data, how do you go about gathering and annotating training data in order to train these models? What’s involved in that process?

[0:07:42] PK: I think the main point here is to work very closely with the domain experts. If you look at creating such very specialized applications, probably in niche areas too, you wouldn’t find many people on the street who would know which object is which in such images, let alone have the experience to discriminate with high confidence when it comes to very low inter-class distance, for example.

In biology, everything is continuous, and making this continuous spectrum discrete sometimes makes you subject to bias. We really rely on people who work with this data and are familiar with the domain and the subject. They help us create the best possible symbiosis between the computer science world, meaning everything you need to prepare so that the machine can understand and learn it, and the biology world.

[0:08:42] HC: Then in training models on this data that you have collected and annotated, what challenges do you encounter with microscopy images?

[0:08:51] PK: There are a couple of challenges. First, I would say it’s critical how you balance your data. If you have an object or some structure of interest that is super underrepresented, and there is just a lot of uninteresting stuff around it in the tissue, then you probably need a strategy for how to annotate that and how to focus the model on the regions where the relevant information is represented.

There is also selecting patches, for example, from big microscopy images that are non-redundant in order to maximize variability. Those are definitely challenges that could become a problem in many use cases if you don’t know how to do it right. Regarding the different modalities, there is definitely a certain domain shift, not only in the pathology domain, but also when it comes to imaging the same structure with different illuminations in a phase contrast microscopy setting. Exposure times in fluorescence channels can be so different between two experiments or two devices that you need to normalize your data. That is definitely a challenge.
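A common first line of defense against the exposure-time shift Philipp describes is per-channel intensity normalization. The percentile-based rescaling below is a generic sketch under that assumption; the percentile values are placeholders, not IKOSA’s settings.

# Sketch: per-channel percentile normalization to reduce intensity domain
# shift between fluorescence experiments or devices.
import numpy as np

def normalize_channels(img: np.ndarray, low: float = 1.0, high: float = 99.0) -> np.ndarray:
    out = np.empty(img.shape, dtype=np.float32)
    for c in range(img.shape[-1]):                       # img is (H, W, C)
        lo, hi = np.percentile(img[..., c], [low, high])
        out[..., c] = np.clip((img[..., c] - lo) / max(hi - lo, 1e-8), 0.0, 1.0)
    return out

Clipping to robust percentiles rather than the raw minimum and maximum keeps a few saturated pixels from dominating the rescaling.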

[0:10:08] HC: Let’s talk a little bit more about that domain shift problem, whether it’s a slightly different type of imagery that an end user brings in or the lighting has changed or something about it has changed. How do you ensure that the models you’ve already trained will continue to perform well on imagery that’s just a little bit different or do you need to make updates in that situation?

[0:10:29] PK: Yeah. The ones we have trained in the platform represent, I would say, 80% of what we can anticipate. Of course, there are always some border regions, some samples that you cannot really foresee when you start collecting and annotating your data. Most of the time, people will just call and say, “Hey, this is not going to work on my data. What is the problem?” Then sometimes we figure out, okay, they have a different setting, or maybe their filter is not correctly adjusted, or they have a lot of crosstalk. These are things you can fix in the image acquisition, so that you create better quality images that can be analyzed much more easily, rather than adapting the recognition technology or the computational path to whatever is out there. I think that is one of the main reasons we encounter these domain shifts.

[0:11:27] HC: That’s a really good point about the image acquisition step. If you can get better quality images, reduce the noise, reduce the artifacts, reduce whatever that complication is, that makes the rest of the process more consistent, more robust, and definitely easier to work with in a machine learning world.

[0:11:46] PK: Absolutely. Yeah, I agree.

[0:11:48] HC: Thinking back to when you’re developing a new product or a new feature, how does your team plan and develop the machine learning aspects of it? What actions do they take in the beginning of that plan?

[0:12:00] PK: I think it starts with understanding what the problem is and how to formulate it. As I said, most of our applications are segmentation applications, since we want transparency and interpretability in the results, and with segmentation that is obvious: if the model segments something correctly, you see it. If you do classification, you would need to apply other technologies to make it more tangible what the model is actually doing. The first step would always be to understand what the data actually is, how it is structured, or whether we need to bring it into a structured form, and how to choose the best-fitting annotation strategy.

Equipped with those two things, you have a very good foundation to train a robust model in a shorter time. Nobody has time to invest eight or ten weeks into creating new applications. Everything moves at a super-fast pace, so you need to make sure you have a very good foundation to start from. Then, of course, we go ahead and test different models, hyperparameter settings, and so on. We have a large battery of models that we can test to find a suitable candidate to go with. From there, it is basically creating the baseline, optimizing the model, and ultimately deploying it to the platform as an application.
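The model selection step Philipp outlines can be sketched as a simple loop over a battery of candidates, keeping the one with the best validation score. The function names and the mean-IoU metric are placeholders assumed for illustration, not KML Vision’s internal tooling.

# Sketch: pick a baseline from a battery of candidate models by
# validation score. train_fn and score_fn are placeholders.
from typing import Any, Callable

def select_baseline(
    candidates: dict[str, dict],
    train_fn: Callable[[dict], Any],
    score_fn: Callable[[Any], float],  # e.g. mean IoU on a held-out set
) -> tuple[str, float]:
    best_name, best_score = "", float("-inf")
    for name, config in candidates.items():
        model = train_fn(config)   # train with the candidate's settings
        score = score_fn(model)    # evaluate on validation data
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score

The winning candidate becomes the baseline that is then optimized and deployed to the platform.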

[0:13:29] HC: One of the challenges in biomedical research right now and likely in other areas of scientific research as well is the lack of reproducibility. A team publishes a paper and another group may go and try and replicate it and something doesn’t work. In using your platform, how does this improve the reproducibility for life science research?

[0:13:50] PK: Yeah. That’s a very good point. Recently, we actually had an article written about these reproducibility issues. The thing is that once you can capture the knowledge of a specific, let’s say, image evaluation routine in an application, with or without machine learning, you create something like a gold standard. Creating a gold standard that can be shared is, I think, super crucial, because you save so much time and effort that others would otherwise need to put in to recreate those models.

On our platform, it’s super easy to share different projects with each other. If that’s wanted, you can even share entire applications you have trained. You can invite people and say, “Hey, take a look at what we have done.” For example, if it was published in a paper, you can be part of such an evaluation project, run it on your data, check if it’s working, and give feedback. It creates this environment, a platform for conversation about what the next steps can be, so that, in an ideal world, nobody has to do things twice, and we save a lot of time and money in the research community.

[0:15:01] HC: Is there any advice you could offer to other leaders of AI-powered startups?

[0:15:06] PK: I can only speak for the biotech and healthcare or biopharma space, and of course in the setting of image recognition. When it comes to speech recognition or text and so on, I’m absolutely no expert, so I will focus on the imaging part. I think it’s super interesting to take a look at what others have been doing, find out what the gap is between current solutions and the state of the art, and understand what your clients’ expectations are.

This is super critical, I would say. We made this error in the beginning too: we were driven by academic paper writing, where you need to squeeze out the last decimal place of performance on a dataset, but that’s not what’s practically relevant in reality. What you want to do is create value for your customer, and if the customer is satisfied with, I don’t know, 0.8 recall and it still solves the problem, you should understand and learn, okay, “This is enough, and that is actually solving my client’s problem.” Don’t over-optimize according to the numbers.

[0:16:24] HC: Well, I know you said at the beginning that you’re stating this in relation to the imaging and biopharma space you’re in, but I’m fairly certain it applies to other startups as well. Anyone dealing with AI, and even outside AI, needs to solve a valuable problem. Whether that’s with AI or without, that’s the way to a successful startup: solve that problem.

[0:16:46] PK: Yeah, I guess my answer was broader than I expected.

[0:16:51] HC: Finally, where do you see the impact of KML Vision in three to five years?

[0:16:55] PK: I get this question a lot. The impact is definitely in giving a broad audience more and faster access to state-of-the-art technologies. We have a very low threshold to access our tools. It’s cloud-based, so basically, once you sign up on our website, you get a free trial where you can test all the features for one month. Take it for a test drive. I think if you manage to provide a good user experience and convince with convenient features and well-performing algorithms and applications, then there will definitely be an impact on both academic research and industry, because in the end, nobody really has time to go into the inner workings of deep learning. They want to use it like we use this smartphone today. This is where we want to be in three to five years.

[0:17:58] HC: This has been great, Philipp. Your team at KML Vision is doing some really interesting work for life science research. I expect that the insights you shared will be valuable to other AI companies. Where can people find out more about you online?

[0:18:10] PK: I invite everyone to check out our website, www.kmlvision.com, and to sign up for a free trial at app.ikosa.ai. Check out what we have in store and feel free to reach out to us. We’ll be happy to give you a demo, take a look at your use cases, and help you solve your next image recognition challenge.

[0:18:32] HC: Perfect. Thanks for joining me today.

[0:18:34] PK: My pleasure.

[0:18:35] HC: All right, everyone. Thanks for listening. I’m Heather Couture, and I hope you join me again next time for Impact AI.

[END OF INTERVIEW]

[0:18:45] HC: Thank you for listening to Impact AI. If you enjoyed this episode, please subscribe and share with a friend. If you’d like to learn more about computer vision applications for people and planetary health, you can sign up for my newsletter at pixelscientia.com/newsletter.

[END]