Today, I am joined by Ersin Bayram, the director of AI and data science at Perimeter Medical Imaging AI, to talk about tissue imaging during cancer surgery. This technology provides real-time margin visualization to surgeons intraoperatively, rather than waiting days for the pathology report, which remains the gold standard for confirming margin status. The surgical oncologist's goal is to achieve clean margins on excised tissue during the initial surgery, reducing the chance of the patient requiring a second surgery or leaving cancerous tissue behind. The next generation of this device uses AI and big data to speed image interpretation.

Tuning in, you'll hear about the role of machine learning in this technology, how they gather and annotate data in order to train the system, and the types of challenges encountered when working with OCT imagery. We discuss the role of model explainability, whether or not model accuracy is more critical, and how class activation maps are used for improving the model. We also talk about regulatory processes as well as Ersin's approach to recruiting and onboarding before he gives his advice to other leaders of AI-powered startups. For all this and more, tune in today!

Key Points:
  • An introduction to Ersin Bayram and his role at Perimeter Medical Imaging AI. 
  • What Perimeter does and why this is important for cancer outcomes. 
  • The role of machine learning in this technology. 
  • How this technology segments out potential areas of concern on the OCT scans and displays them to the surgeon.
  • How Perimeter Medical Imaging AI gathers and annotates data in order to train the system.
  • The types of challenges they encounter when working with OCT imagery. 
  • The role of model explainability and whether or not model accuracy is more critical. 
  • How class activation maps are used for improving the model rather than being shown to the clinician.
  • How the regulatory process affects the way Ersin and his team develop machine learning models. 
  • Approaches to recruiting and onboarding that have been most successful. 
  • Ersin’s advice to other leaders of AI-powered startups. 
  • How he foresees the impact of Perimeter three to five years from now.

Quotes:
“I can still work on oncology, making an impact on a really deadly disease, and also start focusing entirely on the AI side and the data science aspect. That was an easy decision.” — Ersin Bayram

"The surgeon might be able to look into the images, and then they might be able to go back and take extra shaves. There might also be benefits to not carving out more healthy tissue than needed." — Ersin Bayram

“If you find the talent that has the medical imaging background and they have the foundational skills, technical thinking, and they have basic Python skills, we can train them and we can ramp them up to become good AI scientists.” — Ersin Bayram

Links:

Disclaimer:
Perimeter B-Series OCT is not available for sale in the United States. CAUTION – Investigational device. Limited by U.S. law to investigational use.

Transcript:

[INTRODUCTION]

[00:00:03] HC: Welcome to Impact AI, brought to you by Pixel Scientia Labs. I'm your host, Heather Couture. On this podcast, I interview innovators and entrepreneurs about building a mission-driven, machine learning-powered company. If you like what you hear, please subscribe to my newsletter to be notified about new episodes. Plus, follow the latest research in computer vision for people and planetary health. You can sign up at pixelscientia.com/newsletter.

[INTERVIEW]

[0:00:33.6] HC: Today, I’m joined by guest Ersin Bayram, director of AI and data science at Perimeter Medical Imaging AI, to talk about tissue imaging during cancer surgery. Ersin, welcome to the show.

[0:00:44.4] EB: My pleasure, thank you.

[0:00:46.0] HC: Ersin, could you share a bit about your background and how that led you to Perimeter?

[0:00:49.8] EB: My undergrad is in electrical engineering, focusing on signal and image processing, and my master's is in computer science. I moved toward computer vision and medical imaging, and my Ph.D. is in biomedical engineering, focusing on magnetic resonance imaging. During my master's and Ph.D., I started looking into neural networks, self-organizing maps, and some pattern recognition-related algorithms. Back then, the accuracy was not there, of course. I then joined GE Healthcare's MR division as an engineer working on developing MR solutions and spent 16 years with that company; the last five years were heavily focused on AI advances. We started working on AI, and I'm really passionate about oncology in general. I had family members impacted by cancer, so I wanted to continue in the oncology space.

I was the director of oncology at GE Healthcare when the Perimeter opportunity came up as the director of AI and data science position. That was, in a sense, a dream come true: I can still work on oncology, making an impact on a really deadly disease, and also start focusing entirely on the AI side and the data science aspect. That was an easy decision, and it's been a little over a year now since I joined Perimeter Medical Imaging AI.

[0:02:11.7] HC: So what does Perimeter do and why is this important for cancer outcomes?

[0:02:15.8] EB: Perimeter is a medical technology company. We aim to transform surgical oncology. Right now, we are basically using optical coherence tomography, the same technology used when you go to an eye doctor and get your retina scanned, and applying a similar concept to margin assessment. The idea is basically to provide real-time margin visualization to the surgeons at the time of surgery, versus waiting for the pathology report to confirm whether it was a clean excision or there were still positive margins. We want to provide that feedback at the time of surgery using AI and big data.

[0:02:57.2] HC: So the goal is to reduce the chance that the patient might need a second operation if some cancer was left behind, is that the idea?

[0:03:03.7] EB: Exactly. The surgeon might be able to look into the images, and then they might be able to go back and take extra shaves. There might also be benefits to not carving out more healthy tissue than needed, so it might also reduce some of the subsequent surgeries, reconstructive surgeries for instance.

[0:03:22.2] HC: So what role does machine learning play in this technology?

[0:03:24.9] EB: Our current product does not have machine learning available, but it is a work-in-progress technology development right now. What we are trying to do is basically build an AI decision support system. The surgeons scan each part of the margin. If you think of a tumor in simple terms, like a cube, there are six margins, and for every margin, OCT acquires high-resolution, thin-slice scans.

You might have several hundred images to sift through, and that is a substantial amount of information for the surgeon to assess at the time of surgery. AI is there to basically point to potential suspicious findings and help with the productivity aspect. That is what we're trying to develop right now.

[0:04:08.1] HC: So is this a model that would segment out potential areas of concern on the OCT scans and then display that to the surgeon?

[0:04:17.1] EB: That's the idea. We are not segmenting, since OCT has a limited depth of penetration, about two millimeters. We just basically crop out an area that we call a patch, flag these as thumbnail images to the surgeon, and then mark them on the specimen image as a localizer, so they will be able to go to the area of interest.

We don't need to do a segmentation or measurement in that sense. It's a binary decision for the surgeon: "Should I take an extra shave, or did I get everything out? Should I close this patient and then finish the surgery?"
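
To make that patch-and-flag idea concrete, here is a minimal sketch of what such a triage step could look like. Perimeter has not published its pipeline, so the patch size, stride, scoring model, and threshold below are illustrative assumptions only.

```python
# Illustrative sketch only; not Perimeter's actual pipeline.
# Patch size, stride, and threshold are hypothetical values.
import numpy as np

def extract_patches(bscan: np.ndarray, size: int = 128, stride: int = 64):
    """Slide a window over a 2D OCT B-scan, yielding (row, col, patch)."""
    rows, cols = bscan.shape
    for r in range(0, rows - size + 1, stride):
        for c in range(0, cols - size + 1, stride):
            yield r, c, bscan[r:r + size, c:c + size]

def flag_suspicious(bscan: np.ndarray, score_fn, threshold: float = 0.5):
    """Score every patch and return coordinates of those above threshold.

    `score_fn` stands in for a trained binary classifier that maps a
    patch to a probability of suspicious tissue.
    """
    return [(r, c) for r, c, patch in extract_patches(bscan)
            if score_fn(patch) >= threshold]

# Placeholder scorer (mean intensity) just to make the sketch runnable:
scan = np.random.rand(512, 1024)
hits = flag_suspicious(scan, score_fn=lambda p: float(p.mean()))
print(f"{len(hits)} patches flagged for surgeon review")
```

Each flagged coordinate could then be rendered as a thumbnail and marked on the specimen image, matching the localizer workflow Ersin describes.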

[0:04:49.6] HC: Got you, that makes sense. How do you gather and annotate data in order to train a system like this?

[0:04:56.5] EB: That is one of the challenging parts. The data has to come from our device, since we are using our own device's images as input, and our gold standard is the pathology reports. We need to correlate pathology reports against our OCT images, so we had our own clinical trials, collaborating with academic sites, to gather this type of data. It is a time-consuming and costly process to gather that imaging data, and it requires expertise to label that data properly. That's how we formed the basis of our initial AI model.

[0:05:31.8] HC: Are there expert clinicians who are very familiar with OCT and can look at a scan and say, "This is an area of concern and this one's not"? Is that how they annotate?

[0:05:43.7] EB: We have a two-layered labeling system. Again, OCT is not a mainstream imaging modality, so we have our own subject matter experts who understand OCT extremely well. They do the initial reader labeling, and then trained pathologists and surgeons, the clinical labelers, read it. If they disagree with the reader's initial labels, we consolidate that, and again, the gold standard is the correlation with the pathology report.
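
As a rough illustration of how a two-tier labeling workflow like this might be encoded, here is a hedged sketch. The field names and adjudication rules are hypothetical; the episode only establishes that OCT experts label first, clinical experts review, and pathology is the gold standard.

```python
# Hypothetical label-adjudication sketch; not Perimeter's actual workflow.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PatchLabel:
    patch_id: str
    oct_reader_label: str            # first pass by an OCT subject-matter expert
    clinical_label: Optional[str]    # review by a trained pathologist or surgeon
    pathology_result: Optional[str]  # gold standard, when a correlated report exists

def consolidate(label: PatchLabel) -> str:
    """Resolve disagreements, always deferring to pathology when available."""
    if label.pathology_result is not None:
        return label.pathology_result
    if label.clinical_label is not None and label.clinical_label != label.oct_reader_label:
        return "needs_adjudication"  # disagreement without pathology: escalate
    return label.oct_reader_label

print(consolidate(PatchLabel("p001", "suspicious", "benign", None)))             # needs_adjudication
print(consolidate(PatchLabel("p002", "suspicious", "suspicious", "malignant")))  # malignant
```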

[0:06:14.7] HC: What kinds of challenges do you encounter in working with OCT imagery?

[0:06:19.5] EB: As I mentioned, we need the data. We need to have both our OCT images and the pathology report that corresponds with them, so we cannot just go out and look for random images. The second piece, and I think this applies to any medical imaging or healthcare-related AI solution, is that the data will be very skewed. You will see a lot of negatives before you see a positive case, so it will take time to find the number of positive cases you need to build your model accurately.
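
One standard way to cope with this kind of skew, offered here only as an assumption since the episode does not say how Perimeter handles it, is to up-weight the rare positive class in the loss:

```python
# Class-imbalance sketch: weight positives by inverse class frequency.
# The 950/50 split is an invented example, not Perimeter's data.
import torch
import torch.nn as nn

labels = torch.tensor([0] * 950 + [1] * 50)   # many negatives, few positives
counts = torch.bincount(labels).float()
pos_weight = counts[0] / counts[1]            # ~19x up-weight for positives

criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)

logits = torch.randn(8)                       # dummy model outputs
targets = torch.tensor([0., 0., 1., 0., 0., 1., 0., 0.])
print(criterion(logits, targets).item())
```

Oversampling positives, or simply collecting more positive cases over time as Ersin notes, are the other common levers.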

[0:06:49.4] HC: Are there any other unique challenges related to OCT specifically? The OCT imagery that I've looked at in the past is fairly noisy, but I know that's just the nature of the modality. Does that change or affect how you approach training a machine learning model on it?

[0:07:05.0] EB: When we collected the data, we did our own baselining and established an SNR. That is not something that's user-controllable, so we have established a baseline that gives us accurate and reliable results. That part is not a concern. The tissue specimen is sitting right on the surface, so the image quality is, I think, adequate, and as a future technology development, we're looking into ways to use AI to improve the image quality further as well.
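
The episode does not describe how the SNR baseline is computed, but a generic image-quality gate might look something like the following sketch; the background-strip assumption and the 20 dB floor are invented for illustration.

```python
# Hedged SNR quality-gate sketch; thresholds and layout are assumptions.
import numpy as np

def frame_snr_db(frame: np.ndarray, background_rows: int = 20) -> float:
    """Estimate SNR as signal mean over noise std, in decibels.

    Assumes the top rows of the frame contain no tissue and can serve
    as a noise reference.
    """
    noise = frame[:background_rows, :]
    signal = frame[background_rows:, :]
    return float(20 * np.log10(signal.mean() / (noise.std() + 1e-9)))

def passes_quality_gate(frame: np.ndarray, floor_db: float = 20.0) -> bool:
    """Reject frames that fall below the fixed acquisition baseline."""
    return frame_snr_db(frame) >= floor_db

print(passes_quality_gate(np.abs(np.random.randn(512, 1024))))
```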

[0:07:35.4] HC: What role does model explainability play? Is it an important part or is model accuracy more critical?

[0:07:41.3] EB: I would say both are important, especially in the oncology space. You need to have a very accurate model to build a healthcare-based solution, but the model also has to be stable. There's a human element as well: the end users have to trust what the AI is doing, so you need to be able to explain what the AI model is thinking. We're looking at those activation maps to really visualize what the model is seeing.

That really helped us in our development as well, once we understood what the model is seeing. That explainability helped even our own AI data scientists: once we understood what was tripping up the AI model, visually, with class activation map type visual tools, we were able to feed it better data. The curation of the data also improved. I would say both are really critical.
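
For readers unfamiliar with class activation maps, Grad-CAM is one widely used variant, and a minimal sketch follows. The episode does not say which method or backbone Perimeter uses; the ResNet-18 here is just a stand-in.

```python
# Grad-CAM-style sketch on a stand-in backbone; not Perimeter's model.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None)
model.eval()

feats, grads = {}, {}
layer = model.layer4  # last convolutional block

layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

x = torch.randn(1, 3, 224, 224)   # dummy input image
score = model(x)[0].max()         # score of the top class
score.backward()

weights = grads["a"].mean(dim=(2, 3), keepdim=True)  # pool gradients per channel
cam = F.relu((weights * feats["a"]).sum(dim=1))      # weighted sum of feature maps
cam = F.interpolate(cam.unsqueeze(1), size=x.shape[-2:], mode="bilinear")
print(cam.shape)  # a heatmap showing which regions drove the prediction
```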

[0:08:35.9] HC: Will the class activation maps be displayed to the clinician in the end product, or is that something more for development, used more as a tool to improve your models?

[0:08:45.7] EB: We are using them right now more for improving the model. Again, this is a work in development and things might change down the road, so I cannot really comment on the final version, but right now, we are not intending to display them. It might be an information overload in a sense; you need to really give the surgeons the information they are looking for.

[0:09:09.0] HC: So the end result for the surgeon is just those binary decisions on each margin?

[0:09:14.5] EB: It is a binary decision plus the images. We created a dashboard of suspicious findings, and if they click on an image, it takes them to the exact location on the specimen image so they can review it. The ultimate decision rests with the surgeon.

[0:09:30.4] HC: How does the regulatory process affect the way you develop machine learning models? In particular, thinking back towards the early stages of a project and how you develop your product, does it dictate what you do from the beginning knowing that you’ll be going through that process?

[0:09:47.0] EB: Absolutely. Being in a regulated industry versus developing an AI solution with the normal consumer-type approach are two different things. You have to have data rights to be able to use that data for model building. That's why we needed to have our own clinical studies at the beginning to be able to collect data. We cannot just go out and get data; you need to have the source and the consent for those data rights, for product use as well as regulatory submission.

That's one aspect you need to pay attention to. The other aspect is that when you're working on a regulated solution, you ultimately need to have traceability on many items. AI is rapidly evolving, but you cannot just update your models and release a new one; you have to go through the approval and regulatory processes.

[0:10:37.1] HC: Does planning for the regulatory process affect any other aspects of how you train models? Do you intentionally validate more thoroughly from the beginning, or things like that?

[0:10:50.2] EB: Yeah, from the beginning, everything. You have to have more traceability on the data sources, you have to look into the performance with objective criteria, and we have revision control so we can go back and roll back to previous models. Everything is more thorough; you should be able to audit every aspect of your AI development, and there are design controls in place. The tools we use for labeling our data have to be officially approved tools. Everything is under strict quality management control.
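
As an illustration of what that traceability can look like in practice, here is a hypothetical record a regulated ML team might attach to every trained model. The fields and values are invented for this sketch, not Perimeter's actual design-control scheme.

```python
# Hypothetical model-traceability record; all fields and values invented.
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class ModelRecord:
    model_version: str
    dataset_version: str      # the locked, consented dataset used for training
    code_commit: str          # revision-control hash of the training code
    validation_metrics: dict  # objective acceptance criteria and results
    label_tool: str           # labeling tool approved under quality management

def fingerprint(weights_bytes: bytes) -> str:
    """Content hash so a deployed model can be matched to its record."""
    return hashlib.sha256(weights_bytes).hexdigest()

record = ModelRecord(
    model_version="1.3.0",
    dataset_version="oct-margins-v7",
    code_commit="a1b2c3d",
    validation_metrics={"sensitivity": 0.95, "specificity": 0.90},
    label_tool="labeler-2.1 (QMS-approved)",
)
print(json.dumps(asdict(record), indent=2))
print(fingerprint(b"model weights go here")[:16])
```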

[0:11:24.8] HC: Another topic I want to touch on is hiring. Particularly in the machine learning field right now, engineers are in very high demand. What approaches to recruiting and onboarding have been most successful for your team?

[0:11:38.2] EB: There is a huge supply-demand gap here for sure. It has been a challenge to compete with some of the more [inaudible 0:11:45.8] companies for these types of resources. For us, I think one of the key items is: if you find the talent that has the medical imaging background and they have the foundational skills, technical thinking, and they have basic Python skills, we can train them and we can ramp them up to become good AI scientists.

So instead of looking for the full expertise in your AI talent, we look for the potential of someone becoming a really good AI scientist, nurture them, and build their skillset through mentorship. We're also utilizing internship opportunities to tap into the talent and develop a potential pipeline for recruiting as well.

[0:12:33.8] HC: So having some medical imaging domain knowledge is helpful going into it, by the sounds of it. When someone joins your team, how do they learn more about the medical imaging aspect and collaborate with experts in those areas to be sure they can develop better machine learning models based on that knowledge?

[0:12:52.6] EB: Yeah, we rely on mentorship as part of our development of new hires, so they are part of a team. Until they become an independent contributor, they are assigned to a mentor and they work on a project. I strongly believe in learning by doing, so we try to build the skillset by coming up with simple projects that we need and that will help them learn the Python frameworks better.

Those are maybe the building blocks. It may not be the most critical task at that point, but it might be critical to ramp up the skillset of the employee, and as they ramp up and pick up the pieces of the puzzle, eventually they become self-sufficient and can start leading a project on their own. Initially, they're part of a team where they can be mentored properly. There are also a lot of AI-related resources online, so it doesn't take much for someone to start playing with pre-trained networks; you can really fail safe, fail early, and experiment with it.
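
In that spirit, a first "play with a pre-trained network" exercise for a new hire might look like the sketch below: freeze an ImageNet backbone and retrain only a small head. This is an onboarding-style example, not Perimeter's training code, and the two-class task is illustrative.

```python
# Transfer-learning starter sketch: freeze the backbone, train a new head.
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT)  # ImageNet pre-trained

for param in model.parameters():  # freeze all pre-trained layers
    param.requires_grad = False

# Replace the classifier head with a fresh 2-class layer
# (e.g. suspicious vs. not suspicious; hypothetical labels).
model.fc = nn.Linear(model.fc.in_features, 2)

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"{trainable} trainable parameters out of {total}")
```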

[0:13:53.1] HC: Is there any advice you could offer to other leaders of AI powered startups?

[0:13:57.2] EB: Coming to a solution early on is important, but at the same time, leaders need to think through the longer-term vision: managing the different iterations of the model, the MLOps side. That is the glue that puts everything together. It may not be as visible to the end users and the customers, but having the structure around that, and around the data, the data management, is I think another critical component.

I feel like the AI ecosystem is developing very quickly. If there are third-party companies that they can utilize for certain aspects, I think that's a good way to tap into the expertise of others.

[0:14:37.6] HC: Where do you see the impact of Perimeter in three to five years?

[0:14:41.1] EB: I'm hoping we will see a positive outcome; we have our pivotal clinical trial ongoing right now. I hope to see the proof that our AI and our OCT technology demonstrate their clinical value, and hopefully we will have it out in the field, utilized with AI. Our current commercial solution is without AI, just the OCT technology, so I am hoping we will have one or more AI solutions embedded in the system by that timeframe.

[0:15:10.5] HC: This has been great. Ersin, your team at Perimeter is doing some really interesting work in medical imaging for cancer surgery. I expect that the insights you shared will be valuable to other AI companies. Where can people find out more about you online?

[0:15:22.9] EB: They can visit our company website, and they can also find me on LinkedIn, Ersin Bayram, and I'll be happy to get in touch with them directly.

[0:15:33.8] HC: Perfect. Thanks for joining me today.

[0:15:35.7] EB: My pleasure. Thanks again for the invitation.

[0:15:38.1] HC: All right everyone, thanks for listening. I’m Heather Couture and I hope you join me again next time for Impact AI.

[END OF INTERVIEW]

[0:15:49.0] HC: Thank you for listening to Impact AI. If you enjoyed this episode, please subscribe and share with a friend. And if you'd like to learn more about computer vision applications for people and planetary health, you can sign up for my newsletter at pixelscientia.com/newsletter.

[END]


Resources for Computer Vision Teams:

LinkedIn – Connect with Heather.
Computer Vision Insights Newsletter – A biweekly newsletter to help bring the latest machine learning and computer vision research to applications in people and planetary health.
Computer Vision Strategy Session – Not sure how to advance your computer vision project? Get unstuck with a clear set of next steps. Schedule a 1-hour strategy session now to advance your project.
Foundation Model Assessment – Foundation models are popping up everywhere – do you need one for your proprietary image dataset? Get a clear perspective on whether you can benefit from a domain-specific foundation model.