Burnout in the medical field is a significant and pervasive problem that affects healthcare professionals across various specialties and levels of experience. In this episode, I explore the impact of artificial intelligence on the field of radiology. My guest, Jeff Chang, the Co-founder and Chief Product Officer of Rad AI, shares his insights into the transformative power of AI in addressing critical challenges in radiology (like burnout) and improving patient care.
Discover Rad AI's groundbreaking products, including Rad AI Omni Impressions and Continuity, driven by powerful machine learning, and learn how they seamlessly integrate into clinical workflows. Jeff also speaks about measuring the impact of their products, exciting future applications for their products, Rad AI's vision for the future of global diagnostic care, and much more! Don't miss this deep dive into AI's potential in revolutionizing radiology and improving patient lives worldwide with Jeff Chang from Rad AI!
Key Points:
- Background on Jeff and the overall mission of Rad AI.
- Insight into the next-generation products that Rad AI offers.
- How machine learning plays a central role in their products.
- Training data and the use of transformer models in Rad AI's solutions.
- Certain limitations and hurdles of working with radiology reports.
- Ensuring product integration with existing clinical workflow.
- Exciting future applications for AI and machine learning in the space.
- Ways that Rad AI measures the impact of its technology.
- Essential advice for leaders of AI-powered startups.
- Upcoming products from Rad AI and the company’s future impact.
Quotes:
“Every one of our products is centered around the latest machine learning transformer work.” — Jeff Chang
“There is absolutely zero change to the existing workflow. That makes it really easy for radiologists to adopt without having to change anything that they currently do.” — Jeff Chang
“The more you streamline both deployment and the training process, getting radiologists or your users used to using the product, the easier it becomes and the more time you save for the radiologist.” — Jeff Chang
“Because of the post-processing, because of the specific training on the radiologist’s historical reports, [our models] are much more accurate than the current state of the art [LLMs] for the applications that we currently provide.” — Jeff Chang
Links:
Jeff Chang on LinkedIn
Rad AI
Rad AI on LinkedIn
LinkedIn – Connect with Heather.
Computer Vision Insights Newsletter – A biweekly newsletter to help bring the latest machine learning and computer vision research to applications in people and planetary health.
Computer Vision Strategy Session – Not sure how to advance your computer vision project? Get unstuck with a clear set of next steps. Schedule a 1 hour strategy session now to advance your project.
[INTRODUCTION]
[0:00:03.3] HC: Welcome to Impact AI, brought to you by Pixel Scientia Labs. I’m your host, Heather Couture. On this podcast, I interview innovators and entrepreneurs about building a mission-driven, machine-learning-powered company. If you like what you hear, please subscribe to my newsletter to be notified about new episodes. Plus, follow the latest research in computer vision for people and planetary health. You can sign up at pixelscientia.com/newsletter.
[INTERVIEW]
[0:00:34.3] HC: Today, I’m joined by guest, Jeff Chang, cofounder and CPO of Rad AI, to talk about reducing radiologist burnout. Jeff, welcome to the show.
[0:00:43.3] JC: Well, thank you, it’s good to be here.
[0:00:45.5] HC: Jeff, could you share about your background and how that led you to create Rad AI?
[0:00:50.0] JC: Sure, so I started really young. I went to college at 13 and med school at 16 at NYU, and then became the second-youngest doctor and the youngest radiologist on record in the US. I ended up practicing as a radiologist in private practice for about 10 years with Greensboro Radiology in North Carolina, and along the way, I went to business school at UCLA, did graduate work in machine learning at the University of Edinburgh, and started working on other startups as well.
So I took a hardware startup through YC in the summer of ’14. My cofounder and I have been working together for the past 10 years, and based on everything we saw over that decade, there were a lot of issues with radiologist fatigue and burnout. When I’d do two or three weeks of night shifts in a row, it’s just like your brain turns to mush. So how do we save radiologists time?
How do we reduce radiologist fatigue and burnout? Rad AI, which we founded in 2018, was really focused on that: how best to improve the daily work and life of the radiologist. At the same time, we can also improve the quality of radiology reporting, accuracy, and so on.
[0:01:56.8] HC: So what services does Rad AI offer? How do you productize that?
[0:02:02.0] JC: Sure, so our first product is Rad AI Omni Impressions, which automatically generates what we call the impression of the radiology report. That’s the summary of conclusions and follow-up recommendations of the radiology report. As radiologists, we look at the images, but what we actually spend most of our time doing is dictating our reports. Our end product is the report.
With that, 75% of all of our time is spent dictating reports using voice recognition, so Rad AI Omni Impressions integrates with all the major voice recognition solutions on the market and automatically generates that last part of the report directly from the text. So it’s pure generative AI: directly from the text of what the radiologist dictates, it creates that last section of the report in their language, customized to their exact language and style.
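The generation step Jeff describes — summarizing dictated findings into an impression in the radiologist’s own style — could be sketched roughly as follows. This is a toy illustration, not Rad AI’s actual system: `generate_impression` and the style dictionary are hypothetical names, and the sentence-picking logic stands in for a per-radiologist fine-tuned transformer.

```python
# Toy stand-in for an impression generator. A real system would run a
# seq2seq transformer fine-tuned on this radiologist's historical reports;
# here we just select the positive findings and apply a style preference.

def generate_impression(findings: str, style: dict) -> str:
    """Summarize dictated findings into an impression using the
    radiologist's preferred lead phrase (hypothetical style knob)."""
    # Keep sentences that report a finding (crudely: drop negations).
    key = [s.strip() for s in findings.split(".")
           if s.strip() and "no " not in s.lower()]
    lead = style.get("lead_phrase", "IMPRESSION:")
    body = " ".join(f"{i + 1}. {s}." for i, s in enumerate(key)) or "No acute findings."
    return f"{lead} {body}"

findings = "There is a 6 mm pulmonary nodule in the right upper lobe. No pleural effusion."
style = {"lead_phrase": "IMPRESSION:"}
impression = generate_impression(findings, style)
print(impression)
```

The point of the sketch is the interface, not the logic: the dictated findings text goes in, and a style-customized impression comes out, ready to be inserted into the report by the voice recognition integration.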
On average, it saves them about an hour per shift. It reduces fatigue and burnout because it reduces the number of words a radiologist has to say by about a third and, of course, it’s also customized to exactly what they would say, which saves more time because it minimizes editing. So that was our first product. Our second product is Rad AI Continuity.
It automates follow-up care coordination, making sure that patients come back for their necessary imaging. When patients have a new liver lesion or pulmonary nodule that could turn out to be cancer, if they don’t come back and have it followed up on an annual basis, or in three or six months, they could return later with metastatic disease, so we want to make sure they come back for their imaging.
So here, we automatically detect any follow-ups that are necessary in the report using our neural networks and then determine what the timeframe is, what the appropriate study is, communicate directly with the patient and the provider, and automate that follow-up cycle. From there, we automate the worklist for radiologists, helping make sure that radiologists get the right study to read at the right time.
When a radiology group gets large enough, you have thousands or tens of thousands of studies coming in every day, and they have to go to the right radiologist. Based on your credentialing, the state you’re in, and the SLAs for different hospital sites and outpatient imaging sites, you have to make sure that those studies get to the right radiologist at the right time.
Part of that also involves AI-driven demand forecasting, making sure that the radiologists who are on-site and on coverage are able to read the studies that are about to come in. And then finally, we just announced our new radiology reporting platform, which involves a number of next-generation AI elements for automatically generating more of the report based on what the radiologist says. So, how do we save radiologists more time when dictating?
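The worklist-routing constraints Jeff mentions — credentialing, state licensing, and per-site SLAs — can be sketched as a small assignment routine. All field names and the tie-breaking rule here are illustrative assumptions, not Rad AI’s actual logic.

```python
# Hypothetical worklist routing: assign each study (tightest SLA first) to
# the least-loaded radiologist who is credentialed for the modality and
# licensed in the study's state.
from dataclasses import dataclass, field


@dataclass
class Study:
    study_id: str
    modality: str      # e.g. "CT", "MRI"
    state: str         # state of the hospital or outpatient site
    sla_minutes: int   # turnaround commitment for this site


@dataclass
class Radiologist:
    name: str
    modalities: set
    licensed_states: set
    queue_len: int = 0  # current workload


def route(studies, radiologists):
    """Return a {study_id: radiologist_name} assignment map."""
    assignments = {}
    for study in sorted(studies, key=lambda s: s.sla_minutes):
        eligible = [r for r in radiologists
                    if study.modality in r.modalities
                    and study.state in r.licensed_states]
        if not eligible:
            continue  # would be escalated: no credentialed, licensed reader
        pick = min(eligible, key=lambda r: r.queue_len)
        pick.queue_len += 1
        assignments[study.study_id] = pick.name
    return assignments


rads = [Radiologist("A", {"CT"}, {"NC", "SC"}, queue_len=2),
        Radiologist("B", {"CT", "MRI"}, {"NC"}, queue_len=0)]
studies = [Study("s1", "CT", "NC", 30), Study("s2", "MRI", "NC", 60)]
assignments = route(studies, rads)
print(assignments)
```

A production system would layer the demand forecasting Jeff describes on top of this, predicting incoming volume so that coverage matches the studies about to arrive rather than just the ones already in the queue.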
[0:04:24.7] HC: So, you mentioned a few different applications there. How does machine learning come into play in each one?
[0:04:30.0] JC: Sure, so since we were founded in 2018, we have always been transformer model-based. Starting with our very first product, Omni Impressions, we’ve always used transformer models, which have grown along the way alongside GPT and other LLMs. These get larger and larger, but at the same time, we’ve also built more and more models in post-processing.
At this point, we have over 300 additional models operating in post-processing that help ensure the accuracy and language customization of the results. Every one of our products is centered around the latest machine learning transformer work.
[0:05:01.9] HC: How do you go about gathering training data for these different models and do you need to annotate the data for each one as well?
[0:05:09.0] JC: At this point, we work with eight of the 10 largest radiology groups in the country, as well as many large health systems, so we have nearly 400 million radiology reports just from our customer base. Because we do language customization for each individual radiologist, in order to get their exact language and style, we have to train the models on their historical reports, and that also provides a very large report dataset to work with.
We do some annotation, some named entity recognition, and a few other labeling projects, but for the most part, it doesn’t require too much labeling.
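The per-radiologist training setup Jeff describes — each radiologist’s historical reports becoming their own fine-tuning set — could be sketched as a simple data-preparation step. The field names below are assumptions for illustration; the idea from the episode is just grouping (findings → impression) pairs by the dictating radiologist.

```python
# Illustrative sketch: build per-radiologist fine-tuning sets from a pool of
# historical reports, so each model learns that radiologist's exact language
# and style. Report schema is hypothetical.
from collections import defaultdict


def build_training_sets(reports):
    """Group (findings, impression) pairs by dictating radiologist,
    skipping reports missing either section."""
    per_rad = defaultdict(list)
    for r in reports:
        findings, impression = r.get("findings"), r.get("impression")
        if findings and impression:
            per_rad[r["radiologist_id"]].append((findings, impression))
    return dict(per_rad)


reports = [
    {"radiologist_id": "dr_1", "findings": "Clear lungs.",
     "impression": "No acute disease."},
    {"radiologist_id": "dr_1", "findings": "6 mm nodule.",
     "impression": "Nodule; follow up in 6 months."},
    {"radiologist_id": "dr_2", "findings": "Normal.", "impression": ""},  # dropped
]
sets = build_training_sets(reports)
print({rad: len(pairs) for rad, pairs in sets.items()})
```

Because the target (the impression the radiologist actually dictated) comes free with each report, little manual labeling is needed — consistent with Jeff’s point that annotation is only required for a few side projects.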
[0:05:42.3] HC: The quantity of data you have, if it requires annotation then that would be a huge effort. So it is good to hear that’s not needed for every piece of it. What kinds of challenges do you encounter in working with radiology reports and then all the associated pieces of data that you have?
[0:05:58.6] JC: I mean, initially, we needed to ensure that we have very accurate de-identification, right? We handle de-identification for the vast majority of our customers, and at this point, we have by far the most accurate de-identification pipeline for radiology reports in the industry, but that required a lot of time to build. There are also a lot of errors in radiology reports, or things we might consider errors.
For example, radiologists tend to include additional findings that they notice at the last minute in the impression without putting them in the findings section. That’s not an error from the radiologist’s perspective when the report is being read, but for training a model, it would cause hallucinations, right? Because they’re mentioning something in the impression only that wasn’t mentioned in the findings.
So there are a lot of different types of issues that we have to fix along the way, both in the preprocessing and the post-processing: ensuring that we have good-quality data for training the models, but then also ensuring in post-processing that the results that come out are accurate.
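The impression-only-findings problem Jeff describes suggests a concrete cleaning filter: drop training pairs where the impression mentions a clinical concept that never appears in the findings, since those pairs teach the model to generate unsupported content. The sketch below uses crude keyword overlap; a real pipeline would match normalized clinical concepts, and the vocabulary here is an assumption.

```python
# Illustrative consistency filter for (findings, impression) training pairs.
# A pair is kept only if every clinical term in the impression also appears
# in the findings; otherwise training on it encourages hallucination.

CLINICAL_TERMS = {"nodule", "effusion", "lesion", "fracture", "pneumonia"}


def is_consistent(findings: str, impression: str) -> bool:
    """True if the impression introduces no clinical term absent from the findings."""
    f_terms = {t for t in CLINICAL_TERMS if t in findings.lower()}
    i_terms = {t for t in CLINICAL_TERMS if t in impression.lower()}
    return i_terms <= f_terms


pairs = [
    ("Lungs clear. No effusion.", "No acute disease."),        # consistent: kept
    ("Lungs clear.", "Small nodule, follow up in 6 months."),  # impression-only finding: dropped
]
clean = [p for p in pairs if is_consistent(*p)]
print(len(clean))  # only the consistent pair survives
```

The same check can run in the other direction at inference time — comparing a generated impression against the dictated findings — which is one plausible shape for the post-processing validation Jeff mentions next.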
[0:07:00.1] HC: So, is your focus there largely on cleaning data and less so on algorithms that can robustly handle some of these errors? Obviously, you want the output to be as accurate as possible, so I imagine the cleaning is the essential piece there.
[0:07:14.6] JC: It works on both ends, right? The more you clean the data, the less likely you are to trigger errors because of issues in the data. But at the same time, you still need post-processing to help ensure that you’ve identified the correct clinical concepts, that there aren’t any hallucinations or missed findings, and so on. So there is a lot of post-processing that also goes into it; you have to do the work on both ends.
[0:07:38.3] HC: How do you ensure that the technology your team develops will fit in with the clinical workflow and provide the right kind of assistance to clinicians and, ultimately, to patients? I’m sure your many years of experience as a radiologist come into play there, but things change in the medical world as well. How do you make sure that workflow is going to be the right setup for the clinicians?
[0:08:02.4] JC: It’s always been the focus of Rad AI from the very first product we built: how do we make sure it’s absolutely seamless for the radiologist, right? That there’s no change in the radiology workflow whatsoever. When we first started building products, I was the first user, and I tested it for months and months before it could be rolled out to additional radiologists, really making it as close to zero-click as possible.
So at this point, our first product, Omni Impressions, is entirely zero-click. As soon as you get to the impression section of the report, it automatically triggers, and the text is there for you in the report. There is absolutely zero change to the existing workflow, and that makes it really easy for radiologists to adopt without having to change anything that they currently do; it’s embedded directly into the existing voice recognition platforms that they use.
And on the IT deployment side, it also involves very little work: we provide an MSI package that can be rolled out with any automatic deployment software. There are no on-prem servers, no on-prem requirements or VMs, no VPNs, and so on. So, the more you streamline both deployment and the training process, getting radiologists or your users used to using the product, the easier it becomes and the more time, of course, you save for the radiologist.
[0:09:14.2] HC: Well, that makes it much easier to get it adopted in hospitals that need this technology too.
[0:09:19.8] JC: Of course, for hospitals, it depends on what specifically makes the biggest difference for them, right? For Continuity, for example, we really focus on three things: one, it improves the quality of patient care; two, it drives new imaging revenue, all of which is considered appropriate imaging revenue because it’s follow-ups that need to be done; and three, it reduces the liability risk for missed findings.
So, fewer patients come back later with metastatic disease, reducing patient morbidity and mortality. It has to have multiple different value-adds that appeal across the health system.
[0:09:50.6] HC: Machine learning is advancing quite rapidly now; there are new advancements hitting the headlines more frequently than ever. Are there any new developments in AI that you’re particularly excited about and can see a potential use case for at Rad AI?
[0:10:02.9] JC: We’re always incorporating any new research we see as new papers come out. We test, experiment, and see whether it can be applied to our model architecture. We see if any LLMs could be useful, whether as internal tools or for some of our newer products. So, we’re always experimenting with different potential applications, both on the larger LLM side and with any new model architectures or improvements that we see in recent papers.
That happens very frequently; we review papers in our internal Slack channels. We see multiple papers a week.
[0:10:42.7] HC: Yeah, I’m sure the advancements in large language models earlier this year and late last year have given you a lot to work with there as well.
[0:10:51.7] JC: Yeah, I mean, we’ve been using large language models since we started the company on transformers, but yes, a lot of the newer LLMs have been very interesting. These are more for internal applications that we’re testing. For example, for our first product, Rad AI Omni Impressions, there have been multiple research papers published in the past month or two showing that GPT-4 is actually much less accurate than radiologists in generating even simple impressions for chest X-ray reports.
So, lower accuracy and much lower coherence, whereas Rad AI Omni Impressions is far more accurate. Even in complex CT angiogram reports, we reduce the clinical error rate by 47% compared to the radiologists’ baseline. So our models, because of the post-processing and because of the specific training on each radiologist’s historical reports, are really much more accurate than the current state of the art for the applications that we currently provide.
[0:11:46.0] HC: Thinking more broadly about what your goals are at Rad AI, how do you measure the impact of your technology to be sure that you’re on track to achieve what you set out to achieve?
[0:11:55.8] JC: A lot of it depends on the product. For our first product, it’s really the time savings for radiologists. We track how often the product is being used and how much time is being saved per report, and we can see that across all of the groups and health systems that we work with. We can see the number of radiologists using it on a daily basis and the number of reports for which it’s being used, which really allows us to see how that’s increasing over time.
For Continuity, we can also track how often a patient would not have received follow-up and how often that might turn out to be cancer, right? So, how many patients we’re helping ensure are diagnosed and treated promptly for new cancers, helping save patient lives. For each of our products, the impact is somewhat different, but we really do track this, both internally and also for our customers.
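The usage and time-savings tracking Jeff describes could be rolled up with a simple per-group aggregation like the one below. The event fields and metric names are assumptions for illustration, not Rad AI’s actual telemetry.

```python
# Hypothetical impact-metrics rollup: per report, log whether the generated
# impression was used and the estimated seconds saved, then aggregate
# adoption rate and total time saved per customer group.
from collections import defaultdict


def summarize(events):
    """Aggregate report count, usage count, time saved, and adoption per group."""
    stats = defaultdict(lambda: {"reports": 0, "used": 0, "seconds_saved": 0})
    for e in events:
        s = stats[e["group"]]
        s["reports"] += 1
        if e["used"]:
            s["used"] += 1
            s["seconds_saved"] += e["seconds_saved"]
    return {g: {**s, "adoption": s["used"] / s["reports"]}
            for g, s in stats.items()}


events = [
    {"group": "g1", "used": True, "seconds_saved": 40},
    {"group": "g1", "used": False, "seconds_saved": 0},
    {"group": "g1", "used": True, "seconds_saved": 50},
]
print(summarize(events)["g1"])
```

Tracking the same events over time is what lets a team see adoption increasing across groups, as Jeff notes, rather than just a point-in-time snapshot.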
[0:12:44.8] HC: Is there any advice you could offer to other leaders of AI-powered startups?
[0:12:49.0] JC: I think there are a number of different types of AI companies, right? On the one hand, you have a lot of AI companies now that are providing tools for other companies, including other AI companies, so that they can train models more quickly, improve how they serve models, and so on. Essentially, selling shovels is always a useful thing as a new industry is created.
For companies that have a specific industry vertical, so for us, for example, radiology or healthcare, it really is about finding the right product-market fit: designing the product so that it is really useful to your target audience or specific user base. So much of it can come down to the exact small features that make it matter more to your customer base.
So, exactly how does the UI work? It’s really not just about the models, right? There can be interesting engineering problems to solve, but if it doesn’t provide enough value for your customers, then it’s not going to be adopted. It’s very much about how you make the product as seamless and as good as possible for your customer base.
[0:13:57.3] HC: And finally, where do you see the impact of Rad AI in three to five years?
[0:14:01.8] JC: We’ll see. There’s a lot that we have planned. Our fourth product, as I mentioned, Rad AI Omni Reporting, is our next-generation reporting platform. We’re doing a launch day on September 27th; if you’d like, please sign up on our website. There are a lot of additional products and ideas that we have planned over time.
We are also providing a platform for other imaging AI vendors to insert text and findings into the report, which allows us to offer more of an open platform. So over time, we’re hoping to provide radiologists with more time savings and reduced fatigue, improving the efficiency of generating radiology reports and making the overall radiology workflow more seamless.
Each of our products is always focused on how we save radiologists more time and how we improve radiology workflow for both radiology groups and health systems, and then, over time, expanding overseas to provide more access to imaging diagnostic care across the world. A lot of our long-term mission is really focused on ensuring that countries with very few radiologists and very limited access to imaging care can get diagnostic care more promptly.
For example, Malawi has only two radiologists for 21 million people. So, how do we make sure that even in countries with very little diagnostic imaging access, you can actually get fast reporting, making sure that results come back quickly where currently they don’t? That’s really our goal over the coming years: to provide that positive impact on a more global scale.
[0:15:40.4] HC: This has been great. Jeff, your team at Rad AI is doing some really interesting work for healthcare. I expect that the insights you’ve shared will be valuable to other AI companies. Where can people find out more about you online?
[0:15:51.5] JC: So, radai.com, and also follow our news on LinkedIn. Just search for Rad AI on LinkedIn; there are a lot of updates and new posts and, as I mentioned, the launch date of our radiology reporting platform. You can also sign up for our launch day webinar.
[0:16:13.8] HC: Perfect. Thanks for joining me today.
[0:16:16.0] JC: All right, thank you.
[0:16:18.2] HC: All right, everyone, thanks for listening. I’m Heather Couture, and I hope you join me again next time for Impact AI.
[END OF INTERVIEW]
[0:16:27.9] HC: Thank you for listening to Impact AI. If you enjoyed this episode, please subscribe and share with a friend, and if you’d like to learn more about computer vision applications for people and planetary health, you can sign up for my newsletter at pixelscientia.com/newsletter.
[END]