How can we harness medical imaging and artificial intelligence to shift healthcare from reactive to predictive? In this episode, I sit down with Ángel Alberich-Bayarri to discuss how artificial intelligence is revolutionizing radiology and precision medicine. Ángel is the CEO of Quibim, a company recognized globally for its AI-powered tools that turn radiological scans into predictive biomarkers, enabling more precise diagnoses and personalized treatments.

In our conversation, we hear how his early work in radiology and engineering led to the founding of Quibim and how the company’s AI-based technology transforms medical images into predictive biomarkers. We unpack the challenges of data heterogeneity, how Quibim tackles image harmonization using self-supervised learning, and why accounting for regulations is critical when building healthcare AI products. Ángel also shares his perspective on the value of model explainability, the concept of digital twins, and the future of preventative imaging. Join us to discover how AI is disrupting clinical decision-making and preventive healthcare with Ángel Alberich-Bayarri.


Key Points:
  • Hear about Ángel’s background and how his career led to founding Quibim.
  • Find out how Quibim turns radiology images into predictive clinical insights.
  • Different use cases of Quibim’s technology and why biopsy data is important.
  • He explains why Quibim avoids relying solely on radiologist annotations.
  • Challenges of using medical imaging: data fragmentation and scanner variability.
  • Explore Quibim’s self-supervised image learning harmonization techniques.
  • How Quibim balances model explainability with accuracy.
  • Why understanding clinical workflows and radiologist adoption behavior is critical.
  • Uncover how regulations influence the development of Quibim’s technology.
  • Ángel’s advice for entrepreneurs and leaders of AI-powered startups.
  • Quibim’s plans for predictive modeling, digital twins, and AI for preventative medicine.

Quotes:

“We would like AI to be able to mine all this hidden information we have right now in the images. Our vision is long-term, being able to understand what is happening at every single point within the human body.” — Ángel Alberich-Bayarri

“What [Quibim is] investing in is the next frontier that not only detects and diagnoses disease, but also predicts or prognoses what is going to happen.” — Ángel Alberich-Bayarri

“Human behavior has a lot of nuances that need to be appreciated when AI is adopted.” — Ángel Alberich-Bayarri

“The bolder the claims you make, the higher the level of evidence you need to achieve.” — Ángel Alberich-Bayarri

“Taking care of health before we have symptoms is just going to be a growing business, and therefore, a lot of AI tools will be needed to understand our inner us.” — Ángel Alberich-Bayarri


Links:

Ángel Alberich-Bayarri on LinkedIn
Ángel Alberich-Bayarri on X
Quibim


Resources for Computer Vision Teams:

LinkedIn – Connect with Heather.
Computer Vision Insights Newsletter – A biweekly newsletter to help bring the latest machine learning and computer vision research to applications in people and planetary health.
Computer Vision Strategy Session – Not sure how to advance your computer vision project? Get unstuck with a clear set of next steps. Schedule a one-hour strategy session now to move your project forward.


Transcript:

[INTRODUCTION]

[00:00:03] HC: Welcome to Impact AI, brought to you by Pixel Scientia Labs. I’m your host, Heather Couture. On this podcast, I interview innovators and entrepreneurs about building a mission-driven, machine learning powered company. If you like what you hear, please subscribe to my newsletter to be notified about new episodes. Plus, follow the latest research in computer vision for people and planetary health. You can sign up at pixelscientia.com/newsletter.

[INTERVIEW]

[0:00:33.9] HC: Today, I’m joined by guest Ángel Alberich, co-founder and CEO of Quibim, to talk about radiology tools for precision medicine. Ángel, welcome to the show.

[0:00:43.1] AAB: Hi, welcome, everyone. Thank you. So pleased to be here.

[0:00:47.3] HC: Ángel, could you share a bit about your background and how that led you to create Quibim?

[0:00:51.3] AAB: Sure. I am an electronics and computer science engineer, and I became increasingly interested in computer vision as I was studying. I’ve always been excited by the fact that the human body generates signals that we can capture, so I focused more and more on the area of medical applications. As soon as I graduated, I was engaged in several projects related to computer vision, meeting KOLs in radiology, and I ended up doing a PhD in that, you know?

In MRI and image processing, and yeah, actually, that led to the creation of Quibim seven years later, together with the KOL in radiology that I had started working with. So, he and I basically founded the company.

[0:01:37.6] HC: So, what does Quibim do and why is it important for healthcare?

[0:01:41.1] AAB: At Quibim, basically, we want to transform the imaging data that we have from standard-of-care imaging in any hospital, from MRI, CT, PET images, X-rays, into actionable predictions. So, we would like AI to be able to mine all this hidden information we have right now in the images. Our vision is long-term, being able to understand what is happening at every single point within the human body.

I mean, right now, we’ve dedicated a lot of effort as humans to exploring the universe, but there is an equally profound challenge within us, no? A milestone within us is understanding what’s going on within the human body, and for doing that, what we actually do is help doctors, mainly in hospitals or in biopharma companies, by developing pioneering tools based on AI that mine all the imaging data.

Basically, to improve patient outcomes or to maximize the value of their programs. These algorithms basically help to detect and diagnose disease, and what we are working on right now for the future is also being able to provide prognostic or predictive information.

[0:02:59.6] HC: What are some examples of specific models that you use, what types of things are you trying to predict, and what’s the input to them?

[0:03:06.4] AAB: So yeah, right now, for example, we have regulatory-cleared AI models or products for, for example, prostate cancer detection and diagnosis, and this is already advanced. We are right now one of the very few companies on earth that is able to take an MRI and claim that we detect and diagnose cancer, and we have CADe and CADx clearance for that, which means computer-aided detection, computer-aided diagnosis.

Basically, what we do there is we have trained an AI model using biopsy data that’s able to recognize on the MRI either biopsy-positive cancer cases or biopsy-negative, so healthy cases, right? So, this is what we do now, but what we are investing in is the next frontier that not only detects and diagnoses disease, but also predicts or prognoses what is going to happen, no?

For example, it’s not only important whether a patient has a cancer lesion, but what is going to happen with this patient in the following years? Is there going to be a recurrence? Is this patient going to respond to therapy? Can we consider this a high-risk patient or a low-risk patient, no? All these kinds of questions are more about the future. This is what we are investing in right now, but for that, you need high-quality data, no? And this is another topic as well.
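
To make the labeling idea above concrete for readers, here is a minimal sketch of training a classifier whose supervision comes from biopsy results rather than radiologist annotations. The tiny network, tensor shapes, and toy data are illustrative assumptions, not Quibim’s actual pipeline.

```python
# Minimal sketch (not Quibim's actual pipeline): training a classifier whose
# labels come from biopsy results rather than radiologist annotations.
# All names, shapes, and the toy data below are illustrative assumptions.
import torch
import torch.nn as nn

# Toy stand-in for prostate MRI volumes: (N, channels, depth, height, width)
mri_volumes = torch.randn(16, 1, 16, 64, 64)
# Ground truth from pathology: 1 = biopsy-positive, 0 = biopsy-negative
biopsy_labels = torch.randint(0, 2, (16,)).float()

class TinyLesionClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),   # global pooling over the volume
        )
        self.head = nn.Linear(8, 1)

    def forward(self, x):
        return self.head(self.features(x).flatten(1)).squeeze(1)

model = TinyLesionClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(5):
    optimizer.zero_grad()
    logits = model(mri_volumes)
    loss = loss_fn(logits, biopsy_labels)  # supervision comes from biopsy, not annotations
    loss.backward()
    optimizer.step()
```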

[0:04:30.1] HC: So, these models are all based on radiology images, but the labels you’re trying to predict aren’t hand-annotated. They’re coming, for example, from the biopsy, positive or negative, that you mentioned, or from how this patient responds, what the prognosis is, that type of thing for the future?

[0:04:47.3] AAB: Exactly. I mean, I’ve always been reluctant to – in the company, I usually make the analogy of a horse, saying, “Okay, if you take radiology annotations, you might be developing faster horses,” right? As we call it. That’s fine if you want to develop algorithms whose main focus is improving radiologists’ reading speed. If you want to make them read the images faster, I mean, that’s okay.

You can take, probably, radiology annotations, you can hire a panel of radiology experts, annotate the exams, and train an AI model, but I always have been guided by what is the clinical problem we want to solve, and here, when it comes to, for example, the area of oncology, in the area of detection and diagnosis, it’s clear that you need to work with multimodal information.

So, what we mean by this is like, we have the MRI, that’s where we have, like, the input data, but definitely, we want to go beyond human annotations. We want to go beyond what the eye can see, and for that, we need to choose the best ground truth, and what is the best ground truth? Of course, that’s when you have the capability of getting a biopsy. For example, in some diseases in the brain, you don’t have biopsy data, right?

Take brain diseases, for example, or take multiple sclerosis; you are not biopsying every single lesion. But in areas like oncology, where you actually have biopsy data, you can use that as the real truth, saying, “Okay, let me train a model with this multimodal information.” Of course, the input data will be the MR, but the ground truth can be the pathology, the biopsy, or it can be the future evolution of the patient, no?

Like, longitudinal information. So, doing that allows us to basically go beyond what the radiologist can see and add extra value, no? And this is what has made the difference, for example, in the recent clearance of QP-Prostate. Actually, among the training data, we had some cases with discrepancies, right? We had a certain percentage of cases where the biopsy was highly discrepant with what the radiologists were saying.

So, the radiologists would say, “Okay, there’s no lesion in this patient,” but there was a positive biopsy afterwards. There were cases where the radiologist was seeing a positive lesion, even with a high score to the radiologist’s eye, but then you would have a negative result in the biopsy, right? It is when you train the model using this kind of complementary information, that’s not only human annotated, no?

It’s also taking the ground truth from outside radiology. That’s when you end up developing a tool that can augment the sensitivity of radiologists to detect cancer, and also the specificity, no? Trying to keep, like, a very low false positive rate, no? This is what we have been doing, and right now, for example, we can say we have high negative predictive values. So that means, when we give as an output an MRI that you can see is without lesions, that gives a lot of peace of mind to the radiologist, because due to our high negative predictive value, they know that that patient is most likely without any lesion, or with only a benign one, nothing serious.
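
For readers less familiar with these metrics, here is a small illustration of how sensitivity, specificity, and negative predictive value (NPV) are computed from counts against a biopsy ground truth. The numbers are made up for the example.

```python
# Illustrative only: how sensitivity, specificity, NPV, and PPV relate to the
# counts described above. The counts are hypothetical, not Quibim's results.
tp, fn = 90, 10      # biopsy-positive cases: detected vs missed
tn, fp = 180, 20     # biopsy-negative cases: correctly cleared vs false alarms

sensitivity = tp / (tp + fn)   # how many cancers the tool catches
specificity = tn / (tn + fp)   # how well it avoids false positives
npv = tn / (tn + fn)           # "if the output says no lesion, how often is that right?"
ppv = tp / (tp + fp)

print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, "
      f"NPV={npv:.2f}, PPV={ppv:.2f}")
```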

[0:08:17.1] HC: I suspect that these more accurate results come from having more accurate labels to begin with, getting those labels from the biopsy results.

[0:08:23.8] AAB: Yeah, exactly.

[0:08:25.5] HC: What kinds of challenges do you encounter in working with and training models based off of radiology images?

[0:08:31.7] AAB: I think that the most relevant challenges we face relate to the heterogeneity of data. We have, I would say, a data fragmentation scenario, right? So, if you want to collect high-quality data sets in imaging, first, you have different institutions, and you need to access all that data. You need to collect data from different vendors, you know, there are different vendors manufacturing scanners for CT, for MRI, for PET.

When you analyze the signals that are coming out of these scanners, you realize that the image quality has significant differences, right? If you were a patient and you went, on the same day, through four scanners from four different vendors, you would realize how different the images are when you try to measure things, right? So, that means that it’s not only data fragmentation, it’s also heterogeneity of the image quality.

And then, of course, having to access data in different institutions, combined with the different regulatory requirements that you find in Europe, the US, all the areas where, of course, you need data from that region or that country, no? That makes things more and more complicated, and in the end, I would say these are the main challenges. Also, as you know, radiology images do not have, like, a high spatial resolution.

Meaning cell-level detail or conspicuity, no? You cannot see actual cells in the images, but you can infer the microscopic behavior of the tissue. For example, at the highest resolution, you might be seeing, for example, molecular behavior. With diffusion-weighted imaging in MRI, what we see is cellularity, basically because we can see those areas where water molecules cannot move from a microscopic standpoint, right?

And you say, “Okay, in this area or in these pixels, water seems highly restricted.” That is because there is a high cell density in this area. This is the way you estimate or infer cellularity out of macroscopic pixel information in an MRI, no? So, I would say the challenges are this data fragmentation, this need for a common image quality across the images we use.

And the fact that you will always be limited in spatial resolution when we are talking about non-invasive imaging, right? This is not taking a tissue sample and looking at it under the microscope; we are talking about, from a distance, trying to extract the highest spatial and temporal resolution possible to see the human body.
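
As a concrete illustration of the diffusion example Ángel mentions, the apparent diffusion coefficient (ADC) can be estimated per voxel from a simple mono-exponential signal model; lower ADC means more restricted water diffusion and, typically, denser tissue. The b-values and signal intensities below are assumed, illustrative numbers.

```python
# Hedged illustration of inferring cellularity from diffusion-weighted MRI:
# signal model S(b) = S0 * exp(-b * ADC), solved for ADC from two measurements.
# The b-values and per-voxel intensities are made-up example values.
import numpy as np

b_values = np.array([0.0, 800.0])    # s/mm^2: a b=0 and a high-b acquisition
s0, s_high_b = 1000.0, 450.0         # illustrative per-voxel signal intensities

adc = np.log(s0 / s_high_b) / (b_values[1] - b_values[0])   # mm^2/s
print(f"ADC ~ {adc:.2e} mm^2/s")     # lower ADC -> more restricted diffusion -> denser tissue
```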

[0:11:13.7] HC: How do you deal with these heterogeneity challenges? How do you harmonize your images across different protocols, scanners, and different variations that come up?

[0:11:21.7] AAB: My background made me very interested in that. When I started doing research, I was working at a hospital, and I think one of the good experiences I had was working side by side with the radiographers. I was literally sitting at the scanners, like, helping them optimize the image acquisition protocols. So, I got a lot of knowledge on how the configuration parameters of the scanners influence the image quality.

That gave me deep knowledge of how the images are generated, and when I started working with AI more and more at Quibim, I started to feel that we could use some, like, advanced techniques. Right now, for example, we are using self-supervised learning for harmonizing MRI data, and we have just filed a patent for that, for example, where we work not with the images that you can see with your eyes and the pixel data, you know?

Not with the common intensities and the brightness of the images, but we actually transform the images into their frequency domain. Thanks to understanding this frequency composition of the images, we’re able to basically train very robust AI models that are self-supervised, that learn from thousands and thousands of frequency variations. So, we are able to generate thousands of different images with these frequency changes.

And then we have trained models that, from all these frequency variations, have learned to basically generate a standardized image, regardless of the vendor. So that means, basically, at Quibim, we have an internal, let’s say, transformation step, a harmonization step, where, regardless of the vendor we have at the input, regardless of the image we have at the input, we generate a standardized, harmonized output.

And that proved to improve the performance of all our downstream disease classification models, right? You will see some publications coming out on this with major biopharma players, because some biopharma companies, imagine, are sitting on a treasure of imaging data that they have been pooling through years and years of clinical trials, all the clinical trials involved in drug development.

Basically, you are measuring treatment response, and a high percentage of that is done by looking at the images, right? Seeing the progress of the disease in patients. The problem is that, so far, images were not that comparable for biopharma players, and we’ve already done some great projects with them on how to put together all this imaging data from clinical trials and harmonize it.

And then, you can actually mine thousands and thousands of patients in a row, right? That was impossible to do just a few years back.
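
Here is a hedged sketch of the general idea described above: perturb an MR image in its frequency domain (k-space) to simulate scanner-to-scanner variation, then train a network to map any perturbed version back to a common reference. The perturbation model, network, and shapes are illustrative assumptions, not Quibim’s patented method.

```python
# A minimal sketch of self-supervised harmonization via frequency-domain
# augmentation (illustrative only; not Quibim's proprietary approach).
import numpy as np
import torch
import torch.nn as nn

def random_frequency_variation(image: np.ndarray) -> np.ndarray:
    """Apply a random radial filter in k-space to mimic vendor/protocol differences."""
    kspace = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    yy, xx = np.mgrid[-h // 2:h // 2, -w // 2:w // 2]
    radius = np.sqrt(xx**2 + yy**2) / np.sqrt((h / 2) ** 2 + (w / 2) ** 2)
    gain = 1.0 + np.random.uniform(-0.5, 0.5) * radius  # random high-frequency emphasis/suppression
    filtered = np.fft.ifft2(np.fft.ifftshift(kspace * gain))
    return np.abs(filtered).astype(np.float32)

# Self-supervised pair: (perturbed image, original "reference" image)
reference = np.random.rand(64, 64).astype(np.float32)   # stand-in for a reference-quality slice
perturbed = random_frequency_variation(reference)

# A tiny image-to-image network learns to undo the simulated variation
net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 1, 3, padding=1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
x = torch.from_numpy(perturbed)[None, None]
y = torch.from_numpy(reference)[None, None]
for _ in range(10):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(x), y)   # reconstruct the harmonized target
    loss.backward()
    opt.step()
```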

[0:14:21.9] HC: If you want to apply your models to, let’s say, a new type of scanner, you know, a new manufacturer that comes along, something like that, will you then have to learn how to harmonize that new type of scanner into your systems so that those images can be standardized?

[0:14:36.3] AAB: So, we’ve seen that the models for harmonization generalize pretty well. So far, we’ve seen that even if we have not trained the model with some vendors, the fact that we have been able to generate a huge amount of frequency variations with our own networks means that these frequency variations cover potential new machines, right? That’s why the models are pretty robust.

We are seeing right now, like, new emerging vendors, mainly of low-field MRI scanners, also new vendors from other regions that are coming into the MRI market, and CT scanner manufacturers as well, there are a lot right now. So, it’s really difficult to create an AI model that has data from everyone, but we managed to find a way to basically generate enough variety of frequency variations to cover, basically, any potential physical configuration of the scanner, and we are pretty happy with how the models are generalizing so far, yeah.

[0:15:41.2] HC: Does model explainability play a part here, or is the accuracy of your model always fundamentally more important?

[0:15:47.8] AAB: When I was starting, I had an opinion that leaned pretty strongly towards accuracy, and less towards explainability, right? Basically, my angle was, “Okay, my focus is to deliver the most accurate model possible, right? I don’t care about explainability.” But we had to change that, because as we were entering more and more into medicine and interacting with more and more doctors, we could actually see that there is a transition period in which explainability is very important.

So, we realized that doctors need to understand how the models operate and reach their decisions. That gives trust, it gives the idea that you can basically control and understand how the AI model is working, and in the end, we’ve seen higher adoption of the AI models when you find the adequate balance between explainability and performance, right?

It’s not only about overall performance. Of course, everyone would like, you know, a hundred percent AUC model, but in the end, explainability is very important, and we find that in areas like medicine, you need to be able to define how you trained it, what cohort you used, why the model is outputting this, what will happen if, instead of loading a case from a 40-year-old subject, I put in the case of a, you know, 12-year-old.

How has the model been trained, how does it operate, and how does it behave? These kinds of questions are extremely important in medicine, rather than just offering, like, black boxes, and we’ve been correcting our opinion more and more through the years. We have been interacting with doctors, and, you know, the learning curve of AI has been quite tough for doctors, because even if it seems, you know, pretty simple –

okay, you have an AI model that detects and diagnoses disease, it just starts being used, right? As we deploy it, we’ve seen that, you know, human behavior has a lot of nuances that need to be appreciated when AI is adopted. For example, we have radiologists who have decided how they will report a new exam when they have our AI tools. We have some radiologists who decided that they will first read the images without looking at the AI.

They will look at the AI only at the end, once they have, like, read the images, so they are not biased, and then they say, “Okay, what does the AI say?” And we have other doctors who prefer to see the AI first, and based on that, they read and interpret the images. So, we have these kinds of variations right now that we are exploring, and we are just understanding how all this will end up operating, no?

So, yeah, every time we talk to the early adopters, the doctors already using AI, we see, like, pretty firm behaviors, and we all need to generate this evidence of, you know, what is the best way forward for AI, right? Is it reading the AI first, or reading it later? How is this influencing, how is this maybe changing, the doctor’s decision? All of this is being closely monitored on our side, and it is actually very interesting.

So, I think the right balance is a model that maximizes performance and is as explainable as you can make it.

[0:19:21.1] HC: So, from a lot of what you said there, it sounds like the important piece is that you need to really understand the clinical workflow that your solution is going to fit into, because you want the radiologist to use the model. For them to use it, they need to trust it, and you need to figure out whether they’re looking at the model results first or the image first, and all of that, for all of this to happen in the most effective way.

[0:19:44.7] AAB: Yes, a hundred percent. If you don’t understand the workflow and how it works, first of all, you will see a kind of resistance to change, because the doctors will say, “Okay, this company has really no idea how the disease works, how we work, and what the value of imaging is within the disease, no?” You really need to understand where the models fit.

You know, actually, the regulatory agencies, at least I think in the US, care a lot about these kinds of things. You know, what are your claims, and where are you positioning your model? What is the intended use, right? Then you enter into a high level of detail on when it has to be used, right? There’s the reading paradigm, for example, when you take QP-Prostate, right? QP-Prostate is so far approved for concurrent reading.

This kind of detail is there; the agencies make the vendors, make a company like Quibim, enter into very specific detail about exactly what we are applying for clearance for, right? And this is, you know, for the safety of the patient and, in the end, for the safety of the doctors, no? As we see AI in medical imaging or in healthcare developing, we will see how we can dive deeper into these claims, no?

So, right now, for example, in Europe, you already have some class III cleared devices under the medical device regulation that are supposed to be autonomous, right? If you are able to prove a high level of performance and you apply for a class III clearance, and you can get that, you can claim, “Well, this is not for concurrent reading anymore. This can actually perform a complete reading on a medical image and generate an output for you.”

So, all these details are just so that, I would say, the people listening to this podcast, and patients and doctors, can feel comfortable that the current regulatory landscape is also considering these kinds of nuances or details: knowing how the tool can be used, what the claims are intended for, and that this is handled safely right now.

[0:21:50.9] HC: How does the regulatory process affect how you develop the machine learning models themselves? Does it go way back to the beginning, where you need to think carefully about your process?

[0:22:01.2] AAB: Well, actually, the regulatory process determines a lot for us, right? And this is something that entrepreneurs in healthcare and AI need to understand. I’ve seen different behaviors. Right now, I am mentoring entrepreneurs more and more, and I am seeing how we have two behaviors, right? The entrepreneur that does not give a lot of importance to this until they already have a model half-built, right?

And who then say, “Okay, now we need to clear this,” right? Or others who start with regulatory in mind, and I think the smartest path forward is to start with regulatory in mind, because that allows you to basically save a lot of money when you are designing how you will develop the tool, how you will train it and validate it, in which countries, in which regions, and what kinds of data you need.

So, the first thing I always start with is, okay, when you are developing a product, the first thing you need to think about is the intended use of that product, right? More precisely, because that is a regulatory term, but in the end, it determines whether you are a medical device or not. Maybe you are developing an AI model that ends up not being a medical device. Imagine that you are, for example, developing a workflow tool or an algorithm that’s generating new images to improve image quality, no?

Then you are not generating any information that’s going to be used by a doctor for a diagnosis or, you know, disease management. Probably you are not a medical device. So, the first thing is, for the product you are going to build, what is the intended use? Does this intended use fit within the definition of a medical device? If so, okay, where is the algorithm or the product going to fit within the clinical workflow?

Who are the doctors that are going to use it? If you are developing an oncology algorithm, is this going to be used by radiologists, by oncologists, and for which purpose? Is it for interpreting images, is it for outputting the detection of a disease, the diagnosis of a disease? The bolder the claims you make, the higher the level of evidence you need to achieve.

For example, one of the first clearances we obtained was for what we called a tool to help interpret MRI exams. It loaded prostate MRI exams and just segmented the prostate gland, but it did not output any diagnostic or disease detection information, and you can basically get a clearance in the US for that using a predicate device. That kind of clearance is called MIMPS, that’s a medical image management and processing system.

For this kind of clearance, you don’t need a clinical study, right? But if you want to claim, as we have claimed recently, that, okay, this is a tool that helps to detect and diagnose cancer, those claims are so bold that you actually need to prove them with a multi-reader, multi-case study. So, we had to do a study with nine radiologists and 230 patients, having the radiologists read all the exams without AI and with AI, doing a statistical analysis, and proving that there is a statistically significant improvement in diagnostic accuracy, no?

And of course, these studies are not cheap, right? You need to invest, you need to spend money on proving your claims, and this is why I think it’s very important to start with these claims in mind, no? What is the product I want to build? What is the intended use, what are my claims, and how do I take it from there?
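
To illustrate the kind of analysis behind such a study, here is a hedged sketch comparing reader accuracy (AUC) with and without AI assistance using a paired bootstrap over cases. The data are simulated; an actual submission would use the readers’ real scores and a formal multi-reader, multi-case method.

```python
# Hedged sketch of a with-AI vs without-AI accuracy comparison on the same cases.
# All scores are simulated; this is not the study's actual data or method.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_cases = 230
truth = rng.integers(0, 2, n_cases)                             # biopsy-confirmed ground truth
scores_unaided = truth * 0.3 + rng.normal(0.5, 0.25, n_cases)   # toy reader suspicion scores
scores_aided = truth * 0.45 + rng.normal(0.5, 0.25, n_cases)    # toy scores with AI assistance

deltas = []
for _ in range(2000):
    idx = rng.integers(0, n_cases, n_cases)   # resample cases with replacement
    if len(set(truth[idx])) < 2:
        continue                              # AUC needs both classes present
    deltas.append(roc_auc_score(truth[idx], scores_aided[idx]) -
                  roc_auc_score(truth[idx], scores_unaided[idx]))

deltas = np.array(deltas)
lo, hi = np.percentile(deltas, [2.5, 97.5])
print(f"AUC improvement with AI: {deltas.mean():.3f} (95% CI {lo:.3f} to {hi:.3f})")
```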

[0:25:50.6] HC: Is there any advice you could offer to other leaders of AI-powered startups?

[0:25:54.6] AAB: I would start by basically saying that regulations shouldn’t be overlooked at the very beginning. Right now, we are seeing, I would say, a huge ocean of LLMs and generative AI, and I’ve seen a lot of posts on social media about trying to use LLMs to help diagnose disease, things like that. But all of these are basically medical devices, and whether you like it or not, they will have to follow that regulatory process for the safety of patients and doctors.

The sooner you start with that as an entrepreneur, the sooner you take a look at, “Okay, what am I going to build, what is the intended use? Where does my product fit within the clinical workflow?” The better, and the more capital-efficient your investment is going to be, right? Instead of going back and forth and realizing, after you have trained a model, “Okay, I wanted to push this to production.”

“I wanted to release the new version of my product, and I wanted to make a big announcement.” But then I realize that, okay, I cannot make these announcements, I cannot put these claims on my webpage, because that’s not what has been cleared by the regulatory body. So, in the end, starting with this in mind when you are operating in healthcare and medicine is very important, because investors, let me tell you, investors don’t want to join companies where there are high risks, right?

And regulatory risks are very relevant for the future of any startup, and I think that’s the great advice I give to entrepreneurs. When you are starting in healthcare, be aware of what you’re building, where it fits, what the intended use is, and what your regulatory strategy will be, because if someone tells me, “Oh, I’m going to have these diagnostic claims,” and then they come to me with a business plan of a million dollars, I would say that’s not realistic, right?

You will actually need more money to take this to market. So, these are the kinds of things that I think about. Regulatory shouldn’t be overlooked if you’re an entrepreneur.

[0:28:05.2] HC: And finally, where do you see the impact of Quibim in three to five years?

[0:28:09.0] AAB: We will see Quibim moving from detecting and diagnosing disease, the level of claims we have right now, to more prognostic and predictive models of disease, and this is very relevant because we’re going to prove that we can actually influence the way treatments are administered to patients. For that, we need to raise the level of imaging to have this prognostic and predictive value. You know, if you listen right now, precision medicine is really present everywhere, right?

But sometimes, you know, it’s not used the right way as a term. When we talk about precision medicine, we talk about stratifying patients better for the right treatment, and that, again, is a bold claim, right? Developing tools that can determine which is the right treatment for a patient is really, really significant and has high value. So, we think that in the next three to five years we will see the first predictive and prognostic cleared tools from Quibim.

Tools that will be able to help indicate treatments. Right now, when you look at what’s approved out there, you see that it comes mainly from molecular biology. You even see some approvals for pathology, for biopsy data, for microscopy images, but we don’t yet have any such clearance in the area of oncology, for example, for indicating treatments from a CT scan or an MRI scan, right?

This is the level of product we have to develop more of on the predictive and prognostic side, and then we will also see how we elaborate more and more, and deliver more great news, on the digital twin concept. We are actually building algorithms that are able to analyze the health status of our organs, because we clearly see that imaging will move earlier and earlier in the patient journey, even when we are not patients yet, right?

Even in pre-symptomatic stages. I really think that whole-body scans, preventative scans, will become something standard in the near future, and we’ll see how AI will be needed to basically analyze the aging within us, right? What is the age of my liver? How is my health? How is my body composition? Can I improve something in my habits? This kind of information will just grow, right?

It’s like the trend of using these wearables to monitor our sleep, our nutrition, or our sport. Taking care of health before we have symptoms is just going to be a growing business, and therefore, a lot of AI tools will be needed to understand our inner us, you know, with these scans, which are the only way we have to understand the organs, the health status of our human body, and to try to identify any silent killer we might have, no?

Imagine you identify a small lesion in the lung that you are able to, you know, remove with surgery before it goes to advanced stages. You know, that’s pretty valuable, and I think that’s just going to grow, and Quibim has been there with this long-term vision of not just trying to be a tool that improves radiologist workflow. We go beyond that in trying to understand basically what’s happening in the human body at every single tissue point, at every time, no?

That’s basically why we say that imaging will be one of the catalysts of precision health, right? And we talk about health, not medicine, here when we say long-term, because we definitely see imaging and AI playing more and more of a role in health before medicine.

[0:31:56.6] HC: I look forward to following your progress with this. Ángel, I appreciate your insights today. I think this will be valuable to many listeners. Where can people find out more about you online?

[0:32:06.2] AAB: They can definitely find me on LinkedIn, I am quite active there. There is also the Quibim webpage. Yeah, I’m pretty active on social media, so it’s easy to find me.

[0:32:16.5] HC: Perfect. Thanks for joining me today.

[0:32:18.4] AAB: Thanks to you, I really enjoyed the conversation and look forward to making contact. Thank you.

[0:32:24.3] HC: All right, everyone, thanks for listening. I’m Heather Couture, and I hope you join me again next time for Impact AI.

[END OF INTERVIEW]

[0:32:34.3] HC: Thank you for listening to Impact AI. If you enjoyed this episode, please subscribe and share with a friend, and if you’d like to learn more about computer vision applications for people and planetary health, you can sign up for my newsletter at pixelscientia.com/newsletter.

[END]