These days, it seems that there are a lot of big problems in the world, especially in healthcare. Our guest today believes that there is massive value in tackling smaller problems, and, sometimes, the smaller problems are the most important to solve.

I welcome to the show today Joe Brew, Co-Founder and CEO of Hyfe, and he is here to talk about detecting and tracking coughing. We hear about what led to the founding of the company Hyfe and why they’ve narrowed their respiratory health innovations down to focus on cough. Joe talks about the role of machine learning, the process of gathering cough examples, and how they train their models. He touches on challenges they’ve faced, navigating model performance in varying environments, and the benefits of publishing their work. To hear more about why Joe believes now is the time to build this type of technology, don’t miss this episode.

Key Points:
  • How the movement of pathogens through our bodies and communities eventually led to the founding of Hyfe.
  • What Hyfe does with respiratory health and why it’s important in overall healthcare.
  • Why they’ve narrowed their focus down to the cough.
  • The role machine learning plays in their cough-count technology.
  • He explains more about acoustic epidemiology.
  • The process of gathering cough examples and annotating them to train their models.
  • We explore the challenges faced working with and training models on audio data.
  • Navigating model performance in varying environments: working well in the real world.
  • Joe shares thoughts on the benefits of publishing their work.
  • Why now was the time to build this type of technology.
  • How Joe and his team are measuring the impact of their technology.
  • Joe offers advice to other leaders of AI startups.
  • We talk about the potential impact of Hyfe in three to five years.


“I realized that there are so many global health problems that are addressable, at least partially by tech. I hesitate to say, solvable, but addressable.” — Joe Brew

“The really big problem that Hyfe is tackling is around respiratory health.” — Joe Brew

“It felt to us that cough is perhaps the lowest-hanging fruit, the area where the additionality of tech is greatest, because it's so prevalent and because the status quo is currently so poor.” — Joe Brew

“If you really want reliable medical grade annotations, you need reliable medical grade input. Garbage in, garbage out. That's why the only way to really do that is through partnerships with medical professionals.” — Joe Brew

“A method that, if I were to start another company or do another project, I would absolutely repeat, is to go quickly to market, start collecting real-world data really quickly, and build a feedback loop where you're constantly training, testing, and validating on real-world data.” — Joe Brew

“Our aim is not just to get nice comments on the App Store or nice emails. It's to impact the lives of millions. Everybody who breathes has lungs and everybody with lungs coughs. We think cough tracking is for everybody.” — Joe Brew

“Don't be turned off by problems that appear simple. Sometimes the simplest problems are the ones that are the most important to solve.” — Joe Brew

Joe Brew on LinkedIn
Joe Brew on Twitter
Hyfe AI



[0:00:03] HC: Welcome to Impact AI, brought to you by Pixel Scientia Labs. I’m your host, Heather Couture. On this podcast, I interview innovators and entrepreneurs about building a mission-driven machine learning-powered company. If you like what you hear, please subscribe to my newsletter to be notified about new episodes. Plus, follow the latest research in computer vision for people and planetary health. You can sign up at


[0:00:33] HC: Today, I’m joined by guest, Joe Brew, Co-Founder and CEO of Hyfe, to talk about detecting and tracking coughing. Joe, welcome to the show.

[0:00:42] JB: Hey, Heather. Thanks so much. Great to be here.

[0:00:45] HC: Joe, could you share a bit about your background and how that led you to create Hyfe?

[0:00:48] JB: Yeah, absolutely. My background is in global health, largely working in infectious disease. I did my Ph.D. in malaria research in Mozambique. I had worked previously at the Florida Department of Health in infectious disease surveillance, so syndromic surveillance. I’ve always been interested in the movement of pathogens through our bodies and through our communities. After my Ph.D., I discovered tuberculosis, this large respiratory pandemic that’s been around for far longer than COVID, which brought me to work in Nepal and rural Nepal, where we were using drones to fly around sputum samples to increase case finding.

I was at the crossroads of these big problems in global health and, at the same time, the cutting edge of tech, right? Flying drones was pretty cool and pretty fun and it’s flashy. It made me realize that, man, there are so many global health problems that are addressable, at least partially by tech. I hesitate to say, solvable, but addressable. Oftentimes, the best tech, the most innovative, cool, advanced machine learning algorithms, for example, being built are not for the purpose of solving these problems. It was frustrating to me to see how much energy was going into, for example, image processing for the purposes of identifying your friends in a group party picture on social networks, and how little was going into, for example, diagnostics, right?

Hyfe was founded out of that frustration. That, man, there’s so much good tech and there’s so many good tools, especially with AI and machine learning, why are we not using them more for some of these really big problems? The really big problem that Hyfe is tackling is around respiratory health.

[0:02:42] HC: What does Hyfe do with respiratory health and why is this important overall for healthcare?

[0:02:47] JB: Yeah, so Hyfe is a cough company. Where we’re 100% focused on coughing. We’ve gone through a few different hats in the past and we’ve looked at the acoustic soundscape and lung sounds more generally. We’ve narrowed down on cough. The reason why is because there’s just so much to do with cough and it’s already so neglected. Think of your own experience. I am almost certain you have been to the doctor for a cough, because almost everybody has, because it’s the world’s most prevalent symptom in primary care. People show up to the doctor and they have a cough.

Were it any other symptom, the doctor would poke and prod you and quantify that symptom, right? If you show up to the doctor with a fever, they don’t just ask you, “How does your temperature feel?” They stick a thermometer in you and they measure your temperature. If you show up with, let’s say, some complication from diabetes, they’re going to measure your blood glucose. If you show up with a, let’s say, a blood pressure issue because you’ve got hypertension, they’re going to put a cuff on you and they’re going to measure it.

With a cough, it’s a really strange interaction. You tell your doctor you have a cough. They don’t usually bother listening to you. They ask you, how bad is your cough? Or how much are you coughing? You have no idea how to answer that. Then they treat you based on this imperfect information. Then they monitor the treatment effects by, again, asking you, “So, how’s it going now?” It’s super medieval, right? It felt to us that cough is perhaps the lowest-hanging fruit, the area where the additionality of tech is greatest, because it’s so prevalent and because the status quo is currently so poor.

What Hyfe does is we take an input, which is sound, and we generate an output, which is meaningful information about your respiratory health. We do that using purely cough. Our main focus is on cough counts, right? There’s a lot you can do with a cough. I mean, you can analyze the acoustic characteristics of a cough and even begin to diagnose, or screen, based on those characteristics. Prior to that, there’s an even simpler problem, which is just counting your cough, knowing how much and when you are coughing, right? Just generating cough frequency data is something that’s never really been seen before in medicine, and it provokes zero skepticism from the medical establishment.

Everybody recognizes that it’s useful. It’s actually fairly, I don’t want to say simple, because it’s not. We work a lot on it. But it’s straightforward, what to do. We want to identify coughs in a continuous stream of audio and then time-stamp those coughs. That’s what Hyfe does. We count coughs from sound.

[0:05:28] HC: What role does machine learning play in this technology?

[0:05:30] JB: I mean, it’s machine learning and signal processing all the way down. We take this continuous input of audio from your device. It can be your phone, or wearable, or one of the wearables we build. Then we use signal processing techniques to identify explosive events. The onset of loud sounds. Because cough is characteristically explosive. That allows us to eliminate most of the audio. We don’t have to do the heavy lifting, or a lot of processing on, let’s say, your quiet conversation, or you’re asleep, or the gentle breeze blowing through an open window.

What we do instead is use signal processing to identify these peaks, these explosive moments. Then, after those explosive moments are identified, we chop 0.5-second audio snippets, which then get passed to a machine learning model or classifier, a convolutional neural network, which has been trained on many hundreds of thousands, millions of cough versus non-cough samples. That half-second, by the way, is long enough to give enough information to see if it’s a cough or not, but not so long that it captures any important identifiable private information. Depending on that half-second of audio, the model classifies it as cough or not. Then based on that classification, we are able to timestamp the cough.

Then we use further machine learning models to process those timestamps and to process that longitudinal time series data to predict health outcomes. We do machine learning on the acoustic characteristics of the cough as well. We convert the cough to a mel spectrogram, which is a visual representation of sound. Then we can use, basically, image classification techniques, so as to see whether you have a certain disease, or are likely to have a certain outcome.
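In broad strokes, the front end of the pipeline Joe describes (signal-processing peak detection on continuous audio, then a fixed half-second snippet handed off to a classifier) might look something like the sketch below. This is purely illustrative, not Hyfe's code; the frame size, energy threshold, and function names are all assumptions.

```python
import numpy as np

SAMPLE_RATE = 44_100      # samples per second, as mentioned in the episode
SNIPPET_SECONDS = 0.5     # window length passed to the classifier
FRAME = 1_024             # analysis frame for energy tracking (assumed)

def find_explosive_onsets(audio, threshold_ratio=4.0):
    """Return sample indices where frame energy jumps sharply above the
    baseline, a crude stand-in for the 'explosive event' detector described."""
    n_frames = len(audio) // FRAME
    frames = audio[: n_frames * FRAME].reshape(n_frames, FRAME)
    energy = (frames ** 2).mean(axis=1)
    baseline = np.median(energy) + 1e-12
    onsets, prev_loud = [], False
    for i, e in enumerate(energy):
        loud = e > threshold_ratio * baseline
        if loud and not prev_loud:   # rising edge only: one onset per event
            onsets.append(i * FRAME)
        prev_loud = loud
    return onsets

def extract_snippets(audio, onsets):
    """Chop a fixed 0.5-second snippet at each onset for the classifier."""
    n = int(SAMPLE_RATE * SNIPPET_SECONDS)
    return [audio[s : s + n] for s in onsets if s + n <= len(audio)]

# Synthetic demo: 3 seconds of quiet noise with one loud burst at t = 1 s.
rng = np.random.default_rng(0)
audio = 0.01 * rng.standard_normal(SAMPLE_RATE * 3)
audio[SAMPLE_RATE : SAMPLE_RATE + 2_000] += 0.8 * rng.standard_normal(2_000)

onsets = find_explosive_onsets(audio)
snippets = extract_snippets(audio, onsets)
print(len(onsets), len(snippets[0]))  # one detected event, 22,050 samples
```

The cheap energy gate is what makes the "eliminate most of the audio" step work: the expensive neural network only ever sees the short snippets around detected peaks.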

You can think of this: when you go to a doctor with a potential disease, they will look for, let’s say, a chemical signature of that disease using an assay, a urine test, a nose swab, or a visual signature. If it’s a radiologist, they’ll look at your X-ray. Acoustic characteristics also correlate with diseases and with outcomes. There’s this budding field of acoustic epidemiology, which is looking at this. Anybody who’s ever been diagnosed with pertussis, or whooping cough, knows that sound is important. But it’s a cool new field where people are looking and finding that even diseases such as COVID-19, or tuberculosis, also have an acoustic signature.

It so happens that we have such a large database of cough sounds that we’re able to harvest this data to try to find that acoustic signature, that acoustic fingerprint, if you will, of different diseases.

[0:08:15] HC: You mentioned you’ve got a million or so examples of cough and no cough. How do you go about gathering those examples and annotating them in order to train your models?

[0:08:25] JB: Yeah, so we’ve actually got northwards of 700 million, I think. It’s the point of diminishing returns, right? At some point, quality begins to matter more than quantity. For the annotations, which distinguish between a cough and not cough, what you need is trained human listeners to process continuous audio and say, “Hey, that was a cough,” versus, “No, that was a throat clear, or a sneeze, or a hiccup, or what have you,” right? We do that at scale, and we have a pretty robust system where we’re using trained human labelers. Two independent labelers listen. They’re blinded to each other. When they agree, we consider, hey, that’s good ground truth. When they disagree, that gets elevated to a Ph.D., MD expert who can then disambiguate, was that a cough, or was that a throw clear? Then that leads to these many millions of samples that we can train for our cough versus not cough classifier.

Then for the other kinds of annotations, we look at medical information. This is, okay, here’s a thousand coughs, but who are these coughs from? This is a cough from a 29-year-old HIV-negative, pregnant, tuberculosis-positive female, versus a 64-year-old male with GERD. For this, it’s really important that we get reliable data, and solicited patient-provided data is often not reliable, particularly when it comes to medical diagnosis.

For this kind of data, what we do is partner with those who are using Hyfe in research studies, and we have a symbiotic relationship with them, wherein they use our technology and, in exchange, we often get access to some of their anonymized data. We can say, “Hey, patient 123 is COVID-19 positive as of this date.” That allows us, of course, to build a classifier which can hear, or identify, the acoustic fingerprint of COVID-19. This is important to emphasize. This is a different route than others in the space have taken, and there are a lot of projects out there for crowdsourcing this thing, right?

Please, donate your cough to science, or to our company, and people go to a web form and fill things out. That’s great for building prototypes. If you really want reliable medical grade annotations, you need reliable medical grade input. Garbage in, garbage out. That’s why the only way to really do that is through partnerships with medical professionals.

[0:10:47] HC: What kinds of challenges do you encounter in working with and training models on this audio data?

[0:10:54] JB: Yeah. Audio is tricky. I think that’s one of the reasons why image and text more recently has gotten a lot more attention. You think of a general stream of audio. Oftentimes, we sample at 44,100 samples per second. What that means is you just get a bunch of numbers, right? It’s not always straightforward what methods are best for audio. Whereas, I feel like, image processing has really matured in the last few years. Audio is still a little bit of the wild west.

In addition, because we often use phones as our primary source of data, this means we’re dealing with different microphones and different on-device processes, such as, let’s say, high-pass filters. Many phones are optimized for human speech. Cough is often at a higher frequency than human speech. Phones are also optimized for data efficiency, which means differences in sample rate, too. Then, of course, there’s the acoustics of the cougher. I mean, I’m sure you know people of different sizes and statures and backgrounds and ages and sexes. This leads to radically different ways that people cough.

Then there’s also the acoustics of the environment. Coughing in your car with the window open. That’s a very different challenge to solve than coughing in a quiet office, or at, let’s say, a rock and roll concert or a discotheque, right? The challenges are there. Cough detection is a very easy problem to solve if you do it in a very pristine, sanitized environment, like a bedroom at night when people are sleeping, and it becomes a very hard problem to solve when you’re looking at the other 16 hours of the day, when people take devices out into the wild and live their normal lives.

Our challenges are largely around the acoustic environment, the acoustic differences between coughers, and then, of course, that wide diversity, especially on Android, of just what manufacturers do with audio prior to it actually getting to the CPU.
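One of the device-variability issues mentioned, differing sample rates across phones, is commonly handled by resampling all incoming audio to a single rate before any feature extraction. A minimal sketch using linear interpolation (an assumption for illustration; a production system would more likely use a polyphase or FFT-based resampler, and the 16 kHz target is also assumed):

```python
import numpy as np

def resample_linear(audio, src_rate, dst_rate=16_000):
    """Naive linear-interpolation resampler to normalize device sample
    rates before feature extraction. Illustrative only: proper resamplers
    also low-pass filter to avoid aliasing when downsampling."""
    n_dst = int(round(len(audio) * dst_rate / src_rate))
    src_t = np.arange(len(audio)) / src_rate   # source timestamps (seconds)
    dst_t = np.arange(n_dst) / dst_rate        # target timestamps (seconds)
    return np.interp(dst_t, src_t, audio)

# One second of a 440 Hz tone at 44.1 kHz, normalized down to 16 kHz.
x = np.sin(2 * np.pi * 440 * np.arange(44_100) / 44_100)
y = resample_linear(x, 44_100, 16_000)
print(len(y))  # 16000 samples, i.e. one second at the target rate
```

Normalizing the rate up front means the downstream snippet extraction and classifier see a consistent input regardless of which handset produced the audio.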

[0:13:00] HC: Then how do you ensure that your models perform well in a variety of environments, like some of the background noise situations that you mentioned, and that they continue to perform well over time?

[0:13:09] JB: Yeah, it’s a great question. I think I particularly, we made this wonderful mistake when we started in 2020, which was to go to market. To be public and free on the app store and Google Play and stuff. To go to market extremely quickly, right? We had an unfinished rough around the edges, half-baked product. We had a mediocre classifier. Our peak detection was a little too sensitive. I remember early reports of people’s phones heating up, or them getting 10,000 false positives in a day.

I mean, we went to market prematurely. What that meant is that we were able to collect data from the real world very quickly. It’s that feedback loop of having something, collecting data with it, and then using that data to improve, which allowed us to really, really quickly iterate and improve the models, right? As opposed to trying to perfect something before you ship, in a laboratory somewhere, in which case you can tweak out that AUC of 0.999. Then you go to the real world and it falls apart. We went to the real world early and, expectedly, fell apart, but we harvested so much data so quickly that we were able to improve really quickly.

That means that our cough samples and our non-cough samples are reflective of our users, our tens of thousands of users out there, ranging from, say, Malaysian taxi drivers to Russian grandmothers, to French, I don’t know, bicycle mechanics, right? It’s everything. It’s all of their acoustic environments and how they get to and from work, their households and their workplaces, and their unique lungs.

What that has allowed us to do is build models that work well in the real world. We’re constantly labeling data from the real world. We have tweaked our agreements with users so that people can opt-in to help us improve these models. If you’re a general user, we don’t hear any of your audio. But if you opt-in to help us, then we’re able to listen to some of your audio, so as to improve our models.

That allowed us to get really good accuracy in that first year. It allows us to continue to improve our accuracy now. That’s not just the diversity of people. It’s also a diversity of platforms and microphone types and stuff. As we get more people, we train, and we’re constantly retraining models. I mean, every week we ship a new classifier, and the incremental marginal increases in performance in those classifiers are really small. But they take into account the new people that have joined our community and their unique characteristics. That’s keeping us up to date with the real world. It’s a method that, if I were to start another company or do another project, I would absolutely repeat: go quickly to market, start collecting real-world data really quickly, and build a feedback loop where you’re constantly training, testing, and validating on real-world data.

[0:16:03] HC: Yeah, that feedback loop to enable you to collect the diverse data, I certainly understand why that’s essential. Your team has published a number of research articles. What benefits have you seen from publishing your work?

[0:16:16] JB: Yeah. We are very much a science-first organization and a science-driven organization. We are largely scientists, with research backgrounds, and we want to be part of the scientific conversation. It doesn’t mean that scientific outputs, like research articles, are our primary driver. We’re not academics. But we think that we don’t want to be one of those groups that is doing AI in a black box, or behind the curtain, trying to leapfrog traditional science. We want to work with the scientists.

With medical and cough, particularly, there’s so much to be done that needs to necessarily go through scientists, researchers, and medical professionals, right? We have tried to be part of that conversation. Part of that means publishing papers ourselves on, for example, our annotation methods, or on our validation trials, specific disease types, methodology for actually counting cough, and methodology for classifying, diagnosing, or screening based on cough characteristics.

More importantly, we partner with scientific researchers who are doing this on their own, and we give them technical support. Sometimes we come into service contracts and then they use our devices or our algorithms. This means that we’re able to scale, or rapidly accelerate the scientific work in this area, without us doing it all ourselves. There’s a symbiosis between new tech, or in this case, a novel biomarker, like cough, and cutting-edge science. We don’t need to be the first authors of papers in order for that symbiosis to happen.

In fact, we prefer to take a backseat and let the scientists do what they do best, which is come up with novel hypotheses and test them. As just a concrete example of this, we put Hyfe running on Android phones at the bedside of, I think, 110 COVID-19-infected, hospitalized individuals at the Emerging Pathogens Institute at the University of Florida. They just looked at cough trends. How much were these people coughing during their hospitalization? They found that a decrease in cough was a leading indicator of a negative outcome, like intubation or death.

This was the first time anybody had ever seen something like this. Then a different group replicated the study in Montreal, Canada, and found the same thing. It was really exciting for us to be part of that conversation, to see that once you start measuring something, you can build a body of evidence. I mean, these are still early days, but that evidence will eventually get translated into clinical practice and best practices around managing cough among those with a disease, infectious or not.

We could never do all of this on our own, right? We’re a small company. We’re a startup. A lot of scientists out there are working in respiratory and have patients with real problems and are super excited to be able to quantify those patient symptoms for the first time ever. We want to work with those scientists and we want them to get really impactful cutting-edge publications because of course, it’s good for us, but it’s also good for them. It keeps us focused. I mean, so why are we out there publishing work at all? It keeps us focused.

I mean, in addition to signaling credibility, and hey, we’re here and we’re doing cool stuff, it keeps us focused on what really matters, right? Because sometimes we have cool ideas, “Oh, what if we did this? What if we did that?” Then a scientist or a researcher will come back and say, “Yeah, but that doesn’t really matter for my use case, right? In the clinic, I’m looking at this, or I can only use something if it comes this way, or it has to be regulated if you’re going to do that, but I can use something unregulated for this.” And so, engaging with scientists, clinicians, researchers, and practitioners directly and regularly keeps us building products that are relevant to them, not products which we think might be a good idea but probably aren’t.

[0:20:10] HC: Why is now the right time to build this technology? Clearly, there’s a need for it in health care, but are there any specific technological advances that made it possible to do this now, when it wouldn’t have been feasible a few years ago?

[0:20:22] JB: Yeah, great question. I think, this is no credit to ourselves, but now is absolutely the best time to be building an acoustic AI. I mean, Hyfe would have been unimaginable, or unfeasible 15 years ago. Because the smartphone came along. What that means is that almost all of us all the time take microphones with us everywhere we go. That microphone is connected to a pretty powerful computer on your device itself and is often, also, internet-connected. I mean, like I said, I’m in rural Guatemala right now, which is a lower-middle-income country. I go out on a run in the woods somewhere and there’s a guy carrying firewood on his back and he’s got a cheap Android, a smartphone with a microphone.

The ability to scale to the billions of people who are at risk of respiratory disease, without having to build billions of devices, is just there. The device is already in people’s hands and in their pockets and in their purses. It’s just such a cool time to be building software for that device, because of the scalability question. Then another thing that makes now such a great time is this renaissance in ML and AI and signal processing. We’ve all seen it, especially this year. There are just a lot of really cool methods coming out and a lot of really cool ways to process data. It’s fun to have enough data to actually build stuff.

Our flagship model is Allison. We call her Allison because she’s always listening. That’s the cough counting algorithm. We’re building a series of other algorithms that are pretty advanced, looking at whether you can identify what disease somebody has, or what outcome they might have, or whether they’re going to have an exacerbation or some episode, based on things like time series data and acoustic data.

A lot of this, you simply couldn’t fathom 20 years ago. Now these models are out there, oftentimes open academic models that you can take and begin to prototype and experiment with, and see what’s feasible and not. What we’re finding is that if you’ve got enough data, and it’s well-curated, high-quality data, you can do a whole lot with machine learning right now: predicting outcomes, predicting episodes, screening for disease, and of course, symptom monitoring. It’s a great time to be in acoustic AI, and it’s a great time to be looking at cough.

[0:22:57] HC: Thinking more broadly about what you set out to accomplish with Hyfe, how do you measure the impact of your technology to be sure that you’re on track for following through with those goals?

[0:23:07] JB: Yeah. There are two ways, I guess. The qualitative way is, and this is the one that of course, touches the heartstring the most, is just talking to users, and talking to people who are trying to wrap their lives around the conditions they have in Hyfe. We found, for example, that, and this was a surprise to us, that our real bastion of support comes from chronic coughers, people with refractory, or unexplained chronic coughs. Somewhere between 5% and 10% of adults have a chronic cough. What this means is that there’s not an underlying disease that’s causing the cough. It’s that oftentimes, cough itself is the disease. There’s a hypersensitivity in the larynx, which causes cough and more coughing irritates the throat even more. It’s this vicious cycle.

You can think, “Oh. Well, you’re coughing too much. Big deal.” It is a very big deal, right? It can cause all sorts of secondary issues. It causes, I mean, just huge quality of life issues. We’ve heard of people who suffer urinary incontinence due to cough. They can’t go to their kids’ piano recitals. They can’t go out to eat. They get bad looks, especially in the era of COVID, anytime they’re in public. Chronic cough is a really big deal. It’s been really reaffirming to hear stories from chronic coughers who were able to quantify their cough for the first time, show their doctors for the first time the magnitude of the problem, and also begin, in turn, the virtuous cycle of self-experimentation, right?

People quantify their cough and then they realize that, “Hey, you know what? After I eat something cold, I’m coughing more. Maybe I’m not going to eat that cold thing right before I go to the movie theater.” “Or, when I turned off the fan in my room, I cough less. Maybe that’s going to help me with whatever.” Once you give people a tool, they figure things out on their own. That goes for both scientific researchers, but also, just individual consumer users. That’s been really cool feedback, and it’s really affirmative and really motivating to hear from people who are using Hyfe, and it’s actually making their life better.

Our aim is not just to get nice comments on the App Store or nice emails. It’s to impact the lives of millions. Everybody who breathes has lungs and everybody with lungs coughs. We think cough tracking is for everybody. We want it to be, of course, out there for chronic cough. We want it to be out there with infectious diseases. One of the areas that we work very actively on is infectious diseases, like tuberculosis, because of the huge burden of disease. I mean, it’s somewhere between a half million and a million deaths per year due to TB. There’s a huge role that cough-tracking could have in tuberculosis screening and management. TB is notoriously underdiagnosed and misdiagnosed. Even once correctly diagnosed, you’ve got a six-month course of chemotherapy to follow.

Because most TB patients are in the developing world, they’re often not carefully watched over. There’s a huge opportunity for remote patient monitoring with tuberculosis and cough, to make sure that patients are getting better, to make sure that their symptoms are not getting worse, and to monitor unobtrusively.

One of the impacts we have our eyes on is how to get cough monitoring onto the devices of those with tuberculosis, either prior to diagnosis, during massive screening programs at scale, or after diagnosis for remote patient monitoring.

[0:26:48] HC: Is there any advice you could offer to other leaders of AI-powered startups?

[0:26:53] JB: Yeah, lots. I think it’s awesome and really exciting that we’re in this moment of energy and momentum around AI. If you’re embarking on that journey, go for it. I really think that AI and ML offer so much. There’s just a ton to do there. We’re at the cusp of really cool things, right? I think they are really at that point between the before and after in human history of AI. That’s really cool.

I guess, my one piece of specific advice would be, don’t be turned off by problems that appear simple. Sometimes the simplest problems are the ones that are the most important to solve. Malaria is a good example of a very simple problem, a disease that we know how to prevent, diagnose and treat. Yet, it kills hundreds of thousands of people per year. Cough is another one. Cough is extremely simple, right? We all know what it is. We all know that it correlates to sickness. It deserves a lot of attention.

There’s a temptation within AI and ML to try to solve the hardest problems first, because it’s just a paradigm shift. Wow, maybe we can solve this thing which was never solved before. But if your goal is impact, and my goal is impact, and I think Hyfe’s goal is impact, sometimes it’s worth looking at the biggest problems first, regardless of the degree of complexity, and then seeing if ML, AI, and tech in general have a place there.

My suspicion, confirmed by our journey of building Hyfe, is that there’s a lot you can do on some of these “simple, big problems.” The big problem of cough is one of them, but there are so many more out there.

[0:28:32] HC: Finally, where do you see the impact of Hyfe in three to five years?

[0:28:37] JB: Yeah. I’m really optimistic about the future of cough counting. Selfishly, I hope, sorry for the fireworks. Selfishly, I hope that Hyfe will be the leader in cough quantification and in cough analysis in three to five years. Even if we’re not, I’m sure that cough counting is coming, right? The same way that step counting is now everywhere. It’s very normal to say, “Hey, I hit my 10,000 steps today,” or something like this. I think that in three years, it’ll be very normal to say, “Oh, man. My cough frequency has gone from 50 to 200 an hour. Maybe I need to go back to the doctor. Or maybe I’m coming down with something. I coughed 20 times last night.” That’s abnormal. Now it won’t be. It won’t be. I think, very soon, that will be a very normal thing.

I also think with all those coughs that we’re collecting, it will be increasingly normal to screen and diagnose for diseases, conditions, and outcomes based on cough data. I have a vision of, let’s say, a worried mother in Bangladesh, or India, who has an infant with a cough, being able to take out her smartphone and get an immediate risk score for something like pertussis. That’s a life-saving diagnosis there. There are so many other cases, from chronic coughers that have been neglected for decades, to those who suffer from asthma, COPD, or congestive heart failure, to infectious diseases that we’re finding, through some of our research partners, which are not traditionally associated with cough.

Once you actually start paying attention, you see that they do actually have high rates of cough. I think we’re going to see it first in research. We’re already seeing that in the publications and such. Next, we’re going to see it out there in the wellness and consumer space, people increasingly counting their own coughs as an indicator of health and wellness. Then finally, we’re going to see it with screening and diagnosis. This is probably closer to five years than three, because of the regulatory complexity.

I think in a few years, it’ll be very normal to get a notification on your phone, or smartwatch, or your home speaker that says, “Hey, the acoustic characteristics of your cough are consistent with, let’s say, lung cancer. Maybe it’s time for a screening,” or something like this. I’m really optimistic and hopeful for that world, because it’s a paradigm shift: not waiting for people to realize that they’re sick before they seek medical care or diagnosis, but instead helping people along that journey and, therefore, diagnosing more, and earlier. That will ultimately lead to reductions in morbidity and mortality. That’s Hyfe’s goal.

[0:31:11] HC: This has been great, Joe. Your team at Hyfe is doing some fascinating work for respiratory illnesses. I expect that the insights you’ve shared will be valuable to other AI companies. Where can people find out more about you online?

[0:31:23] JB: Look us up at That’s H-Y-F-E.AI. Just look up cough tracking, cough counting, and cough frequency, and you’ll find us in a lot of academic publications, too. I encourage you to read those papers because I think it’s brand-new science and it’s really exciting. I want to always give as much credit to our research partners as possible because they’re really on the vanguard of a lot of this stuff. Yeah, cough tracking and Hyfe.

[0:31:51] HC: Perfect. Thanks for joining me today.

[0:31:53] JB: Yeah, it was great Heather. Thanks so much.

[0:31:55] HC: All right everyone. Thanks for listening. I’m Heather Couture and I hope you join me again next time for Impact AI.


[0:32:05] HC: Thank you for listening to Impact AI. If you enjoyed this episode, please subscribe and share with a friend. If you’d like to learn more about computer vision applications for people and planetary health, you can sign up for my newsletter at