Today’s AI-powered company of focus, Envision, has made it its mission to improve the lives of the visually impaired so that they can live more independently. I am joined by Envision’s Co-Founder and CTO, Karthik Kannan, to discuss how he and his team gather data for their unique models, how they develop new products and features, and how they are able to ensure that their technology performs well with multiple users and across various environments. Technological advances and the recent boom in AI mean that now is the perfect time for Envision to thrive, and Karthik explains exactly how he and his team are taking advantage of this unique moment. We end this informative discussion with Karthik’s advice for other leaders of AI-powered startups, and what he hopes Envision will achieve in the next five years. 


Key Points:
  • Introducing Karthik Kannan as he explains what led him to found Envision.
  • What Envision does and how its technology improves the lives of visually impaired people.
  • How Karthik and his team gather data for training their models.
  • His process of planning and developing a new machine learning feature.
  • Diving deeper into the development of a new product or feature.
  • How Karthik ensures that his technology performs across multiple users and environments.
  • Why now is the perfect time for Envision’s technology to thrive.
  • How Karthik measures the impact of his company’s technology.
  • His advice to other AI-powered startup leaders, and his hopes for the future of Envision.

Quotes:

“I was fascinated with games. I had asked my dad how people make games and he said, ‘They write code.’ Then he put me onto a programming class and that's how I got started with writing code.” — Karthik Kannan

“I didn't go on to my master's or anything. I just did my bachelor's, just started working directly, because I was more eager to get my hands dirty into making software and stuff.” — Karthik Kannan

“[Envision’s] overarching theme is to constantly look at how we can translate the advances in computer vision, and broadly AI, into tools that can help a visually impaired person live a more independent life.” — Karthik Kannan

“We mix both data from open datasets plus we throw in a healthy mix of data that's captured from a visually impaired person's perspective — that's what makes the whole data collection and cleaning process quite unique at Envision.” — Karthik Kannan

“That whole process of [user] validation is extremely, extremely important because we're not the direct users of the product ourselves.” — Karthik Kannan

“In the AI space right now, the most important thing is to try and understand where, or have a very clear idea as to what kind of impact AI is making on your customers, and to double down on that.” — Karthik Kannan


Links:

Karthik Kannan on LinkedIn
Karthik Kannan on Twitter
Envision
Envision on Twitter
Envision on YouTube
‘Envisioners Day – Hear from our users!’


Resources for Computer Vision Teams:

LinkedIn – Connect with Heather.
Computer Vision Insights Newsletter – A biweekly newsletter to help bring the latest machine learning and computer vision research to applications in people and planetary health.
Computer Vision Strategy Session – Not sure how to advance your computer vision project? Get unstuck with a clear set of next steps. Schedule a 1 hour strategy session now to advance your project.


Transcript:

[INTRODUCTION]

[0:00:03] HC: Welcome to Impact AI, brought to you by Pixel Scientia Labs. I’m your host, Heather Couture. On this podcast, I interview innovators and entrepreneurs about building a mission-driven machine learning-powered company. If you like what you hear, please subscribe to my newsletter to be notified about new episodes. Plus, follow the latest research in computer vision for people and planetary health. You can sign up at pixelscientia.com/newsletter.

[INTERVIEW]

[0:00:33] HC: Today, I’m joined by guest Karthik Kannan, Co-Founder and CTO of Envision, to talk about tools for the visually impaired. Karthik, welcome to the show.

[0:00:42] KK: Thank you for having me, Heather, and nice to join the lineup of other really cool scientists, founders, and people in AI that you’ve had on your podcast. So, glad to be here.

[0:00:53] HC: Well, I’m definitely happy to talk to you. Karthik, could you share about your background and how that led you to create Envision?

[0:01:01] KK: Sure. I really don’t have a very academic background when it comes to deep learning, or AI in general, or ML. I’ve been pretty much, you could say, a hacker/developer in this space since 2015-2016. Before that, I would broadly classify myself as a programmer; I wasn’t into any particular piece of tech. I started writing code very early, in my early teens, mostly, like most kids who got into programming at that time, to make games. I was fascinated with games. I had asked my dad how people make games and he said, “They write code.” Then he put me onto a programming class and that’s how I got started with writing code.

When I finished college, I started working with startups, primarily in backend engineering, backend development. I didn’t go on to my master’s or anything. I just did my bachelor’s and started working directly, because I was more eager to get my hands dirty making software and stuff. The way I got into ML was in 2014, 2015, though I was already interested in machine learning when I was in college, back in 2011, 2012. Back then, machine learning was more about manipulating tabular data. I was using scikit. Those were very rudimentary tools.

I’ve been following the space ever since. It was in 2015, 2016 that I started to get to know what was happening in deep learning; the whole AlexNet moment that computer vision had was the starting point for me. I was very involved in the early days of deep learning, when Theano was the most popular deep learning library and so on.

In 2016, 2017 is when my co-founder and I started to really seriously take a look at how computer vision could help people with a visual impairment live more independently. That was because my co-founder and I had actually gone to a blind school in India to talk to kids about what it is to be a designer, what it is to be an engineer. The conversation that I had with the kids then was the direct starting point for Envision. It’s a pretty roundabout journey, I would say, but I’ve always been involved in tech, and machine learning was naturally on my radar from a very early time. Yeah.

[0:03:16] HC: What does Envision do?

[0:03:18] KK: Envision helps people with a visual impairment live more independently, using computer vision. We built the Envision glasses, basically smart glasses that help people with a visual impairment by helping them read text around them, recognize faces, recognize objects, and so much more. The glasses have a camera on them, and we’ve got computer vision models that run directly on the glasses, but also on the cloud. A visually impaired user could just wear the glasses. They could, for example, be sitting at a restaurant and very quickly ask the glasses to scan a menu card for them, and the glasses would scan the menu card and read it out to them.

Very recently, we started putting together all the cool stuff that’s happening in the space of LLMs and computer vision. Now, not only can a visually impaired person scan a piece of text; they can also ask questions of that piece of text, and so on, with an LLM. Essentially, Envision builds tools, like the Envision glasses, which have a camera and use computer vision to translate the visual world into audio.

We also have the Envision app, which is available for free to users across the world and does things similar to the Envision glasses, but in the form factor of a smartphone app, right? These are the two things that Envision does, but the overarching theme is to constantly look at how we can translate the advances in computer vision, and broadly AI, into tools that can help a visually impaired person live a more independent life.

[0:04:49] HC: It definitely sounds like computer vision and machine learning are essential in these glasses. You mentioned some of the tasks that they perform. How do you gather data for this? How do you set up model training using that data?

[0:05:01] KK: Yeah. It’s a good question. One thing that I’ve noticed ever since we started working in the computer vision for accessibility space, or AI for accessibility space, is that the open datasets available today aren’t inclusive, in a sense. They don’t capture data from, say, a visually impaired person’s perspective, right?

If you pick any image in an open dataset like ImageNet or Open Images or whatever, you often find a perfectly framed photograph: you have the background and the object pretty much in the center of the image. It’s a very well-taken photograph, most of the time, which shows that it was taken by a sighted person. What we realized was that when we take models trained on these open datasets and put them to use within our particular use case, they often don’t fare well, because they haven’t seen images captured from a visually impaired person’s perspective.

In the very early days of Envision, when we didn’t have the scale at which Envision operates right now, we used to work very closely with beta testers in the Netherlands. Sometimes we would pay them to help us collect data from the images they had been capturing. In the beginning, it was largely a small group of beta testers that collected the data for us. Right now, what we do is, when someone installs the Envision app or the Envision glasses, data sharing is off by default, and users have to explicitly opt in in order to share data with us. We do use some data that we capture from end users right now to be able to train our models.

The way we set up model training is, again, basically a pretty standard data collection pipeline. Over the course of the last couple of years, especially for things like object detection, we’ve worked on automating the entire training pipeline as well: we take the data that’s been collected, split it into training and validation sets automatically, then train models and benchmark them. We create new private beta builds of new versions of the models, and those are sent to a small group of internal testers.

We use both the human feedback and the benchmarks that we run internally to see if the new versions of the model that we’re pushing are working better, or, if there are new classes that we’re adding, we use these benchmarks to ensure that they’re doing fine. In terms of the training pipeline, it’s a pretty standard training pipeline that we follow, like in most cases of training computer vision models. But the fact that we mix both data from open datasets plus we throw in a healthy mix of data that’s captured from a visually impaired person’s perspective, that’s what makes the whole data collection and cleaning process quite unique at Envision.
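
For readers who want a concrete picture of the automated split-train-benchmark loop Karthik describes, here is a minimal sketch in Python. It is an illustration only, with hypothetical paths, thresholds, and helper functions, not Envision’s actual pipeline.

```python
# Hypothetical sketch of an automated split/train/benchmark gate, loosely
# mirroring the pipeline described above. Paths, thresholds, and helper
# functions are illustrative assumptions, not Envision's actual code.
import random
from pathlib import Path

def split_dataset(image_dir: str, val_fraction: float = 0.2, seed: int = 42):
    """Shuffle the collected images and split them into train/validation sets."""
    images = sorted(Path(image_dir).glob("*.jpg"))
    random.Random(seed).shuffle(images)
    n_val = int(len(images) * val_fraction)
    return images[n_val:], images[:n_val]  # train, validation

def benchmark(model, val_images) -> float:
    """Placeholder: score the model on the validation set, e.g. mAP for an
    object detector. The real metric depends on the task."""
    raise NotImplementedError

def maybe_promote(new_model, current_model, val_images, margin: float = 0.01) -> bool:
    """Send the new model to internal beta testers only if it beats the
    current production model on the shared benchmark by a small margin."""
    return benchmark(new_model, val_images) >= benchmark(current_model, val_images) + margin
```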

[0:07:57] HC: You mentioned a couple of different features on these glasses, like reading text and recognizing objects. How does your team plan and develop a new machine learning feature? What steps do you go through in that process?

[0:08:09] KK: Yeah. The first realization that we had was the fact that we aren’t users of the product. When I say we, I mean that most of the people at Envision are sighted. We do have employees who are visually impaired, and they have a lot of input into our product development process, in the sense that they tend to suggest features internally, because they’re heavy users of the product themselves, right?

The understanding from the very beginning, when it was just me and my co-founder, was that since we’re not users of the final product, we cannot make assumptions on users’ behalf and go about building things. In the times when we actually went ahead and did that, it often turned out to be quite incorrect. In the very early days of Envision, we had two features on the Envision app when it was still a very, very early prototype. One was reading text and the other was providing natural language captions of an image.

Now, as a sighted person, for me, naturally, the more exciting thing to work on was the natural language captioning of images. I was like, “Oh, wow.” If I were someone who was visually impaired, I would just want to take pictures of things around me and get descriptions of what the images are. We put an inordinate, almost obscene amount of time into trying to improve that particular model.

When we actually started releasing the app to more and more people, we saw that people didn’t give two hoots about the whole captioning stuff. They were really focused on reading text. After talking to a few users, we realized that it makes a lot of sense. Because if you think about it, pretty much everything around you is text: product packaging, stuff that’s on a computer screen, words on a piece of paper. You’re surrounded by text of different font types and sizes, on different types of backgrounds, printed and digital.

Once we started showing it to more people, we realized that reading text was the more important feature, and not the whole captioning stuff, which to them felt more gimmicky than an actual feature they could use. Since then, the process that we follow, which is by now quite refined, is that every couple of months we have something called Envisioners Day. It sounds a little bit like a music festival, because it is almost like a festival of sorts, where we invite Envision users from different walks of life and different age groups.

We put out an open call saying, “Hey, we’re going to have an Envisioners Day. We would love to invite you over to the office. If you’re in the Netherlands, or if you can make it to the Netherlands, we’d be more than happy to have you over.” We try to get around 20 to 25 people from different age groups and different levels of tech savviness, and we have them come to the office. We spend the entire day with them. What we do is break these people into little cohorts of three or four people, and we put them in a room with a machine learning engineer and a designer.

What tends to happen there is that the designer and the ML dev, along with these users, test out the models that we are working on in real time with the user. We hand them a pair of glasses. Let’s say we are working on a new document detection model, to detect documents more easily even while the person is holding them in front of the glasses, right? At that point in time, we’d be in the room, sitting around the table, and the glasses would be connected to a laptop. The ML engineer would see in real time how the model is performing when someone is using the glasses.

The designer will have notes on how to improve the UX of how the model is implemented. For example, if the users feel, “Hey, you know what? This is not accurate enough,” and we have two or three different versions of the model trained, then the engineer just immediately swaps out the model and we test with maybe a more accurate one. Or, if it’s too slow, then maybe we try out ways to increase the frame rate.

It’s almost a weird kind of pair programming, you could say, with the users, the designer, and the engineer sitting together and everyone tweaking the model. We finally come to an understanding of, “Okay, is this something that works for the end user or not?” We also use this time to test out some very experimental features. We’ve been working a lot on visual question answering. We often throw ideas at the testers on that day, just asking them, “Hey, would this be interesting to you, or what do you like about this particular feature?” It’s a conversation that we have with a smaller group of people from different walks of life.

Also, more broadly, we constantly have this conversation with the community. We have our own internal communities. We are very, very active on Twitter, where a lot of visually impaired people are. We are active on pretty much all the online forums that you can think of. It’s about constantly being in touch with the community, developing intuitions on where the product should go next, and validating those intuitions once again with the community across all these different channels. It’s a pretty elaborate process that we have now. Yeah.

[0:13:09] HC: The users might suggest new features that you haven’t even thought of. Maybe you spend a couple of months developing that, and at the next demo day you’re able to show it to them and get some feedback. Is that part of this as well?

[0:13:21] KK: Yeah. It is part of that as well. Sometimes we ourselves think of a feature and ask, “Hey, is this something that would be interesting for you?” We usually put that question to some of Envision’s own employees first: “Would that be interesting for you?”

At this point, for some features within the Envision glasses and the Envision app, it’s hard to pin down their origin, but the ones that have stuck the most with the customers and the users have often been the ones that came directly from them, or things we thought of that would be interesting and then validated with them. That whole process of validation is extremely, extremely important, because we’re not the direct users of the product ourselves. Yeah.

[0:14:04] HC: Yeah. That continuous feedback, I can certainly see that’s essential, especially when you’re not the user of the product. You need that feedback. What comes next in development? You have a feature, you know that it’s useful, you know that there’s a use case for it, but you haven’t developed it much yet. You don’t really know where to start. Maybe it’s something a little different than you’ve done before. What process do you go through with your internal team to figure out how to implement it? What algorithms to use? Is there research involved there?

[0:14:39] KK: Yeah. The way it happens is, once we have an idea of, “Hey, this is what we want to do,” the first step is always the actual design, the user experience, right? The design team sits down and comes up with what could be a potential set of user experiences. Maybe they don’t come up with just one; maybe they come up with two or three of them.

The next step is, of course, trying to validate those by having conversations with end customers, and using the sense of intuition we’ve built up with the product so far to figure out, “Okay, this could be a possible use case, or this could be a possible way to implement this particular feature.” We start discussing that with the customers. Once we know the exact way we want to implement it, that’s when the engineering team, the machine learning and computer vision team, really steps in.

What we do first is look at the constraints we have to work with. For example, does this feature have to run offline on the glasses, or can it run online? Is it something that needs to work in real time, or does it just work on static images? What part of the whole feature pipeline is the machine learning model involved in? Then we often ask ourselves this very important question: can we solve this using traditional computer vision techniques, or do we need deep learning to solve this problem?

There’s this list of constraints that we come up with, right? Based on those constraints, we decide what approach to actually take. Let’s say a model needs to be implemented completely offline and needs to work in real time. Then we already have some directions in which the computer vision team will start doing their research. Then it’s a process of just giving them the space to say, “Okay, you know what? Take a month or two if required, and let’s look at all the possible ways we can implement this within this set of constraints.”

This is actually the most exciting part for me, because then we just go crazy and look at all the research that’s happening in the space. It’s also a time of building simple proofs of concept, like creating a Hugging Face Space, for example, or just making a simple Gradio implementation, sending the links over to the design team, and having them test it out and give us feedback. We do a lot of what is called Wizard of Oz testing, where you simulate what the end user experience is going to be with just a simple web interface and so on.

This is a time when we actually build a lot of proofs of concept. We present these proofs of concept in a bi-weekly sprint, or bi-weekly catch-up, with the whole design team, the AI team, and the platform team. It’s a big group of people playing around with a bunch of different demos and then deciding, “Okay, this is the direction that we want to go in.”

By the time we’re done with the whole research and proof of concept stage, we’ve figured out, based on the constraints, the exact type of model architecture we’re going to use, how we’re going to go about it, what data we already have, and what data we don’t have. Then we go about collecting the data that’s missing, or, if we already have the data, we get straight into the training process.

Once the training part is done, what we usually do is, we have a set of, I would say, skeleton apps for different use cases. For example, if it’s a model that needs to run in real time on the glasses, we have some standard skeleton apps, a little internal SDK that we have built to run different types of machine learning models on the glasses and on the app, on device or on the cloud. Then we work on building a first version of the feature itself. From that point on, it’s more of a regular software development process, with heavy input from the machine learning team and the design team.

We work together, and that usually takes about a few weeks if everything goes fairly smoothly. Then comes an iterative process of just testing with the users over and over and over again. If it’s a feature where there is already some amount of research done in that space, if there are some indications of how we can solve the problem available in open source, or if there have been papers published about it, then it takes us about two or three months to go from just an idea to an implementation.

If it’s something that is really a huge leap ahead, then it usually takes us a good six to eight months to dig in, to explore, to do our research, and to deliver that as a final feature. The timescales really depend on how solved the problem already is in the public domain, how much we need to add and contribute to it, and where we need to do our own research.
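
As an illustration of the constraint checklist described above (offline versus cloud, real-time versus static, traditional CV versus deep learning), here is a small Python sketch that encodes those questions as a data structure and maps them to candidate approaches. The categories, rules, and time estimates are assumptions drawn from the conversation, not Envision’s internal tooling.

```python
# Purely illustrative: encoding the constraint checklist described above as a
# small data structure that narrows down candidate approaches. The categories,
# rules, and time estimates are assumptions, not Envision's internal process.
from dataclasses import dataclass
from typing import List

@dataclass
class FeatureConstraints:
    must_run_on_device: bool    # offline on the glasses vs. cloud
    needs_real_time: bool       # live camera stream vs. static images
    problem_well_studied: bool  # prior open-source work or papers exist

def candidate_approaches(c: FeatureConstraints) -> List[str]:
    options: List[str] = []
    if not c.must_run_on_device and not c.needs_real_time:
        options.append("larger cloud-hosted model (accuracy over latency)")
    if c.must_run_on_device:
        options.append("compact, quantized on-device architecture")
    if c.needs_real_time:
        options.append("classical CV pre-filtering to keep the frame rate up")
    options.append("adapt published work: roughly 2-3 months"
                   if c.problem_well_studied
                   else "longer research track: roughly 6-8 months")
    return options

# Example: a real-time document-detection feature that must work offline
print(candidate_approaches(FeatureConstraints(True, True, True)))
```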

[0:19:35] HC: I imagine one of the challenges in creating features that work, and creating models that work on this data, is just the diversity of things that the glasses need to deal with. How do you ensure that your models perform well across many different users and environments?

[0:19:50] KK: Yeah. This is actually the most challenging aspect of it. For example, we have something called layout detection on the glasses, which helps you read a document once it detects the layout. If you show it a letter, which has the usual left-to-right reading order, it understands it’s a letter and reads it from left to right. But if you show it, for example, a magazine article, where you have columns, it needs to read from one column to another and not just use the blind left-to-right reading method.

These are instances where we don’t know what kind of data we’re going to get. What we do a lot is, again, because we’re so, so clued into the community, anytime we put out any of these new features, it usually takes weeks of testing. When I say testing, it’s like, we have three or four different WhatsApp groups, we have one Telegram group, we have an online forum, and we constantly receive feedback from customers. Whenever we encounter a case where the model doesn’t perform well, we usually ask users to share data with us, or we also constantly benchmark the beta testers’ data against the model and see where it’s performing well and where it’s not.

It’s a lot of iterative testing. It’s just pure grunt work. For example, the layout detection feature that I just told you about took us almost two and a half months of testing, this whole feedback process of finding outlier documents, where it was working and where it wasn’t. But even before that, we did a lot of work talking to users and looking at existing data to see what types of documents and what types of languages people are reading. Are people reading more non-Latin scripts? Then the algorithm needs to be tweaked to work better with non-Latin scripts.

We take a look at the data itself, but we also have a lot of, I would say, conversations with users. It’s an iterative process. Sometimes, even after the feature is released, we encounter users saying, “Hey, it’s not working very well in this particular scenario.” Then we actually go ahead and collect data to make the model work a little bit better with that scenario. If we think that a particular scenario is an outlier, then we suggest workarounds to customers when they’re using the model. It’s a very human-feedback-intensive process for us right now.
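
To make the layout-detection idea concrete, here is a minimal Python sketch of a column-aware reading order over OCR word boxes: a single-column letter is read top to bottom, while a multi-column page is read column by column. The box format and gap thresholds are assumptions; this is not Envision’s layout-detection algorithm.

```python
# A minimal sketch of a column-aware reading order, assuming OCR has already
# produced word boxes as (x, y, width, height, text). This illustrates the
# idea described above; it is not Envision's layout-detection algorithm.
from typing import List, Tuple

Box = Tuple[float, float, float, float, str]  # x, y, width, height, text

def reading_order(boxes: List[Box], column_gap: float = 50.0,
                  row_height: float = 20.0) -> List[str]:
    """Group word boxes into columns by horizontal gaps, then read each
    column row by row, left to right, with columns read left to right."""
    if not boxes:
        return []
    boxes = sorted(boxes, key=lambda b: b[0])  # order by left edge
    columns: List[List[Box]] = [[boxes[0]]]
    for box in boxes[1:]:
        prev = columns[-1][-1]
        # A large horizontal gap between words signals a new column
        if box[0] - (prev[0] + prev[2]) > column_gap:
            columns.append([box])
        else:
            columns[-1].append(box)
    ordered: List[str] = []
    for col in columns:
        # Within a column: bucket words into rows, then read left to right
        col.sort(key=lambda b: (round(b[1] / row_height), b[0]))
        ordered.extend(b[4] for b in col)
    return ordered

# A single-column letter degenerates to plain top-to-bottom reading, while a
# two-column magazine page is read one column at a time.
```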

[0:22:15] HC: Why is now the right time to build this? Are there specific technical advances that made it possible to do this now, such that it wouldn’t have been feasible a few years ago?

[0:22:25] KK: Yeah, definitely. I think 2020 is when we first launched the Envision glasses. Even before that, we had been looking at smart glasses, I think since the time we launched the Envision app in 2018, because for someone who’s visually impaired, holding a phone in one hand and a cane or a guide dog in the other hand while pointing the phone around is not a great user experience.

The one particular trend that started, which really helped us a lot, was the fact that mobile hardware was getting pretty powerful, especially, for example, the hardware that is there on the Google Glass, which is the hardware we use to build the Envision glasses platform. The Snapdragon XR1 chip, which launched in 2018 with all its bells and whistles, really helps us run machine learning models very efficiently on device. That particular trend needed to happen before we could actually go ahead and work on the Envision glasses.

I see that trend hopefully accelerating in the future, with the big ones like Apple and Google also jumping in and building more mainstream MR/AR glasses. I’m hoping that the trend of having better processors on these glasses continues. That is one reason why I think now is the right moment for us to build Envision. The second key thing has been this huge, huge explosion that’s recently happened in the computer vision space, especially in building more models that are privacy conscious.

Since the whole privacy-conscious movement started within the machine learning community, there’s been such great work done to bring out more model architectures that work on device. Every year we see some new benchmark being broken, even in that particular part of the machine learning community, right? The fact that we can have better models run more efficiently and on device is really great for privacy overall, and in our use case, it makes so much sense. That particular trend has been really accelerating.
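
As one example of the on-device trend Karthik mentions, here is a short sketch using TensorFlow Lite to convert and quantize a trained model for efficient local inference. The paths are placeholders, and this is not necessarily the toolchain Envision uses on the glasses.

```python
# A minimal sketch of preparing a trained model for efficient on-device
# inference with TensorFlow Lite, one common option for this kind of
# deployment. The paths are placeholders; this is not necessarily the
# toolchain Envision uses.
import tensorflow as tf

# Convert a SavedModel and apply default post-training quantization
converter = tf.lite.TFLiteConverter.from_saved_model("exported_model/")  # hypothetical path
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# The resulting .tflite file can be bundled into a mobile or smart-glasses
# app and run locally, so camera frames never have to leave the device.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```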

One trend that has started very recently, over the last six to eight months, has been the growth of large language models and, broadly, multimodal models. For example, with something like image input for GPT-4, which we have early access to, we get such incredibly rich captions of an environment, and users can actually ask questions about their environment. People can just take a picture, and not only do they get a really rich description of the picture, they can also ask questions like, “Oh, can you tell me what the floor looks like?” or, “Can you give me a description of the person standing in front of me?”

The whole multimodal aspect, along with large language models, the stuff that’s been happening there has been really positive. At Envision, we’re looking to launch a couple of features that have been in the works for quite some time and that have accelerated because of these multimodal improvements. Hopefully, we’re going to see some of that come out into the hands of people. Yeah.
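
To show what the take-a-picture-then-ask-a-question interaction can look like in code, here is a hedged example using the openly available BLIP visual question answering model from Hugging Face. It demonstrates the pattern only; it is not the model or pipeline Envision has early access to.

```python
# A hedged illustration of the "take a picture, then ask a question about it"
# interaction, using the openly available BLIP VQA model from Hugging Face.
# It demonstrates the pattern only; it is not Envision's model or pipeline.
from PIL import Image
from transformers import BlipProcessor, BlipForQuestionAnswering

processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-base")
model = BlipForQuestionAnswering.from_pretrained("Salesforce/blip-vqa-base")

image = Image.open("scene.jpg").convert("RGB")  # e.g. a photo taken by the user
question = "What does the floor look like?"

inputs = processor(image, question, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=20)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```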

[0:25:22] HC: Thinking more broadly about your goals at Envision, how do you measure the impact of your technology?

[0:25:29] KK: There’s no single clear KPI, I would say. One way we measure impact is just how many people use Envision. The more we’re able to increase that, the more we know there is an impact, because every single user of the Envision app or the Envision glasses is a visually impaired person who’s using our product to enable more independence. I think for me, that’s the high-level, numbers way of looking at it. But within the Envision team, in our internal Slack, there is a channel called Happy Users. The way I look at that channel is that it needs to get so spammy that our employees have to mute it in order to go about their day.

That’s one personal goal of mine, because every time someone sees a story from a user of Envision somewhere on the internet, they take a screenshot of it, or they copy it, and they put it in the Happy Users channel. It’s a constant stream of Envision employees seeing users talk about the product all over the web and putting it onto the Slack channel. That’s how we measure impact. It’s purely through the user stories that we see, and the fact that, as we continue to grow as a company and as a product, every single person who is a number on a dashboard is actually a person who’s using the product to improve their independence. It’s as simple as that.

[0:26:56] HC: Is there any advice you could offer to other leaders of AI-powered startups?

[0:27:01] KK: Yeah. I think there is a lot of chatter right now, with companies feeling a little bit off with the fact that, “Hey, with a tool as powerful as GPT, how relevant is the work that I’m going to be doing in the near future, or how relevant are my products going to be, if they can eventually be replaced by a large language model?” And so on, so forth. For companies that are working in this space, in the AI space right now, the most important thing is to try and understand, or have a very clear idea as to, what kind of impact AI is making on your customers, and to double down on that. In our case, the way I see it is that computer vision is quite central to helping a visually impaired person live more independently. Any and all advances that happen in this space, we look at through that lens and try to utilize in that direction.

I think another piece of advice, especially for a B2C company, would be to assume less when building these AI features, to work more closely with customers, and to involve designers in the process of building these AI features from pretty much the very beginning, because so much of the AI features that we have worked on at Envision have largely been shaped by the designers we work with. They tend to be the ones setting the constraints on how the feature should be implemented, and we work backwards from there to how the model gets built and what architectures we use. Yeah, these are the two broad things that I would say.

[0:28:36] HC: Finally, where do you see the impact of Envision in three to five years?

[0:28:40] KK: I would love for Envision to be in the hands of millions of people, because the app is free, and hopefully the glasses will get a much wider adoption. For me, a million people using it across the world would be incredible. But broadly speaking, right now our products are built for people with a visual impairment, and what we’re also constantly seeing is how the work we’re doing at Envision can have an impact on people with other disabilities, other visual disabilities or cognitive disabilities.

For example, I think Envision could be really good for someone who has dementia. Envision could be really useful for someone with dyslexia. Hopefully, we will have branched out from the visually impaired community to also help people with other visual disabilities, or people anywhere on the visual impairment spectrum. That’s another aim of mine. That’s where I hope to see Envision go in the next three to five years.

[0:29:40] HC: This has been great. Karthik, your team at Envision is doing some really interesting work for the visually impaired. I expect that the insights you’ve shared will be valuable to other AI companies. Where can people find out more about you online?

[0:29:52] KK: People can find out about us mostly through our website. You can go to letsenvision.com. That’s L-E-T-S-E-N-V-I-S-I-O-N.com and you can find more information about us there.

[0:30:03] HC: Perfect. Thanks for joining me today.

[0:30:05] KK: Thank you so much, Heather. Thanks for having me.

[0:30:08] HC: All right, everyone. Thanks for listening. I’m Heather Couture, and I hope you join me again next time for Impact AI.

[END OF INTERVIEW]

[0:30:18] HC: Thank you for listening to Impact AI. If you enjoyed this episode, please subscribe and share with a friend. If you’d like to learn more about computer vision applications for people and planetary health, you can sign up for my newsletter at pixelscientia.com/newsletter.

[END]