The recent destruction of the Pacific Palisades in Los Angeles was a brutal reminder of why we need robust early wildfire detection systems. Joining me today is Shahab Bahrami, the co-founder and CTO at SenseNet – a company that provides advanced AI-powered cameras and sensors to protect communities and valuable assets against wildfires.

Shahab is passionate about using interdisciplinary research to bridge the gap between machine learning and optimization, and he begins today’s conversation by detailing his professional background and how it led him to co-found SenseNet. Then, we unpack SenseNet and how its technology works, how it gathers data for its AI models, the challenges of relying on images and other sensor data to train machine learning models, and how SenseNet uses multiple sources to detect or define any one problem. To end, we learn why and how SenseNet uses multiple AI models within a single system, how it measures the overall impact of its tech, where the company plans to be in the next five years, and Shahab’s valuable advice for other leaders of AI-powered startups.


Key Points:
  • Shahab Bahrami walks us through his professional background and how it led to SenseNet.
  • The ins and outs of SenseNet and how its technology works.
  • How machine learning fits into SenseNet’s offerings, and how it gathers the necessary data.
  • The challenges of working with images and other sensor data to train models.
  • How SenseNet integrates information from different sources to zero in on a single anomaly.
  • Understanding how it uses multiple AI models to adapt to variations post-installation.
  • How the system chooses which AI model to apply and when.
  • Shahab describes how his company measures the overall impact of its technology.
  • His advice to other leaders of AI-powered startups, and his five-year vision for SenseNet.

Quotes:

“We have one of the most comprehensive wildfire detection solutions in the world, and it is proven by multiple, real-world projects.” — Shahab Bahrami

“Having separate AI models is the solution that we are now implementing.” — Shahab Bahrami

“For the sensor’s AI – because it is a semi-supervised AI, it automatically adapts itself to local conditions. It learns gradually what is normal and what is abnormal, and it is a continuous learning. It won’t stop.” — Shahab Bahrami

“AI changes fast. Every day we have a new AI engine, we have a new model, and leaders, I believe, need to stay updated and make sure their teams have the support and also the resources to keep innovating.” — Shahab Bahrami


Links:

Shahab Bahrami 
Shahab Bahrami on LinkedIn
SenseNet


Resources for Computer Vision Teams:

LinkedIn – Connect with Heather.
Computer Vision Insights Newsletter – A biweekly newsletter to help bring the latest machine learning and computer vision research to applications in people and planetary health.
Computer Vision Strategy Session – Not sure how to advance your computer vision project? Get unstuck with a clear set of next steps. Schedule a 1 hour strategy session now to advance your project.


Transcript:

[INTRODUCTION]

[0:00:03] HC: Welcome to Impact AI, brought to you by Pixel Scientia Labs. I’m your host, Heather Couture. On this podcast, I interview innovators and entrepreneurs about building a mission-driven machine learning-powered company. If you like what you hear, please subscribe to my newsletter to be notified about new episodes. Plus, follow the latest research in computer vision for people and planetary health. You can sign up at pixelscientia.com/newsletter.

[INTERVIEW]

[0:00:32] HC: Today, I’m joined by guest, Shahab Bahrami, CTO and co-founder of SenseNet, to talk about early wildfire detection. Shahab, welcome to the show.

[0:00:42] SB: Hello, Heather. Thank you very much for inviting me.

[0:00:45] HC: Shahab, could you share a bit about your background and how that led you to create SenseNet?

[0:00:49] SB: Sure. Thanks. Yeah, I’m from engineering work. I have PhD in electrical and computer engineering from the University of British Columbia in the city of Vancouver, Canada. After I graduated in 2017, I didn’t enter the industrial sectors. I continued working on my research. In that time, I was working on AI and optimization techniques for smart cities. But you know in 2017, 2018, I remember how wildfires were getting worse and affecting the daily lives of everyone in Vancouver and other Canadian cities.

At that time, I started working with my colleague and friend on it. We decided to do something about this problem, the wildfires. We launched SenseNet in 2019 and started working specifically on the engineering side. We were both from the engineering world, and we started working on a device: a sensor that can detect wildfires at an early stage, earlier than human beings.

That was a very difficult task, but we did our best. We started working on the project. We made several mistakes. At the end of the day, we developed a device, a sensor, that can now detect a wildfire at an ultra-early stage, when there is no sign of smoke yet but there are gases from the fire in the forest, gases from the fire in the grass. Today, we have one of the most comprehensive wildfire detection solutions in the world. It is proven by multiple real-world projects.

[0:02:45] HC: This is a physical device that’s installed in the forest?

[0:02:49] SB: Yes. It’s like electric device with antenna and also a filter that has customized sensors engineered for detecting the smoke coming from wildfires. We installed these devices inside the forest, like on top of the trees.

[0:03:09] HC: How many of these devices do you need to install in order to be able to detect fires early enough?

[0:03:14] SB: It depends on the situation and the conditions, but usually we install these devices around the communities that are prone to the wildfires. It would be a kind of protecting built around the community and those critical infrastructures like rainy days and power transmission lines, but you know this is not just about sensors. Sensors are part of the sense net solution. We have also smoked deflection cameras that can detect wildfires in wider areas. They are also integrated with sensors, so these 2D voices at the same time are working together, smoke detection cameras, gas sensors to tackle the wildfire problem, especially those that are dangerous for communities.

[0:04:08] HC: How do you use machine learning with these sensors and cameras in order to automatically pick up wildfires might have started?

[0:04:16] SB: Yeah. Machine learning and in general AI, artificial intelligence is to be honest, is actually the brain of houses. It is something that the sensors, which are just electrical devices and the cameras, just take photos or take videos and distinguish between a fire or a smoke from other normal activities in the area.

This system we have is a kind of brain, and it can detect anomalies in the sensor data, the streams of measurements coming from inside the forest: CO2 levels, CO levels, nitrogen dioxide, hydrogen, methane. All of this data comes to our servers and is processed by our machine learning algorithms, AI algorithms, to detect any anomalies. For the cameras, we have an advanced image processing system, again based on AI, which can detect small signs of fire or smoke in the snapshots from the cameras.

[0:05:31] HC: To train machine learning models to do this, do you need images and other sensing data? How do you gather enough data in order to train your models and do you need to annotate it in order to predict whether a fire might be start learning?

[0:05:44] SB: Very good question. Yes, for sensors, we have an AI that is semi-supervised, you know. When the sensors installed in a location inside the forest, the environment can change day and night in different weeks or months, so we cannot use one specific algorithm. The sense net is a pattern that AI engine that can learn from the environment, gradually learn what is normal and then if something abnormal happens, like a smoke from a wildfire, it detects and it understands from the patterns of the changes in the data from the sensors.

Usually, we have a normal environment, a normal situation, so learning the normal situation can be done by our AI. Then, if something happens that is not normal, it will be detected and sent to our next stage of post-processing to make sure that the source, the root cause, is actually a wildfire. This is how the AI system in the sensors works. It is semi-supervised, but for the cameras it is different.
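The semi-supervised approach Shahab describes, learning a local baseline of normal readings and flagging deviations while continuing to learn, can be sketched with a simple rolling z-score detector. This is an illustrative toy, not SenseNet's algorithm; the window size, warm-up length, and threshold are all assumed values.

```python
from collections import deque
import math

class RollingAnomalyDetector:
    """Toy semi-supervised detector: learn a rolling baseline of normal
    sensor readings and flag values that deviate strongly from it.
    Window size, warm-up length, and threshold are illustrative."""

    def __init__(self, window=288, warmup=30, z_threshold=4.0):
        self.readings = deque(maxlen=window)  # e.g. 24h of 5-minute samples
        self.warmup = warmup
        self.z_threshold = z_threshold

    def update(self, value):
        """Ingest one reading; return True if it looks anomalous."""
        anomalous = False
        if len(self.readings) >= self.warmup:
            mean = sum(self.readings) / len(self.readings)
            var = sum((x - mean) ** 2 for x in self.readings) / len(self.readings)
            std = math.sqrt(var) or 1e-9      # guard against zero variance
            anomalous = abs(value - mean) / std > self.z_threshold
        self.readings.append(value)           # learning never stops
        return anomalous

detector = RollingAnomalyDetector()
for co2 in [410, 412, 409, 411] * 10:         # background CO2 in ppm
    detector.update(co2)
print(detector.update(900))                   # sudden spike -> True
```

Because each new reading re-enters the window, the baseline drifts with the environment, which mirrors the "it won't stop learning" behavior described in the interview.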

For the cameras, we have a basic AI engine that works well in general for all cameras, but whenever we install a camera in a location, we fine-tune that basic, fundamental image processing algorithm for that specific camera, to make it accurate for the specific viewport of that camera in that location. This is how we work: we fine-tune our fundamental algorithm for each camera.

[0:07:37] HC: In that case, when you’re fine tuning the models learning what’s normal for that location in order to detect what the anomalies are and send those off for further checking.

[0:07:47] SB: Yes, exactly. All of this are done per camera over the first few days then the camera is in a solving application.

[0:07:57] HC: What kinds of challenges do you encounter in working with images and other sensor data and training models based off of it?

[0:08:03] SB: Yeah. Usually, AI engineers will have a daily basis with several challenges. For example, gathering the data, collecting all the data that are useful for fine tuning the image the cameras AI and also make the sensor AI accurate and understand the anomaly is challenging. For sensors specifically, the challenge is having a false alarm. So, for sensors, we should avoid any false alarms, because there are many other activities surrounding the sensors that can resembles a fault fire and generate a smoke.

It can be smoke from a campfire. It can be smoke from a vehicle. It can be smoke from a factory near that sensor. So, making sure whether the source of the smoke is a wildfire or some other normal activity is a challenge. This is why we have, as another important part of our AI engine at SenseNet, root-cause analysis: finding the cause of the anomaly based not only on the patterns of change in the sensor data, but also on which sensor it is. Is it a CO sensor, a CO2 sensor, or particulate matter? What kind of particle is going up or going down? How fast or how slow is it changing? All of these factors are processed by our AI algorithms to find whether it is smoke from wood, or a chemical, or from a car engine, or from a factory.

This is how the system works and distinguishes between a real wildfire and something that is normal. For the cameras, we have another issue: missing an alarm. It is usually hard for cameras, especially for image processing, to detect small or tiny signs of an object, tiny objects like smoke. This is a big challenge for our AI team: to make sure that we detect every wildfire, every fire in the surrounding area, at long distance, and at the earliest stage, while it is still small enough to be controlled by the firefighters. The AI engine at SenseNet is now very accurate, and we make sure we have the capability of detecting fires when they are very small and controllable.

[0:10:38] HC: How do you integrate information from different sources? Does your model take into account all those signals or do they operate independently and let you know when there’s anomaly just based on one signal at a time?

[0:10:49] SB: Very good question. We have multiple source of sensors. As I mentioned, different gases. Also, for the cameras, we have two kinds of streams, thermal imaging and also optical imaging. For sensor, we usually consider a multivariate anomaly detection. This is kind of a very unique feature that we have. Not just from one sensor, one device, but from multiple devices. It’s a multidimensional problem. That is the core and the innovation of the system in SenseNet.

We have advanced graph neural network technology to address this problem. For the cameras, we have post-processing to make sure that, for example, whenever we detect an object or a sign of smoke, we can integrate it with other sources of detection, like thermal images from that camera, or other cameras, or images from another camera installed in a different location and looking at that fire from another point of view. This is another layer of integration, making false positives and missed alarms as few as possible.
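As a rough illustration of why multivariate detection matters, the sketch below scores a joint gas reading against the correlations of normal data using the Mahalanobis distance. This is a deliberately simple stand-in, not the graph neural network approach Shahab mentions, and all channel names and numbers here are synthetic.

```python
import numpy as np

# Synthetic "normal" history for five gas channels (say CO, CO2, NO2,
# H2, CH4), with the first two strongly correlated. Values are made up.
cov_true = np.full((5, 5), 0.1)
np.fill_diagonal(cov_true, 1.0)
cov_true[0, 1] = cov_true[1, 0] = 0.8

rng = np.random.default_rng(0)
normal = rng.multivariate_normal(mean=np.zeros(5), cov=cov_true, size=1000)

mu = normal.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(normal, rowvar=False))

def mahalanobis(x):
    """Joint anomaly score of one 5-channel reading vs. the normal data."""
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

# Each channel is individually within ~2 sigma, but channel 0 rising
# while channel 1 falls breaks the learned correlation, so the joint
# score is far larger than for the co-moving pattern, even though
# per-channel thresholds would treat the two patterns identically.
anti = np.array([2.0, -2.0, 0.0, 0.0, 0.0])
co_moving = np.array([2.0, 2.0, 0.0, 0.0, 0.0])
print(mahalanobis(anti) > mahalanobis(co_moving))  # True
```

The same intuition extends to readings gathered across multiple devices: it is the joint pattern, not any single channel, that flags the anomaly.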

[0:12:17] HC: You mentioned that when you install a new camera, there’s a fine-tuning phase, so the model can learn what’s normal for that location. I imagine that would accommodate things like different geographies, different lighting conditions. What about other variables? Seasonal weather changes? That might not be accommodated when you first install the camera, so how do you ensure that your models are robust to that type of change?

[0:12:42] SB: Yes. For cameras, we have usually, we have multiple models for different conditions. For example, for after the installations, we fine tune the camera, the basic algorithm, the dimension for daylight and also for night. We have two separate AI models for day and night. Also, it depending on the situation, usually, especially during the winters, because it’s usually cloudy and also there is fog around. We have another, we train or finding another model that specifically can distinguish between smoke and fog or cloud. This approach is now implemented in our system and it’s working fine. Very good. We are having separate AI models is the solution that we are now implement.

[0:13:38] HC: In order to figure out which model to apply at a different time lighting could be fairly easy detected based on time a day, but I whether I guess you might need to detect whether this is a foggy weather model, or if it’s a clear weather day and apply that model?

[0:13:52] SB: Yeah. That’s a good question. We have two options. It depends on the situation. If we have sensors in that location, we consider the data from humidity sensor that we have a temperature sensor that we have. If you don’t have a sensor in the location, the cameras are integrated with the third-party APIs of the weather stations to provide information about the cloudiness of the area and also the temperature and humidity in the area and whether it is raining or not.

Sometimes we also consider human interaction. For example, if a camera is in a situation that is very special and not covered by any third-party APIs or any sensors, we assume that that camera should be given new fine-tuning or treated as having another kind of specific condition. For example, during smoky days, when there is a megafire happening nearby and we have air pollution, nothing may be visible from the camera. In that case, we set the camera to a mode that is fine-tuned for that specific condition.
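The model-selection logic described here, on-site sensors first, then weather APIs, with a human override on top, could be dispatched along these lines. The condition names, thresholds, and priority order are illustrative assumptions, not SenseNet's actual rules.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SiteConditions:
    is_daytime: bool
    humidity_pct: float              # from on-site sensor or weather API
    visibility_km: float             # from weather API
    operator_override: Optional[str] = None  # human-set special mode

def select_model(c: SiteConditions) -> str:
    """Pick which fine-tuned camera model to run for current conditions."""
    if c.operator_override:          # e.g. "smoky_megafire" during a megafire
        return c.operator_override
    if not c.is_daytime:
        return "night"
    if c.visibility_km < 2.0 or c.humidity_pct > 95.0:
        return "day_fog"             # model tuned to separate smoke from fog
    return "day_clear"

print(select_model(SiteConditions(True, 98.0, 1.5)))  # -> day_fog
```

Keeping the override check first mirrors the interview: a human can force a special mode when neither the sensors nor the weather APIs describe the site's condition.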

I can add more information about the sensors and how they get ready for different conditions. For the sensors, because it is a semi-supervised AI, it automatically adapts itself to local conditions. It learns gradually what is normal and what is abnormal, and it is a continuous learning. It won’t stop.

If something changes in the environment, for example, the weather changes, or we have a megafire nearby so that the air becomes polluted, the sensor knows and sets a new benchmark for itself. For example, if smoke is around, if we have another fire close by, it can still detect it. It is sensitive to the changes. The AI detects deviations from what is normal at this time of the year, at this time of the day. Then it can detect a new fire, even when we have another megafire and there is smoke in the air.

[0:16:22] HC: It sounds like the network aspect of this is very important, bringing in information from nearby sensors, bringing in weather information if it’s not available on the sensors. All of that in order to decide which model to use and perhaps what to do about any anomalies that are picked up.

[0:16:39] SB: Yeah. Absolutely, correct. Yes, that’s right.

[0:16:42] HC: Thinking more broadly about what you’re trying to achieve at SenseNet, how do you measure the impact of your technology?

[0:16:47] SB: Yeah. This is a very good question. This is kind of a business side of the SenseNet. We usually have discussion with fire departments about the measurements of the impact. This is something that everyone wants to know how much SenseNet is helping them. We have three layers for this one. One is a detection number and after, for example, a few months of passing a fire signal, these usually provide reports about the detection number of fires that are detected by the SenseNet system in an area.

Another important KPI is the detection time. Time is very critical for fire departments and first responders. We provide information on how quickly they spotted the fires and compare it with historical reports about detection times for wildfires. For example, for the installation we have in the city of Vernon, in the 2024 fire season, the average detection time of the SenseNet system was less than three minutes after the fire started, while it was more than 45 minutes with the previous fire system. That means better response time and having time to attack a wildfire before it becomes unmanageable for the fire departments.

The last measure that we usually use is cost savings. Although the system has its own cost for the fire department and the communities, like any other system, the cost saving is large. For communities, we have human lives, we have resources; a wildfire reaching the community can burn down the houses. Also, for utility companies, when a wildfire happens, it can burn out the power transmission lines.

More importantly, another cost that we use to measure the savings is the value of lost load, which comes from the consumer side. Whenever power transmission lines burn out, it can cause a blackout. The consumers won’t have electricity, and this affects their lives. We calculate the approximate cost of this blackout. If we can detect a fire before this happens, it means we’re saving a lot of money for the customers and clients.
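The value-of-lost-load calculation Shahab outlines is, at its simplest, a price per unit of unserved energy multiplied by the energy lost during the blackout. A back-of-the-envelope sketch, with all figures as made-up placeholders rather than SenseNet's numbers:

```python
# All figures below are placeholders. VoLL is commonly quoted in
# dollars per MWh of unserved energy.
voll_per_mwh = 10_000    # assumed value of lost load ($/MWh)
unserved_mw = 50         # load knocked out by the blackout (MW)
outage_hours = 8         # assumed outage duration

# Avoided cost = $/MWh x MW x hours = dollars of unserved energy avoided
avoided_cost = voll_per_mwh * unserved_mw * outage_hours
print(f"Estimated avoided blackout cost: ${avoided_cost:,}")  # $4,000,000
```

Even with conservative placeholder numbers, a single avoided blackout can dwarf the cost of the detection system itself, which is the comparison Shahab describes making for utility clients.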

[0:19:36] HC: Those are a lot of different ways that you guys can impact, not just fire, not just get to the fire early, but prevent a whole lot of other losses on top of that, as you mentioned.

[0:19:47] SB: Yeah. Exactly.

[0:19:48] HC: Is there any advice you could offer to other leaders of AI powered startups?

[0:19:52] SB: Yes, of course, yeah. As an AI engineer and also a leader for a team of AI engineers here is SenseNet, I can say AI changes fast. Every day, if you have a new AI engine, if you have a new model, and leaders, I believe need to, say, update it and make sure their teams, the support and also the resources to keep innovating. My suggestion is to be ready to pivot or improve the methods. We did it a lot in a SenseNet. For example, at this time of year in 2025, a model can be the best in the market or a team can be the best in the market, but two months later, it can be not the best.

We have to be ready to pivot and improve. Also, it’s important to keep an eye on data quality. It’s very, very important when we are developing AI engines, especially for problems like wildfire detection that are life-saving. So, we have to be very careful about the quality of the data we’re using, at the same time as having the best model when we are developing them, like what we have at SenseNet.

[0:21:16] HC: Finally, where do you see the impact of SenseNet in three to five years?

[0:21:20] SB: Okay, very good question. We envision SenseNet to be in every community in Canada, firstly, and in other countries like United States, Brazil, Chile, Peru, South Africa, Australia, Europe. Now, we are partnering with the others to install SenseNet system on every base stations across DC and Alberta. Maybe next year, we installed SenseNet system in other provinces on countries. This is how SenseNet wants to be as we’re life-saving and also as a system that makes communities safer than they are tackling with the wildfires and kind of being a part of this journey of making every community safe.

[0:22:12] HC: This has been great, Shahab. I appreciate your insights today. I think this will be valuable to many listeners. Where can people find out more about you online?

[0:22:20] SB: Thank you. Thank you very much for having me. I hope that we see the wildfires controllable and we don’t see kind of any disaster during this fire season and next future fire season.

[0:22:33] HC: Thanks for joining me today.

[0:22:35] SB: Thank you very much.

[0:22:37] HC: All right, everyone. Thanks for listening. I’m Heather Couture, and I hope you join me again next time for Impact AI.

[END OF INTERVIEW]

[0:22:47] HC: Thank you for listening to Impact AI. If you enjoyed this episode, please subscribe and share with a friend. If you’d like to learn more about computer vision applications for people and planetary health, you can sign up for my newsletter at pixelscientia.com/newsletter.

[END]