Advanced weather forecasts are the new frontier in meteorology. Long-term forecasting has garnered significant attention due to its potential to provide valuable insights to various sectors of society and the economy. In today’s episode, Sam Levang, Chief Scientist at Salient, joins me to discuss Salient’s innovative approach to weather forecasting. Salient specializes in providing highly accurate subseasonal-to-seasonal weather forecasts ranging from 2 to 52 weeks in advance.
In our conversation, we discuss the ins and outs of the company’s innovative approach to weather forecasting. We delve into the hurdles of subseasonal-to-seasonal forecasting, how machine learning is replacing traditional weather modeling approaches, and the inputs these models rely on. Discover the value of machine learning for post-processing, the type of data the company utilizes, and why it takes a probabilistic approach. Gain insights into how Salient accounts for the impacts of climate change in its weather predictions, the company’s approach to validation, how AI has made it all possible, and much more!
Key Points:
- Sam's background in science and the creation of Salient.
- Hear how Salient is revolutionizing weather forecasting and why.
- How Salient is utilizing machine learning in its forecasting models.
- Examples of the data and models the company uses.
- The challenges of working with weather data to build models.
- Explore why Salient also uses probabilistic models in its approach.
- Salient’s approach to validation and how it deals with data uncertainty.
- Ways AI has made the company’s approach to forecasting possible.
- Sam’s advice for leaders of other AI-powered startups.
Quotes:
“Salient produces weather forecasts that extend further into the future than most people are used to seeing. We go up to a year in advance.” — Sam Levang
“ML (Machine Learning) models have proved to be actually a very effective replacement for the traditional approach to weather modeling.” — Sam Levang
“The only difference about making forecasts at longer timescales of weeks and months ahead is that there are some differences in the particular parts of the climate system that provide the most predictability.” — Sam Levang
“While ML and AI are extremely powerful tools, they are still just tools and there's so much else that goes into building a really valuable product, or a service, or a company.” — Sam Levang
Links:
Sam Levang on LinkedIn
Salient
LinkedIn – Connect with Heather.
Computer Vision Insights Newsletter – A biweekly newsletter to help bring the latest machine learning and computer vision research to applications in people and planetary health.
Computer Vision Strategy Session – Not sure how to advance your computer vision project? Get unstuck with a clear set of next steps. Schedule a 1 hour strategy session now to advance your project.
[INTRODUCTION]
[0:00:03] HC: Welcome to Impact AI. Brought to you by Pixel Scientia Labs. I’m your host, Heather Couture. On this podcast, I interview innovators and entrepreneurs about building a mission-driven, machine-learning-powered company. If you like what you hear, please subscribe to my newsletter to be notified about new episodes. Plus, follow the latest research in computer vision for people and planetary health. You can sign up at pixelscientia.com/newsletter.
[EPISODE]
[0:00:33] HC: Today, I’m joined by guest, Sam Levang, Co-founder and Chief Scientist at Salient Predictions, to talk about weather forecasting. Sam, welcome to the show.
[0:00:41] SL: Hi, Heather. Thanks for having me.
[0:00:44] HC: Sam, could you share a bit about your background and how that led you to create Salient?
[0:00:47] SL: Sure. My general background is in science and technology. Academically, I trained as a physicist and originally wanted to build a career in environmental applications. I went on to do graduate work at MIT in climate and ocean modeling. Near the end of my time there, I saw the incredible progress that ML was making across many different domains. It was really only just starting to touch the weather and climate world at that point. But I got interested in the potential applications there and went on to co-found Salient.
[0:01:21] HC: What does Salient do? And why is this kind of weather forecasting so important?
[0:01:23] SL: So, Salient produces weather forecasts that extend further into the future than most people are used to seeing. We go up to a year in advance. Normally, when you look at a weather forecast, you’re used to seeing a prediction of what’s going to happen on a particular day out to maybe 10, perhaps 15 days. Beyond that horizon, which is what we call medium-range forecasting in the weather world, the predictability of weather becomes more limited, because the atmosphere is chaotic. It’s the butterfly effect, where very small differences in current conditions can lead to very different future outcomes.
Now, there are many sectors of our economy that are very sensitive to the weather. With a week’s warning, there’s only so much you can do to adapt, say, farming strategies, or energy production, or manufacturing operations, or even humanitarian relief. More advance warning of damaging weather events, or even a very general forecast of the conditions for the upcoming season, is extremely helpful to all these groups. But beyond about two weeks, what we call the subseasonal-to-seasonal window, pushing forecasts out into this period with any sort of usable accuracy is a really hard problem.
[0:02:40] HC: In solving this, what role does machine learning play?
[0:02:45] SL: It plays a role across pretty much all aspects of our R&D work. First, ML models have proved to be actually a very effective replacement for the traditional approach to weather modeling. How we’ve done weather modeling and forecasting in the past is to take the physical equations that explain how the atmosphere moves, how it’s heated and cooled, divvy the Earth up into a grid we can solve those equations on over and over, and then evolve the system forward to get a forecast.
It’s very computationally intensive. These traditional models are run on big supercomputers and take hours to make a single forecast. It turns out we can basically replicate this system with an ML model that works in a similar way but doesn’t have any of the hard-coded assumptions about the physics; the ML algorithms learn those relationships themselves. They can actually do it more efficiently, and even somewhat more accurately, than the traditional approaches.
That’s the first piece. The other place where ML is really useful, especially in our particular domain of seasonal forecasting, is in post-processing tools. There’s a lot of subtlety in how we get to a final output forecast, given what one or multiple models say is going to happen. ML systems are great for tuning that output to optimize whatever particular properties or quantities we really care about.
[0:04:13] HC: What are some examples of models you train? Not looking for the technical details, but maybe some examples of what the inputs and outputs are?
[0:04:22] SL: Yes. The general approach to building a full end-to-end ML weather forecasting model is to take historical weather data, build an ML architecture that can learn and replicate how the system evolves and where predictability exists in it, and then train the model on those historical samples. That uses techniques similar to, for example, image processing, where you’re dealing with a grid of data and the spatial relationships within it.
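To make that idea concrete, here is a minimal sketch of treating gridded weather fields like images: a small convolutional network that maps the current global state (several variables on a latitude-longitude grid) to the state one step ahead. The variable set, grid resolution, and single-step architecture are illustrative assumptions, not Salient’s actual model.

```python
# Minimal sketch (illustrative assumptions): a convolutional "emulator" that maps the
# current gridded atmospheric state to the state one step ahead, the way an
# image-to-image model maps one image to another.
import torch
import torch.nn as nn

N_VARS = 5            # e.g. temperature, pressure, humidity, u-wind, v-wind (assumed set)
LAT, LON = 181, 360   # roughly a 1-degree global grid (assumed resolution)

class StepEmulator(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(N_VARS, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, N_VARS, kernel_size=3, padding=1),
        )

    def forward(self, state):
        # Predict the increment and add it to the current state, so the network
        # learns how the system evolves rather than the full field from scratch.
        return state + self.net(state)

model = StepEmulator()
state_now = torch.randn(1, N_VARS, LAT, LON)   # stand-in for a historical snapshot

# Rolling the learned step forward repeatedly replaces the physics solver's
# timestep loop, which is where the computational savings come from.
forecast = state_now
for _ in range(4):
    forecast = model(forecast)
```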
That end-to-end model is one example. Another, in the post-processing domain, is how you take the output of multiple models and combine them into one final forecast. In that case, the inputs are the outputs of a variety of different forecasting systems, and you’re trying to consolidate them into a single optimal forecast.
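As a sketch of what that combination step can look like (the least-squares weighting here is a simplified, assumed stand-in, not Salient’s method), one option is to learn blend weights that make the combined forecast match observed outcomes over the historical record:

```python
# Illustrative sketch: learn fixed blend weights for three forecast systems by
# regressing their historical forecasts against the observed outcomes.
import numpy as np

rng = np.random.default_rng(0)
n_samples = 200                       # toy number of historical forecast/observation pairs
truth = rng.normal(size=n_samples)    # stand-in for observed seasonal anomalies

# Three imperfect "models": the truth plus different amounts of noise and bias.
forecasts = np.stack([
    truth + rng.normal(scale=0.5, size=n_samples),
    truth + rng.normal(scale=1.0, size=n_samples) + 0.3,
    truth + rng.normal(scale=1.5, size=n_samples),
], axis=1)                            # shape (n_samples, 3)

# Fit blend weights plus an intercept by least squares on the historical record.
X = np.column_stack([forecasts, np.ones(n_samples)])
weights, *_ = np.linalg.lstsq(X, truth, rcond=None)

def blended_forecast(new_forecasts):
    """Combine one new set of model outputs into a single calibrated forecast."""
    return np.append(new_forecasts, 1.0) @ weights

print("blend weights:", weights[:-1], "bias correction:", weights[-1])
print("blended:", blended_forecast(np.array([1.2, 1.6, 0.9])))
```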
[0:05:18] HC: So, are some of these input and output variables the things a layperson might think about for weather, like temperature, humidity, wind speed, that kind of thing? Or is this formulated in a different way?
[0:05:27] SL: Yes. That’s exactly right. The only difference about making forecasts at longer timescales of weeks and months ahead is that there are some differences in the particular parts of the climate system that provide the most predictability. Out at weeks and months ahead, the atmosphere, because it’s so chaotic, has less and less predictive signal left, and what becomes more relevant is things like the ocean and the land surface, which evolve a bit more slowly than the atmosphere. So, we pull in data sources from across the different elements of the Earth system.
[0:06:06] HC: In developing these models, what kinds of challenges do you encounter in building them and working with weather data?
[0:06:14] SL: One is simply size, because the Earth is three-dimensional and we need fine-grained detail in both space and time in order to model it. This grid of data quickly becomes large. Maybe not to the extent of a general-purpose large language model, where the goal might be to train on every written word of text you can possibly get your hands on, but it is a large dataset nonetheless; we’re talking terabytes, and it quickly gets into the petabyte range.
The flip side is that, while we do have a big volume of data, it’s actually pretty marginal as far as having enough samples to train these kinds of models on, especially for seasonal forecasts. We only have good global estimates of weather going back to about the late 1970s, which is when the first weather satellites were introduced, and some less reliable global estimates going back to, say, the forties or fifties.
So, that means 40 years of good data, maybe 80 if you stretch it to the less reliable data. If you’re trying to, for example, make a forecast of what’s going to happen in the fall from the current summer conditions at a given location on the planet, that means you only have 40 to 80 examples of this in the historical dataset we use for training. That introduces a lot of the challenges typical of a very small dataset when it comes to the machine learning systems we use.
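To put rough numbers on that (the dates and the leave-one-year-out scheme below are illustrative assumptions, not Salient’s pipeline): if reliable global data starts around 1979, a model predicting fall conditions from summer conditions gets only one example per year, so splitting data carefully by year matters a great deal.

```python
# Illustrative sketch: count the seasonal training examples available and split them
# by year so held-out years never leak into training.
FIRST_RELIABLE_YEAR = 1979   # roughly when satellite-era global estimates begin
LAST_COMPLETE_YEAR = 2023    # assumed cutoff for this sketch

years = list(range(FIRST_RELIABLE_YEAR, LAST_COMPLETE_YEAR + 1))
print(f"summer-to-fall examples per location: {len(years)}")   # only ~45

# Leave-one-year-out splits: each year is held out once, everything else trains.
for held_out in years:
    train_years = [y for y in years if y != held_out]
    # ...train on train_years, then evaluate the prediction for held_out...
```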
[0:07:41] HC: One thing I saw mentioned on your website is probabilistic models. Now, the average machine learning model is not probabilistic. So, why do you use probabilistic models? What does it mean for them to be probabilistic?
[0:07:54] SL: Yes. A probabilistic view of the weather for these forecasts beyond a week or two is really essential, simply because the future can’t be predicted with 100% certainty due to the chaos in the system. What we can do is quantify how certain factors in the current climate state move the probability of outcomes in one direction or another, and then translate that into a range of potential scenarios for groups that need to actually make decisions based on weather data. That means we’re producing a plausible set of outcomes that takes into account the current state of the climate, not to give a precise prediction of what’s going to happen, but to capture the full range of scenarios that is possible given the current conditions.
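One common way to produce that kind of range (shown here as an illustrative assumption, not necessarily Salient’s formulation) is to have the model predict several quantiles of the outcome and train it with the pinball (quantile) loss, so its spread reflects genuine uncertainty:

```python
# Illustrative sketch: pinball (quantile) loss, one way to train a model to output a
# range of outcomes, e.g. the 10th, 50th, and 90th percentiles of seasonal temperature,
# instead of a single deterministic number.
import torch

QUANTILES = torch.tensor([0.1, 0.5, 0.9])

def pinball_loss(pred_quantiles, target):
    """pred_quantiles: (batch, n_quantiles); target: (batch,)."""
    diff = target.unsqueeze(1) - pred_quantiles
    return torch.mean(torch.maximum(QUANTILES * diff, (QUANTILES - 1) * diff))

# Toy usage: one sample with three predicted quantiles (degrees C) and the outcome.
pred = torch.tensor([[14.0, 16.0, 18.5]])
obs = torch.tensor([17.2])
print(pinball_loss(pred, obs))
```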
[0:08:45] HC: How do you go about validating your models? I imagine these are models that need to work accurately globally, and there are parts of the world where you probably don’t have the same granularity of weather data as in other parts. So, how do you go about validating them?

[0:09:01] SL: Yes, that’s true. There is varying quality of weather data across the globe. A given user of forecasts has their own sites and datasets that they’re interested in validating against. If they actually have an asset, say a farm or whatever it is, that’s close to a weather station, then they can directly use that as a reference point. If that doesn’t exist, there are pretty effective ways of filling in what’s happened in the gaps between the observed weather data that’s out there, and we make use of many of those systems. Because we make probabilistic predictions, there are very particular metrics that we use to quantitatively validate them. A perfect forecast would be both accurate about what’s going to happen and also confident about it, so there are multiple dimensions to the performance of this type of forecast.
Of course, the best test of the model is to just deploy it in production in real time and see how it does. We do that. We track how our forecasts are performing every week, every month, every year. But for seasonal forecasts especially, that takes a long time, because you only get one chance each year to predict what the season-ahead temperature or rainfall is going to be. So, we supplement that real-time validation with backtesting, where we train and run the models on historical data to see how they perform, being very careful to properly withhold the data we’re testing on from our training processes. That gives a much more detailed view of how the model performs across all the potential conditions it could encounter.
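For scoring these probabilistic forecasts against what actually happened, one standard metric is the continuous ranked probability score (CRPS). The ensemble estimator below is a common textbook form, included as an illustration rather than Salient’s specific validation code; it rewards forecasts that are both accurate and appropriately confident, with lower scores being better:

```python
# Illustrative sketch: the ensemble estimator of the continuous ranked probability
# score (CRPS), a standard metric for probabilistic forecasts. Lower is better.
import numpy as np

def crps_ensemble(ensemble, observation):
    """CRPS ~ mean|x_i - y| - 0.5 * mean|x_i - x_j| for ensemble members x and outcome y."""
    ensemble = np.asarray(ensemble, dtype=float)
    term1 = np.mean(np.abs(ensemble - observation))
    term2 = 0.5 * np.mean(np.abs(ensemble[:, None] - ensemble[None, :]))
    return term1 - term2

# A sharp, accurate ensemble scores better than a wide or biased one.
obs = 17.2
print(crps_ensemble([17.0, 17.3, 17.1, 17.4], obs))   # small CRPS
print(crps_ensemble([12.0, 20.0, 15.0, 25.0], obs))   # larger CRPS
```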
[0:10:44] HC: Well, with the climate changing, your model may not have seen everything that’s going to happen in the future. Are there things that you’re doing to ensure that your models continue to perform well, even as the climate changes?
[0:10:56] SL: This is a big challenge. AI models learn from a training dataset, which is what happened in the past. There’s no easy way around that; we can’t just get data from the future about what’s going to happen. But what we can do is carefully build the models in a way that they’re able to learn how the system has evolved over that historical period, and understand that time is not a fixed notion but another component of the input data that should be taken into account. If a model is able to learn those sorts of relationships, then it can extrapolate, to some extent at least, how the system evolves in the future.
For this kind of weeks-to-months-ahead timescale, the nice thing is we can actually validate every year how this is working out in practice. We do get new data about the forecasts we’re trying to make. Whereas for something like a decadal climate forecast, tens to hundreds of years ahead, there’s really no knowing how the system actually performs until that outcome is realized decades from now. So, we get real-time feedback and continuously improve how trends and climate change are represented within the models themselves.
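One simple way to let a model treat time as an input rather than a fixed notion (an illustrative feature-design assumption, not a description of Salient’s implementation) is to include the forecast year, normalized over the training record, as an explicit predictor so the model can learn and extrapolate a long-term trend:

```python
# Illustrative sketch: append a normalized "year" feature to each training sample so
# the model can learn a long-term trend instead of assuming a stationary climate.
import numpy as np

def add_trend_feature(features, years, base_year=1979, scale=50.0):
    """features: (n_samples, n_features); years: (n_samples,) calendar years."""
    trend = (np.asarray(years) - base_year) / scale   # roughly 0 to 1 over the record
    return np.column_stack([features, trend])

X = np.random.randn(45, 10)                  # 45 yearly samples, 10 predictors (toy data)
years = np.arange(1979, 2024)
X_with_trend = add_trend_feature(X, years)   # now shape (45, 11)
```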
[0:12:16] HC: Are there any specific AI advancements that made it possible to build this technology now when it wouldn’t have been feasible even a few years ago?
[0:12:23] SL: Yes, absolutely. A few years is almost an eternity on AI timescales these days. A few years ago, we only saw the very first attempts to apply modern ML-type systems to weather forecasting. The progress has been pretty incredible, and AI methods are now beating these traditional approaches at lower computational cost. I’d say climate and geospatial data, as I mentioned, has a lot of resemblance to image and video processing applications, so some of the techniques that were pioneered there have been pretty easily adapted. But really, progress across the whole ML and AI ecosystem has certainly boosted its ability to work in new applications like weather.
[0:13:13] HC: I’m sure it’ll continue to advance things over the next few years. And beyond that, I can’t even see that far ahead. Is there any advice you could offer to other leaders of AI-powered startups?
[0:13:23] SL: I would say that, while ML and AI are extremely powerful tools, they are still just tools, and there’s so much else that goes into building a really valuable product, or a service, or a company. So, I think it’s important for ML-focused companies to not lose sight of those other aspects of building out whatever it is they’re building. We’ve certainly seen lots of interesting challenges, and made a lot of progress, on those pieces beyond just the machine learning, even though we are an ML-focused company.
[0:13:57] HC: So, for you, what have some of those most important aspects been, even if they’re not machine-learning related?
[0:14:03] SL: We’ve seen that you can make a weather model as good as you possibly can, but there are still a lot of challenges in knowing how to make decisions based on the data you get out, the forecasts you have, especially when they contain uncertainty. So, there’s a huge amount of effort we also put into making the data we produce actionable and usable, and coming up with ways to increase the value that users of the data can actually get out of it, which is also improved by making the models themselves better. But you really have to do both at the same time, I think.
[0:14:43] HC: Yes. That’s a good point. Really focusing on the end users and what their needs are, because that’s the problem that you’re trying to solve. If they don’t have something that’s going to make their efforts better, then your technology is not going to get used.
[0:14:55] SL: Exactly.
[0:14:56] HC: Finally, where do you see the impact of Salient in three to five years?
[0:14:59] SL: Our mission is to continue expanding the possibilities of who can actually use these longer-range weather forecasts. Just about everyone would like to know what the weather is going to do weeks or months from now, but like I said, it’s hard to actually make decisions around those forecasts when they contain uncertainty. So, our goal is really twofold. One is to continue building out the modeling to make forecasts as accurate as they possibly can be, to build the best models we can. But at the same time, there’s this usability piece that’s also extremely important, because probabilistic thinking is just hard, and you have to get used to the fact that even if you do it perfectly, you’re going to make what seems like the wrong decision sometimes. So, there’s a lot of effort we’re putting into thinking about how to build out those decision frameworks and, like I said, make this data as usable and approachable as possible.
[0:15:56] HC: This has been great. Sam, I appreciate your insights today. I think this will be valuable to many listeners. Where can people find out more about you online?
[0:16:04] SL: You can check out our website at salientpredictions.com, or reach out to me or anyone else from the company on LinkedIn or elsewhere.
[0:16:12] HC: Perfect. Thanks for joining me today.
[0:16:13] SL: Thanks for having me, Heather.
[0:16:14] HC: All right, everyone. Thanks for listening. I’m Heather Couture, and I hope you join me again next time for Impact AI.
[OUTRO]
[0:16:25] HC: Thank you for listening to Impact AI. If you enjoyed this episode, please subscribe and share with a friend, and if you’d like to learn more about computer vision applications for people and planetary health, you can sign up for my newsletter at pixelscientia.com/newsletter.
[END]