Climate change poses significant risks and uncertainties, with far-reaching impacts on business operations, supply chains, and financial performance. By conducting a climate risk analysis, businesses can identify these risks, develop informed strategies, and manage and mitigate them effectively.

Joining me today is Josh Hacker, an atmospheric scientist whose career has spanned diverse research and science management roles. Josh is also the Co-Founder and Chief Science Officer at Jupiter Intelligence, the go-to expert for organizations seeking to strengthen their climate resilience through climate risk analytics. Josh has made his mark in both academic and laboratory settings and is helping to meet private sector demands for comprehensive and accurate information on the costs of climate change for individual companies and market sectors.

In our conversation, we discuss why climate change is relevant for companies, what they can do about it, and how Jupiter Intelligence is leading the way. We unpack the various types of climate risks, the role of machine learning, and the validation process. Learn about the various uncertainties and errors in modeling, how to correct them, the role reanalysis plays, why a multidisciplinary team of experts is essential, and more.

Key Points:
  • Background about Josh and why he decided to start Jupiter Intelligence.
  • The work Jupiter Intelligence does and how it relates to climate change adaptation.
  • Why climate risks are being taken seriously by the private sector.
  • Discover the role of machine learning tools in assessing climate risks.
  • An outline of the various challenges faced when working with climate models.
  • Learn about his approach to model validation and why it is crucial. 
  • Keeping a balance between model accuracy and explainability.
  • Find out how model uncertainty and model errors are quantified.
  • How bias manifests in climate models, and how to identify it.
  • Josh shares advice for other leaders of AI-powered startups.
  • What to expect from Jupiter Intelligence in the future.

Quotes:
“The vast majority of capital that we're using, and we will be using to adapt to climate change, is locked up in the private sector. But on the other hand, the government has a role in policy. These two things have an interplay that then feeds into the broader community.” — Josh Hacker

“The reality is there's no one way. [Validation] is a complicated process that you have to build on to make sure that things are working right all along the way.” — Josh Hacker

“The game in climate modeling is to actually pull the signal out from that noise. We want to pull out the slow stuff, how the climate changes, how the climate is changing relative to all the weather patterns that are going on underneath it.” — Josh Hacker

“Because of that historical period and the existence of these reanalyses, we have something we can do to correct the historical statistics of the climate models.” — Josh Hacker

Links:

Transcript:

[INTRODUCTION]

[0:00:02] HC: Welcome to Impact AI, brought to you by Pixel Scientia Labs. I’m your host, Heather Couture. On this podcast, I interview innovators and entrepreneurs about building a mission-driven, machine-learning-powered company. If you like what you hear, please subscribe to my newsletter to be notified about new episodes. Plus, follow the latest research in computer vision for people and planetary health. You can sign up at pixelscientia.com/newsletter.

[INTERVIEW]

[0:00:33] HC: Today, I’m joined by guest, Josh Hacker, Co-Founder and Chief Science Officer at Jupiter Intelligence, to talk about climate risk. Josh, welcome to the show.

[0:00:42] JH: Thanks, Heather.

[0:00:43] HC: Josh, could you share a bit about your background and how that led you to create Jupiter Intelligence?

[0:00:47] JH: Sure. My formal education background is really in Atmospheric Science, or sometimes we call it Atmospheric Physics, especially the area that I’ve focused on in my past research life in the public sector. I spent most of my career at the National Center for Atmospheric Research, and spent some time on faculty at the Naval Postgraduate School in Monterey, California.

What led us to start Jupiter Intelligence, besides the fact that we believed there was a new market poised to open, is really three fundamental pillars of how we’re developing our products. One is the maturity of cloud computing and, of course, the machine learning pipelines that run on top of it, not just cloud computing by itself.

The second is the maturity of geophysical models that are really the domain of the public sector. Think of large labs with contributions from the university communities. These are things like weather models and ocean models that are derived from first principles of physics and solve large sets of differential equations, and they are the backbone of our environmental prediction. They’re also mature enough that improvements in them are incremental. They’re very, very useful for things like training machine learning models.

The third thing is the maturity of the machine learning field, the AI field really, and, in some sense, the beginnings of its large-scale application to these kinds of science problems. AI has been applied to weather prediction for decades, but in a very, very limited way, and times are evolving really quickly. Those three pillars, combined again with the market, made us think it was a good time to get things going.

[0:02:21] HC: Great. What does Jupiter Intelligence do? Why is this important for climate change adaptation?

[0:02:27] JH: Sure. Our focus, and really our core strength area, is on physical risk, which is one part of the risks associated with climate change. There’s a range of things within that. We typically think of chronic perils and acute perils, where acute perils are extreme events and things like that. An easy one for people to think about is tropical cyclones, which can cause a lot of damage at once. There are also a lot of other things, like windstorms driven from any number of different sources, for example.

Then there are chronic perils that drive a different risk, and these are things like increasing heat waves, and nights that don’t cool off as much, so we’re having to spend more money on air conditioning. Those kinds of things. All of that put together is really our strength. That then feeds into other kinds of analytics that we typically think of in the economic space, things like financial risk, where you might have a probability of default on mortgages, for example, or just a loss from a large acute event, which is the kind of thing insurance companies have worried about for a long time.

The difference here is that we’re thinking about what that looks like 10, 20, 30, or 40 years into the future. There are a few areas where this can be important. One is what we see a lot in the news right now, which is disclosure, in terms of the corporate world that needs to disclose its own risks from climate change. There are two big buckets. One is called transition risk, which is the transition to a green economy and how that can impact the business.

We’re not really talking about that today; that’s not our strength at Jupiter. We partner with companies that do that. For us, we focus on the physical risk on the other side, and both of those are part of the disclosure that is becoming more and more required. It’s been part of guidelines for several years, but the new requirements that are coming online all over the developed world are focusing on those two things. That’s one area, disclosure.

The second is the leading uses of the kinds of data that we provide. The leading users, I should say, businesses that are ahead of the curve are really looking for ways to use our information to derive business advantages. They’re bringing our information into different business units, and they’re committed to figuring out how to use it. It is new data for them. They’re learning, but they’re making progress, and they’ll be the leaders. We believe they’ll be the ones that are winning in the economic market over the long term.

Then the third is the broader impacts. This is something we also hear a bit about in the news, climate change impacts on underserved communities, and other ways that companies like us can support government efforts, for broader policy-based adaptation decisions. I think of them as broader impacts, because the vast majority of capital that we’re using, and will be using, to adapt to climate change is really locked up in the private sector. But on the other hand, the government has a role in policy. These two things have an interplay that then feeds into the broader community or broader population benefits.

Then related to that, and we see it in our customer base, is that companies do care about their workforces. That’s part of the way they think about their risks going into the future. Am I going to have a workforce? Are they going to be able to get to job sites or into the office? Are they going to be stressed out more because they’re working outside? Those kinds of things. Those three areas, I think, are where we’re seeing impacts from what we do.

[0:05:48] HC: What role does machine learning play in your technology to assess these types of risks?

[0:05:53] JH: There are a number of areas, but I want to focus on just a couple, maybe. One I’ll just call, loosely, error modeling. Everything we do starts from global climate models, which are the domain of large labs. They’re integrated closely with the IPCC process. Those are imperfect; like any geophysical model, they still have errors. If we want to make things useful, we need to build a model that corrects for the error. Often people think of just bias, but there are more components to the error than that.

Then the second area is what we call downscaling, which is a common term in my field but may be a less common term in some of the other fields of listeners here. Really, what that is, in our context, is taking global climate models as inputs, which are very large scale in terms of what they can see or what they can simulate around the globe. They’re typically on a one-degree type of grid. If you think about a grid over the entire planet, that means there are things smaller than that that they can’t really see. They can’t see tropical cyclones, for example.

Downscaling is about taking that large-scale information, which is also, in our world, slow. Large and slow go together; small and fast go together. It’s taking the large and slow information and building in information that’s small and fast. The reason that’s important is because that’s where the impacts are. Things like, again, tropical cyclones, thunderstorms, storm surges that affect the coastline, intense rainfall of other kinds, windstorms of all kinds. All these things are fast and small relative to one degree, which is 100 kilometers by 100 kilometers. Downscaling is a big part of what we do, combined with error modeling. Those are the two areas where we get the most use out of machine learning.
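
To make the downscaling idea concrete, here is a minimal sketch of the general approach: learn a statistical mapping from coarse (roughly one-degree) climate-model predictors to a fine-scale local variable, using a historical period where both exist. The synthetic data, predictor count, and the RandomForestRegressor choice are illustrative assumptions for this sketch, not Jupiter's actual pipeline.

    # Minimal statistical-downscaling sketch (illustrative only, not Jupiter's pipeline).
    # Idea: learn a mapping from coarse climate-model predictors to a fine-scale local
    # variable (e.g., daily precipitation at a site), trained on a historical period
    # where both coarse fields and local observations exist.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Synthetic stand-ins: 5,000 days of coarse-grid predictors (pressure, humidity,
    # winds, etc.) and a fine-scale target derived from them plus local "weather" noise.
    n_days, n_predictors = 5000, 8
    X_coarse = rng.normal(size=(n_days, n_predictors))
    y_fine = 2.0 * X_coarse[:, 0] - 1.5 * X_coarse[:, 3] + rng.normal(scale=0.5, size=n_days)

    X_train, X_test, y_train, y_test = train_test_split(X_coarse, y_fine, random_state=0)

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    # Out-of-sample skill on held-out historical days (one piece of validation).
    print("R^2 on held-out days:", round(model.score(X_test, y_test), 3))

    # Applied to future coarse-model output, the same mapping produces the
    # fine-scale ("small and fast") information the coarse model cannot resolve.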

[0:07:42] HC: The downscaling is really to take the larger scale, low granularity predictions and try and get something more granular in time and space. Is that right?

[0:07:51] JH: That’s exactly right.

[0:07:52] HC: What challenges do you encounter in working with data from climate models like these?

[0:07:57] JH: Yeah. There are a number; let me mention a couple here. I want to talk about data mostly in terms of bits and bytes, as opposed to data in the classic sense, which, at least in my field, is observation-based data. Think of both the climate models and also what we derive from those by bringing in other information, sometimes observational data, sometimes other model-based data, right? It’s bringing in a bunch of different sources of data and combining them in a way that gives us our error models and our downscaling models, and makes them as accurate as we can make them.

There are a couple. I think one of the main challenges here is at the end of the process, right? How do we know we’ve done something right? I know we’ll talk a little bit more about validation and verification, but just to mention it here, this is one of the big challenges, because we’re looking into the future over time scales where we can’t say for sure that something is correct. With a weather forecast for tomorrow, I know by tomorrow if my forecast was correct. In our case, I have to wait 10 years or 20 years to know if it’s correct.

What we have to do is look at all our different modeling components and the chain of models that gets us to our final output, and build in steps to verify each step along the way. It ends up looking like a big body of circumstantial evidence, in a sense. That’s one key challenge. Then the other, maybe it’s not a key challenge but an opportunity, is having subject matter experts in the loop. That’s a really critical part of this, because sometimes it just doesn’t even pass the smell test, right? You need to know that it’s got to pass the smell test before you even get into more quantitative approaches. Somebody who knows something about atmospheric physics, climate change, and climate modeling, those kinds of things, is critical for us.

[0:09:40] HC: Validation, you mentioned there already, but let’s go into that in a bit more detail, because you can’t see into the future, of course. You mentioned you’re really validating each step along the way. Is that how you approach validation for both the error and the downscaling models?

[0:09:56] JH: It is. It is. In the pure AI or machine learning world, we can think of classic ways to validate, right? Cross-validation and the other usual ways we would do something like this. We do that; that’s a critical piece of it. If you think of that machine learning process as being embedded along a chain of other machine learning or simpler statistics-based approaches, for example, then we need to validate all along the way.

Then we also want to validate the inputs and the outputs. At every step along the way, there’s more to do. Just to give an example on the outputs, there are all kinds of things we need to do to make sure things are working. You’re doing spatial consistency checks. You’re doing spatial gradient checks. You’re checking to make sure that the trends are what you expect. Many of these, of course, are automated, but there are many, many steps. We at Jupiter spend as much time validating as we do actually building models, just to give you an idea. I am talking around your question a little bit, Heather, but the reality is there’s no one way. It’s a complicated process that you have to build in to make sure that things are working right all along the way.
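
As an illustration of the kind of automated output checks described here (spatial consistency, spatial gradients, expected trends), the sketch below runs a few simple tests on a gridded field. The thresholds, the check_field function, and the synthetic data are assumptions for illustration, not Jupiter's validation suite.

    # Sketch of automated checks on gridded model output (illustrative thresholds).
    import numpy as np

    def check_field(field_by_year, max_abs_gradient=10.0, expect_positive_trend=True):
        """field_by_year: array of shape (n_years, ny, nx)."""
        issues = []

        # 1. Physical-range / NaN check.
        if not np.isfinite(field_by_year).all():
            issues.append("non-finite values present")

        # 2. Spatial-gradient check: flag unrealistically sharp jumps between
        #    neighboring grid cells in the time-mean field.
        mean_field = field_by_year.mean(axis=0)
        gy, gx = np.gradient(mean_field)
        if max(np.abs(gy).max(), np.abs(gx).max()) > max_abs_gradient:
            issues.append("suspicious spatial gradient")

        # 3. Trend check: the domain-mean trend should have the expected sign.
        domain_mean = field_by_year.mean(axis=(1, 2))
        years = np.arange(len(domain_mean))
        slope = np.polyfit(years, domain_mean, 1)[0]
        if expect_positive_trend and slope <= 0:
            issues.append("trend has unexpected sign")

        return issues or ["all checks passed"]

    # Example with synthetic data: 30 years on a 20x20 grid with a warming trend.
    rng = np.random.default_rng(1)
    data = rng.normal(size=(30, 20, 20)) + np.arange(30)[:, None, None] * 0.05
    print(check_field(data))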

[0:11:03] HC: It definitely makes sense that it’s complicated when you can’t see the future and you have to do your best to validate anyway.

[0:11:09] JH: Right.

[0:11:10] HC: This might come back to the validation piece, as well. How do you think about the balance between model accuracy and explainability in your setting? Are they equally important? Is one more important than the other?

[0:11:19] JH: Yeah. That’s a great question. It does come back to the question of validation for us, because of the way our data is getting used. I mentioned three areas of importance before; one is around disclosure. Disclosure is a regulatory thing. In the US, for example, banks are heavily regulated. For them, the explainability of the models that they use is critical. That essentially passes down to us, right? It’s really, really important in this regulatory environment.

In our field, XAI, explainable artificial intelligence, as a quantitative science is, I would say, in its relative infancy. The one thing we have going for us, which is not generally true, is that we have physics, right? We have physics-based models that we can rely on. For example, a weather prediction model can see a lot more things than a climate model. Some of the big labs and universities are running long simulations at very, very fine scales that cover part of the globe. Maybe they cover the United States or something like that.

That gives you another source of information you can use to build or train your machine learning model. It also gives you something you can validate against. And, in a sense related to validation, those physics-based models are telling us about relationships between physical variables in space, or trends in time, those kinds of things, which are much easier to explain than something that comes out of an AI model that might appear as a black box in many ways.

We have other sources of data we can use to get explainability, even if it’s qualitative, right? The quantitative part, as I said, is in its infancy, but there’s a lot we can do qualitatively and say, “Hey, that adheres to the physics of the problem, so we believe it.” On the other hand, if we can’t qualitatively explain it, we just won’t use it. That’s our approach. I think it remains to be seen whether that’s what the market wants or not.

[0:13:20] HC: Really, it comes down to that explainability piece, because of the physical models you have, that’s part of your process, that’s part of your model development. That’s definitely not an afterthought.

[0:13:30] JH: It’s definitely not an afterthought. Yeah. There’s a whole growing field of physics-constrained AI that uses AI in very different ways from how we are, but that’s just another area that over time I think we’ll be leaning on.

[0:13:44] HC: Let’s come back to the error modeling that you mentioned earlier. How do you quantify model uncertainty? How do you quantify these errors?

[0:13:52] JH: I’ll revisit error briefly, then I’ll get into uncertainty. I know those two words are often used interchangeably, but I think of them as distinct things, right? Error is something you can measure in our field; we have something to measure against. Typically, people think of bias. It’s the simplest way; it’s the first moment of the error. That’s fine. A big part of what we do is make sure that we take the global climate models and correct for those biases, right? We actually correct the full distributions, so there’s a bias in every quantile, if you will, or in the parameters of your historical distribution. That’s one part of it.
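
One standard way to correct a full distribution rather than just the mean bias is quantile mapping against a reference such as a reanalysis. The sketch below is a generic version of that idea; the quantile_map function, the synthetic temperatures, and the number of quantiles are illustrative assumptions, not necessarily the exact method used at Jupiter.

    # Generic quantile-mapping sketch: correct the full historical distribution of a
    # climate-model variable against a reference (e.g., a reanalysis).
    import numpy as np

    def quantile_map(model_hist, reference_hist, model_future, n_quantiles=100):
        """Map model values onto the reference distribution, quantile by quantile."""
        q = np.linspace(0.0, 1.0, n_quantiles)
        model_q = np.quantile(model_hist, q)
        ref_q = np.quantile(reference_hist, q)
        # For each future value, find its quantile in the historical model
        # distribution, then read off the reference value at that same quantile.
        # (This simple version clips values beyond the historical range.)
        future_cdf = np.interp(model_future, model_q, q)
        return np.interp(future_cdf, q, ref_q)

    # Synthetic example: the model runs 2 degrees too warm and too spread out.
    rng = np.random.default_rng(2)
    reference = rng.normal(15.0, 3.0, size=10_000)       # "reanalysis" temperatures
    model_hist = rng.normal(17.0, 4.0, size=10_000)      # biased model, same period
    model_future = rng.normal(19.0, 4.0, size=10_000)    # model projection

    corrected = quantile_map(model_hist, reference, model_future)
    print("raw future mean:      ", round(model_future.mean(), 2))
    print("corrected future mean:", round(corrected.mean(), 2))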

If we’re thinking about the future, it’s uncertain, right? We could assume that future errors are the same as historical errors, and that’s a pretty good assumption that gets you a long way, but there are other sources of uncertainty in climate modeling. A lot of the differences between climate modeling and weather modeling may help provide some context here. For weather prediction, we want to know the weather tomorrow or next week. We want the small, fast stuff. We want to know what it looks like when the storms are going to hit, how much it’s going to rain, that kind of thing.

We know we can’t say anything about that 10, 20, or 30 years from now. The game in climate modeling is to actually pull the signal out from that noise. We want to pull out the slow stuff, how the climate changes, how the climate is changing relative to all the weather patterns that are going on underneath it. Just to give you two examples. One is, by 2030, the frequency of tropical cyclones that hit Houston might change, and the intensity, for example. Those are going to be different. We want to try to say how that frequency and intensity are different. We’re never trying to predict an individual tropical cyclone, right? Or even necessarily saying how many will hit at that individual time.

Another place that comes up is that any individual storm simulated at that time is part of the uncertainty in climate change, right? It’s like noise that goes on top of it. What we’re trying to do is get the signal out of that, okay? The other example is that all over the atmosphere there are modes, we call them modes of variability. The one that most people are familiar with is the El Niño-Southern Oscillation, which has an imprint on weather all over the planet, okay?

Think of it as having different phases, a positive phase and a negative phase; one is El Niño and the other is La Niña. It turns out there are a bunch of other things like this around the planet. There’s the Pacific Decadal Oscillation (PDO), for example, and the Atlantic Multidecadal Oscillation (AMO), and these things all have an effect on the weather in different regions around the globe. When we’re running climate models going forward, say we’re going to look at 10 different climate simulations from one climate model, right? It’s an ensemble of simulations. We look at 2040, and we want to say, okay, what’s the climate at 2040? We can think of the mean and the variability.

Each one of those climate simulations is going to be in some phase of all those different oscillations that are going on. With ten model simulations, or a hundred, or whatever, you can’t average it out perfectly, right? You can start to average it out, and then the rest goes into the uncertainty. That’s a big piece of the uncertainty in understanding how climate is changing. And the other thing, to close that thought: that’s a snapshot at 2040. If you go to 2041, for example, all those phases are changing, so that uncertainty can change. What you want to do is try to build up large samples around a particular year, so that you can both quantify that uncertainty and beat down the error in the mean, if that makes sense.
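
A minimal sketch of that signal-versus-noise idea: build an ensemble whose members each carry the same slow trend plus their own phase of internal variability, then pool a window of years around a target year to estimate the signal and its spread. All numbers and the ENSO-like sine term are synthetic assumptions for illustration.

    # Pulling a climate signal out of ensemble "noise" (synthetic example).
    import numpy as np

    rng = np.random.default_rng(3)
    n_members, n_years = 50, 30                  # ensemble members, years 2031-2060
    years = np.arange(2031, 2061)

    # Each member = slow forced trend + its own phase of internal variability.
    trend = 0.03 * (years - years[0])            # degrees per year, the "signal"
    phases = rng.uniform(0, 2 * np.pi, size=n_members)
    internal = 0.4 * np.sin(2 * np.pi * years[None, :] / 5.0 + phases[:, None])
    members = trend[None, :] + internal + rng.normal(scale=0.2, size=(n_members, n_years))

    # Pool a window of years around 2045 across all members to beat down the error
    # in the mean and quantify the remaining spread.
    window = (years >= 2043) & (years <= 2047)
    sample = members[:, window].ravel()
    print("estimated signal near 2045:", round(sample.mean(), 3))
    print("spread (1 std dev):        ", round(sample.std(ddof=1), 3))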

[0:17:27] HC: You run a whole bunch of different simulations. Then from that extract the larger trend and the uncertainty around that trend.

[0:17:36] JH: Exactly. The global climate modeling community runs all the simulations, and then it’s how we use them to extract the trend; that’s exactly right. Then you still have sampling error, which is another big piece of the uncertainty, especially when you’re talking about extremes, right, acute events. Say we want to estimate a 500-year rainfall. Even after we’ve downscaled, you have on the order of hundreds of simulations, and that’s pretty small for estimating a 500-year rainfall, because those events are rare.

We use extreme value theory and we fit the distributions, and by the way, there’s an AI problem in there, where you can use AI to rapidly fit these things, but I haven’t really talked about that today. What you want to do is fit that a bunch of times, almost like a Monte Carlo. It’s not a true Monte Carlo, but almost a Monte Carlo simulation, where you fit it over and over and over again. Because every fit is a little bit different, that gives you uncertainty in your fitting, right? That’s another part of it.
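
In the spirit of what's described here, the sketch below fits a GEV distribution from extreme value theory to a few hundred annual maxima and uses a bootstrap, refitting on resampled data many times, to put an uncertainty range on a 500-year return level. The synthetic rainfall values, GEV parameters, and bootstrap size are assumptions; the actual distributions and fitting choices are the modeler's.

    # Extreme-value fit with bootstrap uncertainty (illustrative only).
    import numpy as np
    from scipy.stats import genextreme

    rng = np.random.default_rng(4)

    # Synthetic "annual maximum rainfall" sample, on the order of a few hundred values.
    annual_max = genextreme.rvs(c=-0.1, loc=80.0, scale=20.0, size=300, random_state=4)

    def return_level(sample, return_period=500):
        c, loc, scale = genextreme.fit(sample)
        # Value exceeded with probability 1/return_period in any given year.
        return genextreme.isf(1.0 / return_period, c, loc=loc, scale=scale)

    # Bootstrap: refit on resampled data over and over; every fit differs a little,
    # and the spread of the estimates is the fitting uncertainty.
    estimates = np.array([
        return_level(rng.choice(annual_max, size=annual_max.size, replace=True))
        for _ in range(500)
    ])
    print("500-year rainfall estimate:", round(return_level(annual_max), 1))
    print("bootstrap 90% interval:", np.round(np.percentile(estimates, [5, 95]), 1))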

Then the final big part of the error, sorry, the uncertainty in this process is the climate models themselves. In the global climate modeling community, there are dozens of modeling centers that contribute models that ultimately make it into the IPCC process. All of them are different. Some of them share some history, and some of them are pretty different from each other, but they all simulate their own climate, because none of them is the real climate. The differences between those are part of the uncertainty as well. When we do our calibration or our error correction, that takes care of a lot of it, but it doesn’t take care of all of it. There’s residual uncertainty that goes in there.

All of that’s in there. Then, most relevant to this podcast, building a machine learning model to do the error modeling and the downscaling is another source of uncertainty. I think the big question is, is that uncertainty big relative to the other components of the uncertainty? I think on average, no. But of course, when you’re doing these kinds of modeling, you get distributions, so you have information you can use to get uncertainty about it. There’s a lot of uncertainty that goes into what we do. We do our best to try to capture every component of it. It’s one of these things where you can always get better.

[0:19:49] HC: Yeah. Well, with so many sources, you have to model it as best you can and try to understand it as best you can, but you can’t control for everything.

[0:19:57] JH: That’s right.

[0:19:59] HC: Well, it is the weather and climate after all, so.

[0:20:00] JH: Yeah. I mean, the big picture here is, yes, we know the models aren’t perfect. Are they good enough to make decisions on? I think the answer is a clear, yes, right? I think that academics would back me up on that.

[0:20:13] HC: You touched on bias a little bit, but I want to dig into that a little bit more. How might bias manifest in these climate risk models and how do you even identify it?

[0:20:23] JH: The global climate models are a key input. We can estimate the bias, and then the higher moments of the error, because all the contributing centers are required to run historical-period simulations. From the weather prediction world, we have histories of four-dimensional weather, if you will: three dimensions of space and one dimension of time. Typically, the large weather prediction centers provide something based on a weather prediction system that’s called a reanalysis.

They go back in time and they reproduce the state of the earth: the atmosphere, the land surface, the ocean, and so on, depending on how complex they get. The idea is, I have a recent-generation weather prediction model and a recent-generation data assimilation system, which essentially is the initialization for that model and brings in all the observations. I can go back and, without making true long weather predictions, just reproduce the history of the states of the earth system.

Those are the best; they’re combined model and observations, and the benefit is that the model itself spreads the observational information in space and time in a way that’s self-consistent. Those are the best sources of information to do that first bias correction. It’s not just us; other players in the field are doing something like that, using these reanalyses to do it. It’s funny, if you look in the climate literature, when researchers are just looking at the GCMs themselves and trying to validate them, which is not a big field, but they do some of that, they sometimes call those reanalyses observations, which they most definitely are not.

They’re one of the best sources of information we have. You can think of them as the best truth we have, even though they have their own biases, for example. Because of that historical period and the existence of these reanalyses, we have something we can do to correct the historical statistics of the climate models. For every climate model we bring in, we do this correction process. We correct the whole distribution on the inputs that we need to do the downscaling. That’s how we deal with bias from that.

Then the other part of bias I mentioned before is from sampling, right? I think many of the listeners here are probably very familiar with uncertainty in distribution fitting. We use resampling-based methods to get at the uncertainty that arises from the sampling, which can also be biased if you really don’t have a good enough sample.

[0:22:58] HC: Sounds like, there’s a lot of complexity there, but it’s a very important part of what you’re working on.

[0:23:04] JH: Oh, it’s essential. I mean, if you don’t correct the biases, you’re not going to be close.

[0:23:08] HC: Yeah.

[0:23:08] JH: Yeah. These are not small biases. I mean, day to day, we’re used to weather prediction, where, people might not believe this, but for a typical weather prediction, maybe it’s easiest to think about temperature for tomorrow, the biases are on the same order as the error of the observations themselves, right? They’re like a degree. That’s not true of a climate model. Climate models can be way off.

[0:23:31] HC: Got you. That makes sense. Is there any advice you could offer to other leaders of AI-powered startups?

[0:23:38] JH: For me, and this is of course biased based on my time at Jupiter and how I see the market, it’s environmental startups in particular that I want to address, though it could apply to pharma or something like that as well: make sure you have subject matter experts, right? They’re critical for verification, validation, explainability, and those kinds of things. One of the challenges, and what it ends up meaning, is that the R&D process might look a little different from the pure tech world, say, developing something that’s, and I don’t mean this in a negative way, an app, right? There’s a little bit more investment needed to do that. I think that’s the main thing that people need to be aware of.

[0:24:15] HC: Yeah. I’d echo that domain expertise is critical to most machine learning applications, but especially anything that is trying to create an impact, certainly in the environmental domain. In healthcare it’s essential as well, having the clinical point of view.

[0:24:30] JH: Yeah. That’s a great point. I mean, we are trying to empower the world with things that have big impacts. We want to have integrity in what we’re doing and also do the best job we can.

[0:24:40] HC: The integrity and just the multidisciplinary nature; there’s information that no one person in one field can understand about the full problem. The machine learning engineer definitely doesn’t understand it all. They need the perspective of those who understand the data, where it came from, and how it’s going to be used, and that whole collaboration has to come together.

[0:25:01] JH: Yeah. That’s a great point. It is very multidisciplinary. I mean, we have machine learning engineers. We also have hydrologists, atmospheric scientists, coastal engineers, the list goes on and on.

[0:25:11] HC: Where do you see the impact of Jupiter Intelligence in three to five years?

[0:25:15] JH: Yeah. I mean, we’re really focused on building out a company that’s doing the right things and helping the global economy adapt. The market’s still moving, so it’s hard to be super precise about this. From a personal standpoint, and I speak for my colleagues here, we want to continue to be the leader in physical risk, and really to build the company with that as a foundational capability. We have no intention of giving that up. Because the market’s still moving, there’s some timeline, and it’s probably a long three to five years before things really settle down. We want to make sure we’re in that spot at that time.

Then in terms of machine learning and AI in particular, I think the growth area for us is in the other kinds of modeling that we’ll be doing. We can always continue to get better on the physical risk side by applying more sophisticated AI where that demands it. It doesn’t always demand it; sometimes a simple regression is enough. I think there’s a big opportunity for machine learning and AI on the analytics side: the financial modeling, the loss modeling, that kind of thing. It already happens in companies that are focused more on that or on other non-environmental kinds of things. I think for us, that’s a big growth area, and in three to five years, we will be applying a lot more machine learning and AI in that context.

[0:26:25] HC: This has been great, Josh. Your team at Jupiter is doing some really interesting work for climate risk analysis. I expect that the insights you shared will be valuable to other AI companies. Where can people find out more about you online?

[0:26:37] JH: Well, first of all, let me thank you, Heather, for having me. It’s been an enjoyable conversation, and I hope that listeners can find a nugget or two in there. Our website’s pretty easy to find; it was actually relaunched recently. It’s www.jupiterintel.com. Then myself, of course, I’m on LinkedIn. I’m somewhat not the most proactive in terms of social media, but I’m getting better, and I look forward to hearing from people who listen and have follow-up questions.

[0:27:02] HC: Perfect. Thanks for joining me today.

[0:27:04] JH: Thanks, Heather. Have a great one.

[0:27:05] HC: All right, everyone. Thanks for listening. I’m Heather Couture. I hope you join me again next time for Impact AI.

[OUTRO]

[0:27:15] ANNOUNCER: Thank you for listening to Impact AI. If you enjoyed this episode, please subscribe and share with a friend. If you’d like to learn more about computer vision applications for people in planetary health, you can sign up for my newsletter at pixelscientia.com/newsletter.

[END]


Resources for Computer Vision Teams:

LinkedIn – Connect with Heather.
Computer Vision Insights Newsletter – A biweekly newsletter to help bring the latest machine learning and computer vision research to applications in people and planetary health.
Computer Vision Strategy Session – Not sure how to advance your computer vision project? Get unstuck with a clear set of next steps. Schedule a 1 hour strategy session now to advance your project.
Foundation Model Assessment – Foundation models are popping up everywhere – do you need one for your proprietary image dataset? Get a clear perspective on whether you can benefit from a domain-specific foundation model.