Creating a lasting impact at scale requires collaboration, and reducing global emissions is one of the most impactful, large-scale goals one could pursue today.

My guest on today’s show is Gavin McCormick, an economist by training who has immersed himself in the world of machine learning through the founding of WattTime, a nonprofit fighting climate change by automating devices to optimize their energy use in real time and lower their carbon footprints. Currently, one billion devices make use of WattTime’s technology, and in the next few years, Gavin hopes to increase that number to 30 billion!

Tune in today to hear about the uncommon way that Gavin and his team approach their goals, the Climate TRACE project that has come about as a result of WattTime’s success, and why they don’t have any competitors! 

Key Points:
  • Gavin’s educational background.
  • The discovery that led Gavin to found WattTime.
  • An explanation of what WattTime does.
  • The motivation behind the founding of Climate TRACE.
  • The role that machine learning plays at WattTime. 
  • The single objective function of WattTime.
  • Methods that Gavin and his team use to ensure their actions are having the greatest impact.
  • How the WattTime approach differs from the approach utilized by many other machine learning organizations.
  • The importance of interdisciplinary collaboration.
  • How WattTime measures their impact.
  • One of WattTime’s major weaknesses.
  • Advice for leaders of AI-powered organizations.
  • What Gavin hopes WattTime will achieve in the next three to five years.  

Quotes:
“Renewable energy could have more impact if it could be sited in just the right locations and run at just the right times.” — Gavin McCormick

“You can really substantially reduce the carbon footprint of a power grid by shifting all of the load to moments when there's surplus clean energy instead of just random times.” — Gavin McCormick

“In any case where another organization, be they non-profit, university, for-profit, whatever, is rowing in a direction that is consistent with our mission and frankly doing it well, we would rather not spend the time and resources trying to recreate them or beat them. We would rather help them out.” — Gavin McCormick

“If you're really serious about impact, then someone else's success is not a threat to you. It's a benefit.” — Gavin McCormick

Links Mentioned in Today’s Episode:

WattTime: watttime.org
Climate TRACE: climatetrace.org

Transcript:

[INTRODUCTION]

[00:00:03] HC: Welcome to Impact AI. Brought to you by Pixel Scientia Labs. I’m your host, Heather Couture. On this podcast, I interview innovators and entrepreneurs about building a mission-driven, machine learning-powered company. If you like what you hear, please subscribe to my newsletter to be notified about new episodes. Plus, follow the latest research in computer vision for people and planetary health. You can sign up at pixelscientia.com/newsletter.

[INTERVIEW]

[00:00:34] HC: Today, I’m joined by guest Gavin McCormick, cofounder of WattTime and cofounder of Climate TRACE, to talk about technology that enables anyone anywhere to reduce greenhouse gas emissions.

Gavin, welcome to the show.

[00:00:45] GM: Thanks, Heather. Happy to be here.

[00:00:47] HC: WattTime is a current and long-term client of mine. They have a lot of exciting work going on, both with the Power Plant Emissions Project that I’ve been involved with and other initiatives. I’m excited to talk about all of that here today.

Gavin, could you start by sharing a bit about your background and how that led you to create WattTime?

[00:01:04] GM: Sure. It’s interesting, I have no background in AI or even in software. I’m an economist by training. My journey here started when I was in a PhD program at UC Berkeley working on an algorithm to measure where wind and solar plants would do the most good. It turns out, in different power grids, adding a kilowatt hour of clean energy will have a different effect on the environment. Because essentially, in some cases, you add a windmill and that knocks out a coal plant. Other places, you add a windmill and it just knocks out another windmill or a gas plant. And so, it was this interesting algorithm realizing that renewable energy could have more impact if it could be sited in just the right locations and run at just the right times.

And I was kind of frustrated that economists had discovered this very interesting result, that there was a simple, easy way to make renewable energy much more effective by kind of optimizing it, but it was just bouncing around in academic circles. Nobody was doing anything with it. And I showed up at a hackathon one day, kind of unrelated, just for fun, but I mentioned the idea to a couple of software engineers and they said, “Do you know what we could do with that? We could automate devices in real time to run at times when using energy could kind of have these better carbon footprints.”

And so, within eight hours, we had built a prototype that could have a smart plug shift, say, your dishwasher, to using energy at cleaner times. And I was hooked by how easy it was to take impact out of academia and into the real world. And I’ve been doing this ever since.

[00:02:29] HC: What does WattTime do? And why is this important for reducing emissions?

[00:02:34] GM: Yeah. We now focus on this algorithm, it’s called marginal emissions, that, for any given timestamp at the level of five minutes and for any given location on the power grid, figures out: if you add or subtract a kilowatt hour of electricity, what does that actually do in terms of which power plants will respond and what the carbon footprint is?

And it turns out this matters because the variation in the carbon footprint of electricity is rapidly growing. Over the last year or two, California has reached a point where, one-third of the time, when you plug into the wall, even if you use as much electricity as you want, you’re having no carbon footprint at all. They’re literally throwing away surplus renewable energy at certain times. But other times they’re not. And so, you plug that exact same thing into the wall and you’re causing a significant carbon footprint.

We focus on: at what times is energy cleaner? And we work with companies like Google on Nest smart thermostats, and companies like Honda and Toyota on electric vehicles, to take their electricity consumption and shift it to moments that have a lower carbon footprint. It’s similar to work that’s been going on for many years to shift electricity to times that are cheaper. But it’s the first application to do this for environmental purposes. And it turns out you can really substantially reduce the carbon footprint of a power grid by shifting all of the load to moments when there’s surplus clean energy instead of just kind of random times.
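
To make the idea concrete, here is a minimal sketch of that kind of load shifting. Everything in it is hypothetical: the numbers are invented, and a real deployment would pull a live marginal-emissions signal from a provider such as WattTime rather than hard-coding one.

```python
# A toy marginal-emissions signal: lbs of CO2 per MWh for each
# 5-minute interval over the next hour (hypothetical values).
moer_forecast = [980, 950, 410, 395, 400, 900, 910, 870, 420, 430, 940, 960]

def cleanest_interval(forecast):
    """Return the index of the interval with the lowest marginal
    emissions rate."""
    return min(range(len(forecast)), key=lambda i: forecast[i])

# A flexible device (say, a dishwasher drawing 1.2 kWh) runs at the
# cleanest interval instead of starting immediately at interval 0.
best = cleanest_interval(moer_forecast)
load_kwh = 1.2
avoided_lbs = load_kwh / 1000 * (moer_forecast[0] - moer_forecast[best])
print(f"Run at interval {best}; avoids roughly {avoided_lbs:.3f} lbs of CO2")
```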

[00:04:00] HC: And your efforts to reduce emissions led to the creation of a coalition called Climate TRACE. What is Climate TRACE and how did it come to be?

[00:04:07] GM: Yeah. The way this works is that the initial algorithm from WattTime, we trained it on US government data from the EPA because the United States happens to have remarkable open high-quality data about the emissions of every power plant every hour. But the vast majority of countries worldwide do not.

And so, as WattTime was getting bigger and companies were starting to ask us, “How would you use WattTime in Italy? How would you use WattTime in China?”, we hit upon this idea: would it be possible to measure power plant emissions from space instead of getting access to open government data sets? Could we use the computer vision work that you yourself have been involved in, of course, Heather, to monitor power plants and track ourselves what their total carbon footprint is? We could then feed that into WattTime’s algorithms of what the marginal carbon footprint is and the change over time. And we realized that it would be an enormous undertaking to do that just to power our own tech.

We applied for a Google grant and kind of teamed up with some other non-profits who wanted the exact same emissions data about power plants for other reasons. And we realized it was kind of crazy to try to solve the problem ourselves. So, we teamed up with other groups, like TransitionZero and the World Resources Institute, to kind of do it together. And we quickly heard from a number of people, like Al Gore, that this was an interesting play, but climate change is a lot bigger than power plants. Would it be possible to apply similar computer vision and satellite imagery combinations to measure all emissions, not just power plants?

And we thought that was too big for us to take on. We were a small nonprofit at the time. But we thought that by just continuing to team up with more and more organizations, we could pull it off. And so, Climate TRACE became this kind of pickup team that feels a little bit like Wikipedia, but with a lot more AI, where many different authors were volunteering to break off many different sectors of the global economy, each with their own computer vision algorithms to measure emissions from that sector.

OceanMind measures shipping. TransitionZero is measuring industry. We specialize in power plants. And it kind of became this giant team effort to put open, transparent emissions data online for literally everything in the world, which is just too big for WattTime to have ever attempted alone.

[00:06:21] HC: And Climate TRACE, of course, is focused on using machine learning to estimate emissions. But thinking more towards WattTime and your technology, what role does machine learning play?

[00:06:32] GM: So, two different roles. One is, when we’re estimating the initial machine learning algorithms – or, excuse me, the initial marginal emissions algorithms themselves – we do use some machine learning in that. Surprisingly, less than you’d think. We’ve had the interesting result that when you’re doing causal inference on data that you know has potential for significant bias, often, ML can help but it’s not the core technology.

And another area where we use it as kind of the primary technology is in forecasting. When we’re forecasting, we do have the historical training data set of what have the marginal emissions rates been in the past? But if you’re optimizing the timing of things, you really want to know what are the emission rates going to be in the future?

And so, we ingest information about, for example, weather forecasts and power grid forecasts, and we predict what our own marginal emissions results are going to be in the future, to make it possible for devices that need to trade off over time what their electricity consumption is going to be, to run on as much renewable energy as possible.
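
To illustrate the forecasting role, here is a minimal sketch of a marginal-emissions forecaster trained on two toy features, a wind-generation forecast and a demand forecast. The features, values, and simple linear model are stand-in assumptions; WattTime’s actual models and inputs are surely richer.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training features per interval:
# [forecast wind generation (MW), forecast grid demand (MW)]
X_train = np.array([[1200, 9000], [300, 11000], [1500, 8500],
                    [200, 12000], [900, 9500], [1100, 9800]])
# Historical marginal emissions rates (lbs CO2/MWh) for those intervals.
y_train = np.array([420, 950, 400, 980, 610, 540])

model = LinearRegression().fit(X_train, y_train)

# Predict tomorrow's marginal emissions from tomorrow's forecasts so a
# device can pick its cleanest charging window in advance.
X_tomorrow = np.array([[1400, 8800], [250, 11500]])
print(model.predict(X_tomorrow))
```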

[00:07:32] HC: It sounds like machine learning comes into a bunch of different areas there for different models and different goals but all towards reducing emissions.

[00:07:40] GM: Yeah. And it’s been interesting to think about wildly different scientific problems when you have a single objective function, which is reducing emissions. And so, it’s been fun to think about things like, for example, if we increase the quality of our forecasting model by, say, 10%, or we produce a new algorithm that will let us measure marginal emissions in, I don’t know, Pakistan, which one do we predict, with our kind of whole integrated system, will cause more emissions reductions in the real world? It’s a cool way to trade off between features, models, and projects that are otherwise very unrelated problems, if you reduce everything to tons of CO2 you can take out of the atmosphere.
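
A sketch of what trading off unrelated projects on one objective function might look like, with every figure invented for illustration:

```python
def expected_impact(users, tons_avoided_per_user, success_probability):
    """Expected annual tons of CO2 avoided by a candidate project."""
    return users * tons_avoided_per_user * success_probability

# Hypothetical candidates reduced to the same unit: tons of CO2.
candidates = {
    "improve forecast model by 10%": expected_impact(1_000_000, 0.002, 0.8),
    "new marginal-emissions region": expected_impact(200_000, 0.015, 0.5),
}

best = max(candidates, key=candidates.get)
print(best, candidates)
```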

[00:08:27] HC: One of the things I’ve observed in working with you and your team is that you have a different strategy for planning. I’m sure it’s partly because you’re a non-profit. But I suspect it’s also the way you’ve learned to approach a problem. How do you decide which projects or features are worth pursuing?

[00:08:41] GM: Yeah. Actually, the strategy predates WattTime, which is interesting. When we were thinking about founding WattTime, we had no priors because we just weren’t startup people. We had no assumption about whether a non-profit or a for-profit was kind of more normal, which I think is kind of shockingly naive for most people who work in this space, but [inaudible 00:09:01] academics. And so, we said, “Well, heck, what is our goal?”

And we decided the only reason we were doing this project was to fight climate change as quickly as possible, so our goal was to minimize emissions at scale as much as possible. And then you can factor in things like risk.

But we actually used this algorithm to decide. I call it an algorithm because there are clear steps, but it’s really straightforward. When you actually think about determining even questions like, “Should you be a non-profit or a for-profit?”, we literally worked out: what is the emissions reduction we think we could achieve with different technologies at different levels of investment? What do we think is the user base we would get? And how much do we think it would keep us on track or off track?

And so, we use sort of mathematical methods with a decent amount of guesswork in there, but at least a clear theory of change: if your objective function is to reduce emissions as much as possible, working backwards from there, what decisions could you make?

And what I have become fascinated by is that this is just how companies theoretically think about money. But a lot of companies actually don’t take it this seriously, to literally optimize in a single framework across individual machine learning features and business decisions. We just use a straightforward optimization, having a sense of what all our programs can do and what the emissions reduction of a specific program is. And if we add a new feature to a model, or we do anything else, what will that do in terms of emissions reductions? That’s very different from having an accuracy metric and a vague goal that you want to achieve a certain outcome, without a serious sense of how you’re going to get there, which is how I’ve observed a lot of companies really making these decisions behind the scenes.

[00:10:50] HC: How do you think about the different parts of an individual machine learning project then? I’m thinking mostly about predicting power plant emissions at the moment, but a similar thought process might apply to other applications. There’s the accuracy of the model, for which there can be competing objectives. There’s coverage – in this example, what fraction of power plants you’re able to make predictions for. There’s how certain or uncertain you are about your predictions, and how granular your predictions are in time or in space. You can’t tackle all of these goals at once. How do you decide which aspects are most important?

[00:11:23] GM: Yeah. Our method, which is kind of cartoonishly simple – but also, we have gotten really good results from it – is to reduce it to, “Okay, if you’re talking about a quantifiable reduction in emissions, there must be some real-world state of things that you would or would not successfully cause.” And that real-world state of things has a difference in emissions.

First of all, we get very concrete. An example would be, “We are going to shift the timing of an electric vehicle to a time when there’s surplus renewable energy, or we will fail to do that.” Compared with a very different theory of change: “We are going to site a wind farm in a location where it will displace new coal plants, or we will fail to do that.” We get really concrete.

And then we think about: what is the minimum viable product to successfully cause that? And what features are actually extraneous to the problem? A good example would be, if you want to know the timing for an electric vehicle to charge and you have model bias where you are overestimating emissions at all times, but you are overestimating them equally at all times, that’s actually irrelevant to your problem. But if you are rank ordering the times incorrectly, that’s hugely important to your problem.
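
A small worked example of that point, with invented rates: adding a uniform overestimate to every interval never changes which interval a device picks, while a model with smaller average error but a flipped ranking picks the wrong one.

```python
true_moer = [500, 450, 800, 700]             # lbs CO2/MWh, hypothetical
biased = [rate + 100 for rate in true_moer]  # uniformly overestimated

def argmin(rates):
    """Index of the interval a device would choose (lowest rate)."""
    return min(range(len(rates)), key=lambda i: rates[i])

# A constant bias is irrelevant: the same interval gets chosen.
assert argmin(biased) == argmin(true_moer)

# Small errors that swap the ranking of the two cleanest intervals are
# hugely important: the device now picks the wrong time.
misranked = [460, 490, 790, 710]
assert argmin(misranked) != argmin(true_moer)
```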

And so, the more that we find our impact is coming from timing-based things, rank ordering the timing of marginal emissions is most important. In other applications, for example, it’s really important that we rank order the locations of wind farms. And then another thing we learned along the way is that when you’re rank ordering a real-world choice, typically, a more accurate algorithm or more coverage will help the rank ordering, but not at a level of one-to-one.

So we actually switched: in some cases, we literally abandoned accuracy as a training metric and instead directly embedded in our objective function a simulation of using the feature and what the impact of using it would be. An example of that would be, we find that, in many cases, an algorithm that is less accurate at predicting when there’s going to be surplus wind energy can be better at predicting, within the window in which you actually could shift load, which of those times is most likely to have the surplus clean energy. And by directly going after the thing that we know is the function that causes more impact, we’ve discovered we can often radically improve results, or at bare minimum eliminate a great deal of work and save money that can be plowed back into working on the feature that really matters. It’s really a matter of targeting.
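
Here is a minimal sketch of what scoring a model by simulated impact rather than accuracy could look like. The numbers are invented; the point is that the forecast with the larger average error can still drive the lower realized emissions.

```python
def realized_emissions(forecast, actual, load_kwh=1.0):
    """Emissions (lbs CO2) from charging at the interval the forecast
    ranks cleanest, evaluated against the actual rates."""
    chosen = min(range(len(forecast)), key=lambda i: forecast[i])
    return load_kwh / 1000 * actual[chosen]

actual = [900, 420, 410, 880]      # lbs CO2/MWh, hypothetical

forecast_a = [890, 425, 430, 870]  # low error, misranks the clean slots
forecast_b = [700, 500, 300, 750]  # high error, ranks the cleanest first

for name, forecast in [("A", forecast_a), ("B", forecast_b)]:
    print(name, realized_emissions(forecast, actual))
# Model B wins on the metric that matters despite being less accurate.
```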

[00:14:05] HC: That’s a really interesting thought process. There are a lot of machine learning projects that don’t put that extensive thought into what the result is, thinking backwards to what they need to optimize or focus on when improving their models.

[00:14:22] GM: Yeah. I’ve been surprised to learn, as kind of an outsider to this field, that this type of thinking is not very common. And I think one of my big learnings is that it’s because people don’t cross fields enough. Every economist would know exactly what I’m talking about, but there are very few of them in AI. And I think it’s a rare area where another field kind of has a few tricks to share. But there’s nothing about climate change in this story. You could apply the same method to any optimization function you want.

[00:14:49] HC: Yeah, that’s definitely true. And the one key takeaway that I’ve learned in my years in machine learning is that this interdisciplinary collaboration is essential. To train a good model, you need somebody with domain experience. And what you’re talking about actually goes beyond that to think about what type of model you should even be training, what your overall objective is and what the purpose of it is. But that domain knowledge and larger picture thinking is very important.

[00:15:17] GM: I like that. I never thought of it as domain knowledge. But I suppose that’s what it is. Yeah.

[00:15:21] HC: If we think back towards Climate TRACE, I think a large part of its success is due to collaboration. Each team is focused on a different sector of the economy, working with data from different sources and creating different models. But the resulting emissions data sets are joined to share with the world, and knowledge is shared across teams. Whereas with for-profit organizations, there’s a lot of competition and less collaboration. What benefits have you seen from the Climate TRACE coalition and other collaborations you’ve been involved in? And how important is collaboration to WattTime’s success?

[00:15:55] GM: Yeah, I would say it’s everything to us. And it’s interesting, the way we came at collaboration was kind of another rude awakening: discovering that actually, most nonprofits, in my experience, don’t collaborate very much. And we just kept coming back to our one objective function, kind of a one-trick pony. And we realized, “Wait a minute. If our goal is to make sure that emissions go down, what does it mean that another non-profit is our competitor? Because we don’t actually care who does the work. We care about the result.”

And so, if there’s another non-profit that can achieve the same goal that we were going to spend time achieving, they are our ally, not our competitor. And we basically went back to the diagrams we use on theory of change to work out our goals. And we realized that in any case where another organization, be they non-profit, university, for-profit, whatever, is rowing in a direction that is consistent with our mission and frankly doing it well, we would rather not spend the time and resources trying to recreate them, or beat them, or anything. We would rather help them out.

We, for example, actually made the unusual choice to donate our sort of most sensitive IP to intentionally create WattTime competitors, because we saw organizations that were essentially offering to do our mission for us for free. They were going to get the fundraising themselves. And that really changed the way we think about collaboration. Now, we have seen cases where organizations are so incompetent, or they say their mission is one thing but really it’s about making sure they get the biggest grant, or they have some other agenda.

We’ve certainly seen cases where you can’t just collaborate with anyone. But we have discovered that the more you view it as anybody with the same goal who you can work with is on your team, the more we’ve discovered these amazing opportunities to scale much faster.

Climate TRACE now has about 100 organizations collaborating. And we could never achieve what the coalition has achieved together on our own. And most of them say the same thing. But what has kind of changed this is the realization that if you’re serious about impact, which again not every non-profit is, if you’re really serious about impact, then someone else’s success is not a threat to you. It’s a benefit.

And we’ve concluded that on any AI project spanning this many interdisciplinary fields, no one organization can be an expert in everything. And the more you can divide and conquer with similar projects that have the same goal but don’t require all the details to be in one person’s head, the faster you can move.

[00:18:33] HC: Yeah. And like you said, when you have a shared goal in mind, you can definitely accomplish a whole lot more. But thinking about the overarching goal of WattTime and reducing emissions, you’re tackling this in a few different ways. But how do you measure the impact of your technology to be sure that you’re accomplishing what you set out to do?

[00:18:51] GM: Yeah. There’s one part we do well and one part I think we do badly. The part we do well is that we quantify basically everything, including impact. Bringing it back to how we make decisions, the way I would say the impact of our timing technology works, for example, is that we’ve quantified how many kilowatt hours of load are being shifted from which times to which other times, and what the marginal emissions spread – the literal drop in emissions per kilowatt hour – is between each of those two times.

And so, the guts of our technology, because they’re about measuring how real-world total emissions will be different if you do A or B, actually mean we’re uniquely placed to have very quantitative impact measurements. For everything we do, our chart is: how many users have we got from a certain theory of change? What is the impact per unit for that theory of change? And then, if that needs to be a different number in different grids or at different times, we use that matrix.
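
A minimal sketch of that accounting, with hypothetical figures: each shift contributes the energy moved times the marginal-emissions spread between its original and new times.

```python
shifts = [
    # (kWh shifted, rate at original time, rate at new time), lbs CO2/MWh
    (2.0, 950, 400),
    (1.5, 880, 420),
    (3.0, 910, 390),
]

total_lbs_avoided = sum(kwh / 1000 * (rate_from - rate_to)
                        for kwh, rate_from, rate_to in shifts)
print(f"Total avoided: {total_lbs_avoided:.3f} lbs CO2")
```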

We’re really proud of how that means that all our direct impact is really quantifiable. The big catch, of course, is that there’s a lot of impact you have that’s not quantifiable. I suspect that the great majority of WattTime’s impact has been inspiring groups to do things on their own that we don’t know about. And so, I would say our big weakness as an organization is that we only talk about the impact we can measure. The way we’ve tried to solve that is saying there are probably other people who are better than us at talking about how you optimize for impact when you can’t measure it. And we try, anytime we can, to kind of support groups who are better at that, stay out of it ourselves, and stick to only things that can be quantified.

[00:20:28] HC: Is there any advice you could offer to other leaders of AI-powered organizations?

[00:20:33] GM: Yeah. My big advice would be, if you haven’t tried this exercise already, to literally start with the goal you want to achieve and work backwards instead of working forwards. My experience is you’ll be shocked by how many resources you can save, or how much you can improve the actual outcome metrics, by thinking in very concrete, literal terms about the final use case and working backwards. This ain’t rocket science, but it’s often amazing what kind of multipliers you can get on your resources.

[00:21:02] HC: And finally, where do you see the impact of WattTime in three to five years?

[00:21:07] GM: Our timing technology, which we call AER, is applicable in a three to five-year time frame. There are about 30 billion devices it could be used on, and we are very pleased that we just crossed one billion devices using it. I’d love it if, in three to five years, AER was like the Y2K fix: everybody has it and nobody’s talking about it, because it’s just a commodity. It’s just done.

We’re truly trying to make the AER technology kind of a free, simple, plug-and-play solution, and we’re hoping it’ll be universal. And then we’re also highly committed to leaning into these kinds of AER-like wins that peer organizations or, in some cases, we ourselves are starting to figure out: finding the low-hanging fruit, the next solution that could have a lot of impact at scale with data, just by thinking about leverage.

[00:21:58] HC: This has been great. Gavin, your team at WattTime is doing some really interesting and impactful work to reduce emissions. I expect that the insights you’ve shared will be valuable to other AI organizations. Where can people find out more about you online?

[00:22:10] GM: Thanks, Heather. And by the way, thanks for being a part of it and making this all possible. We’re at watttime.org. There’s three T’s in WattTime. Sorry. So, it’s W-A-T-T-T-I-M-E.org. If you’re interested in Climate TRACE, you can also check it out at climatetrace.org.

Thanks for having me.

[00:22:27] HC: Perfect. I’ll link to those in the show notes. Thanks for joining me today.

All right, everyone. Thanks for listening. I’m Heather Couture, and I hope you join me again next time for Impact AI.

[OUTRO]

[00:22:42] HC: Thank you for listening to Impact AI. If you enjoyed this episode, please subscribe and share with a friend. And if you’d like to learn more about computer vision applications for people and planetary health, you can sign up for my newsletter at pixelscientia.com/newsletter.

[END]


Resources for Computer Vision Teams:

LinkedIn – Connect with Heather.
Computer Vision Insights Newsletter – A biweekly newsletter to help bring the latest machine learning and computer vision research to applications in people and planetary health.
Computer Vision Strategy Session – Not sure how to advance your computer vision project? Get unstuck with a clear set of next steps. Schedule a 1 hour strategy session now to advance your project.
Foundation Model Assessment – Foundation models are popping up everywhere – do you need one for your proprietary image dataset? Get a clear perspective on whether you can benefit from a domain-specific foundation model.