What if there was a way to revolutionize image-based AI, eliminating the need for extensive prework? In this episode, I sit down with Corey Jaskolski, Founder and President of Synthetaic, to talk about finding objects in images and video quickly. Synthetaic is redefining the landscape of data analysis with its groundbreaking technology that eliminates the need for time-consuming human labeling or pre-built models. It specializes in the rapid analysis of large, unlabeled video and image datasets.

In our conversation, we delve into the groundbreaking technology behind Synthetaic's flagship product and how it is revolutionizing image and video processing. Explore how it utilizes an unsupervised backend to swiftly analyze and interpret data, how it is able to work with any kind of image data, and the process behind ingesting and embedding image objects. Discover how Synthetaic navigates biased data and leverages domain expertise to ensure accurate and ethical AI solutions. Gain insights into the gaps holding back AI’s application to images, the different ways the company’s technology can be applied, the future development of Synthetaic, and more!


Key Points:
  • Corey’s background in AI and ML and what led to the creation of Synthetaic.
  • Why Synthetaic focuses on processing images and videos quickly.
  • How the company leverages ML in its approach.
  • Details about image ingestion and embedding processes.
  • How the definition of potential objects varies depending on the type of imagery used.
  • The role of domain expertise in addressing challenges.
  • Examples of the technology’s diverse range of applications.
  • Recommendations to leaders of AI-powered startups.
  • His hope for the future trajectory of Synthetaic.

Quotes:

“We think about the machine learning problems a little bit differently, because we're not labeling data to go ahead and build a bespoke frozen traditional AI model.” — Corey Jaskolski

“We take this very broad view of objects where anything that could be discrete from anything else in the imagery gets called an object, at the risk of basically finding, if you will, too many objects.” — Corey Jaskolski

“We think of RAIC as something that solves the cold start problem really well.” — Corey Jaskolski

“By and large, we're training image and video-based AIs the same way. We need a paradigm shift that really allows AI to be the force multiplier that it can be.” — Corey Jaskolski


Links:

Corey Jaskolski on LinkedIn
Corey Jaskolski on X
Synthetaic


Resources for Computer Vision Teams:

LinkedIn – Connect with Heather.
Computer Vision Insights Newsletter – A biweekly newsletter to help bring the latest machine learning and computer vision research to applications in people and planetary health.
Computer Vision Strategy Session – Not sure how to advance your computer vision project? Get unstuck with a clear set of next steps. Schedule a 1 hour strategy session now to advance your project.


Transcript:

[INTRODUCTION]

[00:00:03] HC: Welcome to Impact AI, brought to you by Pixel Scientia Labs. I’m your host, Heather Couture. On this podcast, I interview innovators and entrepreneurs about building a mission-driven, machine-learning-powered company. If you like what you hear, please subscribe to my newsletter to be notified about new episodes. Plus, follow the latest research in computer vision for people and planetary health. You can sign up at pixelscientia.com/newsletter.

[EPISODE]

[0:00:33] HC: Today, I’m joined by guest Corey Jaskolski, Founder and President of Synthetaic, to talk about finding things in images and video quickly. Corey, welcome to the show.

[0:00:43] CJ: Thanks, Heather. Glad to be here.

[0:00:44] HC: Corey, could you share a bit about your background, and how that led you to create Synthetaic?

[0:00:48] CJ: You bet. Yes, my background was AI and ML from MIT. One of the challenges that I really faced while I was in grad school was this sort of disillusionment with how hard it really was to build image and video-based AI models, how much data labeling you needed to do, and how much prework, basically. That kind of haunted me throughout my career. I started a small company called Hydro Technologies that I successfully exited. Then, I joined National Geographic as the director of all the exploration technologies. I ran exploration and technology groups all over the world, everywhere from physically working on Mount Everest, doing 3D LiDAR scanning of Everest, to working under the ice in Antarctica, to heavy poaching environments in Sub-Saharan Africa, looking for poacher activity. All sorts of things like that around the world, from cultural heritage preservation to conservation, to climate.

There, too, we were very often building AI models, often image-based AI models. The same problems came up. It was, man, you sure need to go ahead and label a lot of data, do a lot of data science, before you even start to build an AI model. That always felt like this really inhibiting way to look at AI. You couldn’t really experiment quickly, and fail quickly, and evolve quickly. So, I started Synthetaic to answer the question, what if you didn’t need to do vision and image-based AI that way? What if you could take a different approach to it that didn’t require all that prework?

[0:02:04] HC: What all does Synthetaic do? What services do you offer?

[0:02:07] CJ: Yes. So, Synthetaic has one product. Our product is called RAIC. It only works in one area; it works on images. That can be still images, like cellphone images, or the like, or photographs. It can be full motion video, whether from drones, or security cameras, or anything like that. Or space-based imagery, or overhead imagery, like satellite imagery and the like. We work pretty much equally across those. We consider ourselves image-agnostic. As long as you can represent it as pixels, the RAIC tool can process that data and derive insights from it.

[0:02:36] HC: How does machine learning work within those? Obviously, there’s the bulk of the powerhouse in order to do this search, but how do you train models in order to do this? How do you build it into your product?

[0:02:48] CJ: Yes. The way that RAIC works, and the way that we think about models, is actually – a good analogy that’s a little more recent is ChatGPT. With ChatGPT, there’s no labeled data per se. They take, in the case of ChatGPT, just all the unstructured text from the Internet. That is mapped into a mathematical space, or tokenized. In that tokenized mathematical space, you then ask a question or a query, and that query also gets mapped into that same mathematical space, and it can pull together an answer that very often seems stunningly good.

What we’re doing is sort of akin to that, but we’re doing that in the image and pixel space. When we get a large image set, say satellite data of the entire earth, for example, we will ingest that into RAIC, which is a similar process of mapping into a mathematical space. Not in a large language model sense, just literally in an image sense. To do that, we’re basically doing the same similar thing that’s happening in the large language models, in that we’re reducing the information density from raw pixel space into a smaller mathematical space. On that, we can do functions like distance searches, similarity searches, all that sort of stuff.
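For readers who want a concrete picture of the embed-and-search pattern Corey describes, here is a minimal sketch. It is not Synthetaic’s code, and RAIC’s internals aren’t public; an off-the-shelf CLIP encoder from the sentence-transformers library stands in for the embedding step, and the image folder and query file are hypothetical.

```python
# Minimal sketch of the embed-and-search pattern described above (not RAIC itself):
# map images into a vector space with a pretrained encoder, then answer
# "find things like this" queries with nearest-neighbor similarity search.
from pathlib import Path

import numpy as np
from PIL import Image
from sentence_transformers import SentenceTransformer  # CLIP used as a stand-in encoder

model = SentenceTransformer("clip-ViT-B-32")

# 1. Ingest: embed every image in the collection once, up front.
paths = sorted(Path("images/").glob("*.jpg"))            # hypothetical image folder
library = model.encode([Image.open(p) for p in paths])   # shape: (n_images, 512)
library = library / np.linalg.norm(library, axis=1, keepdims=True)

# 2. Query: embed one example of the thing you are looking for.
query = model.encode([Image.open("query_example.jpg")])  # hypothetical query image
query = query / np.linalg.norm(query, axis=1, keepdims=True)

# 3. Search: cosine similarity in the shared embedding space, highest first.
scores = (library @ query.T)[:, 0]
for i in np.argsort(-scores)[:10]:
    print(f"{paths[i]}  similarity={scores[i]:.3f}")
```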

We kind of think about the machine learning problems a little bit differently, because we’re not labeling data to go ahead and build a bespoke frozen traditional AI model. We’re looking at it more as using the machine itself, with the human-machine collaborator in the loop. That’s part of what I can get into more later, but it’s a really important part of RAIC, in order to actually directly answer those questions, to directly find all the things. Whether it’s dumpsters in satellite data, or whether it’s bird species in drone data, or whatever it might be, directly, without actually having to build an a priori AI model.

[0:04:32] HC: Once you’re able to encode your whole data set of images, then when somebody wants to search for an object in a new image, you can just match it against the embeddings that you’ve already created. Is that right?

[0:04:44] CJ: Yes, that’s right. We do a lot of stuff a priori to that, so it’s not a full image embedding, which wouldn’t work very well. Imagine a real-world example: we have 50-megapixel drone images. If we had an embedding for that whole image, we would basically be trying to match 50-megapixel drone images, which isn’t really useful for anybody. You want to find – in the case of The Nature Conservancy, they wanted to find different bird species in that data.

We’ve got some pretty cool and sophisticated algorithms that run a priori to that indexing and embedding, to basically rip the image into object-like pieces, so that it represents objects sort of in the same way that large language models tokenize based on words, at a more granular level than a whole sentence or paragraph. We’re doing that similarly, effectively on anything that it can see as an object in the data set, even though it has no idea what those objects are. It just looks for the object nature of them. But that’s right, and then we bring them into that embedding space, and then we can search, and do a whole lot of other statistical and mathematical functions beyond just search, to allow a user to really elucidate the things they’re looking for in the data set.

[0:05:47] HC: Could you talk some more about how that ingestion process works? It sounds like there are two pieces. One is to identify potential objects, and the other is to create an embedding for them.

[0:05:57] CJ: You kind of nailed it. That’s exactly how it works. The idea, again, being that, for images, a lot of our customers want to find things in those images that there might not even be existing a priori trained AI models for. So, it’s not like we can just run a model, say something trained on COCO or something like that, that can find 80 different objects. That wouldn’t be a really great utility to our customers, because they’re often looking for things that aren’t in a pre-trained AI model. Otherwise, they might just use that.

When we do that, we take a very wide view of objects that we then map into this space. It’s a lot harder than the large language model space. Because in large language models, the English language has punctuation. It has spaces between words, it has commas, and semicolons, and things like that. So, it has these built-in points of division, where you can break up sentences into words, paragraphs into sentences. It’s very friendly like that, and very algorithmic. But in the image space, it’s really hard to say what is an object. Is an entire cat an object? Or are the ears of the cat also objects? If you wanted to train an animal ear detection model, I’m not sure anybody’s ever done that, but you’d very much want those ears to be separate objects.

We take this very broad view of objects where anything that could be discrete from anything else in the imagery gets called an object, at the risk of basically finding, if you will, too many objects. But again, once we have it mapped to that embedding space, it’s all searchable. It’s okay if we end up with a really dense object set per image.
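To make that two-stage ingestion a little more concrete, here is a rough sketch of the general pattern: a class-agnostic proposal step carves each image into candidate objects, and every candidate is then embedded into one searchable space. This is not RAIC’s actual pipeline; OpenCV’s Selective Search and a CLIP encoder are stand-ins for Synthetaic’s proprietary steps, and the drone image path is hypothetical.

```python
# Rough sketch of the two-stage ingestion described above (not RAIC's pipeline):
# (1) carve a large image into class-agnostic, object-like regions, deliberately
#     over-generating candidates, then
# (2) embed each region so all candidates become points in one searchable space.
import cv2  # requires opencv-contrib-python for the ximgproc module
import numpy as np
from PIL import Image
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("clip-ViT-B-32")  # stand-in embedding model

def propose_regions(image_bgr, max_regions=500):
    """Return class-agnostic candidate boxes (x, y, w, h); over-generates on purpose."""
    ss = cv2.ximgproc.segmentation.createSelectiveSearchSegmentation()
    ss.setBaseImage(image_bgr)
    ss.switchToSelectiveSearchFast()
    return ss.process()[:max_regions]

def ingest_image(image_path):
    """Crop every candidate region and map it into the embedding space."""
    bgr = cv2.imread(image_path)
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)
    boxes = propose_regions(bgr)
    crops = [Image.fromarray(rgb[y:y + h, x:x + w]) for x, y, w, h in boxes]
    vectors = encoder.encode(crops)               # one vector per candidate object
    return boxes, np.asarray(vectors)

boxes, vectors = ingest_image("drone_frame.jpg")  # hypothetical drone image
print(f"{len(boxes)} candidate objects embedded into a {vectors.shape[1]}-d space")
```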

[0:07:26] HC: Does the definition of a potential object change depending on the type of imagery you’re working with?

[0:07:30] CJ: In general, it does. There are definitely different domains that we work in. Let’s say, in satellite data, for example, in RGB data, or EO data, as they call it, objects look like objects. In high enough resolution satellite data, you can see trucks, and you can see swimming pools, and whatever you’re looking for. They look like what any person would say, “Yes, that’s an object.” But there are other modalities, like synthetic aperture radar, which is truly radar from space, where you’re getting a radar image, which is great. It works through clouds; it works at night. So, you basically get satellite imagery in those conditions. But the drawback, of course, is that it sort of looks like a radar image. It doesn’t really look like something where you’d say, “Oh, that’s clearly a shipping container” or something like that. They’re often very complex. A lot of sort of blurry images with lots of layers, and things going on that you would associate sort of with radar returns off of objects. In those cases, it is a little bit more domain-specific what gets considered an object and what doesn’t. In those cases, because RAIC works on an unsupervised backend, we can just retrain in different domains without necessarily needing to go back to the well and label anything. So, it’s really easy to get up to speed.

We think of RAIC as something that solves the cold start problem really well. To build a net new AI model, maybe even in a different image domain, and to do that in a meaningful amount of time, in a sort of meaningful timeframe, is really, really hard. That was my point earlier about how you can’t fail quickly in AI right now. I think the last Gartner report said something to the effect of, AI models generally take many months to a year or more, cost millions of dollars, and have something like an 83% failure rate getting into production. In my mind, that is what’s holding AI back in the image and video space. Again, large language models are a different beast, and I think we’ve kind of broken through that there. Everything you see in large language models these days is unsupervised and unlabeled. It’s a little bit different of a problem set again, because of the punctuation and the large body of available data from the Internet that can be scraped.

But it’s a similar problem. I think the rest of AI in the image-based domain, and even in time series domains for other sensors, is eventually going to go the way of the large language models. Where it’s not this idea of label data, build AI model, run AI model, it stops working, go back to the well, and add another class, or add other positive or negative examples. I think we’re going to find that in the coming years, it’s going to become a lot more of this really leaning on the machine to learn, instead of this concept that seems so antithetical to AI to me, which is that right now, in large projects, sometimes thousands or even tens of thousands of people work on labeling a data set, so we can teach the machine to understand what the things are in that data set. That seems backwards. For me, AI was always the thing that’s supposed to take out the mundane, error-prone work, and make us all smarter, and make our jobs easier. But in reality, most of an AI project is labeling and data science cleanup of the data, which is exactly the mundane, error-prone work that feels like AI should have already solved. That’s why I think, in all of these domains, AI is going to go in the direction of large language models, going to completely unsupervised spaces, and finding ways to do this without labeled data.

[0:10:42] HC: For each domain of imagery that you’re working with, a particular modality type of thing, you do ingest a separate data set for that and create a separate index in which to search, right?

[0:10:54] CJ: Not necessarily. The way that we built this, we call it RAIC space, the mathematical space that everything’s mapped into, is actually pretty flexible. We wouldn’t necessarily retrain anything between domains that are similar enough. Again, when I give the example of high-resolution satellite data, when you’re looking at high-resolution satellite data, that’s sort of the same problem set as a security camera. I mean, it’s reasonably good RGB imagery. Objects don’t look completely foreign. It’s not like a spectrogram or something that doesn’t look like anything that physically exists in the world.

They all look similar enough that, in a lot of cases, we get excellent performance, and we would only see very small incremental gains by retraining into those new domains. That flexibility is actually, I think, one of the really, really powerful points of how we built this RAIC space, because it means we can even cold start on problems where the data sets are too small to do unsupervised retraining. We can get going on those right away. When domains are very, very different, though, if we were going to do something like an acoustic spectrogram, that is something where there’s really no analog to objects. You can look at a little spectral spike, and stuff like that, but they’re so messy that, unless you sort of rethought training in that acoustic spectrogram visual space, you probably wouldn’t get very good performance out of the object nature of it. But yes, we try to avoid doing retraining as often as we can, because that’s still an expensive GPU exercise.

We do that only when there’s enough domain shift from sort of the more traditional domains that we’ve done, that we need to do that.

[0:12:24] HC: When you do come across a domain where you do need to build something from scratch, does domain knowledge at all come into play? Or is it the same algorithms, the same processing that can be applied just with a different set of data?

[0:12:36] CJ: In bulk, it’s the same algorithms applied with just different data. To go back to the synthetic aperture radar example, the satellite-based synthetic aperture radar, the tactic there was to pour in as much data as we could and retrain another unsupervised model. Now, the way that the domain expertise still does come in is being able to recognize bad data. Imagine we were trying to do synthetic aperture radar satellite imagery over cities, and half of our data was from Antarctica, or something like that. It would clearly be really biased toward those areas with no buildings or anything on them.

Part of it is still the domain knowledge of, okay, that’s probably going to be more appropriate, better data, and this is a similar resolution to what we’re looking at, and those sorts of things. So yes, it’s still definitely at play, but not necessarily directly in the creation of the model.

[0:13:23] HC: We’ve talked about some of the different types of imagery that you work with, but what types of problems are you able to solve with this? Could you share some details about a couple of the more interesting or impactful applications that you’ve worked on?

[0:13:34] CJ: Yes. There have been some really interesting ones that I think will highlight the challenges that there are in AI right now, and also RAIC as a tool. I’ll give you three of them. One was with The Nature Conservancy. They were looking to count the number of birds of five different species over Palmyra Atoll, which is part of a pretty long island chain, the Line Islands, in the Pacific. The challenge was, they had really high-resolution drone imagery. They incredibly flew drones over the entirety of the island, as far as I understand, in really high resolution. I mean, like, centimeter-level resolution. It was the highest-resolution drone imagery I’ve ever seen. They were going to use it to count birds.

They very quickly realized that with many, many, many tens of thousands of these 50-megapixel drone images, it was intractable for people to go through it. Technically, you could have people label that, but if you sent that out to a labeling outsourcer, that might be a thousand people for three months labeling that data. A really, really big challenge.

They realized they weren’t going to be able to do it with just brute force human labeling. So, what they did instead was use RAIC, and they ran RAIC across it to look for these bird species. They found that over that massive data set, they were able to use RAIC in a guided human-machine collaboration to find the bird species that they were looking for and get that count. But more compelling to us, and this is a really common challenge in AI, was the unknown unknowns. What about the things you didn’t plan for? Great, maybe you could build a frozen model that would count those five birds, but that’s really all it would do then.

But with The Nature Conservancy, what they were able to do as well, while they were looking to count these five bird species, was actually find two bird species they thought were gone from the island. These birds used to be endemic to the island, and they were thought to be effectively locally extinct since World War II. These were both ground-dwelling birds, and when the ships came in during World War II, the rats came in as well, and rats absolutely decimate ground-dwelling bird eggs. Both of these birds that they found were not on the ground; they were actually in the trees now. I think they found two of one of the species and one of the other species. So, a really sparse count of these birds in this huge island full of trees, in which they counted many thousands of the other birds. But they were even able to pull those very unique individuals out of that data set using RAIC. It seems that these birds have adapted to say, “Well, I guess the rats are going to eat our eggs on the ground.” So, they moved into the trees, is what it looks like.

There’s a really good blog post on The Nature Conservancy’s website about using Synthetaic, using RAIC, for that activity. I guess the reason I think that was interesting is that it speaks to this bigger problem in AI, where you train an AI model for, let’s say, a self-driving car. You train an AI model and say, “Okay. I’ve got to avoid pedestrians and bicycles, and all this sort of stuff.” But if it’s never seen a horse that runs out into the middle of the road, how does it respond to that? It’s not necessarily something you’ve trained the model on, so what is a horse? Is it even a thing? It doesn’t even register, right?

With a tool like RAIC, you’re not locked in, because it’s not a frozen model. You’re not locked into just the things that you thought you were looking for. You’re also able to go pretty lateral from that and find other things, even if they’re in very small quantity. That was one that we’re really proud of. We have an AI for impact group here at the company, whereby we do low-cost or no-cost work with nonprofits that are doing work that we think will benefit from RAIC and be impactful.

There are two other examples of that that I’ll share. One is, we worked with the United Nations World Food Programme search and rescue group. They started using drones instead of just using helicopters for search and rescue, because helicopters are super expensive and they’re not on station all the time. When there’s a disaster, it’s easier to get a bunch of drones up than it is to get helicopters up. But the challenge is, in a helicopter, you have people sitting in the helicopter who can look with their eyeballs or binoculars. You get a pretty good view of where there might be people that need rescue. With drones, you’re looking through a screen the size of an iPhone, or an iPad, or something like that. And you’re looking at really high-resolution imagery, 4K imagery, as the drones fly over. If you’re flying high enough to get good broad-area coverage, you often can’t actually see the people that you’re looking for on that pilot screen.

What the UN World Food Programme came to us to ask is, could we use RAIC, first in a trial mode, and then in an exercise in the field, to try to find people in a search and rescue environment. Our first exercise was in Mozambique, where there was, I believe, a typhoon that came in and destroyed a village. You can imagine the terrain: palm trees are down, there’s lots of colorful debris and plastic, and blown apart houses and huts, and things like that. There are people carrying buckets with their belongings, and water, and things like that on their heads. It’s a very unusual sort of scene for the AI model they had, which was trained to find people in aerial imagery. I think a lot of the data for that model was, say, from Southern California.

You’ve got this model that you trained to find people from drone imagery, and it does great. Now you switch domains even slightly. Now you’re in Mozambique, and there’s debris everywhere, which there wasn’t in your training set. People are wearing brightly colored clothing in that area quite commonly, in a way that wasn’t the case as much in Southern California. They’re partly obscured by things on their heads, and things like that, not something that you saw in that other domain.

What we found there is, in order to get their existing AI model to perform on that data set, it ended up taking 12 people, I think four to six weeks of labeling, just to get that model to a decent performance on that data set. Conversely, they used RAIC, and were able to find the lion’s share of all the people in that drone data set, or a large body of drone data sets, immediately, within a few minutes of pulling that data set in.

The reason I think that’s an interesting problem is it speaks to this domain drift problem. Frozen AI models are just that. Frozen means it doesn’t adapt to a new domain. The way I think about this domain problem is this. My daughter just had her first baby, and so now, I have a grandson. He’s still only about six months old, so we’re not looking at too many picture books or the like yet. But imagine a baby that has only seen cartoon pictures, and coloring books, or something like that, of monkeys. If you take that kid to the zoo, and you show the kid a monkey at the zoo, that kid isn’t going to be like, “What on earth is this? I have no idea.”

From that instance of seeing these cartoons, these images of monkeys, they’re going to know right away that’s a monkey. We’re super great as humans at being able to abstract, and then pull back from an abstraction to a very concrete representation. The thing is, AI sucks at that, and that is a really big problem, because the domain changes all the time. In this case, an AI model that was great, that was excellent at finding people in the domain it was trained in, was an absolute failure at finding people in the domain where they needed it.

I’ll also mention, we did a follow-up exercise with the UN folks in England, where we were doing search and rescue, but it was staged, so people were actually staged as rescue victims in different scenarios. Not only did we find all the people, we also found the people who were staging things, who were all dressed in camouflage, with RAIC as well. Again, instantly, keying in on the drone operator’s first image of a person, and then running that, looking for similarity across all the data in RAIC space.

Yes, another one that we did last year was a program called HETDEX. This is a program by the University of Texas at Austin. They’re building the largest map of the cosmos ever, and doing that to try to uncover the nature of dark energy. I think everybody’s heard of dark matter, which we think – which the world thinks – makes up most of the universe, this mass that we can’t really see or visualize. But there’s another thing called dark energy, and dark energy is basically the driving force behind the expansion of the universe. We don’t understand it, nobody understands it, but we do know the universe is expanding, and it needs to be understood in order to correlate that expansion rate to what’s really going on.

They’re doing the largest experiment to try to figure that out. However, their sensor data has a whole bunch of different types of noise in it. These noise artifacts can basically throw off their measurements and confuse the situation. The noise is very unusual. There are these different characteristics of noise that look enough like the signal they’re looking for that it wasn’t really viable for them to build an AI model to try to extract it. What they did instead is use RAIC, and showed it some examples, a couple dozen examples, of what that noise looks like. RAIC was then able to go through that space and effectively carve out regions in the RAIC mathematical space for what is noise, or has a high likelihood of being noise. They were able to more than double the amount of data that was actually usable, and it’s gone a long way to fuel their continued investigations. There’s a good blog post on the Synthetaic website, in collaboration with the HETDEX folks, about that one as well.
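As a rough illustration of the “carve out regions in embedding space” idea, here is a hedged sketch of one simple way to do it; it is not the HETDEX team’s or Synthetaic’s actual method. Given a couple dozen embeddings of known noise artifacts, any detection whose embedding sits close to them gets flagged, and the similarity threshold shown is an assumed value that would need tuning in practice.

```python
# Hedged sketch: flag detections whose embeddings fall near known noise examples,
# so the corresponding data can be excluded from downstream analysis.
import numpy as np

def flag_noise(candidate_vecs, noise_vecs, threshold=0.85):
    """Return a boolean mask marking candidates that resemble known noise.

    candidate_vecs: (n, d) unit-normalized embeddings of all detections
    noise_vecs:     (m, d) unit-normalized embeddings of ~20 labeled noise examples
    threshold:      cosine similarity above which a candidate is called noise
                    (assumed value; would be tuned on held-out data in practice)
    """
    sims = candidate_vecs @ noise_vecs.T          # (n, m) cosine similarities
    return sims.max(axis=1) >= threshold          # nearest-noise-example rule

# Hypothetical usage with embeddings produced by an ingestion step like the one above:
rng = np.random.default_rng(0)
cands = rng.normal(size=(1000, 512)); cands /= np.linalg.norm(cands, axis=1, keepdims=True)
noise = rng.normal(size=(24, 512));   noise /= np.linalg.norm(noise, axis=1, keepdims=True)
mask = flag_noise(cands, noise)
print(f"flagged {mask.sum()} of {len(mask)} detections as likely noise")
```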

[0:22:18] HC: Those are all very interesting and very diverse applications too. That’s very cool to see.

[0:22:22] CJ: Thanks.

[0:22:24] HC: Is there any advice you could offer to other leaders of AI-powered startups?

[0:22:27] CJ: Yes, a few. I mean, I think the first is, challenge the normal assumptions. We’ve all been doing AI the same way for a very long time, and really, in a lot of ways, if you think about it, the name AI itself is one of the best marketing wins ever. Right back in the fifties, and even earlier, people started calling it artificial intelligence, when really, it was a bunch of linear algebra and matrix math. When I was going to grad school at MIT, in electrical engineering and computer science, and studying AI, I almost dropped out of the AI program. Because I thought, “Well, I mean, really, we’re just doing linear algebra. I mean, that’s really what this is, right?” I almost totally stopped taking AI courses and went into pure mathematics.

It’s that realization, when you as an AI leader have the humility to kind of step back and say, “Yes, we’re kind of still really doing that.” Except for convolutions and transformers, there are some good new add-ons in the last few years, but by and large, we’re training image and video-based AIs the same way. We need a paradigm shift that really allows AI to be the force multiplier that it can be. It’s still really expensive to build image and video AI models. A lot of the groups that need this, that can use it as a force multiplier, whether industry, whether defense and intelligence, or whether the impact-based organizations that we’re working with as well, they can’t afford the traditional way to do it. It’s our responsibility as AI practitioners to try to find other ways to still get those things done. RAIC is one of them, but there are other tools in the toolbox that other people are going to come up with.

My challenging goal for all of the AI industry is, let’s not have image-based AI, with an awful lot of the same object detection, segmentation, and classification, look pretty much the same as it did five years ago. Now, we’re getting better. We’ve got a couple new algorithms that are better, transformers are better, and so on, and so forth. I hope the next five years doesn’t look at all like that gap. I hope it looks like a much bigger leap, where we’re kind of rethinking how we can do AI.

One other point that I’d make to other AI leaders is that, these days, there’s a lot of talk and concern about legislation around AI. I think people kind of fall into two camps. One is either, A, “Hey, don’t legislate anything until we get this figured out.” Or B, “Legislate the heck out of everything and put all these rules in place.” It’s an interesting place to be. I think that as AI leaders, everyone needs to be engaging in this conversation. It’s a tricky one. You don’t want to legislate things so heavily that China and Russia become the leaders in AI, because they’re not going to regulate it. But it is a genie that you can’t put back in the bottle, either. These AI models that we’re building, these new capabilities like large language models and generative AI that can generate images, and now videos, have a meaningful impact on society. I think we owe it to all of our partners, and AI legislators, and the like, to be really super engaged in those conversations.

[0:25:13] HC: Finally, where do you see the impact of Synthetaic in three to five years?

[0:25:16] CJ: Oh, good question. In three to five years, as you noted when I talked about the diversity of some of the different projects that we worked on, we don’t actually see RAIC as a single-domain, or even a two- or three-domain, tool. What we see it as is the internal combustion engine that can fuel lots of different kinds of vehicles. Where I see Synthetaic in the next three to five years is really planting the flag on a new way of doing AI that’s not right for every AI project, but in the ones it is, it can really move things forward. And really expand what’s possible in artificial intelligence, in the sense that it makes it more affordable, makes it quicker, and you can find answers on a tactical timeframe. I hope that Synthetaic is a key part of growing that domain, and making it more accessible, and more available to more customers, and nonprofits, and organizations that need it.

[0:26:04] HC: This has been great, Corey. I appreciate your insights today. I think this will be valuable to many listeners. Where can people find out more about you online?

[0:26:11] CJ: Well, for starters, you can go to synthetaic.com, check us out, and see what we’re doing there. We’ve got a pretty good blog and a list of a lot of the press around things like when we found and traced the Chinese balloon back to its launch site on an island, and some other applications like that. Then, also, we’ve had some really good partner blog posts with The Nature Conservancy, and the UN, and places like that as well. So, checking out Synthetaic there would be a great place to start.

[0:26:33] HC: Perfect. Thanks for joining me today.

[0:26:35] CJ: You bet. Well, thanks for having me, Heather. Appreciate the conversation.

[0:26:37] HC: All right, everyone. Thanks for listening. I’m Heather Couture, and I hope you join me again next time for Impact AI.

[OUTRO]

[0:26:47] HC: Thank you for listening to Impact AI. If you enjoyed this episode, please subscribe, and share it with a friend. If you’d like to learn more about computer vision applications for people and planetary health, you can sign up for my newsletter at pixelscientia.com/newsletter.

[END]