In this episode, I talk with Gershom Kutliroff, CTO of Taranis, about precision agriculture. Taranis uses computer vision to monitor fields, providing critical insights to growers. Gershom and I talked about how they gather and annotate data, the challenges they encounter in working with aerial imagery, and how they validate their models and accommodate data drift with continuous learning.

Quotes:
“Taranis is using drone technology to capture imagery and then use AI to process that imagery to understand what's happening in growers' fields.”

“It becomes increasingly difficult to maintain consistent quality levels if you're working with tens or even hundreds of annotators. But when you have AI models, then you have the ability to control the quality of the insights that you're generating.”

“There's a lot of discussion in the last few years in the AI space about data-centric versus model-centric. Model-centric would be the case where in your development you invest a lot in choosing the right architecture that optimizes your performance, gives you the best results for your models, or spending a lot of time with hyperparameters and that type of work. And data-centric is you spend a lot more time making sure that your dataset is clean, that it's balanced, and that you've got the right number of classes for the problem that you're trying to solve.”

“We struggle with the problem of long tail distributions. If I take diseases as an example, there are some diseases that can cause a lot of damage to the crops. But they're very rare in terms of how often they actually occur in growers' fields.”
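One common way to counter a long-tail class distribution like the rare-disease case Gershom describes is to oversample rare classes with inverse-frequency weights. The sketch below is illustrative, not Taranis's actual pipeline; the class names and weighting scheme are assumptions.

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Return one sampling weight per example, inversely proportional
    to its class frequency, so rare classes are drawn more often when
    building training batches."""
    counts = Counter(labels)
    return [1.0 / counts[label] for label in labels]

# Toy dataset: a rare disease class ('rust') next to common healthy crops.
labels = ["healthy"] * 8 + ["rust"] * 2
weights = inverse_frequency_weights(labels)
# Each rare example now carries 4x the weight of a common one.
```

These weights can be passed directly to samplers such as PyTorch's `WeightedRandomSampler`; loss reweighting is a common alternative when resampling is impractical.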

“Because we're running our own operations and so we're flying our own drones, we've also invested in the software that's running on the drones when we're flying. So the images the drone pilot captures in the field are validated in the field. We have algorithms running on the edge to be able to check the quality of those images. And then if the images are not the quality that we expect them to be, the pilot knows while he's still there at the field and he can fly again.”
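An edge-side quality gate of the kind described above often starts with a sharpness check, for example the variance-of-Laplacian heuristic. This is a minimal sketch of that idea; the threshold and the NumPy-only implementation are assumptions, not Taranis's actual on-drone algorithms.

```python
import numpy as np

def laplacian_variance(gray):
    """Variance of a 4-neighbor Laplacian response on a grayscale
    image; low values indicate little high-frequency detail (blur)."""
    lap = (
        -4.0 * gray[1:-1, 1:-1]
        + gray[:-2, 1:-1] + gray[2:, 1:-1]
        + gray[1:-1, :-2] + gray[1:-1, 2:]
    )
    return float(lap.var())

def passes_quality_check(gray, threshold=50.0):
    """Accept the capture only if it is sharp enough to annotate.
    The threshold here is illustrative; in practice it is tuned on
    labeled sharp/blurry examples."""
    return laplacian_variance(gray) >= threshold

rng = np.random.default_rng(0)
sharp = rng.uniform(0, 255, size=(64, 64))   # high-frequency detail
blurry = np.full((64, 64), 128.0)            # flat frame, no detail
```

A pilot-facing app would run a check like this per capture and prompt an immediate re-fly on failure, which is much cheaper than discovering bad imagery after leaving the field.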

“For a lot of the models that we use you really need domain experts. You really need trained agronomists who can look at these images.”

“A certain percentage of all of the missions that we've flown are sent for review by our in-house agronomists before we release them to customers. So that's a really critical piece of how we do validation, and that also gives us a high level of confidence internally that the product that we're releasing to our customers stands by the quality that we expect it to.”

“We do suffer from this type of data drift where the data that we're seeing in production is not exactly in the same distribution as the data that we used to train. So the most effective technique that we've seen is to implement some kind of a continuous learning type of framework whereby we are able to take data that we're capturing in production, so when we're actually live and servicing our customers' fields. And then the data that doesn't have a good correspondence with the distribution of the training data that was used for the models, we can then filter that data out. We can extract that data and use it to quickly retrain the models, to adapt the models, and then deploy those models back into production.”
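The filtering step in that continuous-learning loop can be sketched as an out-of-distribution test against training statistics. Below is a deliberately simple per-feature z-score version, assuming feature vectors (e.g. model embeddings) are available; production systems typically use richer density estimates, and nothing here is Taranis's actual method.

```python
import numpy as np

def flag_for_retraining(train_feats, prod_feats, z_thresh=4.0):
    """Flag production samples whose features sit far from the
    training distribution, as candidates for labeling and retraining.

    Computes per-feature z-scores against the training mean/std and
    flags a sample if any feature exceeds the threshold."""
    mu = train_feats.mean(axis=0)
    sigma = train_feats.std(axis=0) + 1e-8  # avoid divide-by-zero
    z = np.abs((prod_feats - mu) / sigma)
    return z.max(axis=1) > z_thresh

rng = np.random.default_rng(1)
train = rng.normal(0.0, 1.0, size=(500, 4))     # training-time features
in_dist = rng.normal(0.0, 1.0, size=(5, 4))     # production, same distribution
shifted = rng.normal(8.0, 1.0, size=(5, 4))     # clearly drifted samples
flags = flag_for_retraining(train, np.vstack([in_dist, shifted]))
```

The flagged samples would then be routed to annotators (in Taranis's case, trained agronomists), added to the training set, and the adapted model redeployed, closing the loop described in the quote.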

“The company started by offering a product based on manual tagging, which didn't have any AI technology at the beginning. That allowed it to offer products, service customers, and start building this very rich database that we leverage now.”

Links:

Resources for Computer Vision Teams:

LinkedIn – Connect with Heather.
Computer Vision Insights Newsletter – A biweekly newsletter to help bring the latest machine learning and computer vision research to applications in people and planetary health.
Computer Vision Strategy Session – Not sure how to advance your computer vision project? Get unstuck with a clear set of next steps. Schedule a 1-hour strategy session now to advance your project.
Foundation Model Assessment – Foundation models are popping up everywhere – do you need one for your proprietary image dataset? Get a clear perspective on whether you can benefit from a domain-specific foundation model.