I am excited to welcome the Head of AI at Precision AI, Amr Omar. Precision AI applies drone technology to agriculture, delivering precise herbicide doses only where they are needed.

In today’s episode, Amr explains why this technology is crucial for the future of farming and how machine learning factors into the process. From being at the mercy of the weather to struggling to distinguish crops from weeds at the seedling stage, there are many challenges involved in using drones and AI technology for farming, and my guest lays them all out whilst explaining the solutions that he and his team have come up with.

You’ll learn about Amr’s process for developing new machine learning products and features, the non-negotiables he prioritizes in state-of-the-art reviews, what he looks for when building a successful team, his advice for other AI startup leaders, and so much more!

Key Points:
  • Introducing Amr Omar as he explains how he ended up as Head of AI at Precision AI.
  • What Precision AI does and why this work is so important for farming. 
  • The role of machine learning in Precision AI’s technology. 
  • Challenges that arise when using drones for farming, and how Amr’s team overcomes them. 
  • How Amr makes the drone models generalizable without compromising constraints like real-time speed.
  • A look at his process for developing a new machine-learning product or feature. 
  • What Amr and his team look for and prioritize when doing state-of-the-art reviews.
  • The approaches to recruiting and onboarding that have been successful in building his team. 
  • How he measures the impact of his drone technology: the field test. 
  • Amr shares some advice for other AI-powered startup leaders.  
  • How he sees Precision AI impacting the market in the next three to five years. 

Quotes:
“What we offer here at Precision AI is spraying only what needs to be sprayed, at real-time speed, using drones to kill the unwanted vegetation, which eventually saves a lot of money. At the same time, it increases the value of the crops coming out of that process.” — Amr Omar

“The flexibility to pivot within the development of a certain feature is what empowers any team that's developing AI-driven applications or products to scale and succeed without facing any unexpected challenges.” — Amr Omar

“Most [problems have] solutions. It's just about how much you are willing to invest in that solution versus the value you're going to get out of that.” — Amr Omar

“We all have this dream big mentality at Precision AI, from the leadership to the junior engineers. We all wish to make something big happen with what we are doing. To be able to achieve that, you need a team of believers.” — Amr Omar

“Bet on the process [and] not on the product while working in machine learning or in AI-driven teams. The process is way more important than the product. The product will come at the end of the day.” — Amr Omar

Links:

Resources for Computer Vision Teams:

LinkedIn – Connect with Heather.
Computer Vision Insights Newsletter – A biweekly newsletter to help bring the latest machine learning and computer vision research to applications in people and planetary health.
Computer Vision Strategy Session – Not sure how to advance your computer vision project? Get unstuck with a clear set of next steps. Schedule a 1-hour strategy session now to advance your project.


Transcript:

[INTRO]

[00:00:03] HC: Welcome to Impact AI, brought to you by Pixel Scientia Labs. I’m your host, Heather Couture. On this podcast, I interview innovators and entrepreneurs about building a mission-driven, machine learning-powered company. If you like what you hear, please subscribe to my newsletter to be notified about new episodes. Plus, follow the latest research in computer vision for people and planetary health. You can sign up at pixelscientia.com/newsletter.

[INTERVIEW]

[00:00:34] HC: Today, I’m joined by guest Amr Omar, Head of AI at Precision AI, to talk about drones for farming. Amr, welcome to the show.

[00:00:42] AO: Thank you, Heather. Thank you for having me.

[00:00:45] HC: Amr, could you share a bit about your background and how that led you to Precision AI?

[00:00:48] AO: Yes, sure. So my name is Amr Omar. I’m originally from Egypt. I studied computer and systems engineering and graduated in 2013, so I’ve been in the industry for almost 10 years. I think it all started when I was in college. I’ve had a passion for AI since then. AI was still not as famous as it is right now; in 2013, it was just starting, I would say. So, yes, since then, I started to work on projects related to AI. I actually worked for some research centers that were also building software based on AI.

Yes, I was lucky enough to get exposure to different stacks and different technologies, from products coming out of startups to mid-sized companies to big players in the field like Siemens and Microsoft, who were also among my previous employers. Eventually, I joined Precision AI around three years ago.

[00:01:49] HC: So what does Precision AI do, and why is this important for farming?

[00:01:53] AO: So pretty much, what we are doing, in a nutshell, is that we fly drones that spot unwanted vegetation and spray it with herbicide on the spot, in a real-time manner. That is the stage we are at right now. The value proposition we are providing is that, by relying on precision spraying, we are cutting costs and enhancing the quality of the crops being cultivated by a big magnitude.

The process of herbicide spraying right now is simply broadcasting, which means that on a big farm, especially large-acre farms, you have to go and spray the whole field, soil, crop, and unwanted vegetation all the same, which costs a lot of money and a lot of unnecessary herbicide. What we are offering here at Precision AI is spraying only what needs to be sprayed, at real-time speed, using drones to kill the unwanted vegetation, which eventually saves a lot of money. At the same time, it increases the value of the crops coming out of that process.

[00:03:03] HC: What role does machine learning play in this technology?

[00:03:06] AO: Yes. Simply, our system relies on machine learning to do the vegetation detection, which we call green-on-green and green-on-brown. What happens is that we rely on an artificial intelligence model to spot the weed and discriminate it from the crop. By weed here, I mean any unwanted vegetation; in farming, a weed is any unwanted vegetation growing in a certain crop.

So the AI model, or let’s say the AI system, because it’s a bigger system than the model and the model is just one element, is what powers the whole process of the drone seeing and spotting the unwanted vegetation. Then it drives the decisions about what should be sprayed and what should not.

This is all happening in real time, which means the most important aspect here is being able to process this large amount of data, high-resolution images, and make very, very fast decisions without needing to communicate with the internet or rely on the cloud. That’s what we call accelerated AI, or AI on the edge, because it’s happening in real time without the need to connect to the internet.
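To make the edge-inference idea concrete, here is a minimal sketch of what running a detection model on-device, frame by frame, might look like. It is purely illustrative: the episode does not describe Precision AI’s actual stack, so the ONNX model file, frame source, and nozzle interface below are hypothetical placeholders.

```python
# Hypothetical sketch of on-device (edge) inference; model file, camera, and
# nozzle interfaces are placeholders, not Precision AI's actual pipeline.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("weed_detector.onnx")  # model stored on the drone itself
input_name = session.get_inputs()[0].name

def spray_decisions(frame: np.ndarray) -> np.ndarray:
    """Run one high-resolution frame through the on-board model and return
    a boolean spray / no-spray mask; no cloud round-trip is involved."""
    x = frame.astype(np.float32)[None]              # add a batch dimension
    (weed_scores,) = session.run(None, {input_name: x})
    return weed_scores > 0.5                        # threshold into nozzle signals

# while drone.is_flying():                          # pseudocode flight loop
#     mask = spray_decisions(camera.next_frame())
#     nozzles.actuate(mask)
```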

[00:04:22] HC: So based on the machine learning that you’re applying to this drone imagery, what kind of challenges do you run into? You mentioned the high resolution and needing to run on the edge. But are there other challenges that come up, and how do you handle them?

[00:04:34] AO: Yes. There are a lot of challenges, actually. Most of the time, people who are not experienced in vegetation detection look at the problem from the perspective of how it compares to autonomous cars, for example, because that is a very famous example of real-time computer vision in the real world right now. But to have a similar real-time detection for vegetation, make the kinds of decisions we are making, and fly autonomous machines at the speed we are flying right now, there are a lot of challenges.

One of them is the effect of the wind on the drone, and the weather in general. When a drone flies in a certain field, it is affected by a lot of parameters that are completely out of our control. The drone is an autonomous machine, so it itself requires some sort of intelligence to be able to maneuver around obstacles and do that terrain-following safely without causing any damage to the land or to people who might be there. The effect of the weather on the drone is really challenging, and it takes a lot of work to guarantee the safety of the process and the efficiency of the overall flight session that leads to these spraying decisions happening in real time.

Another challenge we face is that when you are capturing images from a drone looking down at the land, if the crops being cultivated and the weeds are at a very early growth stage, they wouldn’t really be very distinguishable in their physical characteristics if the drone is flying very high. So we need to adapt to these different altitudes and growth stages and, on the other hand, empower the machine learning model to generalize across different growth stages.

It also needs to work in challenging conditions and with challenging image quality, for example, when the sky is very dark, when the sun is extremely bright, or when there is a shadow from a passing cloud. All of these are challenges we face with drone imagery, and we need to deal with them from the perspective of the computer vision input itself, the image and how it’s fed to the rest of the system, and, on the other hand, from the model itself, so that it can generalize, adapt, and perform almost consistently in all kinds of environments.
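One generic way to build that kind of robustness into a model is photometric data augmentation during training, simulating dark scenes, harsh sun, and cloud shadows. The sketch below uses the open-source albumentations library purely as an illustration; the specific transforms and probabilities are assumptions, not a description of Precision AI’s training pipeline.

```python
# Illustrative lighting/shadow augmentation recipe; the transforms and
# probabilities are assumptions, not Precision AI's actual settings.
import albumentations as A

train_augmentations = A.Compose([
    A.RandomBrightnessContrast(brightness_limit=0.4, contrast_limit=0.3, p=0.7),  # dark skies vs. bright sun
    A.RandomShadow(p=0.3),       # shadow from a passing cloud
    A.RandomSunFlare(p=0.1),     # extreme glare
    A.GaussNoise(p=0.2),         # sensor noise in low light
])

# augmented = train_augmentations(image=raw_image)["image"]  # applied per training image
```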

[00:07:12] HC: So generalizability is a really big challenge, as you mentioned. This comes up in a lot of different areas of machine learning and computer vision. The ones you mentioned are the weather conditions changing, the lighting changing, the crops growing, things like that.

[00:07:28] AO: Yes.

[00:07:29] HC: How do you make your models generalizable to all these different things? How do you tackle it both on the imagery and the model side?

[00:07:35] AO: This is actually the question for any company working with machine learning. How can I make my model diverse enough and generalized enough without sacrificing the other constraints I’m trying to account for, like real-time speed and so on? There are a lot of ways to build a well-generalized model, and a lot of architectures that would empower a successful machine learning model to generalize to unseen scenarios.

However, the real challenge is to do that without compromising the speed. In our case, this is extremely challenging, not least because we are trying to deal with vegetation. The analogy here would be trying to find the differences between all-white cats. If you are trying to find the differences between different animals, that’s an easier problem. If you’re trying to find the differences between cats, that’s a more challenging problem. But if they are all white cats and you are still trying to differentiate between them, that’s a real challenge for computer vision.

To be able to generalize to different environments, you first have to guarantee that you have a diverse representation of the overall environment you are trying to generalize to. That’s why we have what we like to call multi-stage testing and model scoring. It’s a scoring system that we use to move the model’s readiness from stage to stage. When the model starts to be developed, it’s usually just a custom model based on custom data. Then the model goes through a lot of rounds of enhancements, augmentation, and tests, whose feedback tells us the areas where the model is still failing. We then use that iterative feedback and staging process to develop the model further or move it to the next readiness stage.
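As a rough illustration of what a stage-gating score might look like, here is a small sketch. The stage names, metrics, and thresholds are hypothetical; the episode does not specify how Precision AI’s scoring system is actually defined.

```python
# Hypothetical model-readiness gate: score by the worst environment so a single
# failure mode (e.g. heavy shadow) blocks promotion instead of averaging away.
from dataclasses import dataclass

@dataclass
class EnvResult:
    environment: str    # e.g. "bright_sun", "cloud_shadow", "early_growth"
    recall: float       # fraction of weeds correctly flagged for spraying
    precision: float    # fraction of sprayed regions that were actually weeds
    latency_ms: float   # per-frame inference time on the edge device

STAGE_THRESHOLDS = {"prototype": 0.70, "field_candidate": 0.85, "release": 0.95}

def readiness(results: list[EnvResult], max_latency_ms: float = 50.0) -> float:
    """Return the worst-case score across environments; the real-time budget
    is treated as a hard constraint rather than something to trade off."""
    scores = []
    for r in results:
        if r.latency_ms > max_latency_ms:
            return 0.0
        scores.append(min(r.recall, r.precision))
    return min(scores) if scores else 0.0

def next_stage(results: list[EnvResult], current: str) -> str:
    """Promote the model only when its readiness clears the next stage's bar."""
    stages = list(STAGE_THRESHOLDS)
    i = stages.index(current)
    if i + 1 < len(stages) and readiness(results) >= STAGE_THRESHOLDS[stages[i + 1]]:
        return stages[i + 1]
    return current      # failing environments guide the next round of data collection
```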

In a nutshell, I would say it is, of course, data diversity and guaranteeing enough representation of the physical environment you’re trying to operate in successfully. This is key, of course. That’s why we are collecting a lot of data. We’re investing a lot in data collection to guarantee that the quality we are promising is verified using actual data from different environments, with diverse weather conditions and enough variety to capture all the challenges we might face.

Of course, we also rely on some architectural tricks, I would say, that help such models generalize. Inside our model, we have implemented some custom modules that work efficiently, especially for the vegetation detection problem, to help us generalize to different environments. Even if we haven’t seen a physical environment or don’t have enough representation of it, we can at least gain enough confidence that the model has a good starting point to operate in that environment.

[00:10:40] HC: Thinking back to the beginning of a project or when you’re developing a new machine learning product or a new feature, what are the first steps your team takes to plan it out?

[00:10:51] AO: Well, Heather, I think the first step we usually take is to set the expectations. I would say that’s actually the key element in a successful AI-driven team or company, in my opinion, because machine learning is very big. It’s very mysterious and surrounded by a lot of assumptions. When there’s not enough information, people tend to make assumptions. When people don’t know exactly what to expect from a system, they might over-promise or over-expect.

So usually, what happens is that when we have either a business request or a new challenge internally that we would like to tackle, we start with setting the expectations and thoroughly understanding the request. That’s a step that takes a lot of effort, I would say, but once the expectations are set, everything else becomes easier. Without the expectations being set, there is a lot of risk of developing a feature that might be cool but isn’t what was requested, right? So we start with that.

Then we usually move to the state-of-the-art review. We try to capitalize on other people’s great work and study what they have invented and what they have reached so far, whether in the area of vegetation or in computer vision in general. Then we start doing fast prototyping of our ideas to verify that we have a potential improvement, a potentially successful feature, or a fulfillment of a business request using the approach we are taking. Everything else from that point is an iterative process of development, feedback, and pivoting.

In machine learning, pivoting is very common. I would say the flexibility to pivot within the development of a certain feature is what empowers any team that’s developing AI-driven applications or products to really scale and succeed without facing unexpected challenges, because machine learning is all about trying to control the unknown. It’s usually a black box. You’re trying to train a model, but you don’t really have enough control over every single operation that’s happening inside that large black box.

So the flexibility to pivot from one architecture to another, from one design to another, and to limit the vagueness and lack of clarity about the problem and turn it into action items, that’s usually what enables us to come up with new features and validate them very fast.

[00:13:30] HC: I want to dig a bit deeper into that state-of-the-art review step that you mentioned, because it’s a very important step and one that I think some teams perhaps don’t put enough effort into, even though it could give them some important insights before they get further into a project. So on your team, when you’re doing this review, what exactly goes into it? What types of things are you reviewing? Is it just the academic literature? Or is it code bases as well? How long do you spend on it? What do you learn from it, typically?

[00:14:04] AO: Yes, I see. So the review process, by our definition, definitely has a deliverable, and the deliverable we usually expect from that stage is, I would say, two things. The first one is settling on the best approach: after reviewing the theoretical architectures by reading academic papers, for example, we choose the one that has the highest chance of working well on our problem.

The second is that we have to try it. The process doesn’t stop at the literature review stage, because we know the devil is in the details. Once you try something, you might hit a whole new challenge that you were not aware of, which might mean reiterating and resetting the expectations before you even start working on the feature or the enhancement you are hoping to finish.

The state-of-the-art review here is defined by its deliverable, and that deliverable is some sort of proof that we applied the approach and that it has potential. That proof doesn’t have to be a highly accurate output or a high-confidence result, just some sort of evidence that we are moving in the right direction. Without that proof, we usually don’t consider the process done. We reiterate and go as deep as we can to come up with that proof before moving to the next phase.

However, just like any other problem, sometimes you might not find enough proof in the time allowed for the investigation. So if a project starts to look like it’s going off track, we usually move in a different direction before it drags on too long.

[00:15:55] HC: So, I think the key there is that if it turns out the problem you’re trying to solve isn’t feasible or doesn’t make sense with the resources you have, at least you learn that early, and you can pivot. Or if it does turn out to be feasible, you’ve learned that from the beginning. You’ve learned maybe a baseline approach, and that gets you moving forward with the rest of the project in a much more efficient fashion.

[00:16:21] AO: Yes, exactly. As you just mentioned, most problems have solutions. It’s just about how much you are willing to invest in that solution versus the value you’re going to get out of it, right? We try to get that information as early as possible and make the subsequent decisions based on it.

[00:16:39] HC: So another large challenge for machine learning teams right now is hiring. Certainly, the economy has changed throughout this year, but machine learning is still very much in high demand and a hard field to hire for. What approaches to recruiting and onboarding have been most successful for your team?

[00:16:56] AO: Yes, that’s a question that requires a bit of a long answer, because at Precision AI, we have this philosophy of not just hiring the most talented engineers and developers. We are also looking for really, really ambitious owners who aspire to leave a mark on the world, right? We all have this dream big mentality at Precision AI, from the leadership to the junior engineers. We all wish to make something big happen with what we are doing.

To be able to really achieve that, you need a team of believers, not just efficient engineers. On the other hand, you also want to rely on a limited number of highly effective, high-caliber people who are very efficient and very talented at what they do. This is basically the mentality we have at Precision AI. To do that, we have, I would say, a fairly involved interviewing and hiring procedure. We usually start with a coding test. The coding test is meant to confirm that the person is qualified in the basics, that they have the minimum of what it takes to do this work.

That matters especially because a lot of people working in the machine learning and artificial intelligence area tend to treat software engineering, solid coding principles, and efficient, successful architecture lightly nowadays, relying only on scripting and simpler tools and simpler programming languages so they can focus more on the problem and less on the code itself. But at Precision AI, we try to care about both, because we are doing something critical here. We are flying drones at a very high speed, and we are embedding these drones with a machine learning engine that spots weeds on the fly and sprays them by signaling nozzles.

So there are a lot of risks here. If the code is not efficient or the code is risky, that’s not something that can be taken lightly. So we start with the test to guarantee that whoever works at Precision AI has the basic requirements of what we need in an AI engineer on the team. Then we adopt an interview loop method, a loop of four interviews on average, where each candidate gets to meet different team members: seniors, juniors, tech leads, and so on. It’s actually beneficial for both sides. It gives the candidate exposure, kind of a trip inside the company mentality.

On the other hand, we get collective feedback about the candidate from different perspectives and different experience levels as well. Then, when there is a candidate we all feel is a good fit for our culture and our goals, who has enough talent and technical expertise to help us build what we build, and who also has what it takes on the personal side, a trustworthy, accountable person, someone who really wishes to make this work and make it a success, that is usually the candidate we choose.

So it’s not an easy job. We usually take a lot of time to find the right talent. However, it’s time worth taking, because the handful of highly impactful people we rely on is what guarantees our product will succeed.

[00:20:35] HC: How do you measure the impact of your technology?

[00:20:38] AO: The real answer to that question is that it will be when we finally see the drones supporting large-acre farms and farmlands, resulting in a more efficient food and crop supply chain. That is the best way to measure the impact of the technology, because then we will know the farmers are happier, and the crops coming from the farms sprayed by Precision AI are higher in quality and more cost-efficient, which makes everybody happy, the farmers and the end users. It also encourages more investors to invest in the field and opens the door for something bigger.

However, in the short term, because we can’t really rely on that kind of end goal to measure whether we succeeded or not, we have to have some sort of lead goals, right? What we do is contribute to different conferences, fairs, venues, and events where actual farmers can come, survey the technology, and even participate in a trial run.

One of the ways we measure our success rate year by year is what we call the field test, where we pack our things and physically go to fields in North America. Sometimes it’s in the States; sometimes it’s in some Canadian provinces. We try our technology by flying the drone and seeing how efficient we are this year compared to last year. That’s an important aspect of how we progress, because each year we fly, we are faced with challenges that we need to work on and improve. This constant enhancement is what drives the whole thing, right? Because we can do all we want in the lab and test on the images we have.

But the field test is the key thing that tells us this is the real thing. This is something that, when a farmer or a farming company starts to use it, is safe, is efficient, and adds actual value for everyone. Because at the end of the day, no farmer or farming company will just invest in a cool project or a cool product. They will buy it because it’s efficient, because it saves money and increases the quality of the product.

So the field test is one of the most important tests that measure our technology’s impact. We see how good we are doing in detection, how good we are doing in spraying, and how good we are doing in chemical savings. Yes, that’s usually it.

[00:23:20] HC: Is there any advice you could offer to other leaders of AI-powered startups?

[00:23:24] AO: I wouldn’t say it’s advice, but I can share with you the lessons I’ve learned over time working in this field. I think the most important lesson I’ve learned over the years is to bet on the process, not on the product, while working in machine learning or on AI-driven teams. The process is way more important than the product. The product will come at the end of the day.

But even though everyone right now can do machine learning and create things with machine learning, not everyone can guarantee the consistency, the safety, and the actual value proposition coming out of these products. So I would say the only way to be confident that you are doing something correct, flexible enough to pivot at any time toward something new, and able to adapt to the very fast-moving technologies we are dealing with right now, is to rely on a very solid, efficient process and invest in that process. The process will guide you. The process will save you. That’s what I’ve learned so far.

Also, build some sort of iterative process of educating the customers while measuring the impact, not just measuring the impact from your own perspective, because artificial intelligence, like anything new, is a revolutionary thing, right? So it needs some education to set the expectations of what to expect from a product or a system or a drone that uses artificial intelligence.

After all, it’s not a human being doing this, so you need to set the expectations of what to expect. Setting the expectations is as important as developing the right technology and finding the right customer, because without that, you will be chasing something that might never come, right? So setting the expectations and adopting a culture of continuous, iterative education for your customer segments is, I would say, very important.

[00:25:32] HC: Finally, where do you see the impact of Precision AI in three to five years?

[00:25:37] AO: I think we’ll be leading smart agriculture. That’s how I like to see it. We are progressing very well so far, I would say. We have the right customers, and we are at the right time, at the prime of such technology. The world really needs something like this to save money, to save water, and to enhance the quality of the products coming from agriculture as well. So I guess we are on the right track to be the leader, or a leader, in the smart agriculture world.

[00:26:09] HC: This has been great, Amr. Your team at Precision AI is doing some really interesting work for farming. I expect that the insights you’ve shared will be valuable to other AI companies. Where can people find out more about you online?

[00:26:21] AO: Thank you, Heather. I think my LinkedIn or feel free to contact me at my Precision AI email. It’s on the website.

[00:26:28] HC: Perfect. I’ll include those in the show notes, and the website is precision.ai. Is that right?

[00:26:33] AO: Yes.

[00:26:34] HC: Perfect. Thanks for joining me today.

[00:26:36] AO: Thank you. I appreciate your time. Thanks for having me, Heather.

[00:26:39] HC: All right, everyone. Thanks for listening. I’m Heather Couture, and I hope you join me again next time for Impact AI.

[OUTRO]

[00:26:49] HC: Thank you for listening to Impact AI. If you enjoyed this episode, please subscribe and share with a friend. If you’d like to learn more about computer vision applications for people and planetary health, you can sign up for my newsletter at pixelscientia.com/newsletter.

[END]