Manufacturing is a fundamental part of our economy. Unfortunately, a huge swath of the industry is still dependent on outdated methods, adversely impacting our environment. To address these challenges, one company is harnessing the power of AI to transform traditional manufacturing, driving unprecedented efficiency and sustainability in the industry. Joining me today is Berk Birand, co-founder and CEO of Fero Labs, to unpack how AI is optimizing the manufacturing sector.

Tuning in, you'll learn all about Fero Labs' innovative software and how it’s empowering engineers in industries like steel and chemicals to harness machine learning, drastically reducing waste and energy consumption. We discuss how their AI analyzes historical production data to ensure factories operate at peak performance and how this is boosting sustainability and profitability. Our conversation also unpacks the critical role of explainable AI in building trust within the industrial sector, where precision and reliability are essential. Tune in to discover how Fero Labs is paving the way for a greener industrial future!


Key Points:
  • Berk Birand’s education and career background.
  • How he co-founded Fero Labs with his business partner.
  • An overview of Fero Labs’ AI software.
  • Fero Labs’ role in reducing raw material waste in the steel industry.
  • How they have helped improve energy efficiency in chemical manufacturing.
  • Data analysis and how their software provides recommendations for efficient operations.
  • Understanding the high stakes involved in manufacturing processes.
  • Why AI explainability is crucial in the industrial sector.
  • How they are building explainable models that engineers can trust and understand.
  • Why now is the right time to build this technology.
  • His advice to AI-powered startups: seriously consider the cost of a bad prediction.
  • Fero Labs’ long-term vision to achieve a more circular and sustainable industrial sector.

Quotes:

“One of our largest customers was able to reduce the waste of raw materials, about a million pounds just throughout last year, by using our software AI system.” — Berk Birand

“We think AI will play a key role in the transition to a green economy.” — Berk Birand

“The best people to be solving these types of challenges, ultimately, are the engineers that work at the plants. The engineers that have the most domain expertise.” — Berk Birand

“In an environment like this, an engineer in a factory would just not want to use a software that they don't trust, because ultimately, it's their job that's on the line.” — Berk Birand

“With the new drive towards building an industrial sector that is more circular and more sustainable, there's incredible potential to optimize not just an individual factory, but beyond that, to optimize the entire supply chain by optimizing factories jointly.” — Berk Birand


Links:

Berk Birand on LinkedIn
Fero Labs


Resources for Computer Vision Teams:

LinkedIn – Connect with Heather.
Computer Vision Insights Newsletter – A biweekly newsletter to help bring the latest machine learning and computer vision research to applications in people and planetary health.
Computer Vision Strategy Session – Not sure how to advance your computer vision project? Get unstuck with a clear set of next steps. Schedule a 1 hour strategy session now to advance your project.


Transcript:

[INTRODUCTION]

[0:00:03] HC: Welcome to Impact AI. Brought to you by Pixel Scientia Labs. I’m your host, Heather Couture. On this podcast, I interview innovators and entrepreneurs about building a mission-driven, machine-learning-powered company. If you like what you hear, please subscribe to my newsletter to be notified about new episodes. Plus, follow the latest research in computer vision for people and planetary health. You can sign up at pixelscientia.com/newsletter.

[EPISODE]

[0:00:33] HC: Today, I’m joined by guest Berk Birand, co-founder and CEO of Fero Labs, to talk about optimizing manufacturing. Berk, welcome to the show.

[0:00:43] BB: Thank you for having me, Heather.

[0:00:44] HC: Berk, could you share a bit about your background and how that led you to create Fero Labs?

[0:00:48] BB: Absolutely. I’m an engineer by training. I have a background in Electrical Engineering and Computer Science. My most recent academic stint was at Columbia University here in New York, doing a Ph.D. in the telecom space. So, I worked on developing algorithms for cell stations, and I worked on algorithms that aim to reduce the energy consumption of the fiber optic backbone of the Internet. Very theoretical research applied to the telecom space. Around that time, after my Ph.D., I was looking for an area to apply my knowledge that would create the most impact. My co-founder, who I met around the same time, also had the same goal. His background is pure machine learning. Together, we decided to apply our joint backgrounds to developing a software solution for the industrial sector, which ultimately became Fero Labs.

[0:01:39] HC: What does Fero Labs do? Why is it important for sustainability?

[0:01:44] BB: Fero Labs is a software platform targeted at the manufacturing sector. We build AI software that is meant to be used by process engineers, chemical engineers, and industrial engineers working at factories, and these folks effectively use our software to develop machine-learning AI systems that learn the behavior of a factory. So, think heavy industry, steel plants, chemical plants, which consume a lot of energy, produce a lot of emissions, consume a lot of raw materials. There are armies of engineers that spend their lives making sure that these plants operate effectively, that they only consume the resources that are necessary for production, that the plant keeps running, and that the plant’s performance doesn’t degrade over time, which it is susceptible to do.

Our software empowers these engineers to use AI systems and models, and we call this machine learning as well. So, machine learning systems that can extract complex patterns from these giant data sets in order to make sure that the plant is operating at peak performance. As an example, steel is one of our core sectors of focus, and in steel, one of our largest customers was able to reduce the waste of raw materials, about a million pounds just throughout last year, by using our software AI system and using the recommendations and explanations that our system provides to run their factory.

Of course, the impact that this can have is huge in terms of sustainability, and in terms of improving profits for these plants as well. The sustainability angle is that, in this case, a million pounds of raw materials did not need to be mined across the world, shipped across the world, and processed to get to these plants. On the sustainability side, this has a huge impact on Scope 3 emissions. We have other use cases of Fero in the steel and chemical sectors, where engineers are using our software to reduce the time it takes to process a particular batch of, say, chemicals. So, instead of producing high-quality chemicals in two hours, they can do so in an hour and a half, which means that they’re saving half an hour of the plant’s energy consumption, which in this case corresponds to Scope 2 impact.

This is very crucial. We think AI will play a key role in the transition to a green economy. The neat thing with what we do is that all of these savings also come with a corresponding increase in profitability. Because of course, if you’re not wasting raw materials, that means you’re also paying less for raw materials. This is why we describe our software as a profitable sustainability platform.

[0:04:20] HC: How do you go about using machine learning in this manner? You gave a couple of examples there, but I’m just curious how you set up the problem. What type of data do you use in order to do this?

[0:04:30] BB: Absolutely. So, the data that’s used by the software typically is the historical production data of the plant. Let’s continue with the steel example. In this case, let’s say a steel plant has been running for 20 or 30 years, running 24/7, and produces, let’s say, 21 different batches of steel. What this means is that they take scrap steel, so car parts and washing machine parts, and put 50 to 150 tons of that scrap steel in a furnace. The steel inside the furnace is melted using electrodes and electricity. So, this is electric arc furnace production. Effectively, with giant electrodes and huge currents, the scrap steel becomes liquid steel, and then they add different kinds of ingredients to it. They store the scrap blends they used for each particular batch. They store the raw materials that they add.

So, very similar to cooking, following a recipe, they have recipes for steelmaking, and they keep track of how much of these individual ingredients they put in. They store all the different quantities: how much pressure they’ve applied to roll the steel, all the different process parameters and temperatures. Ultimately, at the end of the line, after the steel has been produced, they measure the strength of the steel. Let’s say this example is from a construction steel plant that makes long-form steel, so steel beams. They check that the strength of the steel is where they would expect it to be, within their specifications.

Fero uses all of the data that I just mentioned, starting from the very first stage of the ingredients that they put in, to the different temperatures and pressures, and so forth, all the way to the target data, the Y parameters, which in this case would be strength. All this data is fed into the software, and engineers can use a visual, no-code interface to build a machine learning model that, from all the historical data, effectively learns the science and the chemistry of that production site. Just by looking at past data, you can figure out how each of the raw materials affects the end-of-line quality, and as a result of this knowledge, how much of these raw materials they should put in to achieve the desired goal. Of course, with a few parameters, this would be a simple statistical problem. But when you have thousands of different parameters, and very nonlinear, complex relationships, this becomes quite challenging, and that’s where the AI comes in.
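The workflow described above, learning how ingredient quantities relate to end-of-line strength from historical batches, can be pictured with a toy sketch. This is purely illustrative and not Fero Labs' actual system: the batch data is invented, only one ingredient (carbon) is used, and plain one-variable least squares stands in for the nonlinear, thousand-parameter models discussed in the episode.

```python
# Toy sketch: learn how one ingredient (carbon %) relates to end-of-line
# steel strength from historical batch records, then predict a new batch.
# Invented numbers; real systems involve thousands of parameters and
# nonlinear models.

def fit_line(xs, ys):
    """Ordinary least squares for y = a * x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Hypothetical historical batches: (carbon %, measured strength in MPa).
history = [(0.10, 410.0), (0.15, 432.0), (0.20, 455.0), (0.25, 470.0)]
carbon = [c for c, _ in history]
strength = [s for _, s in history]

slope, intercept = fit_line(carbon, strength)

def predict_strength(carbon_pct):
    """Predicted end-of-line strength for a proposed recipe."""
    return slope * carbon_pct + intercept
```

With a fitted model like this, the "recommendation" direction is just inverting it: given a target strength, solve for the ingredient quantity, which is exactly the step that becomes hard once the model is high-dimensional and nonlinear.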

[0:06:58] HC: So, let’s talk a bit more about some of those challenges. What kinds of things do you encounter in trying to build these models for manufacturing? How do you approach those challenges?

[0:07:06] BB: Ultimately, it comes down to one of the common challenges in any kind of data-driven system. Of course, garbage in, garbage out. This problem exists in the industrial sector as well. Namely, most of the data that I mentioned comes from control systems and sensors. Even though sensors are operating normally most of the time, there are always data issues in real systems. It could be sensor malfunctions, or it could be communication issues, or it could be challenges tracking products as they go through different stages, and being able to align the data from sensors in stage one with sensors in stage two for the same product.

So, the solution that we’ve come up with over the years, well, the main observation we had was that the best people to be solving these types of challenges, ultimately, are the engineers that work at the plants. The engineers that have the most domain expertise. They know the plant, they’ve been running the plant for many years, they know its quirks, and they understand what type of data problems these quirks translate into. That’s why we built a software that’s meant to be entirely used by these users. Of course, ML and AI is the very core of what we do. It’s the core part of our technology, but it’s not the only thing the software is supposed to do.

But as a consequence of this, we’ve also built additional modules that also leverage AI to do data cleaning, for example, or to track material throughout the process and to align different datasets. Ultimately, the way that we’ve overcome this challenge of garbage in, garbage out is to really build a solution that can leverage the domain expertise of the folks that are closest to the problems.
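Two of the cleaning steps mentioned here, dropping physically impossible sensor readings and aligning stage-one and stage-two records for the same product, can be sketched roughly as below. The batch IDs, temperature limits, and dictionary-based join are all assumptions for illustration, not Fero Labs' implementation.

```python
# Sketch: flag physically impossible readings, then align stage-1 and
# stage-2 records for the same batch. Limits and IDs are made up.

TEMP_LIMITS = (0.0, 2000.0)  # plausible range in Celsius for this sensor

def clean(readings):
    """Drop readings outside the physically possible range."""
    lo, hi = TEMP_LIMITS
    return {bid: t for bid, t in readings.items() if lo <= t <= hi}

def align(stage1, stage2):
    """Join stage-1 and stage-2 data for batches present in both stages."""
    return {bid: (stage1[bid], stage2[bid])
            for bid in stage1.keys() & stage2.keys()}

# Raw sensor data keyed by batch ID; B002 contains a malfunction reading.
stage1 = clean({"B001": 1550.0, "B002": 99999.0, "B003": 1600.0})
stage2 = clean({"B001": 910.0, "B003": 880.0, "B004": 905.0})

aligned = align(stage1, stage2)  # only batches with valid data in both stages
```

The point of the range check is the physical-system assumption discussed later in the episode: a furnace temperature cannot jump to 99,999 degrees, so such a reading can be rejected before it poisons a model.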

[0:08:51] HC: Is model explainability important in this scenario? If so, how do you achieve it?

[0:08:56] BB: Model explainability is extremely important. In fact, one of the early observations that we had was that the industrial sector would benefit uniquely from explainability, and specifically from trust as well as explainability. The reason for this is as follows. Effectively, most machine learning models and algorithms come out of academia, usually in conjunction with work in the tech sector, let’s say. In most of these sectors, black box models that are not explainable are actually completely fine, and they deliver quite a substantial amount of value. A typical example would be, let’s say, computer vision models. In Google Photos, you can upload an image and it tells you what it thinks is in the image. These models work well most of the time. Sometimes they don’t work, and when they don’t work, nothing really bad happens. Maybe you chuckle a little bit because the algorithm didn’t recognize your friend, or recognized your friend as another friend.

But ultimately, people don’t cancel their accounts because of a bad prediction. What this means is that the cost of a mistake is very low. Now, this is not how things work in the industrial sector. In the industrial sector, the cost of a bad prediction or a bad recommendation could be extremely high, and I’m not just talking about safety concerns, which is, of course, a problem in the industrial sector when you’re dealing with liquid steel that’s at 2,500 degrees Fahrenheit. Even before getting into the safety issues, a bad prediction could lead to the loss of a product, the quality of a product being bad enough that they can’t sell that product. Let’s say a steel beam that’s not as strong as it needed to be; then they’ve wasted an entire batch of production.

Now, you can typically rework it or melt it again. But the financial cost of this is huge, because you’ve just missed your production schedule. You have spent multiple hours of energy and operator time running this production. So, the cost of this one mistake could be in the hundreds of thousands of dollars. In these kinds of environments, explainability and trust are really, really important. In an environment like this, an engineer in a factory would just not want to use a software that they don’t trust, because ultimately, it’s their job that’s on the line; their bonuses and their salaries depend on them running the factory well. Why would they take the risk of trusting a black box model that they don’t fully understand?

With this observation, we built all of our core technology to be explainable. There are typically two different avenues of how you can aim to get to explainability in the machine learning world. One direction is to start with black box machine learning models, and then try to explain those using a secondary model, a surrogate model that is explainable. That’s one way of doing things. Of course, there are some challenges with that. Otherwise, we would have explainable machine learning already.

The other approach, the one that we subscribe to, is to start with models that are inherently explainable. Now, of course, this raises the question: if you could use explainable models, why not do that all the time? The answer is that it’s very hard to build explainable models that are horizontal, that can be applied to the industrial world, to supply chain problems, to insurance problems, to movie recommendations. It’s very hard to build explainability across the board. We’ve done a lot of R&D internally to build explainable models specifically focused on the needs of the industrial sector, and that’s why we’ve been so successful at creating this kind of trust with our customers.

[0:12:34] HC: To build this type of explainable model, does it mean the features are explainable, and that’s how you get the explainability out of it? Or is there another way to tackle this?

[0:12:43] BB: The features are explainable, and beyond providing which features contribute, in a factor-importance type of way, our models can actually display the physical relationships that they have learned about the system. The way that we were able to achieve this in the industrial sector is that, ultimately, our models in factories try to learn a physical system that is, at its core, governed by physics, by differential equations and chemical equations. As a result, there are some assumptions that we can make, since we’re not approximating human systems. We’re not forecasting sales numbers or stock prices, where numbers can change drastically from one second to another. If you have a temperature reading that goes from 300 degrees to 99,999 degrees, then you know that can’t be right, based on this being a physical system.

So, by incorporating these types of observations, we’re able to build a system that not only provides factor importance the way other systems do, but also actually shows the relationships that our models have learned to the engineers. As an example, every steel metallurgist knows that adding carbon to steel increases its strength. This is textbook information. Our software can actually pull this out of the machine learning model and display it to our users, who can then verify that the model has indeed learned the right type of relationships. Of course, this builds trust, and it makes it more likely for an engineer to use our software, just by looking at these curves that they know to be true. But also, through this explainability, our customers learn new relationships that they may not know about, that are very specific to their individual plants. That’s how it also drives additional value, in addition to trust, by promoting new learnings in these plants.
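The kind of sanity check described here, pulling the learned carbon-strength relationship out of a model so an engineer can compare it against textbook knowledge, might look roughly like this. The "model" below is a stand-in linear predictor with invented coefficients and feature names; it is a sketch of the idea, not Fero Labs' method.

```python
# Sketch: sweep one input of a fitted model and check that the learned
# relationship matches domain knowledge (more carbon -> stronger steel).
# The model here is a hypothetical stand-in, purely for illustration.

def model(features):
    """Stand-in fitted model: strength (MPa) from carbon % and temperature."""
    return 400.0 + 300.0 * features["carbon_pct"] - 0.01 * features["temp_c"]

def learned_curve(model, feature, grid, baseline):
    """Model predictions as one feature sweeps a grid, others held fixed."""
    curve = []
    for v in grid:
        point = dict(baseline)
        point[feature] = v
        curve.append((v, model(point)))
    return curve

baseline = {"carbon_pct": 0.15, "temp_c": 1550.0}
curve = learned_curve(model, "carbon_pct",
                      [0.05, 0.10, 0.15, 0.20, 0.25], baseline)

# An engineer can verify the model learned the textbook relationship:
monotonic = all(y1 < y2 for (_, y1), (_, y2) in zip(curve, curve[1:]))
```

Displaying `curve` to a metallurgist is the trust-building step: if the sweep matches what they know to be physically true, the model earns credibility; if it deviates in a plant-specific way, that deviation itself may be a new learning.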

[0:14:32] HC: Why is now the right time to build this technology?

[0:14:35] BB: Well, the way I would think about it is in terms of the need; we live in times where what we’re doing is absolutely needed. The reason for this is that the industrial sector over the past, let’s say, six to seven years has evolved with certain assumptions. The most important way that factories have improved their production was by reducing variability, because ultimately, a factory has variability across the board. It uses raw materials that come from different sources. Ideally, you want your raw material to be as consistent as possible. You want your machinery, the hardware, to be as precise as possible. You want your operators and your engineers to be as well trained as possible, so they know to do the exact right thing at the right time.

Ultimately, any remaining variability currently is being eliminated by using over-design margins. I don’t know ahead of time what’s going to happen with this steel production, so I’ll just play it safe, design my production for the worst-case scenario, and run my production in that way. All these things are changing now. Ultimately, one of the big drivers in the industrial sector is the circular economy, which means that the waste from one plant could be used as an input, as the raw material, for a different plant.

Now, this is great. It’s the ultimate recyclability. But this also means that you don’t have consistency in your inputs anymore. Another factor is that the aging workforce in the industrial sector means that it’s getting extremely hard to hire operators and engineers that can run the plant with as little variation as possible. Of course, because we’re moving to a more sustainable energy infrastructure, it could also mean that even the energy source can’t be relied on as much. Ultimately, in the middle of all of this uncertainty and all this variance, we need a solution that can adapt the process dynamically by measuring this uncertainty, and making recommendations to stabilize the factory so that it can run with the same positive outcomes as it did before.

We think AI plays a key role in making factories dynamic and adaptable to these new challenges. The second answer I would give is that the value of our technology is way higher now than it was even 10 years ago, because of all the investments that companies have made in their digital infrastructure. They’ve spent a lot of time storing data, starting with the big data movement. Ultimately, the pandemic was also a key step change in terms of advancing digitization. Now, we’re at a time where the need is definitely there, and the value can be much higher, because there’s all this data that is now much more readily available to be used for machine learning models.

[0:17:23] HC: Is there any advice you could offer to other leaders of AI-powered startups?

[0:17:27] BB: I would say, the most important thing to think about, in my mind, is what happens when the underlying AI systems make bad predictions. What is the cost of a bad AI prediction? Now, this is not something we appreciated a lot in the early days. But the reality is, most AI demos that you see, of course, are built to work well. You see these demos, and they look great, and that might even get you in the door of a prospect or even get you to sign a customer. But the point is, once these systems are in production, especially when they’re providing recommendations to mission-critical units, you have to understand what happens if the AI system makes a bad prediction. You could be making good predictions 364 days of the year, but that one time you make a bad prediction, it’s going to hurt trust. Building a system that takes that into account will save a lot of issues in the future, so that you’re prepared for when it happens, both in terms of the relationship and in terms of the promises that you make. Ultimately, this will increase the trust in your system and make it more likely to stick, and not just be a cool project for the first year.
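The asymmetry in this advice can be made concrete with a quick cost-weighted calculation. The dollar figures below are invented for illustration: suppose a good recommendation saves a modest amount of raw material, while the rare bad one ruins an entire batch, as in the steel example earlier in the episode.

```python
# Sketch: accuracy alone hides the cost of rare bad predictions.
# Invented numbers: a bad recommendation ruins a batch worth $200,000,
# while a good one saves $500 in raw materials.

def expected_daily_value(p_good, gain_good, loss_bad):
    """Expected value per prediction, weighting each outcome by its cost."""
    return p_good * gain_good - (1.0 - p_good) * loss_bad

# 364 good days out of 365: over 99.7% "accuracy", yet the expected
# value per day is negative because one failure outweighs many successes.
value = expected_daily_value(364 / 365, 500.0, 200_000.0)
```

The takeaway matches the advice above: evaluate an AI system by the cost of its mistakes, not just its hit rate, and design the product and the customer promises around the day the bad prediction happens.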

[0:18:38] HC: Finally, where do you see the impact of Fero in three to five years?

[0:18:42] BB: I’m really excited about the next five years. I think the industrial sector is in the middle of a huge transition, and I think AI will play a massive role in this transition. Where I see Fero in five years is front and center, as a core part that helps run factories within the four walls of the plant. But even going beyond that, especially with the new drive towards building an industrial sector that is more circular and more sustainable, there’s incredible potential to optimize not just an individual factory, but beyond that, to optimize the entire supply chain by optimizing factories jointly.

So, I’m optimizing my factory. But in the meantime, Fero also runs in a factory that supplies me with my raw materials, and as a result, Fero can now make better decisions in my factory, and it can make better decisions in the factory upstream, knowing that its output is going to come to me. Effectively, we see a factory less as its own unit and more as part of an organism of sorts, a supply chain of sorts. And of course, I’d like Fero to be a solution that can optimize this entire organism, so that we can lower waste as much as possible and take an important step forward towards a carbon-zero economy.

[0:19:55] HC: This has been great. Berk, I appreciate your insights today. I think this will be valuable to many listeners. Where can people find out more about you online?

[0:20:04] BB: Ferolabs.com is the website. Feel free to reach out if you have any questions or on LinkedIn.

[0:20:09] HC: Perfect. Thanks for joining me today.

[0:20:11] BB: Thank you so much, Heather.

[0:20:12] HC: All right, everyone. Thanks for listening. I’m Heather Couture and I hope you join me again next time for Impact AI.

[OUTRO]

[0:20:22] HC: Thank you for listening to Impact AI. If you enjoyed this episode, please subscribe and share with a friend, and if you’d like to learn more about computer vision applications for people and planetary health, you can sign up for my newsletter at pixelscientia.com/newsletter.

[END]