May 9, 2026

Designing Multi-Agent Systems for Industrial Operations

More than 95% of what the industry calls "industrial AI" is really just storing data and connecting software to the plant floor. The intelligence — the part that perceives, decides, and acts — is almost entirely missing.

That's the central argument Kence Anderson, CEO and founder of Amesa, made in my conversation with him on the AI in Manufacturing podcast. Anderson has spent eight years walking plant floors around the world, from steel mills to chemical plants to snack food factories, interviewing operators about how they actually learn and make decisions. What he found led him to develop a methodology called machine teaching and a platform that builds teams of autonomous AI agents modeled on how human expertise really works — not how we wish it worked.

Why Does Most Industrial AI Fail to Deliver Real Productivity Gains?

The Bureau of Labor Statistics data tells a damning story: manufacturing productivity has been essentially flat since 2008. That's the exact period when the industry poured billions into connectivity, IoT platforms, and data infrastructure. We got the data out. We built the pipelines. And productivity didn't move.

Anderson puts it bluntly, comparing the situation to the Scarecrow in The Wizard of Oz: "If I only had a brain." The data layer is necessary but insufficient. What's missing is the capacity to perceive a situation and take action. And perception without action is pointless in an industrial context — every sensor reading, every camera feed, every data point exists for the express purpose of informing a decision. If nothing decides and nothing acts, you've built an expensive archive.

This is the gap that no amount of additional data collection will close. The bottleneck was never data access. It was — and remains — the intelligence to make sense of that data in real time and do something useful with it.

Why Has AI Research Failed to Translate into Production-Ready Industrial Solutions?

Anderson describes a pattern he calls the "research to PR pipeline," and it's worth understanding because it explains a lot of the disillusionment in manufacturing. It started with DeepMind's breakthroughs in reinforcement learning — systems that could beat the world's best Go player, master StarCraft, fold proteins. These were genuine scientific achievements. But the leap from research paper to press release to "this will transform your factory" skipped an essential step: the development work of figuring out how to make any of this function in the real world.

Anderson lived this firsthand. Working at Microsoft after the acquisition of a startup called Bonsai, he traveled to plants where his AI research colleagues would recommend specific algorithms to improve operations. What he found when he actually talked to operators was that no stock algorithm could approximate how humans learn these tasks. People were practicing for 10, 15, 20, sometimes 30 years to get good. And they were being taught — not just handed data and told to figure it out.

This is the development phase that keeps getting skipped. The research is real. The engineering to make it work in a factory is where the hard, unglamorous work lives. And that work requires understanding how expertise actually forms in industrial environments.

What Happens When Manufacturers Deploy AI Without Understanding the Problem?

The most dangerous outcome isn't that the AI fails spectacularly. It's what Anderson calls "pilot purgatory" — the slow death of organizational momentum for innovation. When someone gets excited about a technology and deploys it without understanding the right tool for the job, the pilot produces underwhelming results. Leadership becomes disillusioned. And because a manufacturer's primary job is to make and move things — not to innovate — there's only so much bandwidth for experimentation. Waste that bandwidth on misapplied technology, and you don't just lose a project. You lose the organization's willingness to try again.

Anderson saw this play out with a major beverage manufacturer. The team applied reinforcement learning to an extruder that made snack foods — a machine that takes a human operator at least 10 years to learn to control well. When they let the AI practice on its own, it performed okay. Not expert-level. Just okay. It was only when Anderson interviewed the expert operator and decomposed the task into specific skills — the different strategies for different situations, the sequences and hierarchies of decisions — that the AI achieved expert-level performance. The intelligence wasn't in the algorithm. It was in the structure of how the problem was taught.

The compounding cost here is real. Every failed pilot doesn't just waste the project budget. It erodes trust, delays the next initiative, and widens the gap between organizations that figure this out and those that don't.

What Is Machine Teaching, and How Does It Differ from Traditional AI Approaches?

Machine teaching starts from a premise that sounds obvious but is routinely ignored: if AI can learn, someone should probably teach it something. Anderson frames it through an analogy that sticks. A basketball coach doesn't tell a player to figure out how to get the ball through the hoop from scratch. There are infinite ways a human body could attempt that. The coach says: hold the ball here, push it out like this. That guidance comes from collective experience. It's not a rigid rule — there are many valid jump shots — but it bounds practice to promising areas where success is likely.

This is exactly what Anderson found operators doing in factories. They described "schools of thought" for operating complex machines. Different strategies for different situations. If conditions look like this, you do this kind of thing. If they look like that, you do that kind of thing. It's like running plays versus passing plays in football — both valid, but you need to execute the right one in the right situation.

The methodology becomes concrete in Anderson's work with a Fortune 500 glass manufacturer. The machine in question had 60 degrees of freedom — 60 set points that needed adjustment roughly every minute. No automation had ever controlled it. Only humans. Anderson's team used historical data to create a simulation model for practice, interviewed the expert operator to identify eight distinct strategies (some sequential, some hierarchical), and built one agent to learn each strategy. The full team of agents practiced in simulation for about two weeks and learned something the human operator hadn't discovered in 12 years of practice.

The operator wasn't threatened. He gave the AI a name. He saw it as a teammate — because it had learned the skills he taught it, then extended them.

How Do Multi-Agent Design Patterns Solve Problems That Monolithic AI Cannot?

The instinct in manufacturing is to want one monolithic system that handles everything. Anderson thinks this impulse comes from a hope that some master algorithm will eventually spare us the messy work of decomposing problems into their constituent parts. He doesn't think that's possible, and his evidence is compelling.

There are really only four ways to make decisions: calculate what to do (control theory), search through options (optimization), look up past experience (rules-based systems), or learn by practicing (reinforcement learning). Each has strengths, and each is brittle in its own way. Control theory breaks down when physics becomes nonlinear and complex. Optimization is brittle to environmental changes. Rules-based systems can't handle nuance. Learning by practicing is powerful but needs structure. The insight is that real expertise — human or artificial — combines all four, deployed appropriately for different aspects of a task.
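The four modes can be made concrete with a toy example. The sketch below applies each one to the same trivial task — choosing a heater adjustment to hold a temperature setpoint. Every name and number here is invented for illustration; it shows the contrast between the modes, not how any production system implements them.

```python
# Four decision-making modes on one toy task: pick a heater adjustment
# that moves the temperature toward a setpoint. Illustrative only.

SETPOINT = 200.0

def calculate(temp):
    """Control theory: compute the action from a model (here, a P controller)."""
    gain = 0.5
    return gain * (SETPOINT - temp)

def search(temp):
    """Optimization: search candidate actions for the lowest predicted error."""
    candidates = [-10, -5, 0, 5, 10]
    return min(candidates, key=lambda a: abs((temp + a) - SETPOINT))

def look_up(temp):
    """Rules-based: apply past experience captured as if/then rules."""
    if temp < SETPOINT - 20:
        return 10
    if temp > SETPOINT + 20:
        return -10
    return 0

def learned_policy(temp):
    """Learning by practicing: a trained policy (stubbed here as a lookup).
    A real system would query a policy trained through practice in simulation."""
    policy_table = {True: 5, False: -5}
    return policy_table[temp < SETPOINT]

for decide in (calculate, search, look_up, learned_policy):
    print(decide.__name__, decide(180.0))
```

Note how each mode fails differently: the controller assumes linear physics, the search assumes the candidate list covers the answer, the rules go silent between thresholds, and the learned policy is only as good as its practice regime — which is why combining them matters.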

Anderson has codified this into repeatable design patterns drawn from over 250 use cases with Fortune 500 companies. The strategy pattern uses a supervisory agent to select between different learned strategies based on conditions — like a football play caller deciding between running and passing. The plan-and-execute pattern separates setting a strategy from executing it, because those are fundamentally different cognitive tasks. The perception pattern handles sensor interpretation. These patterns are additive — you compose them based on what the specific task demands.
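To make the strategy pattern tangible, here is a minimal sketch of a supervisory agent selecting between learned strategies based on conditions. The class, condition names, and returned actions are all invented for illustration — this is not Amesa's API, just the shape of the pattern: each strategy knows when it applies, and the supervisor calls the right "play" for the situation.

```python
# A sketch of the strategy pattern: a supervisory agent picks which learned
# strategy acts, based on current conditions. All names are hypothetical.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Strategy:
    name: str
    applies_when: Callable[[dict], bool]  # condition check, like a play call
    act: Callable[[dict], str]            # the strategy's learned behavior

def startup_act(state): return "ramp set points slowly"
def steady_act(state):  return "hold and fine-tune"
def upset_act(state):   return "shed load and stabilize"

# Ordered by priority; the last entry is an unconditional fallback.
STRATEGIES = [
    Strategy("startup", lambda s: s["phase"] == "startup", startup_act),
    Strategy("upset",   lambda s: s["deviation"] > 0.2,    upset_act),
    Strategy("steady",  lambda s: True,                    steady_act),
]

def supervisor(state: dict) -> str:
    """Supervisory agent: hand control to the first strategy whose
    conditions match the current state."""
    for strategy in STRATEGIES:
        if strategy.applies_when(state):
            return strategy.act(state)
    raise RuntimeError("no applicable strategy")

print(supervisor({"phase": "run", "deviation": 0.35}))  # upset conditions win
```

In a real deployment each `act` would be a separately trained agent rather than a one-line function, but the composition is the point: the supervisor's job is selection, not execution, which is also what separates it from the plan-and-execute pattern.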

Critically, this means the multi-agent system for a glass bottle line looks different from the system for a chemical reactor, which looks different from the system for a rail yard. Expertise is task-specific. Autonomy is task-specific. The design should be too.

How Fast Can Autonomous AI Agents Be Deployed in Existing Manufacturing Environments?

Amesa's platform compresses the path from data to working agents into roughly 12 weeks. The platform has three core components: an agent orchestration studio where engineers define agents, their goals, constraints, and success criteria without writing code; a training cloud where agents practice on simulation models built from historical data; and an edge runtime that connects trained agents directly to PLCs and existing IoT platforms inside the OT network.

The scaling question — how do you go from one line to 100 plants — gets addressed through what Anderson calls operating regions. When a manufacturer says they have 100 recipes or 100 machines, the platform's algorithms identify that those 100 actually cluster into perhaps six distinct operating regions. An agent trained for one operating region covers every machine or recipe in that cluster. When a new recipe or plant enters the picture, it either falls into an existing region (already covered) or creates a new one (requiring only incremental training, not starting over). This is how you avoid the trap of building bespoke solutions for every line and never achieving scale.
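The operating-regions idea can be sketched in a few lines. In this toy version, each recipe is a small parameter vector; a new recipe either lands within a distance threshold of an existing region center (already covered by a trained agent) or opens a new region (incremental training needed). The two-parameter recipes and the threshold value are invented for illustration — the platform's actual clustering is not described in the conversation.

```python
# A sketch of operating regions: recipes cluster into a few regions, and a
# new recipe is either covered by an existing region or opens a new one.
# Recipe vectors and the threshold are hypothetical.

import math

THRESHOLD = 15.0  # max distance for a recipe to belong to a region

def assign_region(recipe, regions):
    """Return (region_index, is_new). Creates a region if none is close enough."""
    for i, center in enumerate(regions):
        if math.dist(recipe, center) <= THRESHOLD:
            return i, False          # covered by an existing trained agent
    regions.append(recipe)           # new region: incremental training only
    return len(regions) - 1, True

regions = []  # region centers discovered so far
recipes = [(100, 20), (105, 22), (200, 50), (102, 19), (198, 52)]
for r in recipes:
    idx, new = assign_region(r, regions)
    print(r, "-> region", idx, "(new)" if new else "(covered)")
```

Run on these five recipes, only two regions emerge — which is the economic argument: training effort scales with the number of regions, not the number of recipes or machines.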

What Question Should Manufacturing Leaders Actually Be Asking About Industrial AI?

The wrong question is "which AI algorithm should we deploy?" The right question is "how does expertise actually work in our operations, and how do we systematically capture, extend, and scale it?"

Anderson's prediction for the future is a bifurcation: on one side, capital-intensive gigafactories built from scratch with AI and robotics baked in from day one. On the other, the vast majority of existing plants where humans and AI agents learn from and teach each other, continuously optimizing operations that were never designed for full automation. If you're running existing operations — and most manufacturers are — the second path is the one that matters. The strategic question isn't whether to adopt AI. It's whether you're building intelligence that actually understands your process, or just adding another layer of data infrastructure and hoping something smart emerges.

Kudzai Manditereza

Founder & Educator - Industry40.tv

Kudzai Manditereza is an industrial data and AI educator and strategist. He specializes in Industrial AI, IIoT, Unified Namespace, Digital Twins, and Industrial DataOps, helping manufacturing leaders implement and scale Smart Manufacturing initiatives.

Kudzai shares this thinking through Industry40.tv, his independent media and education platform; the AI in Manufacturing podcast; and the Smart Factory Playbook newsletter, where he shares practical guidance on building the data backbone that makes industrial AI work in real-world manufacturing environments. Recognized as a Top 15 Industry 4.0 influencer, he currently serves as Senior Industry Solutions Advocate at HiveMQ.