March 30, 2026
Most manufacturers asking "how do we use AI?" are asking the wrong question. The right question is: does your data infrastructure even allow an AI agent to understand your business?
In my conversation with Walker Reynolds on the AI in Manufacturing podcast, we dug into a reality the industry is only now confronting. Agentic AI is not a software feature you bolt onto existing systems. It is an architectural capability that demands a semantic, real-time data backbone, and for most manufacturers, that backbone does not exist yet. Walker, president and Solutions Architect at 4.0 Solutions and the architect behind the Unified Namespace concept, laid out a sharp picture: vendors are racing ahead with agentic AI offerings while the vast majority of end users are still stuck at "where do I start?" The gap between the digitally mature and everyone else is widening, and knowledge graphs are emerging as the critical connective tissue most organizations have never heard of.
The uncomfortable truth is that AI fatigue is setting in across the industry — not because the technology is overhyped, but because it is being applied in a vacuum. As Walker put it, many vendors are incorporating agentic AI into their solutions without really understanding what problem they're trying to solve. End users see these demos and think: I would never use that. That is not useful for me. They can tell when something was built for a marketing slide rather than a plant floor.
The distribution of digital maturity in manufacturing is a bell curve with stretching tails. The bleeding-edge companies — the ones who understood Unified Namespace years ago and invested in semantic data infrastructure — are pulling further and further ahead. Meanwhile, 98% of the market is still clustered around the mean, asking the most basic question: where do I start? This is not a technology problem. It is a fluency problem. Most organizations have not yet built the internal muscle to understand what agents are, how they reason, or what kind of data infrastructure they require to be useful.
The lesson is clear: if you are waiting for a turnkey agentic AI solution, you are already falling behind the organizations that are building understanding now. Fluency precedes value.
The manufacturers getting the most value from Unified Namespace understand one thing above all else: what it is and what it is not. And the ones failing to extract value have almost always tried to make it something it was never designed to be.
Unified Namespace is the current state of your business — semantically organized, contextualized, and normalized. It is the single source of truth for all events, data, and information models, and the hub through which the smart things in your business communicate. Critically, it is the architectural foundation of your digital transformation initiative. What it is not is a historian, a transaction log, or a data lake. When you look at the Unified Namespace, you are looking at your business right now — not five minutes ago, not last week.
This distinction matters enormously for agentic AI. Consider how a supervisor uses Unified Namespace today. You oversee assembly in a tier one automotive plant with 100 machines running across 65 product lines. If someone asks you how your area is doing right now, the honest answer has traditionally been: I don't know. Unified Namespace solves that. It gives you the current state — active work orders, top downtime reasons, machine performance relative to schedule — all semantically organized so that the data carries its own context.
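To make the "current state, not history" idea concrete, here is a minimal sketch in Python of what a UNS lookup conceptually looks like. The topic paths, metric names, and payloads are hypothetical, and a real UNS would live in an MQTT broker with retained messages rather than an in-memory dictionary; this only illustrates the shape of the data.

```python
# Minimal sketch of a UNS current-state lookup (illustrative only).
# Topic paths follow an ISA-95-style hierarchy: site/area/line/machine/metric.
# All names and payloads here are hypothetical.

uns = {
    "plant1/assembly/line7/machine42/oee": {"value": 61.5, "unit": "%"},
    "plant1/assembly/line7/machine42/state": {"value": "EXECUTE"},  # PackML state
    "plant1/assembly/line7/work_order/active": {"value": "WO-88341"},
}

def current_state(uns: dict, prefix: str) -> dict:
    """Return the live values under one branch of the namespace.

    The UNS holds only 'now': each key is overwritten on every new
    publish, so there is no history to query here by design.
    """
    return {k: v["value"] for k, v in uns.items() if k.startswith(prefix)}

snapshot = current_state(uns, "plant1/assembly/line7/machine42")
print(snapshot)
```

Note that nothing here answers "what was OEE yesterday?"; that question belongs to a historian sitting beside the UNS, which is exactly the division of labor the next section describes.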
Where manufacturers get into trouble is when they try to force the Unified Namespace to be everything: a historian, a transaction store, an analytics engine. If you are not getting value out of it, you are almost certainly using it for something it is not. The companies that succeed treat the Unified Namespace as the real-time centerpiece and let purpose-built systems handle history, transactions, and analytics around it.
If Unified Namespace is the real-time state layer, knowledge graphs are the reasoning layer. And this is where the industry is experiencing a genuine paradigm shift.
Knowledge graphs represent relational context — the relationships between nodes in any intelligent system. They have existed for years in various forms, but the manufacturing world largely ignored them because no one could explain why they mattered in practical terms. Agentic AI changed that overnight. The reason is straightforward: an AI agent needs to navigate your infrastructure to answer questions and meet objectives. It needs to move from a high-level business question — why are we 45 minutes behind on this work order? — down through layers of context to find where the answers actually live. That navigation path is a knowledge graph.
Imagine an agent operating at the MES layer. It detects a rising anomaly — decreasing OEE, a work order falling behind schedule. The agent does not just flag the problem. It navigates down through the knowledge graph, reasoning through relationships between production lines, equipment, work orders, and historical performance to identify what interventions are needed. Without a knowledge graph, the agent has no map. It cannot reason about relationships it does not know exist.
Walker highlighted a critical architectural parallel. Just as UNS supports both standardized (red) and ad hoc (blue) namespaces, knowledge graphs need to support both standardized and ad hoc relationships. A PLC plugged into an infrastructure should automatically publish its standard relationship definitions into the graph. But an engineer on a specific production line also needs the freedom to define non-standard relationships that matter for their context. This federated approach — building knowledge from the bottom up rather than imposing a single top-down ontology — is what will make knowledge graphs practical at scale.
The implication is direct. If you spent last year learning Model Context Protocol, this year your homework is knowledge graphs — what they look like, how relationships are handled, why ontologies matter. Every platform that wants to be useful to an agent will need to expose a navigable knowledge graph. If a platform cannot visualize the relationship between all entities inside it, that platform will not be useful to an agent.
There is a persistent fantasy in the market that agents will eventually run autonomously on the plant floor. Walker was characteristically direct about this: the only people who believe agents can run autonomously are people who do not work with agents.
The math is unforgiving. The most reliable large language models today operate at roughly 99.9% accuracy: one error in every thousand words. A PLC has nine nines of reliability; by Walker's framing, it would run longer than the history of the universe before a single failure, while an agent cannot run for minutes without an error. That gap means autonomous agents in manufacturing are not just premature; deploying them today is a near-guaranteed waste of money.
The underlying problem is that the language models driving agents are trained on data containing native conflicts — academic papers that contradict one another, opinions, speculation, and misinformation coexisting with verified facts. Until we have better mechanisms for training models on truth, agent reliability will plateau. We are already seeing diminishing returns in the base models even as development tools like Claude Code continue to improve exponentially.
The correct mental model is agents as force multipliers for your workforce. Walker's prediction is counterintuitive but compelling: we will see more people on the plant floor, not fewer. Those people will be analysts supervising AI agents to optimize operations, while middle management shrinks. Fewer people managing people, more people managing agents. That is the real transformation — not lights-out factories, but augmented human decision-making at every level.
The architecture becomes clear when you trace how an agent reasons through a real question. Say you are a supervisor and you ask: give me the 10 assembly machines I need to visit right now and talk to the operators.
The agent starts with Unified Namespace. It looks at all 100 assembly machines, semantically organized in context to one another, and identifies the 10 performing worst based on whatever metric you use — PackML state, OEE, custom KPIs. The Unified Namespace provides the real-time snapshot.
But the agent needs more. It sees a production line running behind schedule and reasons: I need historical work orders for the product code currently running on that line. It uses MCP tools to query the work order system, pulling history from a different system of ownership. The knowledge graph is what allows the agent to know that relationship exists — that this line runs this product code, which has historical work orders in that system.
This three-layer pattern — human context, Unified Namespace state, MCP tool retrieval guided by knowledge graph reasoning — is the practical architecture for industrial agentic AI. The agent does not need to know everything. It needs to know where to look and how things relate. Unified Namespace provides the what. Knowledge graphs provide the how. And the human supervisor provides the why.
The practical path forward does not start with AI. It starts with infrastructure. For a small to midsize manufacturer, the architecture begins simply: a server positioned between IT and OT running Linux with Docker, device connectors pulling data from PLCs and databases, an IIoT platform for visualization and control, and a historian in containers. Then you extend outward to a clustered MQTT broker like HiveMQ for your Unified Namespace. From there, you layer in data operations tooling, analytics platforms, and — critically — a knowledge graph that maps relationships across your entire infrastructure and gives agents a navigable map of your business.
The strategic question is not "which AI vendor should we choose?" It is "can an agent understand our business?" If you do not have a Unified Namespace providing real-time state, an agent has no starting point. If you do not have knowledge graphs mapping the relationships between your assets, processes, and systems, an agent has no way to reason through your infrastructure. And if you are not investing in AI fluency across your workforce — not just your data science team, but your supervisors, your engineers, your operators — then even the best infrastructure will sit underutilized.
The organizations pulling ahead right now made a bet on foundational architecture before the AI wave hit. They built the Unified Namespace when it seemed like just a data integration play. They are now layering knowledge graphs and agentic capabilities on top of infrastructure designed for exactly this moment. The window to make that same bet is still open, but it is closing fast.
Kudzai Manditereza is an industrial data and AI educator and strategist. He specializes in Industrial AI, IIoT, Unified Namespace, Digital Twins, and Industrial DataOps, helping manufacturing leaders implement and scale Smart Manufacturing initiatives.
Kudzai shares this thinking through Industry40.tv, his independent media and education platform; the AI in Manufacturing podcast; and the Smart Factory Playbook newsletter, where he shares practical guidance on building the data backbone that makes industrial AI work in real-world manufacturing environments. Recognized as a Top 15 Industry 4.0 influencer, he currently serves as Senior Industry Solutions Advocate at HiveMQ.