November 2, 2025
You trained a model in the cloud. It worked beautifully in testing. You deployed it to the shop floor. And then... nothing. Or worse, it worked for a few weeks and then quietly stopped being accurate, and nobody noticed until it was too late.
This isn't a failure of AI technology. It's a failure of architecture. According to Dr. Nikita Golovko, Solution Portfolio Architect for Industrial AI at Siemens, the problem is that most companies are still thinking about AI as a one-time deployment rather than a continuous system.
The solution? Closed-loop AI, systems that continuously collect data, monitor their own performance, detect when they're drifting, and retrigger retraining automatically. But building these systems requires rethinking everything from data collection to who owns AI in your organization.
Traditional AI follows a linear path: collect historical data, train a model in the cloud, deploy it, and hope it keeps working. When accuracy drops, someone (hopefully) notices and manually kicks off retraining with fresh data. The process is slow, reactive, and entirely dependent on humans spotting problems.
Closed-loop AI flips this model. It creates a continuous cycle: data is collected continuously from the shop floor, model performance is monitored in production, drift is detected automatically, and retraining and redeployment are triggered without waiting for a human to notice the problem.
The business impact is immediate: reduced downtime, faster response to changing conditions, better asset utilization, and, most importantly, models that actually stay accurate over time.
But here's the key insight Golovko emphasizes: closed-loop doesn't mean removing humans from the equation entirely. Even in a closed-loop system, AI can function as an assistant, making recommendations that operators approve rather than executing actions autonomously. This lets you build trust gradually while still getting the benefits of continuous learning.
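To make the loop concrete, here is a minimal sketch of that control flow in Python. The `collect_batch`, `evaluate`, `retrain`, `deploy`, and `approve` calls are hypothetical stand-ins for your own data, training, deployment, and operator-approval layers, not a specific product API.

```python
import time

ACCURACY_FLOOR = 0.90      # retrain when live accuracy drops below this (tune per use case)
CHECK_INTERVAL_S = 3600    # re-evaluate once per hour

def closed_loop(model, collect_batch, evaluate, retrain, deploy, approve):
    """Keep a deployed model accurate by monitoring it and retraining on drift.

    collect_batch, evaluate, retrain, deploy, and approve are hypothetical
    hooks into your own data, training, deployment, and operator-HMI layers.
    """
    while True:
        batch = collect_batch()                  # fresh, labeled shop-floor data
        accuracy = evaluate(model, batch)        # monitor live performance
        if accuracy < ACCURACY_FLOOR:            # drift detected
            candidate = retrain(model, batch)    # retrain on current conditions
            if approve(candidate, accuracy):     # operator stays in the loop
                model = deploy(candidate)        # redeploy: the loop is closed
        time.sleep(CHECK_INTERVAL_S)
```

The point is not this particular code but its shape: monitoring, retraining, and redeployment are part of the running system rather than a separate project someone remembers to start.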
If you've been paying attention to manufacturing AI over the past few years, this won't surprise you: data is still the biggest barrier. But Golovko breaks down exactly why in ways that might challenge your assumptions.
The fragmentation problem is worse than you think. It's not just that data lives in different systems. The data itself is incomplete, inconsistent, and often doesn't represent the full range of states your processes can be in. You're trying to build models on datasets that only capture a fraction of what actually happens on the shop floor.
Your data is outdated by the time it reaches data scientists. Models trained in the cloud are often working with data that doesn't reflect current shop floor conditions. By the time they're retrained and redeployed, conditions have changed again. You're always behind.
Nobody owns the data. This is less technical and more organizational, but it kills AI projects just as effectively. Who's responsible for data quality? For ensuring sensors are calibrated? For deciding when data should be collected and how it should be formatted? Without clear ownership, your data strategy is just wishful thinking.
You don't have a strategy for retraining. Most teams can't answer basic questions: What metrics trigger retraining? How do we detect drift? Who decides when a model needs to be updated? When these decisions are ad hoc, your models rot.
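One concrete answer to the drift question is a distribution statistic such as the Population Stability Index (PSI), which compares a live feature's distribution against the training baseline. A minimal sketch, assuming both samples are already available as NumPy arrays; the thresholds in the comment are common rules of thumb, not figures from the article.

```python
import numpy as np

def population_stability_index(baseline, live, bins=10):
    """PSI between training-time and live feature distributions.

    Rule of thumb: < 0.1 stable, 0.1-0.25 watch closely, > 0.25 retrain.
    """
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    live_pct = np.histogram(live, bins=edges)[0] / len(live)
    # Clip empty bins to avoid division by zero and log(0).
    base_pct = np.clip(base_pct, 1e-6, None)
    live_pct = np.clip(live_pct, 1e-6, None)
    return float(np.sum((live_pct - base_pct) * np.log(live_pct / base_pct)))
```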
The solution Golovko recommends might sound simple, but it's rarely done well: collect everything first, organize it second, and use it third. Don't try to predict which data you'll need for future AI use cases. Collect all available data from the shop floor into a mid-level storage layer, something between raw sensors and cloud data lakes. Then standardize it, add context, and create clear ownership before you worry about sophisticated analytics.
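In practice, "standardize it, add context, and create clear ownership" can start with nothing more exotic than an agreed record schema for that mid-level store. The field names below are illustrative assumptions, not a standard.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ShopFloorReading:
    """One contextualized sensor reading in the mid-level storage layer."""
    asset_id: str          # which machine or line produced the value
    signal: str            # e.g. "spindle_temperature"
    value: float
    unit: str              # explicit units prevent silent mismatches
    timestamp: datetime    # always UTC, always timezone-aware
    data_owner: str        # the team accountable for quality and calibration

reading = ShopFloorReading(
    asset_id="press-07",
    signal="hydraulic_pressure",
    value=182.4,
    unit="bar",
    timestamp=datetime.now(timezone.utc),
    data_owner="maintenance-ot",
)
```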
The friction between IT and OT isn't new, but AI makes it urgent. These teams speak different languages, solve different problems, and have fundamentally different priorities.
IT cares about: system reliability, security, cloud infrastructure, and enterprise data management.
OT cares about: keeping production lines running, maintaining equipment, and avoiding downtime at all costs.
When you try to implement closed-loop AI, you need both. You need OT's domain knowledge and access to shop floor data. You need IT's infrastructure and security expertise. And you need them working together, not throwing requirements over walls at each other.
Golovko points to an encouraging trend: these worlds are finally starting to speak the same language. The gap is closing because technology is forcing it to close. Data from the shop floor needs to reach the cloud securely. Models trained in the cloud need to run on edge devices. PLCs are moving to the cloud. The technical progress is creating common ground.
But technology alone won't solve the organizational divide. You still need to bring IT and OT together deliberately, with shared ownership of data and models, shared priorities, and joint ways of working rather than requirements thrown over the wall.
The companies that figure this out first will have a massive advantage. The ones that don't will keep burning money on AI pilots that never scale.
Here's an uncomfortable truth: the people who understand your manufacturing processes best—your process engineers, quality experts, and operators—can't use your AI tools. And the people who can use your AI tools—your data scientists—don't understand your processes.
This gap is killing your AI initiatives.
Golovko argues for a fundamental shift: bring domain experts as close as possible to AI tools, not the other way around. This doesn't mean turning every process engineer into a data scientist. It means giving them tools to test their hypotheses without needing a PhD.
Two approaches work:
Create collaborative environments where data scientists and domain experts work together. Data scientists upload pre-trained models. Domain experts test them with real data, map them to specific problems, and provide feedback. Both sides learn each other's language through doing, not training sessions.
Use AI to make AI more accessible. Large language models and conversational AI can act as interpreters between domain experts and complex ML models. Instead of writing Python code, a quality engineer could ask in plain language: "Show me the correlation between temperature fluctuations and defect rates in the past month." The system translates that into the right queries and models, and presents results in language they understand.
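A sketch of what that interpreter pattern can look like, assuming a hypothetical `ask_llm` function wired to whichever LLM provider you use; the dataframe schema, prompt, and example output are illustrative only.

```python
import pandas as pd

SCHEMA_HINT = "DataFrame `df` columns: timestamp, line_id, temperature_c, defect_rate"

def ask_llm(prompt: str) -> str:
    """Placeholder: connect this to whichever LLM provider or endpoint you use."""
    raise NotImplementedError

def answer_question(df: pd.DataFrame, question: str):
    """Translate a plain-language question into a pandas expression and run it."""
    prompt = (
        f"{SCHEMA_HINT}\n"
        f"Write one pandas expression that answers: {question}\n"
        "Return only the expression."
    )
    expression = ask_llm(prompt)
    # In production, generated code must be sandboxed and reviewed before it runs.
    return eval(expression, {"df": df, "pd": pd})

# answer_question(df, "correlation between temperature fluctuations and defect
# rates in the past month") might come back as something like:
#   df[df.timestamp >= df.timestamp.max() - pd.Timedelta(days=30)]
#       [["temperature_c", "defect_rate"]].corr()
```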
The key is recognizing that domain experts don't need to build models from scratch. They need to test their hypotheses, apply pre-trained models to real shop floor data, and interpret the results in the context of the processes they know best.
This democratization of AI isn't about replacing data scientists. It's about multiplying their impact by letting them focus on building sophisticated models while domain experts focus on applying them to real problems.
Golovko outlines a reference architecture for closed-loop AI that manufacturing leaders should understand, even if they're not implementing it themselves. The key components:
An SDK layer that lets data scientists package models in formats that can run on the shop floor, regardless of which ML framework they used (PyTorch, TensorFlow, etc.). This should plug into your CI/CD pipeline so model deployment is automated, not manual.
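As an illustration of that packaging step (not the specific SDK the article refers to), exporting to ONNX is one widely used way to make a model framework-neutral. The sketch assumes a trained PyTorch model and a representative example input; the CI/CD job that publishes the artifact is left to your pipeline.

```python
import torch

def package_model(model: torch.nn.Module, example_input: torch.Tensor,
                  artifact_path: str = "model.onnx") -> str:
    """Export a trained model to a framework-neutral ONNX artifact
    that an edge inference engine can load, regardless of how it was trained."""
    model.eval()
    torch.onnx.export(model, example_input, artifact_path)
    return artifact_path
```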
A model management layer that handles distribution, deployment, and orchestration across your factory floor. This is also where you monitor for model drift and trigger retraining when performance degrades. Think of this as mission control for your AI fleet.
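A useful mental model for that layer is a fleet registry: for every deployed model, where it runs, how it is performing, and when it should be pulled back for retraining. The entries below are invented for illustration.

```python
# Minimal registry: what the model-management layer needs to know per model.
FLEET = {
    "defect-detector:v12": {
        "targets": ["edge-press-07", "edge-press-09"],  # where it is deployed
        "live_metric": 0.91,        # latest monitored accuracy
        "drift_threshold": 0.88,    # below this, trigger retraining
    },
    "energy-forecast:v4": {
        "targets": ["edge-utility-01"],
        "live_metric": 0.84,
        "drift_threshold": 0.90,
    },
}

def models_needing_retraining(fleet: dict) -> list[str]:
    """Return models whose live performance has fallen below their threshold."""
    return [name for name, m in fleet.items()
            if m["live_metric"] < m["drift_threshold"]]

# models_needing_retraining(FLEET) -> ["energy-forecast:v4"]
```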
Inference engines at the edge that actually run the models on shop floor hardware. These need GPU support for complex models and must be able to operate reliably in industrial environments (dust, heat, vibration, and all).
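One possible inference engine for the edge is ONNX Runtime, which can load the packaged artifact and use the GPU when the device has one. A minimal sketch, with the model path and feature shape as assumptions.

```python
import numpy as np
import onnxruntime as ort

# Prefer the GPU, but fall back to CPU if the edge device has none.
session = ort.InferenceSession(
    "model.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)

def infer(features: np.ndarray) -> np.ndarray:
    """Run one inference on shop-floor features (shape must match the model)."""
    input_name = session.get_inputs()[0].name
    return session.run(None, {input_name: features.astype(np.float32)})[0]
```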
Data collection components that continuously gather time-series data from sensors and vision data from cameras. Critically, these should include rules for what data gets sent to the cloud and what stays local, based on bandwidth, security, and latency requirements.
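Those rules don't need to be exotic; a plain policy function evaluated at the edge gateway often covers the security, latency, and bandwidth cases. The thresholds and flags below are assumptions to illustrate the idea.

```python
def route_reading(signal: str, size_bytes: int, sensitive: bool,
                  needs_low_latency: bool) -> str:
    """Decide whether a data point is forwarded to the cloud or kept local.

    Ordered rules: security first, then latency, then bandwidth.
    """
    if sensitive:                # e.g. recipe or quality-critical data
        return "local"
    if needs_low_latency:        # closed-loop control can't wait for the cloud
        return "local"
    if size_bytes > 5_000_000:   # raw camera frames: aggregate locally first
        return "local"
    return "cloud"               # everything else feeds cloud-side retraining
```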
Secure communication between all these layers using protocols like mTLS (mutual TLS) to ensure data moves safely from air-gapped OT networks to IT systems.
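Mutual TLS simply means both sides present certificates. On the edge (client) side, Python's standard library makes the idea visible; the certificate paths below are placeholders.

```python
import ssl

# Certificate authority that issued the server's certificate.
context = ssl.create_default_context(
    ssl.Purpose.SERVER_AUTH, cafile="/etc/ai/ca.pem"
)
# The edge device's own certificate and key: this is what makes the TLS mutual.
context.load_cert_chain(certfile="/etc/ai/edge-client.pem",
                        keyfile="/etc/ai/edge-client.key")

# `context` can now wrap an outbound socket or be passed to an HTTP client,
# so data leaving the OT network is both encrypted and authenticated.
```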
The architecture matters because it determines whether your AI can actually close the loop. If data takes too long to reach the cloud, or models can't be updated quickly on the edge, or there's no mechanism to detect drift, you don't have a closed loop—you just have a regular AI deployment with extra steps.
Here's a scenario that plays out more often than anyone wants to admit: you optimize production throughput on one line using AI. It works brilliantly—output increases by 15%. Everyone celebrates.
Then your warehouse can't handle the increased volume. Your suppliers can't deliver raw materials fast enough. Quality issues emerge downstream because other processes weren't designed for this pace. Your "successful" AI project just created chaos.
Golovko calls these "isolated AI wins" and warns they can negatively impact your entire value chain. The problem isn't the AI; it's implementing optimization in one area without understanding system-level effects.
The solution is an innovation validation framework. Before deploying any AI model, map out the upstream and downstream dependencies it touches: whether suppliers can keep pace, whether the warehouse can absorb the extra volume, and which downstream processes will feel the change in rate or quality.
Tools like process mining software (Celonis, for example) can help you model your end-to-end business processes and simulate the impact of changes before you make them. This turns AI from a local optimization game into a strategic capability that improves the whole system.
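Even before reaching for process-mining software, the core check is arithmetic: does the projected rate exceed any upstream or downstream capacity? A back-of-the-envelope sketch, with the capacities as illustrative assumptions tied to the 15% example above.

```python
def validate_throughput_change(current_rate: float, uplift: float,
                               capacities: dict[str, float]) -> list[str]:
    """Return the value-chain stages that cannot absorb the optimized rate.

    current_rate and capacities are in units per hour; uplift is fractional
    (0.15 for the 15% example above).
    """
    projected = current_rate * (1 + uplift)
    return [stage for stage, limit in capacities.items() if projected > limit]

bottlenecks = validate_throughput_change(
    current_rate=100,
    uplift=0.15,
    capacities={"supplier_delivery": 120, "warehouse_intake": 110,
                "downstream_inspection": 105},
)
# -> ["warehouse_intake", "downstream_inspection"]: fix these before switching the model on
```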
Whether you're just beginning with shop floor AI or trying to move beyond pilots, Golovko's advice is consistent: start with data, not with models.
Months 1-3: Get your data house in order
Months 4-6: Build knowledge on top of data
Months 7-9: Create collaboration environments
Months 10-12: Deploy first closed-loop systems
Throughout this process, resist the urge to jump straight to deploying models. The organizations that succeed with closed-loop AI are the ones that built solid foundations first.
AI isn't optional anymore for manufacturers who want to stay competitive—especially if you're competing against low-cost producers in Asia or trying to maintain European quality standards with tighter margins.
But the difference between AI that works and AI that fails isn't about algorithms or compute power. It's about architecture, data strategy, and organizational capability.
Closed-loop AI represents the maturity manufacturers need to reach: AI systems that don't just predict, but continuously learn, adapt, and improve themselves. Systems that bridge IT and OT. Systems that empower domain experts instead of gatekeeping them. Systems that optimize the whole business, not just individual processes.
The technology is ready. The question is whether your organization is ready to build it right.
Kudzai Manditereza is an Industry 4.0 technology evangelist and creator of Industry40.tv, an independent media and education platform focused on industrial data and AI for smart manufacturing. He specializes in Industrial AI, IIoT, Unified Namespace, Digital Twins, and Industrial DataOps, helping digital manufacturing leaders implement and scale AI initiatives.
Kudzai hosts the AI in Manufacturing podcast and writes the Smart Factory Playbook newsletter, where he shares practical guidance on building the data backbone that makes industrial AI work in real-world manufacturing environments. He currently serves as Senior Industry Solutions Advocate at HiveMQ.