November 2, 2025
You have a choice when deploying AI in manufacturing: run it in the cloud, embed it in your PLCs, or deploy it at the edge between shop floor and enterprise systems.
Most companies gravitate toward the cloud because that's where their data science teams live. Some try to push intelligence into PLCs because that's where control happens. But according to Andrea Garcia Gangoi, Director of Data Intelligence for Industry at Vicomtech research center, there's a better option that most manufacturers overlook: the edge.
After years of implementing AI systems in European factories, Vicomtech developed a modular edge architecture that solves problems neither cloud nor PLC approaches can address. The results? Factories where maintenance teams can validate hypotheses in hours instead of weeks, machines that optimize their own processes in real time, and AI systems that companies actually own and control.
Here's why the edge matters and how to build AI systems that work there.
The cloud seems like the obvious choice for AI. You have unlimited compute, your data scientists already work there, and modern ML platforms make deployment easy. But Garcia points to three problems that kill cloud-based shop floor AI:
Cultural resistance is real. Many European manufacturers remain deeply uncomfortable sending shop floor data to the cloud. It's not just paranoia; there are legitimate concerns about intellectual property, competitive intelligence, and losing control of proprietary processes. You can't build AI systems that require technology your organization fundamentally doesn't trust.
Infrastructure costs add up fast. Getting enough network bandwidth to send high-frequency sensor data to the cloud requires infrastructure upgrades. Cloud services might seem cheap initially, but over the 20-30 year lifecycle of industrial equipment, those variable costs become substantial. Small companies especially struggle with the unpredictable expense.
Latency matters for real-time decisions. When you're controlling a process that needs millisecond response times, round trips to the cloud don't cut it. Even with good connectivity, network delays can make real-time optimization impossible.
What about pushing intelligence into PLCs? Modern PLC software has improved dramatically and many vendors now support more sophisticated logic. But you face a different trap:
You become a prisoner of the vendor ecosystem. Once you build intelligence using Rockwell's approach, or Siemens' approach, or ABB's approach, you're locked in. You can't leverage the rapidly evolving open-source AI tools that the broader community is developing. Every vendor does it their own way, and switching becomes prohibitively expensive.
The edge sits in the middle. You keep latency-sensitive intelligence close to the shop floor while maintaining flexibility to use any tools you want. You own the hardware, control the data, and can swap components as technology evolves.
Vicomtech's architecture is built entirely on microservices running in Docker containers. This isn't a technical detail—it's the core design principle that makes everything else work.
Each function becomes a separate, swappable module:
Gateway modules connect to industrial equipment using OPC UA, Modbus, or other industrial protocols on one side, and translate everything to MQTT on the other side. If you need to change how you connect to equipment, you swap the gateway. The rest of your system doesn't change.
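To make that concrete, here is a minimal sketch of what such a gateway module could look like in Python, assuming the asyncua and paho-mqtt libraries and made-up node IDs and topic names (not Vicomtech's actual implementation):

```python
# Illustrative OPC UA -> MQTT gateway module; endpoint, node IDs, and topics are assumptions.
import asyncio
import json
import time

import paho.mqtt.client as mqtt              # pip install paho-mqtt
from asyncua import Client as OpcUaClient    # pip install asyncua

OPCUA_URL = "opc.tcp://192.168.0.10:4840"    # hypothetical machine endpoint
NODES = {"temperature": "ns=2;i=1001", "pressure": "ns=2;i=1002"}  # hypothetical node IDs

async def run_gateway():
    broker = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)   # paho-mqtt >= 2.0
    broker.connect("localhost", 1883)                        # edge MQTT broker
    broker.loop_start()

    async with OpcUaClient(OPCUA_URL) as plc:
        while True:
            for name, node_id in NODES.items():
                value = await plc.get_node(node_id).read_value()
                payload = json.dumps({"value": value, "ts": time.time()})
                # One topic per signal, e.g. factory/line1/machine1/temperature
                broker.publish(f"factory/line1/machine1/{name}", payload)
            await asyncio.sleep(1)   # 1 Hz polling; tune per signal

asyncio.run(run_gateway())
```

Swapping protocols means swapping this one container; everything downstream only ever sees MQTT.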
Storage modules subscribe to data from the MQTT broker and write to databases. Want to switch from one time-series database to another? Swap the writer module. Your gateways and AI models keep running.
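The writer module is the mirror image. Here is an illustrative sketch assuming paho-mqtt, psycopg2, and a hypothetical sensor_data table in TimescaleDB:

```python
# Illustrative MQTT -> TimescaleDB writer; topic layout and table schema are assumptions.
import json

import paho.mqtt.client as mqtt   # pip install paho-mqtt
import psycopg2                   # pip install psycopg2-binary

conn = psycopg2.connect("dbname=factory user=edge password=edge host=localhost")
conn.autocommit = True

def on_message(client, userdata, msg):
    data = json.loads(msg.payload)
    if "value" not in data:                      # ignore alarms, predictions, etc.
        return
    # Topic assumed to look like factory/<line>/<machine>/<signal>
    _, _, machine, signal = msg.topic.split("/")
    with conn.cursor() as cur:
        cur.execute(
            "INSERT INTO sensor_data (time, machine, signal, value) "
            "VALUES (to_timestamp(%s), %s, %s, %s)",
            (data["ts"], machine, signal, data["value"]),
        )

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)   # paho-mqtt >= 2.0
client.on_message = on_message
client.connect("localhost", 1883)
client.subscribe("factory/+/+/+")   # everything the gateways publish
client.loop_forever()
```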
Visualization modules like Grafana query the database and let people explore data through dashboards. Need better visualization tools later? Add them without touching your data collection.
AI inference modules read real-time data from the broker, run models, and publish results back. When models need updating, you deploy new versions without rebuilding the entire system.
This modularity delivers two critical advantages:
Start small and grow. You don't need to design the perfect system upfront. Begin with a single machine, prove value, then scale horizontally. Add new machines by deploying more gateway modules. Add new analytics by deploying more processing modules. The architecture grows with your needs.
Reduce learning curves. New team members can understand and modify individual modules without grasping the entire system. When you transfer solutions to companies (as Vicomtech does), they can maintain and extend them without specialized expertise.
The MQTT broker ties everything together. For most use cases, it's simple, lightweight, requires minimal resources, and handles the data volumes from individual machines or lines easily. When you need more scale, swap in Kafka or a commercial broker without changing how modules communicate.
Vicomtech originally used InfluxDB, following the popular TICK stack pattern, then switched to TimescaleDB. The reasons should matter to anyone building industrial data systems.
The learning curve kills adoption. InfluxDB requires learning Flux, a proprietary query language. Finding people in manufacturing companies who know SQL is easy. Finding people who know Flux is nearly impossible. When you transfer a solution to a client, they need to maintain it. If they can't query the database without specialized training, your solution won't survive.
You need more than just time-series. Manufacturing data isn't purely time-series. You have relational data (equipment hierarchies, maintenance records, quality specs) and document data (work instructions, procedures). TimescaleDB handles all three: traditional relational tables, JSON documents, and time-series extensions. One database, one skill set, simpler operations.
Built-in features solve common problems. Vicomtech kept hitting the same issues: dashboards requesting millions of data points that browsers couldn't render, databases filling up with old data, and needing different aggregation levels for different analyses.
TimescaleDB solved these with native features: continuous aggregates that pre-compute rollups at the resolutions dashboards actually need, retention policies that automatically drop data past a configured age, and compression that keeps the history you retain affordable to store.
These aren't exotic capabilities—they're table stakes for production time-series systems. But most teams implement them in application code, creating complexity and maintenance burden. Having them in the database means they work consistently across all applications.
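For a sense of what those native features look like in practice, here is a hedged setup sketch using psycopg2; the table definition, bucket size, and policy windows are hypothetical, not values from the article:

```python
# One-off setup sketch for the TimescaleDB features described above (all values illustrative).
import psycopg2

conn = psycopg2.connect("dbname=factory user=edge password=edge host=localhost")
conn.autocommit = True   # continuous aggregates cannot be created inside a transaction

with conn.cursor() as cur:
    # Plain table for raw readings, turned into a time-partitioned hypertable.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS sensor_data (
            time    TIMESTAMPTZ NOT NULL,
            machine TEXT,
            signal  TEXT,
            value   DOUBLE PRECISION
        );
    """)
    cur.execute("SELECT create_hypertable('sensor_data', 'time', if_not_exists => TRUE);")

    # Continuous aggregate: pre-computed 1-minute rollups, so dashboards never ask
    # a browser to render millions of raw points.
    cur.execute("""
        CREATE MATERIALIZED VIEW IF NOT EXISTS sensor_data_1m
        WITH (timescaledb.continuous) AS
        SELECT time_bucket('1 minute', time) AS bucket,
               machine, signal,
               avg(value) AS avg_value, max(value) AS max_value
        FROM sensor_data
        GROUP BY bucket, machine, signal;
    """)

    # Retention policy: raw data older than 90 days is dropped automatically.
    cur.execute("SELECT add_retention_policy('sensor_data', INTERVAL '90 days', if_not_exists => TRUE);")

    # Compression: older chunks stay queryable at a fraction of the storage cost.
    cur.execute("ALTER TABLE sensor_data SET (timescaledb.compress);")
    cur.execute("SELECT add_compression_policy('sensor_data', INTERVAL '7 days', if_not_exists => TRUE);")
```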
AI deployment on the edge happens in three stages, each building on the previous:
Stage 1: Custom business logic. Before you need sophisticated AI, you need the ability to implement complex rules that reflect actual business knowledge. A domain expert might know: "When temperature exceeds X AND pressure drops below Y AND vibration is in range Z, we're about to have a problem."
With all data flowing through the MQTT broker, writing custom logic is straightforward. You can be a process engineer without deep programming skills and use Node-RED to build rules visually. Or write simple Python scripts. The point is removing barriers between domain knowledge and automated responses.
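A rule like the one above might look something like this as a standalone Python module; the thresholds, topics, and signal names are invented for illustration:

```python
# Stage 1 sketch: a hand-written expert rule running as its own module (all values illustrative).
import json

import paho.mqtt.client as mqtt

latest = {}   # most recent value per signal

def on_message(client, userdata, msg):
    signal = msg.topic.rsplit("/", 1)[-1]
    latest[signal] = json.loads(msg.payload)["value"]

    # "Temperature exceeds X AND pressure drops below Y AND vibration is in range Z."
    if (latest.get("temperature", 0) > 85.0
            and latest.get("pressure", 1e9) < 2.5
            and 4.0 <= latest.get("vibration", -1.0) <= 7.0):
        client.publish("factory/line1/machine1/alarms",
                       json.dumps({"rule": "overheat_risk", "values": latest}))

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)   # paho-mqtt >= 2.0
client.on_message = on_message
client.connect("localhost", 1883)
for s in ("temperature", "pressure", "vibration"):
    client.subscribe(f"factory/line1/machine1/{s}")
client.loop_forever()
```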
Stage 2: Trained models in production. Once you're collecting quality data, export it for data scientists to work with. They explore, develop hypotheses, train models, and measure accuracy offline. When a model is ready, package it in a Docker container that subscribes to the relevant topics on the MQTT broker, runs inference on the incoming data, and publishes its outputs back.
Results might be alarms, quality predictions, recommended parameter adjustments, or calculated values to store in the database. The key is that models run where the data is, with latency measured in milliseconds, not seconds.
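As a rough sketch (not Vicomtech's code), the inference service inside such a container can stay very small; the model file, feature list, and topics below are hypothetical:

```python
# Stage 2 sketch: an inference module packaged in its own container (names are assumptions).
import json

import joblib                     # loads a model trained offline by the data science team
import paho.mqtt.client as mqtt

model = joblib.load("model.pkl")
FEATURES = ["temperature", "pressure", "vibration"]
latest = {}

def on_message(client, userdata, msg):
    signal = msg.topic.rsplit("/", 1)[-1]
    latest[signal] = json.loads(msg.payload)["value"]
    if all(f in latest for f in FEATURES):
        prediction = model.predict([[latest[f] for f in FEATURES]])[0]
        # Publish results back so storage, alarms, and dashboards can all react.
        client.publish("factory/line1/machine1/quality_prediction",
                       json.dumps({"prediction": float(prediction)}))

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)   # paho-mqtt >= 2.0
client.on_message = on_message
client.connect("localhost", 1883)
for f in FEATURES:
    client.subscribe(f"factory/line1/machine1/{f}")
client.loop_forever()
```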
Stage 3: Continuous learning systems. This is where edge AI gets sophisticated. Deploy additional modules that compare predictions with the outcomes the process actually produced, accumulate fresh training data locally, retrain models when performance drifts, and roll out the updated versions.
You're building closed-loop systems where AI improves itself based on real operational data. But unlike cloud-based systems, everything happens locally with no dependency on external services.
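One possible shape for such a module, sketched with assumed table names and an invented drift threshold, is a periodic check of prediction error against recorded outcomes:

```python
# Stage 3 sketch: local drift monitoring and retraining (tables, columns, threshold are assumptions).
import time

import joblib
import psycopg2
from sklearn.linear_model import LinearRegression   # stand-in for whatever model family is used

conn = psycopg2.connect("dbname=factory user=edge password=edge host=localhost")
conn.autocommit = True
DRIFT_THRESHOLD = 0.1   # acceptable mean absolute error, in the units of the target

while True:
    with conn.cursor() as cur:
        # How far off were last week's predictions from what actually happened?
        cur.execute("""
            SELECT avg(abs(predicted - actual))
            FROM prediction_log
            WHERE time > now() - INTERVAL '7 days';
        """)
        (mae,) = cur.fetchone()

        if mae is not None and float(mae) > DRIFT_THRESHOLD:
            # Retrain on locally accumulated data; no cloud round trip involved.
            cur.execute("SELECT temperature, pressure, vibration, actual FROM training_data;")
            rows = cur.fetchall()
            X = [[float(v) for v in r[:3]] for r in rows]
            y = [float(r[3]) for r in rows]
            joblib.dump(LinearRegression().fit(X, y), "model.pkl")
            # The inference module picks up the new model file on its next reload.
    time.sleep(3600)   # check once an hour
```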
Theory is easy. Vicomtech validated their architecture on a real leak detection machine where vacuum quality determines measurement accuracy.
The original problem: Operators manually tuned two vacuum pumps through trial and error. Start the primary pump, wait, estimate when to start the secondary pump, hope the vacuum stabilizes properly. Timing varied by part condition, ambient humidity, and dozens of other factors. No one could consistently nail it.
The edge AI solution: Deploy the modular architecture on an industrial PC. Create an OPC UA interface to the machine exposing all vacuum values and pump controls. Model the entire machine as a black box that accepts commands and reports states. Collect weeks of test data under different conditions.
Train simple models (the data curves were clear enough that sophistication wasn't needed) to predict when to start the secondary pump and to flag vacuum behavior that signals a problem before the test cycle completes.
Results? The system controls pump timing better than human operators, adapts automatically to varying conditions, and catches problems before they waste test cycles. The intelligence runs entirely on the edge device with no cloud dependency.
The broader lesson: Edge AI doesn't always need deep learning or neural networks. Often, having the right data in the right place with simple models beats sophisticated algorithms running on stale data.
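To show just how simple "simple models" can be, here is a hypothetical sketch in the spirit of the leak-tester example; the readings, units, and model choice are invented, not taken from the actual system:

```python
# Illustrative only: predict when to start the secondary pump from the first seconds
# of the primary pump's vacuum curve. All numbers are made up.
import numpy as np
from sklearn.linear_model import LinearRegression

# Each row: pressure readings over the first 5 seconds of primary pumping (sampled at 1 Hz).
# Each label: the switchover time (seconds) that produced a stable vacuum in that test run.
X_train = np.array([
    [950, 610, 400, 265, 180],
    [960, 700, 520, 390, 300],
    [940, 580, 360, 230, 150],
])
y_train = np.array([8.5, 11.0, 7.5])

model = LinearRegression().fit(X_train, y_train)

# At run time: read the start of the curve from the broker, then command the secondary
# pump through the machine's OPC UA interface at the predicted moment.
current_curve = np.array([[955, 640, 430, 290, 205]])
print(f"Start secondary pump after {model.predict(current_curve)[0]:.1f} s")
```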
As you scale edge AI, you hit a standardization problem. How do you ensure systems at different sites, built at different times, can share information and evolve together?
Vicomtech tackles this through Asset Administration Shell (AAS), the digital twin standard from the Industrial Digital Twin Association. Understanding AAS helps clarify what standardization actually means for edge systems.
Internal standardization comes first. Within your organization, model equipment consistently. If you have three motors, each should expose speed, temperature, and operational state using the same naming conventions and structures. This lets you reuse dashboards, analytics, and AI models across equipment.
AAS provides patterns for this modeling work through submodels—standardized ways to represent different aspects of assets (technical specifications, operational data, maintenance history, energy consumption). You don't have to reinvent these models.
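In code, internal standardization is mostly discipline: every asset of a given type exposes the same fields under the same names. A small illustrative sketch (not an official AAS submodel definition):

```python
# Illustrative asset model: every motor publishes the same structure, so dashboards
# and analytics built for one motor work for all of them. Field names are assumptions.
import json
from dataclasses import dataclass, asdict

@dataclass
class MotorOperationalData:
    asset_id: str
    speed_rpm: float
    temperature_c: float
    operational_state: str   # e.g. "running", "idle", "fault"

motors = [
    MotorOperationalData("motor-01", 1480.0, 62.5, "running"),
    MotorOperationalData("motor-02", 0.0, 24.1, "idle"),
    MotorOperationalData("motor-03", 1475.0, 88.9, "fault"),
]

for m in motors:
    # Identical payload shape for every motor: one dashboard, one model, reused everywhere.
    print(json.dumps(asdict(m)))
```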
External integration needs standards. When you collaborate with suppliers or customers who need access to data about your equipment, AAS becomes critical. It provides a standardized structure for describing assets, common submodel templates that both parties understand, and defined interfaces for exchanging that information across company boundaries.
The key distinction: AAS complements rather than replaces protocols like OPC UA. OPC UA excels at real-time device communication. AAS excels at representing complete asset information across lifecycle stages and organizational boundaries. Edge architectures need both.
Here's a pattern Vicomtech sees repeatedly: Manufacturers deploy monitoring systems. Data flows into databases. Data scientists build models. Nothing changes on the shop floor.
The problem? Insights stay trapped in reports and dashboards that operators don't trust or understand.
Vicomtech solved this for a maintenance team responsible for industrial ovens running 24/7. Instead of building sophisticated AI they didn't need yet, they taught the maintenance engineers to use Grafana's query and visualization tools.
These domain experts already had hypotheses about what was causing problems and when.
Give them tools to validate these hypotheses themselves. When someone reports a problem from last weekend, let them query exactly what happened at that time. When they suspect a pattern, let them visualize months of data to confirm it.
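The question "what happened last weekend?" turns into the kind of query an engineer can paste into Grafana or run from a short script; this sketch uses hypothetical table, machine, and date values:

```python
# Illustrative ad-hoc query: 15-minute temperature rollups for one oven over a weekend.
import psycopg2

conn = psycopg2.connect("dbname=factory user=maintenance password=edge host=localhost")
with conn.cursor() as cur:
    cur.execute("""
        SELECT time_bucket('15 minutes', time) AS bucket,
               avg(value) AS avg_temp,
               max(value) AS max_temp
        FROM sensor_data
        WHERE machine = 'oven-3'
          AND signal = 'temperature'
          AND time BETWEEN '2025-10-25 00:00' AND '2025-10-27 00:00'
        GROUP BY bucket
        ORDER BY bucket;
    """)
    for bucket, avg_temp, max_temp in cur.fetchall():
        print(bucket, round(avg_temp, 1), round(max_temp, 1))
```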
The result: Maintenance teams became proactive problem solvers instead of reactive firefighters. They identify issues before they cause downtime. They propose optimizations based on data they've explored themselves. They trust the systems because they understand them.
This is the edge advantage: Intelligence close enough to operations that the people who understand the process can actually use it.
If you're leading manufacturing data and analytics at an enterprise, here's how to start with edge AI:
Week 1: Pick your first machine
Weeks 2-4: Deploy minimal architecture
Weeks 5-8: Build with domain experts
Weeks 9-12: Add first AI models
Quarter 2: Scale and standardize
The key is starting small but building right. A modular edge architecture on one machine scales to hundreds. A monolithic cloud system that works for one use case rarely survives contact with manufacturing reality.
Cloud AI and PLC intelligence both have roles. The cloud handles compute-intensive training and enterprise-wide analytics. PLCs handle safety-critical control loops with microsecond timing.
But the sweet spot for manufacturing AI is the edge: close enough to operations for real-time decisions, flexible enough to evolve with technology, and owned by you rather than locked into vendor platforms.
The companies winning with AI aren't the ones with the biggest cloud budgets or the newest PLCs. They're the ones building modular, open, edge-based systems that their operators and engineers can actually use.
That's where intelligence belongs in manufacturing: not in distant data centers or locked vendor boxes, but right there on the shop floor, working for the people who make things.
Kudzai Manditereza is an Industry 4.0 technology evangelist and creator of Industry40.tv, an independent media and education platform focused on industrial data and AI for smart manufacturing. He specializes in Industrial AI, IIoT, Unified Namespace, Digital Twins, and Industrial DataOps, helping digital manufacturing leaders implement and scale AI initiatives.
Kudzai hosts the AI in Manufacturing podcast and writes the Smart Factory Playbook newsletter, where he shares practical guidance on building the data backbone that makes industrial AI work in real-world manufacturing environments. He currently serves as Senior Industry Solutions Advocate at HiveMQ.