November 2, 2025
Your machines are fully instrumented. Your MES captures every cycle. Your sensors track temperature, pressure, and throughput. But here's what you're not measuring: the operator searching for parts in a bin for three minutes. The assembly team skipping a critical quality step because they're rushing. The unsafe workaround that happens on the night shift when no supervisor is watching.
This is the data gap that Cyrus Shaoul, CEO of Leela AI, calls "the rest": everything that happens between the machines, which has been nearly impossible to capture at scale. And according to their work with manufacturers, this invisible data represents some of the biggest opportunities for improvement: 20% capacity gains, 50% reductions in safety incidents, and 10% yield improvements.
The breakthrough isn't just computer vision. It's what Cyrus calls visual intelligence—context-aware AI that understands the meaning of what's happening, not just what's visible.
Most manufacturers have experience with machine vision for product inspection—looking at parts to find defects. That technology has been valuable and continues to evolve. But it's focused on objects, not processes.
Visual intelligence takes a different approach. It watches entire operations—people, machines, and environment together—and understands what's actually happening in context.
Here's the critical distinction: the same action can mean completely different things depending on when and where it happens. An operator walking across the shop floor might be fetching material on schedule, losing time searching for a missing part, or responding to a machine fault.
Traditional computer vision sees walking. Visual intelligence understands which type of walking is happening based on the full operational context—what the machines are doing, what status codes are showing, what the standard work should be for that moment.
As Cyrus explains: "Things mean different things depending on what's going on. In one moment in time, in one location, something happening means something. In another point in time, in a different location, it means something else, the exact same thing could be happening."
Most solutions solve one problem with one sensor. Leela AI deliberately designed their platform to address three manufacturing challenges simultaneously with the same video infrastructure:
Performance Optimization: Identifying when excellence is happening versus when it's not. The system watches the interaction between people and their environment—machines, robots, tools—to understand what drives strong performance and what causes drops.
Safety Monitoring: Detecting PPE compliance, person-down situations, hazardous behaviors, and environmental risks in real time. Beyond just alerting, the system tracks trends and creates benchmarks across shifts and areas.
Quality Assurance: Catching process deviations that create defects invisible to end-of-line inspection. Many quality issues only show up in how the work is performed, not in the final product appearance.
This multi-purpose approach matters for data leaders managing infrastructure investments. You're not deploying separate vision systems for each use case. You're creating multiple synthetic sensors from one physical camera—each providing different operational insights from the same visual data stream.
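As a rough illustration of the "synthetic sensors" idea, the sketch below fans one stream of detections out to three independent analyzers. The function names, detection labels, and output fields are all invented for illustration; they are not Leela AI's actual API.

```python
# Illustrative only: one video stream fanned out to several "synthetic
# sensors", each producing a different event type from the same frames.
# A frame here is a stand-in: just the set of labels detected in it.

def safety_sensor(frame):
    return {"ppe_ok": "hardhat" in frame}

def quality_sensor(frame):
    return {"sop_followed": "torque_step" in frame}

def performance_sensor(frame):
    return {"value_add": "assembling" in frame}

SENSORS = [safety_sensor, quality_sensor, performance_sensor]

frame = {"hardhat", "assembling"}  # one moment of detected activity
readings = [sensor(frame) for sensor in SENSORS]
print(readings)
# [{'ppe_ok': True}, {'sop_followed': False}, {'value_add': True}]
```

The design point is that the expensive part (the camera and the vision pipeline) is shared, while each use case is just another consumer of the same stream.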
The numbers from Leela AI's customer deployments show why this data gap matters so much:
High-Mix Assembly Environment: A manufacturer building complex machines with one- to four-hour assembly cycles couldn't meet production targets. Their problem wasn't machine capacity—it was understanding where time was actually going in highly manual operations.
Without expensive MES implementations in manual areas, they deployed visual intelligence to track when standard operating procedures were followed versus when they weren't. The system built high-fidelity value stream maps showing exactly how time was spent every second of every day.
Result: 20% increase in line capacity. They went from falling behind on orders to consistently hitting hourly and daily targets. Timeline: visible signals in 30 days, measurable improvements by day 90.
Metal Casting Safety: In an environment with liquid metal, explosion risks, and environmental hazards, a large manufacturer needed to improve safety compliance without hiring more safety managers.
Visual intelligence monitored PPE compliance, person-down detection, and zone overcrowding across areas where workers often operated alone. They added a live scoreboard showing shift-by-shift safety scores, creating friendly competition between teams.
Result: 50% reduction in non-compliant safety events initially, continuing to trend toward zero. The behavior change happened simply because people knew there was consistent monitoring—no more running across hazardous areas without proper equipment.
Aluminum Vacuum Molding: Some process errors create defects that are invisible from outside the finished part. If operators don't follow precise timing and procedures during molding, internal weaknesses develop that can cause failure later.
Visual intelligence tracked adherence to proper procedures in real time, providing immediate feedback when operators deviated from trained methods. This closed the loop between process execution and quality outcomes.
Result: 10% yield improvement by reducing non-conforming production from 15% to 5%, with a path to zero. For high-value parts, this represents millions in savings with no other way to detect the issue.
For data leaders, the integration story matters as much as the capabilities. Visual intelligence only delivers value if it fits into your existing data architecture without creating new silos.
Leela AI built their platform around MQTT publishing from day one. Every event the system detects—process started, quality problem detected, safety issue identified—gets published as an MQTT message with proper timestamps and topology organization (factory > line > workstation).
This architecture choice aligns perfectly with the unified namespace trend gaining traction in manufacturing. Your MES, quality systems, and ERP can all subscribe to the visual intelligence data stream just like any other sensor data.
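As a minimal sketch of what such an event might look like on the wire, the Python snippet below builds a topic path and JSON payload. The topic hierarchy and field names follow the factory > line > workstation pattern described above but are otherwise assumptions, not Leela AI's actual schema, and the snippet constructs the message without connecting to a broker.

```python
import json
from datetime import datetime, timezone

def build_vision_event(factory, line, workstation, event_type, detail):
    """Build a hypothetical MQTT topic and JSON payload for a detected event.

    Topic layout and payload fields are illustrative only.
    """
    topic = f"{factory}/{line}/{workstation}/vision/{event_type}"
    payload = json.dumps({
        "event": event_type,
        "detail": detail,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })
    return topic, payload

topic, payload = build_vision_event(
    "plant1", "line3", "station7", "sop_deviation", "torque step skipped")
print(topic)  # plant1/line3/station7/vision/sop_deviation
```

Any subscriber that already consumes unified-namespace topics (MES, historian, quality system) could then treat this stream like any other sensor feed.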
But the integration is bidirectional. The visual AI can also consume MQTT messages from your other systems to improve its understanding. If your CNC machine publishes status codes, the visual system uses that context to interpret what operator activities mean.
Example: An operator standing idle near a CNC machine means something completely different if the machine status is "running normally" versus "error requiring intervention." The visual system combines what it sees with machine status to understand whether the operator behavior indicates a problem or is expected.
This bidirectional flow also enables dynamic standard work tracking. In high-mix environments where the product changes every few hours, the system receives ERP data about what's being built and matches operator activities against the correct work instructions for that specific model and SKU.
Manufacturing productivity has been flat for 15 years. You can't significantly increase people-hours—the skills shortage is real and getting worse. The only way forward is raising output with the same workforce.
For data and analytics leaders, visual intelligence represents a fundamental expansion of what's measurable and therefore improvable. You're not replacing existing instrumentation—you're adding a layer that captures human-machine interactions that were previously invisible.
Consider what this means for your analytics stack:
Your data catalog expands dramatically: You now have measurable KPIs for direct labor productivity, value-add time percentages, SOP adherence rates, and safety compliance—metrics that were either unavailable or required manual time studies.
Your value stream maps become complete: Most digital twins and process models miss the human element because it wasn't instrumented. Visual intelligence fills that gap, giving you a true representation of how work actually flows.
Your quality analytics improve: You can correlate process execution with quality outcomes, not just machine parameters with defects. This opens entirely new improvement pathways.
Your real-time operational visibility extends to manual work: Dashboards that previously showed only automated processes can now include assembly areas, maintenance activities, and other human-intensive operations.
The technology architecture also addresses a key concern for data leaders: avoiding vendor lock-in and proprietary protocols. MQTT integration means visual intelligence data flows through your unified namespace just like any other operational data.
One reason visual intelligence solutions have been difficult to scale is the training data problem. Traditional neural networks need hundreds or thousands of labeled examples for each specific scenario. In a manufacturing environment with thousands of possible tool-activity combinations, this becomes impossible.
Leela AI's approach combines neural networks with causal learning, a technique that builds knowledge networks capturing cause-and-effect relationships at a symbolic level rather than relying on pattern matching alone.
Instead of training the system to recognize "tightening a screw with this specific screwdriver on this specific device," the causal network understands higher-level concepts like "tool use" and "assembly steps." It can then apply this knowledge to new combinations it hasn't seen before.
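A toy sketch of that compositional idea follows. This is not Leela AI's causal architecture, just an illustration of how recognizing concepts ("tool use", "assembly step") separately lets a system label tool/step combinations it has never seen paired together.

```python
# Illustrative only: concept vocabularies learned independently.
KNOWN_CONCEPTS = {
    "tool_use": {"screwdriver", "torque_wrench", "rivet_gun"},
    "assembly_step": {"fasten", "align", "inspect"},
}

def recognize(tool, step):
    """Compose two known concepts into an activity label.

    No labeled example of this exact (tool, step) pair is required,
    only membership in each concept vocabulary.
    """
    if tool in KNOWN_CONCEPTS["tool_use"] and step in KNOWN_CONCEPTS["assembly_step"]:
        return f"assembly_step:{step} via tool_use:{tool}"
    return "unknown"

# A combination never seen together still resolves from its parts:
print(recognize("rivet_gun", "align"))
# assembly_step:align via tool_use:rivet_gun
```

A purely example-driven classifier would need labeled footage of every pairing; composition over concepts is what keeps the training-data requirement from growing with the number of combinations.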
This architecture dramatically reduces both training data requirements and deployment time. Customers see working systems in one to two weeks, with full accuracy achieved in 30 days. For scenarios with standard equipment and PPE, the system often works immediately with no additional training.
For data leaders evaluating AI platforms, this matters because it determines scalability. Can you deploy to a second line in days rather than months? Can you roll out across facilities without rebuilding training datasets? The causal learning approach makes this practical.
Kudzai Manditereza is an Industry 4.0 technology evangelist and creator of Industry40.tv, an independent media and education platform focused on industrial data and AI for smart manufacturing. He specializes in Industrial AI, IIoT, Unified Namespace, Digital Twins, and Industrial DataOps, helping digital manufacturing leaders implement and scale AI initiatives.
Kudzai hosts the AI in Manufacturing podcast and writes the Smart Factory Playbook newsletter, where he shares practical guidance on building the data backbone that makes industrial AI work in real-world manufacturing environments. He currently serves as Senior Industry Solutions Advocate at HiveMQ.