November 8, 2025

Manufacturers face a scaling problem: as you deploy more sensors and connected equipment, your cloud infrastructure costs balloon while system latency increases. Zin Kyaw, Senior User Success Engineer at Edge Impulse, presents a different approach—one that questions the assumption that all machine learning must happen in the cloud. His perspective on embedded machine learning offers a practical framework for reducing infrastructure costs, improving response times, and making your data platform more efficient at scale.
Most organizations building industrial data platforms default to a cloud-centric architecture: sensors collect data, devices transmit everything to the cloud, and machine learning algorithms run on centralized servers. This approach made sense when edge devices lacked computational power and when you were managing hundreds or thousands of data points.
But the economics change dramatically at scale.
Kyaw describes the reality: "When you're talking about millions of sensors, tens of millions of sensors, that is a big scale. To support that kind of data and storage requires a lot of cloud infrastructure. Management of that data—being able to provide enough systems on the back end to crunch through that data in an efficient manner—becomes important."
For data leaders managing global manufacturing operations, this manifests as ballooning storage and compute bills, rising end-to-end latency, and a growing back-end burden of simply managing and crunching through the data.
The cloud-only model treats edge devices as "dumb sensors"—simple data collectors with no processing capability. But modern microcontrollers and edge processors have evolved dramatically. The question isn't whether your edge devices can run machine learning, but whether your architecture takes advantage of this capability.
Kyaw rejects the binary thinking that machine learning belongs either entirely in the cloud or entirely at the edge. Instead, he advocates for strategic placement based on what each model needs to accomplish.
Machine learning at the edge excels at time-critical operational decisions: low-latency inference close to the sensor, filtering high-frequency data streams locally, and acting without a cloud round trip.
Cloud machine learning remains essential for fleet-wide trend analysis, model training and continuous improvement, and strategic insights that draw on data from many sites.
The key insight is that these aren't competing approaches—they're complementary. Edge ML handles time-critical operational intelligence while cloud ML provides strategic insights and continuous model improvement. Your architecture should explicitly define which decisions happen where.
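The placement logic above can be sketched as a simple routing rule. The field names and thresholds below are illustrative assumptions, not figures from the article; a real decision matrix would be tuned to your own latency budgets and bandwidth costs:

```python
from dataclasses import dataclass

@dataclass
class ModelRequirement:
    """Hypothetical placement criteria for one ML workload.
    All names and thresholds here are illustrative."""
    max_latency_ms: float    # hard response-time budget for a decision
    needs_fleet_data: bool   # requires training/analysis across many devices
    input_rate_kbps: float   # raw sensor bandwidth the model consumes

def placement(req: ModelRequirement) -> str:
    """Route a workload to 'edge' or 'cloud' following the article's
    split: time-critical, bandwidth-heavy inference stays at the edge;
    fleet-wide training and strategic analysis go to the cloud."""
    if req.needs_fleet_data:
        return "cloud"
    if req.max_latency_ms < 100 or req.input_rate_kbps > 100:
        return "edge"
    return "cloud"

# Vibration inference: 10 ms budget, heavy raw stream -> edge.
print(placement(ModelRequirement(10, False, 500)))
# Quarterly trend analysis over the whole fleet -> cloud.
print(placement(ModelRequirement(60_000, True, 1)))
```

The point is not these particular cutoffs but that the criteria are explicit and reviewable, rather than implicit in whichever team built the pipeline first.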
The practical value of edge machine learning becomes clear when you examine actual manufacturing use cases. These aren't futuristic scenarios—they're applications being deployed today.
Predictive maintenance through vibration analysis:
Traditional approaches monitor equipment vibration by continuously streaming sensor data to the cloud for analysis. Edge ML models can analyze vibration patterns locally, identifying anomalies that indicate bearing failure or mechanical wear. The edge device only alerts the cloud when anomalies are detected, transmitting a few kilobytes of diagnostic data instead of continuous high-frequency sensor streams. This reduces bandwidth by orders of magnitude while providing faster detection.
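A minimal sketch of this pattern, assuming a simple RMS-amplitude feature and a fixed anomaly threshold (a deployed system would learn features and thresholds from data):

```python
import math

def rms(window):
    """Root-mean-square amplitude of one vibration window."""
    return math.sqrt(sum(x * x for x in window) / len(window))

def edge_monitor(windows, baseline_rms, factor=3.0):
    """Process each window locally; yield a small diagnostic record
    only when RMS exceeds `factor` times the healthy baseline.
    The factor of 3.0 is an illustrative assumption."""
    for i, w in enumerate(windows):
        level = rms(w)
        if level > factor * baseline_rms:
            # A few bytes upstream instead of the raw sensor stream.
            yield {"window": i, "rms": round(level, 2)}

healthy = [0.1, -0.1, 0.12, -0.09] * 64   # quiet bearing
worn = [1.5, -1.4, 1.6, -1.5] * 64        # strong vibration
alerts = list(edge_monitor([healthy, worn], baseline_rms=rms(healthy)))
print(alerts)  # only the anomalous window is reported
```

At, say, a few kHz of sampling per axis, the bandwidth saving from shipping only these alert records instead of the raw waveform is exactly the orders-of-magnitude reduction the article describes.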
Real-time quality control in production lines:
Manufacturing processes require immediate feedback—detecting defects as they occur, not minutes later after cloud analysis. Edge ML models running on cameras or sensor arrays at inspection points can identify quality issues in real-time, triggering immediate responses like line stops or automated adjustments. The cloud receives summary statistics and flagged defects for trend analysis, not every inspection image.
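The control flow here is simple enough to sketch. The `classify` and `stop_line` callables below are placeholders for a real on-device model and actuator interface, not any specific API:

```python
def inspect(frames, classify, stop_line):
    """Run local classification per frame; trigger an immediate line
    stop on a defect and accumulate only summary counts for the cloud.
    `classify` and `stop_line` are hypothetical injected dependencies."""
    summary = {"inspected": 0, "defects": 0}
    for frame in frames:
        summary["inspected"] += 1
        if classify(frame) == "defect":
            summary["defects"] += 1
            stop_line()  # immediate local response, no cloud round trip
    return summary       # a few bytes uploaded per shift, not every image

# Toy usage: strings stand in for camera frames.
stopped = []
result = inspect(
    ["ok", "ok", "scratch"],
    lambda f: "defect" if f == "scratch" else "pass",
    lambda: stopped.append(True),
)
print(result, len(stopped))
```

The design choice worth noting is that the actuation path never touches the network, so a cloud outage degrades trend reporting but not the line-stop safety behavior.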
Environmental monitoring and cold chain management:
The example Kyaw uses is particularly relevant: detecting when refrigeration units have been left open in retail environments. In manufacturing, similar applications include monitoring temperature, humidity, or atmospheric conditions in cleanrooms or controlled environments. Edge ML provides instantaneous alerts when conditions deviate, while cloud systems track long-term trends and optimize HVAC operations.
Anomaly detection in operational patterns:
Equipment doesn't always fail in predictable ways. Edge ML models trained on normal operational patterns can detect subtle deviations that indicate problems before they escalate. By processing sensor data locally and only transmitting anomaly notifications, you dramatically reduce the data volume your platform must handle while improving detection speed.
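One way to sketch "trained on normal operational patterns" in a memory footprint a microcontroller could afford is an online z-score detector using Welford's running mean and variance. The z-threshold and warm-up length are illustrative assumptions:

```python
class RunningStats:
    """Welford's online mean/variance: constant memory, one pass,
    suitable for porting to a constrained embedded target."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
    def update(self, x):
        self.n += 1
        d = x - self.mean
        self.mean += d / self.n
        self.m2 += d * (x - self.mean)
    def std(self):
        return (self.m2 / self.n) ** 0.5 if self.n > 1 else 0.0

def detect(stream, z_threshold=4.0, warmup=50):
    """Flag samples deviating more than z_threshold sigmas from the
    running estimate of normal; only flagged indices would be sent
    upstream. Both parameters are illustrative."""
    stats, flagged = RunningStats(), []
    for i, x in enumerate(stream):
        if i >= warmup and stats.std() > 0 and \
                abs(x - stats.mean) > z_threshold * stats.std():
            flagged.append(i)
        else:
            stats.update(x)  # learn only from apparently-normal samples
    return flagged

# Synthetic stream: small periodic wobble around 20, one spike at index 150.
normal = [20.0 + 0.1 * ((i * 7) % 5 - 2) for i in range(200)]
stream = normal[:150] + [35.0] + normal[150:]
print(detect(stream))  # → [150]
```

A production model would be richer than a z-score, but the transmit-only-the-anomaly structure is the same.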
For data leaders evaluating edge ML deployment, processor selection directly impacts what your models can accomplish and at what cost. Kyaw identifies three critical factors:
Computational capability and specialized accelerators:
Modern microcontrollers increasingly include neural network accelerators—specialized hardware that runs inference efficiently. These accelerators dramatically improve performance while reducing power consumption compared to running models on general-purpose processors. When evaluating hardware platforms, look for processors with ML acceleration capabilities, not just raw processing power.
Power consumption constraints:
Edge devices in manufacturing environments often need to operate on battery power or have limited power budgets. Model complexity must balance accuracy requirements against power constraints. Simpler models running on lower-power processors may provide adequate accuracy for many applications while extending device lifetime and reducing operational costs.
Memory and storage limitations:
Edge ML models must fit within the memory constraints of embedded devices. This means model optimization and compression become critical. The good news is that many manufacturing applications don't require the complexity of cloud-based models—simpler models optimized for edge deployment often provide sufficient accuracy for operational decisions.
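The arithmetic behind "must fit within memory constraints" is worth making concrete. The back-of-envelope sketch below counts weight storage only (activations and runtime overhead are ignored), and the 250k-parameter figure is an illustrative assumption for a small edge model:

```python
def model_footprint_kb(param_count, bytes_per_weight):
    """Rough flash footprint for storing model weights alone; this
    sketch ignores activation memory and runtime overhead."""
    return param_count * bytes_per_weight / 1024

params = 250_000  # illustrative small edge model
print(f"float32: {model_footprint_kb(params, 4):.0f} KB")
print(f"int8:    {model_footprint_kb(params, 1):.0f} KB")
```

Quantizing from 32-bit floats to 8-bit integers cuts weight storage by 4x (here, roughly 977 KB down to 244 KB), which is often the difference between a model that fits a microcontroller's flash and one that does not.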
The hardware landscape is evolving rapidly. Silicon manufacturers are incorporating more capable ML accelerators into microcontrollers specifically designed for industrial applications. This means that edge devices deployed today have capabilities that were impossible just a few years ago.
One challenge data leaders face is the specialized skillset required for embedded machine learning development. Traditional ML engineers understand cloud frameworks like TensorFlow and PyTorch, but may lack experience optimizing models for embedded devices. Embedded engineers understand hardware constraints but may lack ML expertise.
Platforms like Edge Impulse address this skills gap by providing workflows that abstract much of the complexity: collecting and labeling sensor data, training and testing models, optimizing and compressing them for constrained targets, and generating deployable inference code.
The platform approach matters because it lets your existing data science teams deploy edge ML without requiring them to become embedded systems experts. The learning curve is measured in days or weeks, not months.
Importantly, Edge Impulse uses Apache 2.0 licensing for generated inference code—meaning no per-device royalties that would make large-scale deployments economically unfeasible. For data leaders planning deployments across thousands of devices, licensing models directly impact total cost of ownership.
The question facing data leaders isn't whether to deploy machine learning in manufacturing—it's where to deploy it. Cloud-centric architectures made sense when edge devices lacked computational capability. But the hardware landscape has fundamentally changed, and clinging to cloud-only models means paying exponentially increasing infrastructure costs while accepting latency constraints that prevent real-time operational intelligence.
Edge machine learning isn't about replacing cloud analytics—it's about strategically placing intelligence where it creates the most value. Real-time decisions happen at the edge. Strategic analysis happens in the cloud. Your data platform architecture should explicitly support both, with clear decision criteria for which models run where.
For manufacturing data leaders, the opportunity is significant: dramatically reduced cloud costs, faster operational responses, and more efficient use of your data infrastructure. The technology is proven, the tools are maturing, and the hardware is increasingly capable. The question is whether your architecture is ready to take advantage of it.