November 7, 2025

When you're building data infrastructure for a global enterprise, edge computing sounds compelling in theory. But between vendor marketing and technical complexity, how do you actually decide where edge fits in your strategy?
Rob Tiffany—who built the Lumada industrial IoT platform at Hitachi, co-authored Azure's IoT reference architecture at Microsoft, and now leads IoT strategy at Ericsson—has spent years implementing edge computing at scale. His perspective cuts through the buzz to focus on practical business cases and architectural decisions that actually matter.
Here's the reality: edge computing isn't one thing. It's a spectrum of deployment patterns, each solving different problems. Some installations need millisecond response times for safety-critical operations. Others face bandwidth costs that make cloud processing economically impossible. Still others operate under regulatory constraints that prohibit data from leaving specific locations.
Your job is understanding which problems edge computing actually solves for your organization, then architecting solutions that deliver value without creating operational nightmares. This guide provides the framework for making those decisions based on real-world implementations.
Stop looking for a single definition of edge computing—there isn't one. Instead, think about edge as a spectrum of compute locations, each with different characteristics and use cases.
Gateway edge devices: These sit near machines in facilities, connecting to PLCs and equipment using industrial protocols. They aggregate data from multiple sources and route it upstream. Originally conceived as simple data routers, they've evolved to handle local analytics and decision-making.
On-machine edge: When equipment has sufficient embedded compute, networking, and storage, the edge moves directly onto the machine itself. This works when you need the absolute lowest latency and can't tolerate any network delay.
Facility edge: Some organizations run edge infrastructure in on-site data centers adjacent to their operations. This provides more compute power than gateway devices while keeping data within facility boundaries. It's particularly common in manufacturing where plant managers refuse to send data to external clouds.
Telecom edge: Mobile operators deploy edge computing at cell towers and metro data centers within cities. This multi-access edge computing (MEC) intercepts data before it reaches distant clouds, reducing latency and network congestion.
What they have in common: All these patterns intercept and process data closer to its source than traditional cloud architectures. The business case for each varies based on your specific constraints around latency, cost, security, and data sovereignty.
The key insight: don't force your problems into someone else's edge definition. Evaluate where compute needs to happen based on your actual requirements.
Edge computing solves real business problems. Understanding which problems you actually have determines your architecture.
Latency and speed requirements: Safety-critical operations in manufacturing can't wait for cloud round-trips. When milliseconds matter, edge processing becomes non-negotiable. Predictive maintenance that prevents equipment damage needs local analytics to trigger immediate responses.
Bandwidth economics: Consider the factory generating terabytes of machine data hourly. Sending that volume to the cloud costs more than the insights are worth. Local processing to extract meaningful signals, then sending only aggregated data upstream, makes economic sense.
Data sovereignty and regulatory compliance: Many facilities operate under restrictions that prohibit data from crossing geographic boundaries. Healthcare regulations limit patient data movement. Some plant managers simply refuse to send operational data to external clouds. Edge processing respects these constraints while still enabling analytics.
Network reliability: Remote oil and gas operations in West Texas or Saudi Arabia can't depend on consistent connectivity. Expensive satellite links mean limiting data transfer. Edge devices at remote sites make local decisions even when disconnected from central systems.
Security and attack surface: Keeping sensitive operational data within facility boundaries reduces exposure. Processing locally means less data traversing networks where it could be intercepted.
Cost optimization: Cloud ingress charges for massive industrial data volumes add up quickly. Edge processing reduces what flows upstream, directly cutting costs. For some workloads, the total cost of ownership favors edge deployment.
The pattern here: edge computing isn't about being cutting-edge. It's about economics, physics, and regulatory reality. Match your architecture to actual constraints rather than following industry trends.
When implementing analytics at the edge, resist the temptation to immediately deploy complex machine learning. Start with basics that deliver value, then progress to advanced capabilities.
Data filtering as foundation: The simplest edge analytics involve filtering redundant data. If a temperature sensor reports the same value it sent last time, drop the packet. This basic filtering can reduce data volumes by 50-70% for stable processes, directly cutting bandwidth and storage costs.
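A minimal sketch of that filtering idea in Python follows; the deadband value, sensor ID, and reading format are illustrative assumptions, not any particular platform's API.

```python
# Minimal sketch of edge-side duplicate suppression. The deadband value
# and reading format are illustrative assumptions.
last_sent = {}  # sensor_id -> last forwarded value

def should_forward(sensor_id: str, value: float, deadband: float = 0.0) -> bool:
    """Forward a reading only if it differs from the last forwarded value
    by more than the deadband; otherwise drop it at the edge."""
    previous = last_sent.get(sensor_id)
    if previous is not None and abs(value - previous) <= deadband:
        return False  # redundant reading: drop the packet locally
    last_sent[sensor_id] = value
    return True

# Example: a stable temperature sensor produces mostly droppable readings.
readings = [21.5, 21.5, 21.5, 21.6, 21.5, 23.9]
forwarded = [v for v in readings if should_forward("temp-01", v, deadband=0.2)]
print(forwarded)  # [21.5, 23.9] -> only meaningful changes go upstream
```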
Pattern matching and thresholds: Define expected values and acceptable ranges in your digital twin models. As data flows in, compare actual values against expectations. Green, yellow, and red zones for each parameter enable simple but effective alerting. This isn't sophisticated, but it works and scales.
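A sketch of zone-based checks, assuming the green and yellow bands come from your twin model; the temperature numbers here are made up for illustration.

```python
# Hedged sketch of zone-based alerting: the zone boundaries would come
# from a digital twin model; these numbers are invented examples.
def classify(value: float, green: tuple, yellow: tuple) -> str:
    """Return 'green' inside the expected range, 'yellow' inside the
    acceptable range, 'red' otherwise."""
    lo, hi = green
    if lo <= value <= hi:
        return "green"
    lo, hi = yellow
    if lo <= value <= hi:
        return "yellow"
    return "red"

# Expected bearing temperature 40-60 C, tolerable 30-70 C (illustrative).
for reading in (55.0, 65.0, 82.0):
    print(reading, classify(reading, green=(40, 60), yellow=(30, 70)))
# 55.0 green / 65.0 yellow / 82.0 red
```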
KPI calculation at the edge: Rather than sending raw data upstream for processing, calculate key performance indicators locally. Send only the KPIs rather than underlying data. This reduces bandwidth while providing business-relevant metrics in real time.
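A sketch of the pattern, with a stand-in publish() function where a real deployment would use its uplink (MQTT, HTTP, or similar); the topic name and sample values are assumptions.

```python
# Sketch: compute a KPI locally over a window of raw readings and send
# only the aggregate upstream. publish() is a placeholder for the uplink.
from statistics import mean

def publish(topic: str, payload: dict) -> None:
    print(f"-> {topic}: {payload}")  # stand-in for MQTT/HTTP uplink

window = [412.0, 408.5, 415.2, 409.9]  # raw throughput samples (units/hr)
kpi = {
    "avg_throughput": round(mean(window), 1),
    "min": min(window),
    "max": max(window),
    "samples": len(window),
}
publish("plant1/line3/kpi/throughput", kpi)  # 4 raw points -> 1 KPI message
```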
Rule-based actions: If-this-then-that logic deployed at the edge enables automated responses to events. When conditions match defined patterns, trigger local actions or alerts without waiting for cloud processing.
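A rules engine can be as small as a list of predicate/action pairs, as in this sketch; the sensor names and thresholds are invented purely for illustration.

```python
# If-this-then-that at the edge: rules pair a condition with a local
# action, evaluated with no cloud round-trip. Names are illustrative.
rules = [
    (lambda d: d["vibration_mm_s"] > 7.1,
     lambda d: print("ALERT: excessive vibration, slow the spindle")),
    (lambda d: d["temp_c"] > 90,
     lambda d: print("ALERT: overtemperature, open coolant valve")),
]

def evaluate(datapoint: dict) -> None:
    for condition, action in rules:
        if condition(datapoint):
            action(datapoint)  # trigger the local response immediately

evaluate({"vibration_mm_s": 8.3, "temp_c": 72})  # fires the vibration rule
```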
Progression to machine learning: Once basic analytics deliver value and you understand your data patterns, deploy machine learning models at the edge. But do this strategically—models trained in the cloud can be deployed locally for inference. Some advanced systems even train models at the edge based on local data patterns, though this remains complex to implement.
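As a sketch of the train-in-the-cloud, infer-at-the-edge pattern, assuming the model was exported to ONNX with a single input named "features" and a single output, and that onnxruntime is installed on the device:

```python
# Sketch of cloud-trained, edge-inferred ML. The model file, input name,
# and single-output shape are assumptions, not a specific vendor workflow.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("anomaly_model.onnx")  # trained in the cloud
features = np.array([[21.5, 0.82, 409.9]], dtype=np.float32)  # one snapshot
(score,) = session.run(None, {"features": features})  # local inference only
print("anomaly score:", score)
```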
The progressive approach: Start with data filtering and basic pattern matching. Demonstrate value quickly. Build organizational capability and trust. Then expand to more sophisticated analytics as use cases mature. This approach reduces risk and delivers incremental value rather than attempting complex implementations that may fail.
Think of it as taking a test—answer the easy questions first, then tackle the difficult ones. Too many organizations try to start with AI without establishing the basic data quality and processing foundations that make AI effective.
Digital twins provide the semantic layer that makes edge analytics practical at scale. Understanding this integration helps you architect more intelligent edge systems.
Digital twin models as data dictionaries: At the model level, you define everything about an asset class—what sensors it has, the data types for each sensor, units of measure, expected value ranges, and commands it accepts. This acts as a comprehensive data dictionary that edge devices use to understand incoming data.
Orchestrating twins to edge nodes: When you deploy edge devices, your orchestration system pushes the relevant digital twin definitions to each node. An edge device monitoring pumps receives pump digital twins. A device monitoring conveyors gets conveyor twins. This targeted deployment keeps edge footprints small while providing necessary intelligence.
Local decision-making with twin logic: Edge devices use twin definitions to make intelligent decisions locally. The twin defines that tire pressure should be 32 PSI with acceptable range 30-35. The edge device compares incoming sensor data against these definitions and takes defined actions when thresholds are crossed.
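As a sketch, a twin definition acting as an on-device data dictionary might look like the structure below; the schema and field names are invented for illustration, since each platform defines its own.

```python
# A digital twin definition serving as a data dictionary on the edge
# device. The schema is an illustrative assumption, not a platform API.
tire_twin = {
    "asset_class": "tire",
    "sensors": {
        "pressure": {
            "type": "float",
            "unit": "psi",
            "expected": 32.0,
            "range": (30.0, 35.0),  # acceptable band from the twin model
        }
    },
    "commands": ["inflate", "deflate"],
}

def in_range(twin: dict, sensor: str, value: float) -> bool:
    lo, hi = twin["sensors"][sensor]["range"]
    return lo <= value <= hi

if not in_range(tire_twin, "pressure", 28.4):
    print("pressure out of range -> raise local alert")  # decided on-device
```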
Offline operation: Because digital twin definitions live on edge devices, they continue operating even when disconnected from central systems. This is critical for remote operations where connectivity is intermittent or expensive.
Database replication patterns: Technically, you're replicating digital twin definitions from central databases to smaller embedded databases on edge devices. This follows proven patterns from mobile application development—maintain definitive records centrally while replicating subsets to edge nodes for local operation.
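A minimal sketch of that local replica using Python's built-in sqlite3 module; the table layout is an assumption for illustration.

```python
# Cache twin definitions pushed from the central system in an embedded
# SQLite database so the node keeps working offline. Schema is assumed.
import json
import sqlite3

db = sqlite3.connect("edge_twins.db")  # embedded DB on the edge device
db.execute(
    "CREATE TABLE IF NOT EXISTS twins (asset_class TEXT PRIMARY KEY, definition TEXT)"
)

def upsert_twin(asset_class: str, definition: dict) -> None:
    """Apply a twin definition received from the central system."""
    db.execute(
        "INSERT OR REPLACE INTO twins VALUES (?, ?)",
        (asset_class, json.dumps(definition)),
    )
    db.commit()

def load_twin(asset_class: str) -> dict:
    """Read the local replica; works with or without connectivity."""
    row = db.execute(
        "SELECT definition FROM twins WHERE asset_class = ?", (asset_class,)
    ).fetchone()
    return json.loads(row[0]) if row else {}

upsert_twin("pump", {"sensors": {"flow": {"unit": "l/min", "range": [10, 80]}}})
print(load_twin("pump"))
```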
Benefits beyond analytics: Digital twins at the edge provide more than analytics capability. They normalize data from heterogeneous equipment into consistent models. They enable command and control through standardized interfaces. They create the semantic foundation for scaling edge deployments across hundreds or thousands of nodes.
The insight: digital twins aren't just fancy data models. They're the mechanism for distributing intelligence to edge devices in a scalable, maintainable way.
Deploying edge computing at scale creates an orchestration challenge that many organizations underestimate. You're not managing one cloud environment—you're managing potentially thousands of distributed compute nodes.
Registration and identity: Every edge device needs unique identity and security credentials. Like IoT endpoints, edge nodes register with central systems, authenticate their identity, and establish secure communication channels. This registration process determines what intelligence gets pushed to each node.
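A hypothetical registration handshake, sketched with Python's requests library; the endpoint URL, payload fields, and token scheme are all invented for illustration.

```python
# Hypothetical registration handshake: endpoint, payload fields, and
# token scheme are assumptions, not a real orchestration platform's API.
import requests

def register_device(device_id: str, csr_pem: str) -> str:
    """Register an edge node and return the credential the orchestrator issues."""
    resp = requests.post(
        "https://orchestrator.example.com/api/devices/register",  # hypothetical
        json={
            "device_id": device_id,
            "csr": csr_pem,  # certificate signing request for this node
            "capabilities": ["pump", "conveyor"],  # drives twin assignment
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["auth_token"]  # used for all later communication
```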
Topology mapping: Your orchestration system must understand relationships—which edge nodes communicate with which devices, which analytics run where, and how data flows through your architecture. Visualizing these relationships helps administrators maintain complex deployments.
Configuration distribution: When you define analytics logic centrally, orchestration systems distribute it to relevant edge nodes. This might be containerized applications, digital twin definitions, machine learning models, or simple configuration parameters. The goal: define once centrally, deploy everywhere automatically.
Container-based deployment: Many modern edge platforms use containers to package and distribute analytics. This provides consistency, versioning, and rollback capability. You can update analytics across thousands of nodes without manual intervention at each site.
Alternative approaches: Not everyone uses containers. Some platforms deploy lightweight agents that consume fewer resources than container runtimes. Others use actor-based frameworks where tiny software agents handle individual sensors independently. Evaluate which approach matches your edge device capabilities and operational model.
Bidirectional communication: Edge nodes maintain persistent outbound connections to orchestration systems. This security model—no listening ports on edge devices—reduces attack surface while enabling command and control from central systems using the same connection.
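A sketch of the outbound-only pattern using MQTT (assuming the paho-mqtt 1.x callback API); the broker host, port, CA file, and topic names are assumptions.

```python
# Outbound-only pattern with MQTT: the device dials out, then receives
# commands over that same connection, so it never exposes a listening
# port. Assumes the paho-mqtt 1.x API; host/topics are illustrative.
import paho.mqtt.client as mqtt

NODE_ID = "edge-node-017"

def on_connect(client, userdata, flags, rc):
    # Commands arrive inbound over the session the device opened itself.
    client.subscribe(f"nodes/{NODE_ID}/commands")

def on_message(client, userdata, msg):
    print("command received:", msg.payload.decode())

client = mqtt.Client(client_id=NODE_ID)
client.tls_set(ca_certs="ca.pem")           # verify the broker's certificate
client.on_connect = on_connect
client.on_message = on_message
client.connect("broker.example.com", 8883)  # device initiates; no open ports
client.loop_forever()
```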
Version management: With hundreds of distributed nodes, version control becomes critical. Your orchestration system must track what software version runs where, manage rolling updates, and provide rollback capability when updates fail.
The challenge: edge computing creates distributed systems complexity. Invest in robust orchestration or you'll drown in operational overhead as deployments scale.
Edge devices present unique security challenges because they sit outside traditional data center security perimeters. Your security model must account for physical and network exposure.
Physical security fundamentals: Edge devices may be accessible to unauthorized personnel. Physical tamper detection, locked enclosures, and restricted physical access provide the first security layer. When possible, locate devices in controlled spaces rather than exposed areas.
Operating system hardening: Choose edge device operating systems with security features designed for exposed deployment. Secure boot prevents unauthorized code execution during startup. Full disk encryption protects data if devices are stolen. Application whitelisting limits what software can execute.
Network architecture: Never give edge devices public IP addresses. Use exclusively outbound connections where edge devices initiate all communication with central systems. This eliminates most network-based attacks that rely on connecting to listening services.
Certificate management: Implement proper certificate verification even though managing certificates on distributed devices is difficult. The alternative—disabling certificate verification—opens you to man-in-the-middle attacks. Invest in certificate management infrastructure rather than accepting this risk.
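For example, with Python's requests library, proper verification means pointing verify at your CA bundle rather than setting it to False; the file path and URL below are assumptions.

```python
# Verify certificates even when it's inconvenient. The URL and CA path
# here are illustrative assumptions.
import requests

# Vulnerable to man-in-the-middle attacks; avoid:
# requests.get("https://orchestrator.example.com/health", verify=False)

# Verifies the server against your own CA (e.g., a private PKI):
resp = requests.get(
    "https://orchestrator.example.com/health", verify="/etc/edge/ca.pem"
)
print(resp.status_code)
```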
Embedded firewalls: When edge devices must listen for connections from IoT endpoints, enable embedded firewalls that permit only necessary ports. Modern operating systems include firewall capability—use it to minimize attack surface.
Regular updates: Edge devices remain secure only when kept current with security patches. Implement automated update mechanisms similar to Windows Update or Linux package managers. Your orchestration system should handle updates across all edge nodes without manual intervention.
Antivirus and malware protection: Even on Linux-based edge devices, consider antivirus scanning. Keep virus definitions current. This adds another layer of defense against malware that might infect devices through other vectors.
Authentication and access control: Require strong authentication for any direct device access. Rotate credentials regularly through automated processes. Implement role-based access control so technicians and administrators have appropriate permissions.
Defense in depth: No single measure provides complete security. Layer multiple controls—physical, network, operating system, and application level—so attackers must breach multiple barriers. This defense-in-depth approach is essential for exposed edge infrastructure.
The reality: edge security requires diligence. Budget for security infrastructure and operational overhead rather than treating it as an afterthought.
The edge computing market lacks mature standards, making platform selection challenging. Different vendors offer incompatible solutions, each claiming superiority.
Cloud vendor approaches: Major cloud providers offer edge solutions that integrate with their platforms. AWS Greengrass and Azure IoT Edge push containers and intelligence to edge devices while integrating with respective cloud services. These provide strong orchestration but lock you into specific clouds.
Third-party platforms: Companies like Losant and Crosser provide visual tools for designing edge analytics that deploy across multiple environments. They abstract cloud-specific implementations, providing portability at the cost of cloud-native features.
Open source options: EdgeX Foundry provides an open-source edge framework focused on industrial integration. The Eclipse Foundation offers various IoT projects, including edge capabilities. These avoid vendor lock-in but require more implementation effort.
Specialized solutions: Some companies like Swim.ai take novel approaches—using actor frameworks and self-organizing agents rather than containers. These advanced architectures can provide remarkable capabilities but require understanding non-traditional patterns.
Evaluation criteria: Assess orchestration capability—can you manage hundreds of distributed nodes from central systems? Evaluate protocol support for your specific industrial equipment. Consider integration with your existing cloud infrastructure and analytics tools. Examine operational overhead required to maintain deployments.
The hybrid reality: You'll likely use multiple approaches. Edge gateways might run one platform while facility-level edge computing uses different infrastructure. Accept this heterogeneity rather than forcing everything into one framework.
Start narrow, expand strategically: Begin with specific use cases using platforms that solve those problems well. Prove value, learn operational requirements, then expand. Avoid committing to enterprise-wide edge platforms before understanding what you actually need.
The market remains immature with no clear winners. Evaluate based on your specific requirements rather than betting on which vendor will dominate long-term.
The 5G rollout creates genuine new capabilities for edge computing, though separating reality from hype requires understanding what 5G actually delivers.
Increased capacity: 5G supports roughly 100 times more concurrent device connections than 4G LTE on the same infrastructure, with a design target of one million connected devices per square kilometer. This removes previous bottlenecks for IoT device density in scenarios like smart cities or connected manufacturing.
Enhanced bandwidth and speed: Higher data rates enable new use cases like video analytics at scale or high-frequency sensor monitoring that were previously impractical over cellular networks.
Lower latency: Reduced latency enables edge computing within cellular infrastructure itself—multi-access edge computing (MEC) that processes data at cell towers or nearby data centers before it reaches distant clouds.
Network slicing: 5G allows carriers to create isolated network segments with different quality of service characteristics. Critical applications get guaranteed bandwidth and latency while less critical traffic uses best-effort delivery. This brings enterprise network quality control to cellular infrastructure.
Edge computing integration: Mobile operators deploy compute infrastructure at cell towers and metro data centers. This creates new edge deployment patterns where your edge nodes run in carrier facilities rather than your own locations. It also reduces network congestion by processing data locally rather than sending everything across backbone networks.
Industrial private networks: Organizations can deploy private 5G networks for facilities, creating secure wireless infrastructure with controlled access and guaranteed performance characteristics. This enables untethered connected equipment while maintaining security.
The practical implications: 5G removes previous cellular bottlenecks that limited IoT adoption. If your architecture currently uses expensive Ethernet or Wi-Fi for dense device deployments, 5G private networks become viable alternatives. For remote monitoring scenarios, carrier-provided edge computing reduces your infrastructure footprint.
The hype versus reality: 5G is genuinely enabling for specific use cases, particularly high device density and wireless industrial applications. But it's not revolutionary for scenarios already served well by existing connectivity options.
Edge computing represents a fundamental architectural shift in how enterprises process data. Rather than centralizing everything in the cloud, you're distributing intelligence based on where it delivers most value.
But success requires moving beyond vendor marketing to understand actual business drivers. You're not deploying edge computing because it's innovative—you're deploying it because latency, bandwidth economics, or regulatory constraints make it necessary.
The organizations succeeding with edge computing start with clear business problems, implement simple solutions that work, build operational capability gradually, and expand strategically as use cases mature. They invest in orchestration infrastructure from the beginning because managing distributed nodes without it becomes impossible at scale.
Your role as a data leader is establishing the framework for these decisions—identifying which workloads belong at the edge, selecting appropriate technology platforms, ensuring security controls work for exposed devices, and building the orchestration capability that makes edge computing manageable.
The specifics of which platforms you choose or which edge architecture patterns you implement matter less than the principles driving those decisions. Understand your actual constraints. Start simple and deliver value incrementally. Invest in orchestration and operational capability. Design security as layers from the start.
Edge computing isn't replacing cloud infrastructure—it's complementing it by placing compute where physics, economics, and regulatory reality demand. Organizations that understand this distinction and architect accordingly will build data infrastructure that actually solves their problems rather than creating new ones.