November 8, 2025

Manufacturing organizations face decisions about how to structure their operational data infrastructure, often encountering terms like Industrial IoT, Industry 4.0, and edge computing without clear guidance on what these concepts mean in practice. Understanding the fundamental technologies and architectural patterns enables more effective planning for data integration across manufacturing operations.
Benson Hougland, Vice President at Opto 22 with over 30 years of experience bridging operational technology and information technology, provides perspective on the essential components of industrial data infrastructure. His experience implementing systems across diverse manufacturing environments demonstrates how technical architecture choices impact data accessibility and system integration.
The core objective of industrial IoT implementation is not adopting specific technologies, but rather enabling effective data distribution across systems and stakeholders. This means making operational data accessible to the systems and people who need it, when they need it, in formats they can use.
Manufacturing data traditionally remained isolated within individual systems—PLCs, SCADA platforms, MES applications—each with proprietary interfaces and limited connectivity options. This isolation creates several operational challenges. Analytics teams cannot access real-time production data without custom integration projects. Quality systems operate independently from process control systems. Production planning relies on delayed data rather than current conditions.
The shift toward open, standards-based connectivity enables what can be described as data democratization: operational data becomes available across systems without requiring point-to-point custom integrations for each new connection. When a machine publishes its data using standard protocols, multiple systems can consume that data independently—MES for production tracking, analytics platforms for process optimization, maintenance systems for predictive monitoring.
Core principles for data accessibility:
- Operational data is available to the systems and people who need it, when they need it, in formats they can use.
- Standard protocols replace point-to-point custom integrations for each new connection.
- Data is published once and consumed independently by multiple systems, such as MES, analytics, and maintenance platforms.
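As a minimal sketch of that publish-once pattern, the example below has an edge device publish a machine's state as JSON to an MQTT broker. The broker address, topic, tag names, and values are placeholders, and the Eclipse Paho MQTT client for Python (paho-mqtt 2.x) is assumed rather than anything specified here.

```python
import json
import time

import paho.mqtt.client as mqtt

BROKER_HOST = "broker.plant.example"  # placeholder broker address
TOPIC = "plant1/line3/filler/state"   # placeholder topic for one machine

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2, client_id="filler-edge-01")
client.connect(BROKER_HOST, 1883, keepalive=60)
client.loop_start()

while True:
    # The machine publishes its state once; MES, analytics, and maintenance
    # systems can each subscribe to the same topic independently.
    payload = {
        "timestamp": time.time(),
        "line_speed_bpm": 412,  # illustrative values only
        "reject_count": 3,
        "running": True,
    }
    client.publish(TOPIC, json.dumps(payload), qos=1)
    time.sleep(5)
```

Any number of consumers can then subscribe to this topic without the publishing device being reconfigured or re-polled.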
Understanding current industrial IoT architecture requires context about how these technologies developed. In the 1980s, PC-based control introduced general-purpose computing to industrial applications, replacing purpose-built controllers with standard hardware running specialized software. This established the foundation for using commercial computing platforms in operational environments.
The 1990s brought Ethernet to manufacturing facilities, though adoption faced resistance. Equipment vendors had built business models around proprietary communication protocols, which created vendor lock-in. Ethernet represented a commodity network technology that would enable interoperability between different vendors' equipment.
The critical development was TCP/IP as the standard network protocol. TCP/IP provided the transport layer that enabled reliable connections between systems regardless of hardware vendor. This created the foundation for application-level protocols that could operate across diverse equipment.
MQTT emerged in 1999 to address specific requirements for lightweight, reliable messaging over constrained networks. JSON provided a self-describing data format that both humans and machines could parse effectively. These application-level protocols, operating over TCP/IP and Ethernet, created the building blocks for modern industrial data infrastructure.
Key technology milestones:
- 1980s: PC-based control brings general-purpose computing hardware into industrial applications.
- 1990s: Ethernet enters manufacturing facilities despite resistance from vendors invested in proprietary protocols.
- TCP/IP becomes the standard network protocol, enabling reliable connections regardless of hardware vendor.
- 1999: MQTT emerges for lightweight, reliable messaging over constrained networks, with JSON providing a self-describing data format.
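To make the self-describing point concrete, the sketch below builds a single reading whose payload carries its own name, value, engineering units, data type, and timestamp, so a consumer needs no vendor documentation to interpret it. The field names are illustrative, not taken from any particular standard.

```python
import json
from datetime import datetime, timezone

# A self-describing reading: the payload carries its own metadata (name,
# engineering units, data type, timestamp), so any consumer that can parse
# JSON can interpret it without vendor-specific documentation.
reading = {
    "name": "Line3/Filler/BearingTemp",
    "value": 72.4,
    "units": "degC",
    "datatype": "float",
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "quality": "GOOD",
}

print(json.dumps(reading, indent=2))
```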
Organizations implementing industrial data infrastructure face decisions about proprietary versus open technologies. Open technologies refer to standards and protocols that any vendor can implement, creating interoperability between different systems without requiring specific vendor pairings.
The technology stack for industrial data infrastructure includes several layers. The physical layer consists of Ethernet or wireless networks. The transport layer uses TCP/IP for connection management. The application layer includes protocols like MQTT for messaging and specifications like Sparkplug B for data organization.
Sparkplug B adds structure to MQTT messaging by defining how devices should organize and describe their data. Rather than each vendor creating custom data formats, Sparkplug B provides a standard approach to metadata, data types, and namespace organization. This standardization means analytics platforms can consume data from different equipment vendors without custom parsing logic for each source.
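As an illustration of that namespace organization, Sparkplug B defines a fixed MQTT topic structure, spBv1.0/&lt;group_id&gt;/&lt;message_type&gt;/&lt;edge_node_id&gt;/&lt;device_id&gt;; the sketch below assembles those topics for hypothetical group, node, and device names. Actual Sparkplug B payloads are Protocol Buffers encoded, which a Sparkplug client library such as Eclipse Tahu handles.

```python
# Sparkplug B organizes MQTT topics into a fixed namespace:
#   spBv1.0/<group_id>/<message_type>/<edge_node_id>[/<device_id>]
# The group, node, and device identifiers below are placeholders.

SPARKPLUG_NAMESPACE = "spBv1.0"

def sparkplug_topic(group_id, message_type, edge_node_id, device_id=None):
    """Build a Sparkplug B topic string for the given identifiers."""
    parts = [SPARKPLUG_NAMESPACE, group_id, message_type, edge_node_id]
    if device_id is not None:
        parts.append(device_id)
    return "/".join(parts)

# A device birth certificate (DBIRTH) announces the device and its metrics;
# ongoing values follow as device data messages (DDATA).
print(sparkplug_topic("Plant1", "DBIRTH", "Line3-Gateway", "Filler"))
# -> spBv1.0/Plant1/DBIRTH/Line3-Gateway/Filler
print(sparkplug_topic("Plant1", "DDATA", "Line3-Gateway", "Filler"))
# -> spBv1.0/Plant1/DDATA/Line3-Gateway/Filler
```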
Open technologies reduce long-term integration costs and vendor dependencies. When equipment publishes data using standard protocols, organizations can select best-of-breed systems at each layer—edge devices, message brokers, analytics platforms, visualization tools—without being constrained to single-vendor solutions.
Benefits of open technology standards:
- Interoperability between different vendors' systems without requiring specific vendor pairings.
- A standard approach to metadata, data types, and namespaces, removing custom parsing logic for each data source.
- Lower long-term integration costs and reduced vendor dependencies.
- Freedom to select best-of-breed edge devices, message brokers, analytics platforms, and visualization tools.
The term "edge computing" refers to data processing that occurs at or near the source of data generation, rather than transmitting all raw data to centralized systems for processing. This approach reduces network bandwidth requirements and enables local decision-making with lower latency.
A fundamental principle for edge architecture involves the client-server model. Servers listen for incoming connections and respond to requests. Clients initiate outbound connections to servers. In operational technology environments, the distinction matters significantly for both performance and security.
Traditional industrial architectures often implemented devices as servers, requiring a master system to continuously poll each device for its current state. This creates several limitations. Each device requires an open port accepting incoming connections. Network traffic scales with the number of polling cycles. Adding new data consumers requires either increased polling frequency or complex data distribution logic.
Implementing devices as clients reverses this pattern. Edge devices initiate connections to message brokers or data platforms, publishing information as it becomes available. Multiple consumers can subscribe to this data without affecting the publishing device. The edge device requires no open ports accepting incoming connections, reducing security exposure.
Edge architecture considerations:
- Process data at or near its source to reduce bandwidth requirements and enable low-latency local decisions.
- Implement devices as clients that initiate outbound connections to message brokers or data platforms.
- Avoid open ports accepting incoming connections on edge devices to reduce security exposure.
- Allow multiple consumers to subscribe to published data without affecting the publishing device.
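A sketch of the device-as-client pattern follows, again assuming paho-mqtt with placeholder broker, topic, and deadband values: the device opens a single outbound connection and, as one common way to limit traffic, publishes only when a value changes meaningfully rather than waiting to be polled.

```python
import json
import random
import time

import paho.mqtt.client as mqtt

BROKER_HOST = "broker.plant.example"       # placeholder
TOPIC = "plant1/line3/filler/temperature"  # placeholder
DEADBAND = 0.5                             # only report changes larger than 0.5 degC

def read_sensor():
    # Stand-in for a real I/O read on the edge device.
    return 70.0 + random.random() * 5.0

# The edge device is a client: it makes one outbound connection to the
# broker and needs no open inbound ports for consumers to receive its data.
client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2, client_id="filler-temp-01")
client.connect(BROKER_HOST, 1883, keepalive=60)
client.loop_start()

last_reported = None
while True:
    value = read_sensor()
    # Report by exception: publish only when the value moves outside the
    # deadband, so network traffic scales with change rather than with polling.
    if last_reported is None or abs(value - last_reported) > DEADBAND:
        client.publish(TOPIC, json.dumps({"value": value, "ts": time.time()}), qos=1)
        last_reported = value
    time.sleep(1)
```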
SCADA (Supervisory Control and Data Acquisition) systems have served manufacturing operations for decades, providing human-machine interfaces and data collection functionality. The role of SCADA in modern data architecture depends on specific operational requirements.
SCADA excels at providing operators with real-time visibility and control interfaces. Operators need to visualize process state, respond to alarms, and make control adjustments. SCADA systems provide tested, reliable interfaces for these functions.
However, SCADA was not designed as a general-purpose data distribution platform. Extracting data from SCADA for use in other systems often requires proprietary interfaces or manual export processes. Organizations with mature SCADA deployments face decisions about whether to route all operational data through SCADA or implement parallel data paths.
Modern architectures often implement SCADA as one consumer of operational data rather than the sole repository. Edge devices publish data using standard protocols. SCADA subscribes to this data for operator interface functions. Analytics platforms subscribe to the same data for process analysis. Maintenance systems subscribe for equipment monitoring. Each system receives the data it needs without custom point-to-point integrations.
SCADA integration patterns:
- SCADA subscribes to published operational data for operator visualization, alarming, and control interfaces.
- Edge devices publish via standard protocols rather than routing all data through SCADA.
- Analytics, maintenance, and planning systems subscribe to the same data independently.
- No custom point-to-point integration is required between SCADA and each downstream system.
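To illustrate that fan-out, the sketch below is a generic subscriber; a SCADA host, an analytics platform, and a maintenance system could each run an equivalent client against the same topics without the publishing device changing at all. The broker address, topic filter, and client identifier are placeholders, and paho-mqtt is assumed.

```python
import json

import paho.mqtt.client as mqtt

BROKER_HOST = "broker.plant.example"  # placeholder
TOPIC_FILTER = "plant1/line3/filler/#"  # everything this machine publishes

def on_connect(client, userdata, flags, reason_code, properties):
    client.subscribe(TOPIC_FILTER, qos=1)

def on_message(client, userdata, msg):
    data = json.loads(msg.payload)
    # A SCADA HMI, an analytics platform, and a maintenance system would each
    # handle the data differently here; the publisher is unaffected either way.
    print(f"{msg.topic}: {data}")

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2, client_id="analytics-consumer-01")
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER_HOST, 1883, keepalive=60)
client.loop_forever()
```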
Security in industrial environments requires design decisions during architecture planning rather than additions after deployment. The concept of defense in depth means implementing multiple layers of security controls, so that compromise of any single layer does not expose the entire system.
Network segmentation establishes zones with different security requirements. Production networks should be separated from enterprise networks, with controlled connections between zones. This limits the potential impact of security incidents in one network zone.
Encryption protects data in transit from unauthorized access. All network communication should use TLS or similar encryption protocols, including traffic within production networks. The assumption that internal networks are inherently secure has proven incorrect in numerous security incidents.
Authentication verifies the identity of systems attempting to establish connections. Each device should have unique credentials rather than shared passwords. Certificate-based authentication provides stronger security than username-password approaches, particularly for devices without user interfaces.
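As a sketch of how the encryption and authentication requirements combine in practice, an MQTT client can be configured to connect over TLS and present a unique per-device certificate instead of a shared password. The broker address and certificate paths below are placeholders, and paho-mqtt is assumed.

```python
import paho.mqtt.client as mqtt

BROKER_HOST = "broker.plant.example"  # placeholder
MQTT_TLS_PORT = 8883                  # conventional port for MQTT over TLS

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2, client_id="filler-edge-01")

# Encrypt traffic in transit and authenticate this device with its own
# certificate rather than a shared password. Certificate paths are placeholders.
client.tls_set(
    ca_certs="/etc/certs/plant-ca.pem",        # CA that signed the broker certificate
    certfile="/etc/certs/filler-edge-01.pem",  # unique per-device certificate
    keyfile="/etc/certs/filler-edge-01.key",   # device private key
)

client.connect(BROKER_HOST, MQTT_TLS_PORT, keepalive=60)
client.loop_start()
```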
Authorization controls what authenticated systems can access. A temperature sensor should only be authorized to publish temperature data, not production counts or quality measurements. Authorization policies should map to operational boundaries, limiting each device to its legitimate data access requirements.
Security architecture requirements:
- Defense in depth, so that compromise of any single layer does not expose the entire system.
- Network segmentation between production and enterprise zones, with controlled connections between them.
- TLS encryption for all traffic, including communication within production networks.
- Unique credentials per device, preferably certificate-based rather than username and password.
- Authorization policies that limit each device to its legitimate data access requirements.
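The authorization principle amounts to a mapping from device identity to permitted topics. Real deployments enforce this in the broker's access-control configuration; the hypothetical sketch below only shows the shape of such a policy, with placeholder identities and topics.

```python
# Illustrative authorization policy: each device identity maps to the topics
# it may publish. Real brokers enforce this in their own ACL configuration;
# the identities and topics here are placeholders.
PUBLISH_ACL = {
    "filler-temp-01": {"plant1/line3/filler/temperature"},
    "filler-edge-01": {"plant1/line3/filler/state",
                       "plant1/line3/filler/counts"},
}

def may_publish(client_id, topic):
    """Allow a publish only if the topic is in the device's authorized set."""
    return topic in PUBLISH_ACL.get(client_id, set())

# The temperature sensor may publish temperature data...
assert may_publish("filler-temp-01", "plant1/line3/filler/temperature")
# ...but not production counts.
assert not may_publish("filler-temp-01", "plant1/line3/filler/counts")
```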
Implementing industrial data infrastructure requires teams that understand both operational technology and information technology. This combination of skills has historically been rare, as OT and IT organizations operated independently with different priorities and expertise.
OT professionals understand production processes, control systems, and reliability requirements. They know which data matters for operations and what failure modes need consideration. IT professionals understand network architecture, security principles, and data management practices. Effective implementation requires collaboration between these skill sets.
Organizations can develop this capability through several approaches. Cross-training programs help OT professionals learn IT concepts and IT professionals understand operational requirements. System integrators with experience in both domains can supplement internal capabilities during initial implementations. Formal education programs now offer coursework combining OT and IT topics specifically for industrial digitalization.
The technical skills needed include understanding of network protocols, data formats, security principles, and how to configure edge devices and message brokers. Beyond technical skills, implementation requires understanding of how data flows through operational processes and which integration patterns support organizational requirements.
Key capabilities for implementation:
- OT knowledge of production processes, control systems, reliability requirements, and failure modes.
- IT knowledge of network architecture, security principles, and data management practices.
- Working familiarity with network protocols, data formats, and the configuration of edge devices and message brokers.
- Understanding of how data flows through operational processes and which integration patterns support organizational requirements.
Manufacturing organizations building data infrastructure should focus on reducing integration complexity through standard protocols rather than optimizing for specific use cases with proprietary solutions. The short-term efficiency of custom integration often creates long-term technical debt when requirements change or new systems need access to operational data.
Starting with open protocols and standard data formats establishes a foundation that can accommodate future requirements without major architectural changes. Organizations that selected proprietary solutions for initial projects often find themselves redesigning entire systems when expanding to additional use cases or integrating new equipment.
The shift toward edge devices acting as clients rather than servers changes how systems integrate. Rather than building complex polling logic and managing connections to numerous devices, data platforms subscribe to streams published by edge devices. This pattern scales more effectively and reduces security exposure.
Security should be designed into the architecture from the beginning rather than added after initial deployment. Implementing encryption, authentication, and authorization during initial configuration is more straightforward than retrofitting security controls into systems that are already in operation.
Equipment selection decisions should consider long-term integration requirements, not just immediate functional needs. Devices supporting standard protocols enable flexible integration patterns. Proprietary interfaces create dependencies that constrain future architecture decisions.