November 7, 2025

A Practical Guide to Industrial IoT Connectivity: Standards and Protocols

When you're architecting data infrastructure for industrial operations, one of the most confusing challenges is selecting connectivity standards. The market presents dozens of protocols—DDS, OPC UA, MQTT, CoAP, oneM2M—each with advocates claiming their technology solves everything.

Stan Schneider, CEO of Real-Time Innovations (RTI), helped develop the Industrial Internet Connectivity Framework that cuts through this confusion. His company's software runs in over 1,500 industrial designs, from Navy ships to autonomous vehicles to power plants. Through this work and his role with the Industrial Internet Consortium, he's developed a framework that actually helps you choose the right technology rather than forcing everything into one standard.

Here's what he's learned: these technologies don't overlap the way you think they do. They're not different solutions to the same problem—they're solutions to fundamentally different problems. Trying to abstract them with wrappers fails for the same reason you can't pedal a train. The technologies evolved from completely different environments and operate on different architectural principles.

This guide provides the framework for making those decisions based on what actually works in industrial deployments.

Three Categories of Industrial IoT Systems

Before evaluating connectivity technologies, understand what type of system you're building. The Industrial Internet Consortium identifies three distinct categories, each requiring different architectural approaches.

Device monitoring: This is the simplest pattern—individual devices connecting to a single cloud service. Think consumer IoT like Nest thermostats or Ring doorbells. Each device type talks to one backend. The architecture is straightforward: one-to-one connections. You may have millions of devices, but they're all the same type connecting to the same service.

Surprisingly, this is where the entire consumer internet lives. The connectivity requirements are simple because the use case is simple: devices send telemetry, receive occasional commands, and that's it.

Analytic optimization: This represents most current industrial IoT deployments. You instrument existing facilities—power plants, oil pipelines, manufacturing lines—with sensors. Data flows to the cloud for big data analysis, visualization, and optimization insights.

The key characteristic: you're not changing the fundamental system, just observing it. You're adding intelligence on top through analytics. This is what platforms like Siemens MindSphere and GE Predix were designed for. The connectivity challenge is getting diverse sensor data from legacy equipment into cloud systems for analysis.

Edge autonomy: This is the most complex and least mature category. You're building distributed intelligent systems where AI operates in the real world, not just in the cloud. Autonomous vehicles, smart robotics, adaptive power grids—systems that make decisions locally while coordinating globally.

The architectural requirements are completely different. You need real-time coordination between distributed components, not just data collection. Safety, reliability, and low latency become critical. This is where you're building truly new systems rather than instrumenting existing ones.

The connectivity standard appropriate for device monitoring fails completely for edge autonomy. Understanding which category your use case fits determines your technology choices.

The Interoperability-Based Connectivity Stack

Traditional network models like the seven-layer OSI stack don't capture what matters for industrial systems. The Industrial Internet Connectivity Framework proposes a different approach based on what you're trying to interoperate between.

Technical interoperability: At this level, you can share opaque blocks of data. You're moving bytes around but have no idea what they contain. The data could be temperature readings, alarm conditions, or employee names—you can't tell by looking at it.

Technologies at this level are pure transport mechanisms. MQTT and CoAP fit here. They reliably move data from point A to point B, but provide no semantic understanding. You must know ahead of time what the data means.
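To make the opacity concrete, here is a minimal sketch using the paho-mqtt client (the broker address, topic names, and payload are placeholders for illustration): the protocol moves the payload faithfully, but nothing in it tells the receiver what the bytes mean.

```python
# Minimal transport-level sketch with paho-mqtt (1.x-style constructor).
# Broker address, topic, and payload are placeholders for illustration.
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    # msg.payload is opaque bytes: a temperature, an alarm code, or a
    # name. The protocol carries no type information either way.
    print(msg.topic, msg.payload)

client = mqtt.Client()
client.on_message = on_message
client.connect("broker.example.com", 1883)
client.subscribe("plant/line4/#")

# Sender and receiver must agree on this byte layout out of band.
client.publish("plant/line4/pump7", b"\x42\x8f\x00\x00")
client.loop_forever()
```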

Syntactic interoperability: Now you can share data types. You receive data and can determine it contains a string called "name," a float called "temperature," and an integer called "status." You know the structure even if you don't know what it means.

This enables language interoperability—your Python code can understand data from someone's Java application because the type system is defined. It's the difference between receiving random bytes versus receiving a typed data structure.
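As a sketch of what a declared type buys you (the type and field names below are invented for illustration), compare receiving this structure with receiving raw bytes:

```python
# Sketch: a shared type definition tells every receiver the field names
# and types. In DDS this role is played by IDL; here a dataclass plus a
# self-describing encoding stands in for the idea.
from dataclasses import dataclass, asdict
import json

@dataclass
class PumpStatus:
    name: str           # a string called "name"
    temperature: float  # a float called "temperature"
    status: int         # an integer called "status"

wire = json.dumps(asdict(PumpStatus("pump7", 71.5, 2)))

# Any language with the same type definition can reconstruct the
# structure; a Java reader would see the same three typed fields.
decoded = PumpStatus(**json.loads(wire))
print(decoded)
```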

Semantic interoperability: At this level, you share meaning. You know that the temperature field represents degrees Celsius from a specific sensor measuring coolant temperature in a particular pump. The data has context and meaning beyond just structure.
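A rough illustration of the difference (the metadata keys and values below are invented; real systems standardize them in information models such as OPC UA companion specifications):

```python
# Syntactic level: a float called "temperature". Semantic level: the
# same value with its meaning attached. Keys are invented for
# illustration only.
reading = {
    "temperature": 71.5,               # what syntax alone gives you
    "unit": "degC",                    # degrees Celsius, not a bare number
    "quantity": "coolant temperature", # what is being measured
    "source": "pump7/coolant-sensor1", # which sensor on which asset
}
```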

Most industrial systems require at least syntactic interoperability to build anything complex. Pure transport protocols fail quickly when you need software components to understand each other's data.

The insight: don't evaluate protocols by features alone. Evaluate them by what level of interoperability they provide and whether that matches your requirements.

Why Connectivity Technologies Don't Overlap

Stop thinking of industrial connectivity protocols as competing alternatives that overlap. They don't. Understanding why they're different helps you select appropriately.

The transportation analogy: Trains, cars, bicycles, and shoes are all "transportation technologies." For a simple use case—going a few kilometers to work—several might work. But complexity eliminates overlap quickly. Need to drop kids at school? Train doesn't work. Carrying cargo? Bicycle fails. Moving cross-country? You need a truck.

The technologies solve different problems, not the same problem in different ways. Industrial connectivity works identically. MQTT, DDS, and OPC UA appear to overlap for simple data collection, but diverge completely as requirements become complex.

What makes them fundamentally different: The technologies evolved in completely different environments for completely different purposes. MQTT came from constrained devices needing lightweight publish-subscribe messaging. OPC UA evolved from industrial device integration where mixing vendor equipment was the primary goal. DDS emerged from defense systems needing hard real-time distributed coordination.

These aren't stylistic differences. They're fundamental architectural choices that determine what applications can successfully use each technology.

When requirements get complex: DDS is designed exactly for distributed systems requiring subsecond coordination between hundreds of components; MQTT fundamentally can't deliver that. For integrating devices from multiple vendors on a factory floor, OPC UA provides the device modeling and companion specifications; DDS doesn't address that problem.

The overlap disappears under real-world complexity. Your job is understanding your actual requirements well enough to identify which technology fits.

Transport Layer Versus Framework Layer

A critical distinction that most organizations miss is the difference between transport and framework technologies. This determines whether a protocol can actually support complex applications.

Transport layer protocols: These move bytes from point to point. MQTT and CoAP fit here. They reliably transmit data but provide no structure. When you receive data, you get bytes. Those bytes could represent anything—you must know the format ahead of time.

This works fine if you control the entire system and write all the code yourself. But it fails when you need interoperability between software components, particularly if those components are written in different languages or by different teams.

Framework layer protocols: These provide data type systems that enable syntactic interoperability. DDS, OPC UA, and oneM2M offer complete typing systems. You can describe data structures, share them across components, and maintain interoperability even as systems evolve.

The critical capability: frameworks handle version evolution. When you add a field to a data structure, DDS can match old and new versions automatically. Your system keeps working even though different components run different software versions. This is essential for large deployments where you can't update everything simultaneously.
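A simplified sketch of the idea (this mimics what DDS extensible types do at the middleware level; the type and field names are invented for illustration):

```python
# A v2 writer adds a "vibration" field; a v1-era reader keeps working
# because unknown fields are ignored. DDS performs this matching in the
# middleware; this sketch only illustrates the principle.
from dataclasses import dataclass, fields

@dataclass
class PumpStatusV1:
    name: str
    temperature: float

def decode(cls, data: dict):
    # Keep only the fields this version of the type knows about.
    known = {f.name for f in fields(cls)}
    return cls(**{k: v for k, v in data.items() if k in known})

v2_sample = {"name": "pump7", "temperature": 71.5, "vibration": 0.02}
old_reader = decode(PumpStatusV1, v2_sample)  # extra field safely dropped
print(old_reader)
```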

Why this matters for enterprise systems: In a hospital with 300,000 connected medical devices, you can't update everything at once. Some devices are keeping people alive and can't be rebooted. Security patches, new features, and bug fixes roll out incrementally over months or years.

Without framework-level protocols that handle version evolution, your system breaks every time any component updates. Transport protocols force you to maintain rigid version synchronization across potentially thousands of devices—an operational impossibility.

For simple applications, transport protocols suffice. For enterprise-scale systems that evolve over time, you need framework capabilities.

Why Connectivity Wrappers Fail

Many organizations attempt to abstract connectivity technologies behind a common wrapper API. The goal is portability—if you change technologies later, you only rewrite the wrapper, not your applications. This approach consistently fails.

The least common denominator problem: When you wrap multiple technologies behind a single API, you can only expose features common to all of them. But these technologies have almost nothing in common beyond moving bytes.

DDS provides distributed shared memory with automatic data delivery based on specifications. MQTT provides publish-subscribe messaging to named topics. OPC UA provides object-oriented device models with methods. Wrapping these means you lose everything unique about each—you're left with byte transport.
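Expressed as code, the least common denominator looks like this (the class and method names are invented for illustration):

```python
# The only operation all three technologies share is moving opaque
# bytes, so that is all a common wrapper can expose.
from abc import ABC, abstractmethod

class ConnectivityWrapper(ABC):
    @abstractmethod
    def send(self, destination: str, payload: bytes) -> None: ...

# Gone: DDS content filters and QoS contracts, OPC UA typed device
# models and methods, MQTT retained messages and session semantics.
```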

The train-bicycle-shoes analogy: Imagine a wrapper that abstracts trains, bicycles, and shoes. The only common operation is "move short distances with no luggage at any speed." You can't use the train's capacity, the bicycle's efficiency, or the shoes' ability to climb stairs. The wrapper reduces everything to the least useful version.

When wrappers make sense: Wrappers work when simplifying APIs without changing underlying capabilities. If DDS is too feature-rich for your use case, wrap it to expose only what you need. This is common and useful.

Wrappers also work when adding functionality on top—perhaps adding your domain-specific data types while DDS handles distribution underneath. You're extending, not abstracting.

When wrappers fail: Wrappers fail when trying to make fundamentally different technologies interchangeable. You're not gaining portability—you're losing the capabilities that make each technology valuable for its intended use case.

The red flag: any vendor claiming their wrapper abstracts IoT technologies so you can swap them freely has never successfully deployed this in complex systems. Schneider has seen hundreds of attempts and zero successes.

Accept that different parts of your architecture will use different technologies matched to their specific requirements. Design for heterogeneity rather than attempting impossible abstraction.

DDS: Data-Centric Architecture for Real-Time Systems

Data Distribution Service provides an architectural approach fundamentally different from other connectivity technologies. Understanding this helps you recognize when it's appropriate.

The core abstraction: DDS creates the illusion that all data in your entire distributed system exists in your local memory. When you write sensor software, it appears you're writing to local variables. When you write analytics, it appears you're reading from local memory. This is fiction—data lives across thousands of devices—but your code operates as if it's all local.

How it actually works: You specify what data you need: "Give me anything within 200 meters moving toward me faster than 2 meters per second, updated 100 times per second." DDS finds anything in the entire system matching that specification and delivers it to your local memory exactly when you need it.

The specification can be arbitrarily complex. You're not subscribing to topics or connecting to servers. You're declaring data requirements and DDS handles discovery and delivery.
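In DDS this kind of specification is expressed as a content-filtered topic with an SQL-like expression over the topic's typed fields. The sketch below mimics the idea in plain Python rather than using any particular vendor's DDS API; the field names are invented for illustration.

```python
# Concept sketch of a declarative data specification, in the spirit of
# a DDS content-filtered topic such as
#   "range_m < 200 AND closing_speed_mps > 2"
from dataclasses import dataclass

@dataclass
class Track:
    range_m: float            # distance from us, in meters
    closing_speed_mps: float  # positive when moving toward us

def spec(t: Track) -> bool:
    # The application declares WHAT it needs; the middleware works out
    # which publishers anywhere in the system can satisfy it.
    return t.range_m < 200 and t.closing_speed_mps > 2

for t in [Track(150, 3.5), Track(500, 10.0), Track(90, 1.0)]:
    if spec(t):
        print("deliver to local reader:", t)
```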

Liveliness and quality of service: DDS provides guarantees beyond data delivery. You can specify liveliness requirements: "This data must be no more than 100 milliseconds old." If a sensor fails to report, you immediately know—very different from receiving no data because nothing interesting happened.

You also specify reliability requirements, bandwidth constraints, and many other quality-of-service parameters. DDS enforces these automatically.
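A plain-Python sketch of the liveliness idea (DDS enforces this in the middleware; the 100-millisecond budget is the figure from the example above, and the sensor name is invented):

```python
# A reading older than the deadline is treated as a failed sensor, not
# merely "no news".
import time

DEADLINE_S = 0.100  # data must be no more than 100 ms old
last_update = {"pump7/coolant-sensor1": time.monotonic()}

def is_alive(sensor: str) -> bool:
    return (time.monotonic() - last_update[sensor]) <= DEADLINE_S

if not is_alive("pump7/coolant-sensor1"):
    print("liveliness lost: treat the sensor as failed, not silent")
```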

The "future database" concept:Traditional databases store past information. You query for what already happened. DDS queries future information—you specify what you want to know when it happens. When a plane enters your airspace meeting your specification, you're notified within milliseconds.

Where DDS fits: DDS is designed for systems requiring real-time coordination between intelligent components. Autonomous vehicles coordinating their sensors. Power grids balancing load across generation sources. Medical device networks where timing is critical.

It's not designed for simple telemetry collection or device integration. The overhead is unnecessary for those use cases. But for distributed intelligent systems requiring subsecond coordination, nothing else comes close.

OPC UA: Device Integration and Vendor Interoperability

OPC UA takes a completely different architectural approach optimized for a different problem—integrating devices from multiple vendors on industrial floors.

The object-oriented abstraction: OPC UA models systems as objects that interact through methods. A curing oven is an object with methods like "set temperature" and properties like "current status." This mirrors how people think about equipment.

Objects have well-defined interfaces. Different vendors' curing ovens implement the same interface through companion specifications, enabling true plug-and-play integration.
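As a rough sketch using the open-source python-opcua package (the endpoint, namespace URI, and node names are placeholders; a real oven's interface would come from its companion specification):

```python
# Sketch of an OPC UA object model: a device is an object with
# properties and methods. Names and endpoint are placeholders.
from opcua import Server, ua

server = Server()
server.set_endpoint("opc.tcp://0.0.0.0:4840/ovens/")
idx = server.register_namespace("http://example.com/ovens")

# The oven is an object node with a property and a callable method.
oven = server.get_objects_node().add_object(idx, "CuringOven")
status = oven.add_variable(idx, "CurrentStatus", "idle")

def set_temperature(parent, target):
    # A vendor's implementation would drive the actual hardware here.
    return []

oven.add_method(idx, "SetTemperature", set_temperature,
                [ua.VariantType.Double], [])
server.start()
```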

Device-centric rather than data-centric: Where DDS focuses on data flows, OPC UA focuses on device models. You're describing what equipment exists and how to interact with it. The question isn't "what data do I need" but "what devices do I have and what can they do."

Solving the vendor lock-in problem: The primary goal is freeing end users from single-vendor dependence. If multiple vendors implement the same companion specification, you can swap equipment without redesigning integration. This is critical for factory floors with decades-long equipment lifecycles.

Who uses OPC UA: End integrators and factory operators, not typically programming teams. You're assembling systems from devices rather than writing custom distributed software. There's very little application code—mostly configuration and device coordination.

Walk through trade shows and you'll see walls of devices with OPC UA badges, implying they can be mixed and matched. That's exactly the use case OPC UA addresses.

Where OPC UA doesn't fit: Building custom intelligent distributed systems with software components coordinating in real time. OPC UA doesn't provide the data-centric architecture or real-time coordination capabilities that DDS offers. It's solving a different problem—device integration, not software architecture.

The non-overlap with DDS: These technologies are nearly opposite despite both being "industrial IoT protocols." DDS targets programming teams building intelligent systems. OPC UA targets integrators assembling equipment. One is software architecture, the other is device integration. Choosing between them is usually obvious once you understand your actual problem.

The Core Connectivity Standard Architecture

For large enterprises, different systems will use different connectivity standards. The question becomes how to integrate them without creating architectural chaos.

The gateway explosion problem: With 30-40 domain-specific protocols in industrial environments, building gateways between all of them requires n-squared gateways—an impossible number. You'll spend forever writing integration code.

The four-core approach: The Industrial Internet Connectivity Framework identifies four core standards that cover the major use cases: DDS, OPC UA, web services (HTTP/REST), and oneM2M. With four technologies, you only need 12 gateways (each technology to the three others).

Actually, you need fewer. DDS already has standard gateways to web services. DDS to OPC UA gateways now exist. Many legacy protocols only need connection to one or two core standards—if Modbus connects to OPC UA, and OPC UA connects to DDS, Modbus effectively connects to everything.
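The arithmetic is easy to sanity-check; "n-squared" in the text is shorthand for the n*(n-1) directed protocol pairs:

```python
# Full pairwise bridging of n protocols needs about n*(n-1) directed
# gateways; routing through four core standards caps that at 4*3 = 12.
def directed_gateways(n: int) -> int:
    return n * (n - 1)

print(directed_gateways(30))  # 870 for 30 domain-specific protocols
print(directed_gateways(40))  # 1560 for 40
print(directed_gateways(4))   # 12 among the four core standards
```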

Is universal connectivity necessary? Schneider questions whether we actually need to connect everything. The internet's power comes from connecting anything to anything, enabling unexpected use cases. But do hospital operating rooms need to integrate with power plants? Do automotive factories need to connect with retail systems?

There are legitimate integration needs, particularly for legacy systems. As systems persist for decades, gateway standards become valuable. But don't build universal connectivity just because it sounds good—build it where business value justifies the complexity.

Practical implementation: For most organizations, the strategy is to select the appropriate core technology for each major system based on actual requirements, build or buy standard gateways only where integration is truly needed, and accept heterogeneity rather than forcing everything into one standard.

You'll likely use multiple standards. That's not a problem if each is chosen appropriately and you have defined integration points where necessary.

Conclusion

Industrial IoT connectivity is not a question of picking the "best" protocol. It's a question of understanding what problems you're actually solving and matching technologies to those specific problems.

Device monitoring, analytic optimization, and edge autonomy have fundamentally different connectivity requirements. Transport protocols like MQTT work for simple use cases but fail for complex distributed systems requiring framework capabilities. Technologies that appear to overlap actually solve completely different problems and aren't interchangeable.