November 2, 2025

Software-Defined Control, UNS, and AI Optimization in Process Industries

Your PLCs and DCS systems work perfectly. They're reliable, proven, and have run your plants for decades without major issues. So why would you ever replace them?

According to Mercy Zhang, VP at SUPCON and founder of FreezoneX, that's exactly the problem. Traditional control systems are "too good" and "too reliable." They work so well that manufacturers see no reason to change, even as those same systems become the bottleneck preventing you from deploying advanced AI.

The uncomfortable truth: your control infrastructure determines what AI you can deploy. Traditional PLCs and DCS systems weren't designed for machine learning models, reinforcement learning agents, or real-time optimization algorithms. No matter how sophisticated your data science team gets, they're limited by 1970s control architecture.

The solution isn't incremental improvements to existing systems. It's software-defined control: running all control logic on standard servers, with orders of magnitude more computing power and the flexibility to deploy AI wherever you need it. Supcon just replaced over 1,000 control cabinets at a major refinery with two Dell servers. Same functionality, 100x more computing power, and the ability to run AI in closed loops.

Here's why software-defined control is the future, and what it means for your AI strategy.

PLCs and DCS systems were revolutionary when they emerged in the 1970s. Within five years, a small group of engineers fundamentally reshaped industrial automation. That invention still works today, remarkably well.

But Zhang points out that the very success of these systems has created stagnation. In the IT world, revolutionary platforms emerge every decade or so: mainframes, client-server, web, mobile, cloud. Each shift enables capabilities impossible in the previous era.

Industrial control hasn't had that moment since the 1970s. We've had incremental improvements—better processors, more memory, newer protocols—but the fundamental architecture remains unchanged.

The question isn't whether a revolutionary shift will happen. It's who will lead it and when.

Software-defined control could be industrial automation's iPhone moment. Not because software-based control is new (people have been working on it for decades), but because the enabling technologies finally exist:

  • Standard servers powerful enough for hard real-time control
  • Ethernet protocols reaching down to field devices
  • Container orchestration mature enough for industrial use
  • AI models sophisticated enough to replace traditional control strategies

The technology is ready. The question is whether the industry is ready to embrace it.

What Software-Defined Control Actually Means

Strip away the marketing, and software-defined control has one core principle: all control logic and computation happen in software, independent of the underlying hardware.

Your PC could be a control system. A server could be a control system. A Raspberry Pi could be a control system. Any standard hardware platform capable of running the software becomes your controller.

The only exception is the I/O layer—the physical interfaces to sensors and actuators. Those still require specialized hardware because physics matters. But everything above the I/O layer runs in software containers.

This isn't theoretical. Supcon's Nix platform runs on standard Dell servers with a custom real-time Linux operating system. The entire stack:

Field connectivity: Ethernet-APL brings IP addresses all the way to field instruments. A single pair of optical fibers aggregates thousands of I/O points and connects directly to the server. No intermediate cabinets, no proprietary fieldbus networks.

Operating system: A custom Linux combining the best of PREEMPT_RT and vanilla Linux for hard real-time performance. Control tasks are pinned to specific CPU cores for deterministic timing.

Container orchestration: Enhanced Kubernetes managing control logic as containers. Not vanilla Kubernetes—they've optimized for real-time and reliability—but compatible with standard Kubernetes tools like Helm.

Control logic: Each control function runs in its own container. Containers communicate directly with millisecond-level latency, no OPC UA or other protocol bridges needed.

The result? You can deploy a Python script or a machine learning model as a container that talks directly to your control loops. No integration middleware. No data historians as intermediaries. Just direct, real-time communication.
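
To make that concrete, here is a minimal sketch of what a containerized control task might look like in plain Python. It is an illustration, not Supcon's code: the read_pv and write_mv helpers stand in for whatever I/O API the platform exposes, while the core pinning and SCHED_FIFO calls are standard Linux facilities available to any such container.

    import os
    import time

    CYCLE_S = 0.010  # 10 ms control period

    def read_pv():
        # Stand-in for the platform's I/O read; returns the process value.
        return 72.4

    def write_mv(value):
        # Stand-in for the platform's I/O write; sends the manipulated variable.
        pass

    def pin_to_core(core=2, priority=80):
        # Pin this task to one CPU core and request SCHED_FIFO priority
        # (standard Linux calls; needs CAP_SYS_NICE or root).
        os.sched_setaffinity(0, {core})
        try:
            os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(priority))
        except PermissionError:
            pass  # fall back to normal scheduling when not privileged

    def pid_step(sp, pv, state, kp=1.2, ki=0.3, dt=CYCLE_S):
        # Simple PI step; state carries the integral term between cycles.
        error = sp - pv
        state["integral"] += error * dt
        return kp * error + ki * state["integral"], state

    if __name__ == "__main__":
        pin_to_core()
        state = {"integral": 0.0}
        setpoint = 75.0
        next_wake = time.monotonic()
        while True:
            mv, state = pid_step(setpoint, read_pv(), state)
            write_mv(mv)
            next_wake += CYCLE_S
            time.sleep(max(0.0, next_wake - time.monotonic()))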

The Computing Power You Didn't Know You Needed

Zhang describes a petrochemical project where they replaced more than 1,000 DCS cabinets with two Dell servers. The space savings alone were enormous—process industries know that explosion-proof control rooms are extremely expensive to build and maintain.

But the real story is computing power. Those two servers have over 100 times the computing capacity of the 1,000 cabinets they replaced.

The immediate objection: "We don't need that much computing power. Our processes run fine with what we have."

That's true today. But the reasoning is circular: you don't need the power because you can't deploy applications that would use it. Once you have that kind of headroom at the control layer, new possibilities emerge:

Complex optimization in real-time: Run sophisticated models that would choke traditional controllers. Optimize across hundreds of variables simultaneously instead of controlling loops independently.

AI-powered control strategies: Deploy neural networks, time-series transformers, or reinforcement learning agents directly in control loops. No sending data to the cloud and waiting for results.

Digital twins running alongside production: Simulate ahead of your actual process, test control strategies, and automatically adjust based on predictions.

Adaptive systems: Continuously retrain models, evaluate performance, and deploy improvements without human intervention.

Traditional control systems can't do these things not because the logic is impossible, but because there's no computing substrate to run them on. Software-defined control removes that constraint.
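
As a hedged illustration of the "AI-powered control strategies" point above, the sketch below runs inference on the same box as the control logic using onnxruntime and nudges a setpoint from the result. The model file, its input name, the tag helpers, and the trim gain are all assumptions for illustration; nothing here is a vendor API.

    import time
    import numpy as np
    import onnxruntime as ort

    # Assumed artifacts: an exported model and the input name used at export time.
    MODEL_PATH = "quality_predictor.onnx"   # hypothetical model file
    INPUT_NAME = "features"                 # depends on how the model was exported

    session = ort.InferenceSession(MODEL_PATH)

    def read_process_vector():
        # Stand-in for reading the relevant process variables each cycle.
        return np.array([[1.02, 347.5, 12.8, 0.61]], dtype=np.float32)

    def write_setpoint_trim(delta):
        # Stand-in for nudging a setpoint through the control layer.
        pass

    TARGET = 0.95  # desired predicted quality index (illustrative)

    while True:
        features = read_process_vector()
        predicted = float(session.run(None, {INPUT_NAME: features})[0].ravel()[0])
        # Proportional trim on the gap between predicted and target quality.
        write_setpoint_trim(0.1 * (TARGET - predicted))
        time.sleep(1.0)  # a 1 s supervisory cadence; faster loops stay classical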

Addressing the Reliability Objection

Every automation engineer's first reaction to software-defined control: "But Windows crashes. Linux has vulnerabilities. Traditional systems are more secure."

Zhang's response: look at the actual history of industrial cyber attacks. The major incidents that shut down refineries and chemical plants—Stuxnet, NotPetya, ransomware attacks—targeted traditional OT systems, not modern software infrastructure.

Why traditional systems are actually more vulnerable:

They run on outdated operating systems (Windows XP, Windows 7) that no longer receive security patches. You can't update them without risking stability, so vulnerabilities accumulate for years.

They use proprietary protocols and closed architectures that hide problems rather than exposing them. Security through obscurity doesn't work—it just means you don't know you've been compromised until it's too late.

They're rigidly configured with limited ability to adapt security policies as threats evolve. Once deployed, they're essentially frozen in place.

Why modern software infrastructure is more secure:

You can configure security precisely, closing unnecessary ports and limiting attack surfaces in ways impossible with traditional systems.

You get continuous security updates as vulnerabilities are discovered and patched. Modern Linux distributions have entire teams dedicated to security.

You can implement defense-in-depth: container isolation, network segmentation, authentication and authorization at multiple layers.

You have visibility into what's running and how components communicate. Everything is instrumented and logged by default.

The vulnerability isn't in using modern software—it's in using software incorrectly. With proper configuration and security practices, software-defined control is more secure than legacy systems, not less.

From IEC 61131 to Python: The Programming Shift

Traditional control systems use the IEC 61131-3 programming languages: ladder logic, structured text, function block diagrams. These served the industry well for decades, but they're not where innovation happens anymore.

Software-defined control enables higher-level languages like Python and JavaScript for control programming. Supcon already translates Python into traditional control system formats, but the next step is using Python directly.
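
As a small, hypothetical example of that shift, here is the kind of routine that would normally live in a structured-text function block, written as an ordinary Python function: a latching high-level interlock with hysteresis so the output doesn't chatter around the limit.

    def high_level_trip(level_pct, tripped, trip_at=90.0, reset_at=85.0):
        """Latching high-level interlock with hysteresis.

        Returns True while the trip is active: trips above trip_at and
        clears only once the level falls back below reset_at.
        """
        if level_pct >= trip_at:
            return True
        if tripped and level_pct > reset_at:
            return True
        return False

    # Scan-by-scan usage, as a cyclic executive would call it:
    tripped = False
    for level in [84.0, 91.2, 88.0, 86.0, 84.9]:
        tripped = high_level_trip(level, tripped)
        print(level, tripped)  # 84.0 False, 91.2 True, 88.0 True, 86.0 True, 84.9 False

Because it is just a function, it can be reviewed, versioned, and unit-tested like any other code, which is where the maintainability argument below leads.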

This matters more than you might think. The objection Zhang addresses mirrors debates from decades past: assembly language programmers arguing that higher-level languages lose too much performance, C programmers insisting you need low-level control.

History proved them wrong. The productivity gains from higher-level languages vastly outweigh the performance costs, especially as hardware gets faster. The same pattern applies to industrial control.

What you gain with modern programming languages:

Access to the entire ecosystem of open-source libraries for data processing, machine learning, optimization, and analytics. No reimplementing algorithms in ladder logic.

Ability to hire from a larger talent pool. Finding experienced Python developers is easy. Finding IEC 61131 experts gets harder every year as that generation retires.

Faster development cycles. Build, test, and deploy changes in hours instead of weeks. Iterate quickly based on real operational data.

Integration with modern development practices: version control, continuous integration, automated testing, collaborative development.

The control logic doesn't become less reliable—it becomes more maintainable, more testable, and easier to understand and modify.
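
To make the testing point tangible, here is a minimal sketch of automated tests for a piece of control logic: a simple rate limiter (hypothetical, for illustration) with pytest cases that a CI pipeline could run on every change before anything reaches the plant.

    def rate_limit(target, current, max_step=2.0):
        # Move toward target but never change the output by more than max_step per scan.
        step = max(-max_step, min(max_step, target - current))
        return current + step

    def test_small_moves_pass_through():
        assert rate_limit(51.0, 50.0) == 51.0

    def test_large_moves_are_clamped():
        assert rate_limit(60.0, 50.0) == 52.0
        assert rate_limit(40.0, 50.0) == 48.0

    def test_already_at_target_is_stable():
        assert rate_limit(50.0, 50.0) == 50.0

Run it with pytest; the same file lives in version control and goes through code review like any other software.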

Industrial AI That Actually Closes the Loop

Most industrial AI runs in the cloud, analyzes historical data, and presents recommendations to operators. It's useful, but it's not control. Real AI-powered control makes decisions and adjusts processes automatically in real-time.

Traditional systems make this nearly impossible. You might train sophisticated models, but deploying them in closed loops requires:

  • Getting real-time data out of the control system
  • Running inference somewhere with enough compute
  • Getting results back into the control system fast enough to matter
  • Dealing with protocol conversions at every step

Software-defined control eliminates these barriers. Models run as containers directly on the control infrastructure, accessing process data with millisecond latency.

Supcon's Time Series Pre-trained Transformer (TPT) approach: Some control loops are too complex to model with first principles—too many variables, too much interaction, too much nonlinearity. Traditional advanced process control (APC) struggles here. Engineers spend months or years building models that become obsolete when conditions change.

Instead, use time-series AI to reverse engineer the process. Train models on operational data that predict how the process will behave. Use those predictions to inform control adjustments. The model doesn't need to understand the physics—it just needs to predict accurately.
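
The sketch below is not Supcon's TPT, just a deliberately simple illustration of the pattern: fit a purely data-driven model on lagged operational data, predict the next value, and use the prediction to bias a setpoint, with no first-principles model anywhere. The synthetic data, scikit-learn model, and trim gain are assumptions for illustration.

    import numpy as np
    from sklearn.linear_model import Ridge

    # Synthetic stand-in for historian data: one process variable sampled every minute.
    rng = np.random.default_rng(0)
    t = np.arange(2000)
    pv = 50 + 3 * np.sin(t / 40) + rng.normal(0, 0.3, t.size)

    LAGS = 30  # predict the next sample from the previous 30

    # Lagged feature matrix: each row is a window, the label is the next value.
    X = np.stack([pv[i:i + LAGS] for i in range(len(pv) - LAGS)])
    y = pv[LAGS:]

    model = Ridge(alpha=1.0).fit(X[:-200], y[:-200])  # hold out the tail as a sanity check
    print("holdout MAE:", np.mean(np.abs(model.predict(X[-200:]) - y[-200:])))

    # In the loop: predict where the variable is heading and trim the setpoint toward target.
    TARGET = 50.0
    latest_window = pv[-LAGS:].reshape(1, -1)
    predicted_next = model.predict(latest_window)[0]
    setpoint_trim = 0.2 * (TARGET - predicted_next)   # proportional bias on the prediction
    print("predicted next value:", predicted_next, "setpoint trim:", setpoint_trim)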

Reinforcement learning for autonomous operation: Train agents that learn optimal control strategies through interaction with the process. Define reward functions (maximize productivity, minimize downtime, optimize quality) and let the agent explore different control actions to achieve them.
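
Again hedged: the sketch below is not a product, only an illustration of how the environment and reward function might be framed with the Gymnasium API around a toy one-variable process. The dynamics, constraint limit, and reward weights are invented; a real agent would train against a simulation or digital twin long before touching a closed loop.

    import numpy as np
    import gymnasium as gym
    from gymnasium import spaces

    class ToyProcessEnv(gym.Env):
        """One-variable toy process: the action nudges a valve, the reward
        trades throughput against a constraint violation penalty."""

        def __init__(self):
            self.observation_space = spaces.Box(low=0.0, high=100.0, shape=(1,), dtype=np.float32)
            self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(1,), dtype=np.float32)
            self.state = 50.0

        def reset(self, seed=None, options=None):
            super().reset(seed=seed)
            self.state = 50.0
            return np.array([self.state], dtype=np.float32), {}

        def step(self, action):
            # Invented first-order response to the valve move, plus a little noise.
            self.state = float(np.clip(
                self.state + 2.0 * float(action[0]) + self.np_random.normal(0, 0.2), 0.0, 100.0))
            throughput = self.state / 100.0
            penalty = 10.0 if self.state > 90.0 else 0.0  # constraint: stay below 90
            reward = throughput - penalty
            obs = np.array([self.state], dtype=np.float32)
            return obs, reward, False, False, {}

    # Sanity check with random actions; a real setup would hand this env to an RL library.
    env = ToyProcessEnv()
    obs, _ = env.reset(seed=0)
    for _ in range(5):
        obs, reward, *_ = env.step(env.action_space.sample())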

This isn't possible without software-defined infrastructure. You need the computing power to run these models, the architecture to deploy them as containers, and the integration to connect them directly to control loops.

Zhang is careful to note that reinforcement learning isn't fully productized yet—they're piloting it with customers. But the possibility exists now in ways it couldn't before.

The Migration Path That Actually Works

Process industry executives hear "replace your DCS" and immediately shut down. Too risky. Too expensive. Too disruptive.

But migration doesn't mean ripping out working systems. The Open Process Automation Standard (O-PAS) provides a roadmap for transitioning from brownfield installations to hybrid architectures and, eventually, full software-defined control.

Start small, prove value, expand gradually:

Phase 1: Pilot on non-critical equipment. Pick one unit or process line. Deploy software-defined control alongside existing systems. Prove reliability, test AI applications, build confidence. If something fails, you haven't risked core operations.

Phase 2: Greenfield deployment. When building new facilities or expanding capacity, deploy software-defined control from the start. No migration challenges, full benefits immediately.

Phase 3: Selective replacement. As existing DCS systems reach end-of-life and need replacement anyway, choose software-defined alternatives instead of buying another generation of traditional systems.

Phase 4: Hybrid operation. Run traditional and software-defined control side by side. Some processes on legacy systems, others on new infrastructure. They can interoperate using standard protocols.

The real deployment proof: Supcon has over 50 major process industry customers running their universal control system in production—not pilots, actual production. The largest deployment is at what Zhang describes as the world's largest single-site refinery, using software-defined control for critical distillation units.

This isn't experimental technology anymore. It's production-ready for mission-critical applications.

Unified Namespace: The Data Backbone

Software-defined control solves the compute problem. But you still need to solve the data problem. That's where unified namespace architecture becomes critical.

Zhang's FreezoneX team built SuperOS, an open-source unified namespace platform, because traditional industrial architectures create too many integration barriers. When every layer has different protocols and every data movement requires translation, deploying AI becomes an integration nightmare.

Unified namespace provides:

  • Event-driven architecture: Real-time data flows through publish-subscribe patterns instead of point-to-point connections
  • Modern protocols: MQTT replacing OPC DA/UA complexity with lightweight, scalable messaging
  • Single source of truth: All systems read from and write to the same namespace rather than maintaining separate data silos
  • Context by design: Data carries semantic meaning through topic structure, not just values and timestamps

The SuperOS implementation bundles existing open-source tools (MQTT brokers, flow-based programming, time-series databases) into a cohesive platform. They're not reinventing wheels—they're assembling proven components into an architecture optimized for industrial use.
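
As a concrete illustration of the "context by design" idea, here is a minimal publisher sketch using paho-mqtt (2.x API). The ISA-95-style topic hierarchy and payload fields follow a common UNS convention, not the SuperOS schema; the broker address and tag names are assumptions.

    import json
    import time
    import paho.mqtt.client as mqtt

    BROKER = "localhost"  # assumed broker address
    # The topic itself carries the context: enterprise/site/area/line/equipment/tag
    TOPIC = "acme/site1/distillation/column3/temperature/top_tray"

    client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
    client.connect(BROKER, 1883)
    client.loop_start()

    payload = {
        "value": 143.7,
        "unit": "degC",
        "quality": "GOOD",
        "timestamp": time.time(),
    }

    # Retained message so late subscribers immediately get the last known value.
    client.publish(TOPIC, json.dumps(payload), qos=1, retain=True)

    client.loop_stop()
    client.disconnect()

Any consumer, from a historian to an AI agent, subscribes to the same topic and gets the value together with its context.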

This matters because software-defined control plus unified namespace creates the substrate for advanced AI. Your control logic, optimization models, digital twins, and AI agents all operate on the same data infrastructure with direct access to what they need.

Factory Agents: LLMs Meet Industrial Control

The newest frontier: integrating large language models with industrial systems through AI agents. Zhang's team built Factory Agent, a Node-RED integration that lets you program industrial agents using natural language.

The architecture:

  • MQTT data feeds into LLM context
  • User-defined prompts guide agent behavior
  • State machines prevent message conflicts
  • Action nodes define what agents can control
  • Tool nodes give agents access to OT assets

This bridges what Zhang calls "the last mile"—connecting AI that understands language to systems that control physical processes.
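
The sketch below is a toy version of that pattern, not the Factory Agent implementation: cached MQTT values become LLM context, a prompt asks for exactly one action from a fixed allow-list, and nothing reaches the process unless the answer parses cleanly. The call_llm function is a placeholder for whichever model endpoint you use, and the topics and action names are invented.

    import json
    import paho.mqtt.client as mqtt

    ALLOWED_ACTIONS = {"hold", "reduce_feed", "increase_cooling"}  # the agent's "action nodes"
    latest = {}  # last known value per topic, i.e. the agent's process context

    def call_llm(prompt):
        # Placeholder for a real model call (local or hosted); returns one action word.
        return "hold"

    def on_message(client, userdata, msg):
        latest[msg.topic] = json.loads(msg.payload)

    def decide():
        prompt = (
            "You supervise a distillation column. Current readings:\n"
            + json.dumps(latest, indent=2)
            + "\nReply with exactly one of: " + ", ".join(sorted(ALLOWED_ACTIONS))
        )
        answer = call_llm(prompt).strip().lower()
        # Gate the output: ignore anything outside the allow-list.
        return answer if answer in ALLOWED_ACTIONS else "hold"

    client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
    client.on_message = on_message
    client.connect("localhost", 1883)
    client.subscribe("acme/site1/distillation/#")
    client.loop_start()
    # A scheduler would periodically call decide() and route the action to a tool node.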

Early days? Absolutely. But the pattern is clear: as LLMs become more capable at reasoning and planning, the bottleneck shifts from AI intelligence to deployment infrastructure. Software-defined control with unified namespace removes that bottleneck.

You're not there yet. But when AI agents become sophisticated enough to optimize complex processes autonomously, you'll need the infrastructure to deploy them. Traditional control systems won't support it. Software-defined systems will.

What This Means for Your AI Strategy

If you're leading data and analytics in manufacturing, software-defined control changes your planning horizon:

Current systems limit what AI you can deploy. Stop trying to force sophisticated models into architectures designed 40 years ago. The impedance mismatch between modern AI and traditional control systems guarantees suboptimal results.

Cloud AI hits a ceiling. Models that run in the cloud and make recommendations are valuable, but you'll never achieve full autonomous optimization without intelligence at the control layer. Plan for hybrid architectures where AI runs both centrally and at the edge.

Open technology is non-negotiable. Vendor lock-in on control systems has always been painful. When control systems become software platforms, lock-in becomes fatal. Insist on open interfaces, standard protocols, and the ability to run any software you choose.

The compute power enables capabilities you haven't imagined yet. Don't design for today's applications. Design for 100x computing capacity at the control layer and the AI applications that become possible with it.

Start exploring now while systems are being built. The companies that wait for perfect, mature, proven solutions will find themselves years behind competitors who started learning early. Begin pilots. Build expertise. Understand what works and what doesn't.

The Future Arrives Unevenly

Zhang closes with a call to action: "Take a leap of faith." Software-defined control has moved from concept to tested technology with thousands of deployments worldwide.

But adoption will be uneven. Some companies will embrace it quickly and unlock AI-powered optimization impossible for competitors. Others will wait, comfortable with systems that work "well enough," until they realize they're competing against autonomous factories they can't match.

The irony is that your existing systems work so well that you don't feel urgency to change. By the time you do, you'll be playing catch-up.

The enabling technologies exist. The products are production-ready. The deployments prove feasibility. The only question is timing—how soon does your organization need the AI capabilities that only software-defined control can deliver?

For process industries competing on margins, quality, and efficiency, the answer is probably sooner than you think.

Kudzai Manditereza

Founder & Educator - Industry40.tv

Kudzai Manditereza is an Industry 4.0 technology evangelist and creator of Industry40.tv, an independent media and education platform focused on industrial data and AI for smart manufacturing. He specializes in Industrial AI, IIoT, Unified Namespace, Digital Twins, and Industrial DataOps, helping digital manufacturing leaders implement and scale AI initiatives.

Kudzai hosts the AI in Manufacturing podcast and writes the Smart Factory Playbook newsletter, where he shares practical guidance on building the data backbone that makes industrial AI work in real-world manufacturing environments. He currently serves as Senior Industry Solutions Advocate at HiveMQ.