November 9, 2025

How low-code visual programming enables rapid development of industrial data collection and integration applications
Industrial organizations face a persistent challenge: connecting disparate systems and extracting data from equipment in ways that enable informed decision-making. Traditional approaches require custom code for each integration, skilled developers for every connection, and months of development time for what should be straightforward data flows.
Node-RED emerged as a solution to this problem over a decade ago, though initially not specifically for industrial applications. Nick O'Leary, who co-created Node-RED and now serves as CTO of FlowForge, explains that the tool was born from the need to prototype IoT solutions quickly at IBM. Today, it has become a widely adopted platform for industrial data integration, though questions remain about its suitability for enterprise production environments.
Node-RED started in 2012 within IBM's Emerging Technologies Group. The team was building proof-of-concept projects for clients across various domains, often involving early IoT applications using MQTT. At the time, MQTT was not yet widely known outside of IBM, where it was originally developed.
The repetitive nature of writing boilerplate code for each project became apparent. Every project involved connecting to sensors, combining data from multiple sources, and routing information between systems. Much of this work was the same each time, just applied to different data sources and destinations.
The initial idea was simple: create a visual way to map MQTT topics and define how messages should flow. Drag a node representing an MQTT topic onto a canvas, draw a wire to show where messages should go. Within days, this approach proved useful. Within weeks, the team was adding nodes for every new system they needed to connect for client projects.
The fact that the team used Node-RED for actual client work, even though initially for proof-of-concept projects, shaped its development. Each real project revealed something else that needed to be supported. This rapid iteration based on real requirements made the tool practical rather than theoretical.
Colleagues working in other domains beyond IoT started experimenting with Node-RED and finding value. This suggested the concept had broader applicability than just IoT data flows. IBM supported open sourcing the project within six to eight months of its creation, which proved crucial to its eventual success.
When Node-RED first gained popularity, it was primarily used for personal projects and small-scale implementations. Organizations wanting to adopt it for production systems faced concerns that had little to do with Node-RED's technical capabilities and everything to do with enterprise requirements.
These concerns are common for any tool entering corporate environments. Access control becomes important—who can change what, and how do you restrict editing permissions? Audit logs become necessary—you need to know who made changes and when. Development lifecycle management becomes a requirement—you cannot develop directly against production systems.
Many organizations already have people using Node-RED in unofficial capacities. An engineer might run a Node-RED instance to solve a specific problem or collect data that existing systems cannot provide. These skunkworks projects demonstrate value but exist outside official IT governance.
Transitioning from unofficial experiments to official production systems requires addressing organizational concerns, not just technical ones. Companies want vendor support. When a production system goes down because of a bug, they want someone to call. This is a challenge for pure open source projects regardless of how technically capable they are.
The reluctance is not about Node-RED specifically. It is about relying on any critical open source tool without commercial backing. Organizations need confidence that the technology will be maintained, that bugs will be fixed, and that someone is accountable for ensuring it works in production environments.
Traditional software development uses well-established practices for managing code from development through testing to production. Developers write code locally or in development environments. Changes go through testing before deployment. Production systems are protected from direct modification.
Node-RED, as designed, is a single-user runtime. You develop flows in a browser-based editor connected to a Node-RED instance. If that instance is your production system, you are editing live. There is no built-in concept of separate development, testing, and production environments. There is no version control for flows. There is no deployment pipeline.
DevOps for Node-RED means bringing standard software development practices to this low-code environment. This includes maintaining separate development and production instances, managing versions of flows, creating deployment pipelines that move flows from development through testing to production in controlled ways, and providing audit trails of all changes.
The typical Node-RED developer is not necessarily a software developer. They might be an industrial engineer who understands manufacturing systems but lacks formal programming training. Node-RED enables them to build solutions without writing code. But they still need proper development practices, even if they lack the software background that would make that need obvious.
FlowForge addresses this by providing a platform that manages multiple Node-RED instances, organizes users into teams with appropriate permissions, handles version control for flows, and automates deployment pipelines. It brings the DevOps sensibilities that software teams take for granted to the low-code Node-RED environment.
The core Node-RED runtime is fundamentally enterprise-capable. It works reliably and performs its data integration functions well. The gaps are in the surrounding infrastructure and operational capabilities that enterprises require.
Access control is essential. Organizations need granular control over who can view flows, who can edit flows, and who can deploy changes. Node-RED includes integrated user authentication and authorization, but managing this across many instances and users requires additional tooling.
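In a single instance, this control is configured through the adminAuth property in Node-RED's settings.js file. A minimal sketch, with placeholder usernames and a placeholder password hash:

```javascript
// Fragment of a Node-RED settings.js -- usernames and the hash are placeholders.
// Generate a real hash with the node-red-admin CLI: `node-red admin hash-pw`.
module.exports = {
    adminAuth: {
        type: "credentials",
        users: [
            {
                username: "admin",
                password: "$2b$08$replace-with-a-bcrypt-hash",
                permissions: "*"      // full access: view, edit, deploy
            },
            {
                username: "operator",
                password: "$2b$08$replace-with-a-bcrypt-hash",
                permissions: "read"   // read-only: can view flows, cannot change them
            }
        ]
    }
};
```

This covers one instance; scaling the same policy across dozens of instances and rotating users through them is where the additional tooling comes in.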
Audit logging provides accountability. Security standards like ISO 27001 and SOC 2 require detailed logging of who changed what and when. This is not about distrust. It is about compliance and incident response. When something goes wrong, you need to quickly understand what changed.
High availability becomes important for production systems. If a Node-RED instance crashes, how quickly does it recover? Can multiple instances run in parallel with load balancing? If one instance fails, can traffic automatically route to a standby?
These capabilities exist in traditional application infrastructure but require adaptation for Node-RED's architecture. Node-RED instances often maintain state related to active connections and in-process flows. Failing over between instances requires careful handling of this state to avoid data loss or duplication.
A common pattern in industrial environments is that PLCs and control systems exist in somewhat closed ecosystems. Accessing data from these systems often requires vendor-specific tools or protocols. This creates a walled garden where your operational data is trapped.
Node-RED provides an open way to extract data from these closed systems. Run Node-RED instances on edge devices co-located with PLCs. These instances connect to PLCs using vendor protocols and expose data in more open formats. This breaks the data out of proprietary silos.
The architecture typically follows a hierarchical pattern. Edge Node-RED instances connect directly to equipment and handle data collection. These edge instances also perform initial processing—data cleansing, basic aggregation, filtering, and pattern recognition. Process data as close to the source as possible.
Edge instances then publish processed data to higher-level instances. These might run on-premises in a data center or in the cloud. At this level, Node-RED combines data from multiple edge sources, performs cross-facility analysis, and feeds enterprise systems or dashboards.
This hierarchical approach distributes processing load appropriately. The edge handles high-frequency data streams—sensors reporting ten times per second. The edge instance processes these high-frequency readings into meaningful information—average temperature increased, threshold exceeded—and sends only the information upstream. Upstream instances receive information from many sources but at much lower rates.
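Inside a Node-RED function node, the edge-side reduction might look like the following sketch. The window size and alert threshold are illustrative, not values from any real deployment:

```javascript
// Sketch of edge aggregation: collapse high-frequency readings into one
// summary message per window. Window size and threshold are illustrative.
function makeAggregator(windowSize, threshold) {
    const readings = [];
    return function process(value) {
        readings.push(value);
        if (readings.length < windowSize) return null; // still filling the window
        const avg = readings.reduce((a, b) => a + b, 0) / readings.length;
        readings.length = 0;                           // reset for the next window
        return {
            average: avg,
            alert: avg > threshold                     // flag threshold breaches upstream
        };
    };
}

// Ten raw readings in, at most one summary message out.
const process = makeAggregator(10, 80);
```

In an actual function node this state would live in the node's `context` store rather than a closure, but the effect is the same: the wire to the upstream instance carries summaries, not raw samples.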
Consider availability requirements for different data types. If you miss a single temperature reading, it rarely matters because another reading arrives shortly. But some data streams are mission-critical and cannot tolerate gaps. Design your architecture with appropriate redundancy for critical data paths.
For some edge sensors, you cannot connect them to multiple systems. The sensor has one physical connection. This limits your redundancy options at the edge. Consider buffering data locally so that temporary communication failures do not result in data loss.
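One way to picture local buffering is a bounded store-and-forward queue: readings accumulate while the uplink is down and flush in order once it recovers. A sketch, with an illustrative capacity and a caller-supplied send function:

```javascript
// Sketch of store-and-forward buffering at the edge. Capacity and the
// send callback are illustrative; send returns true on successful delivery.
function makeBuffer(send, capacity) {
    const queue = [];
    return {
        push(reading) {
            if (queue.length >= capacity) queue.shift(); // drop oldest when full
            queue.push(reading);
        },
        flush() {
            // Deliver in arrival order; stop at the first failure so
            // undelivered readings stay queued for the next attempt.
            while (queue.length > 0) {
                if (!send(queue[0])) return queue.length; // items still pending
                queue.shift();
            }
            return 0;
        }
    };
}
```

In practice this pattern is often delegated to infrastructure—for example, an MQTT broker's persistent sessions or Node-RED context storage—but the trade-off is the same: bounded memory against the longest outage you can tolerate.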
Where you deploy Node-RED instances depends on several factors. Data governance requirements often dictate location. Some organizations have strict policies about data leaving their network. If regulations or policies prohibit sending raw data to cloud services, your processing must happen on-premises or at the edge.
Processing location should follow data location and volume. When sensors generate data at high rates, handle that data locally. A sensor reporting ten readings per second generates 36,000 data points per hour. Sending all raw readings to the cloud wastes bandwidth and cloud processing resources. Instead, process locally to extract meaning—temperature rising, operating normally, anomaly detected—and send this information upstream.
Think in terms of a hierarchy. Raw data becomes information at the edge. Information becomes knowledge at higher levels. Knowledge becomes insight in analytics systems. Each transformation reduces data volume while increasing value.
Practical connectivity also influences deployment. If you are integrating with cloud services that send webhooks, your Node-RED instance must be accessible from the public internet. This typically means running in the cloud. If you need to collect data from equipment that cannot be accessed remotely, your Node-RED instance must run locally near that equipment.
Node-RED's flexibility means you can deploy instances wherever they make sense for your architecture. Edge instances collect and process equipment data. Cloud instances integrate with business systems and provide dashboards. On-premises data center instances aggregate data from multiple facilities. Different instances handle different roles in your overall solution.
Node-RED includes integrated authentication and authorization. You can require users to log in before accessing the editor. You can restrict who can view flows and who can edit them. This access control has been a core Node-RED feature for years and provides the foundation for secure deployments.
Data encryption is an application-level concern rather than a Node-RED runtime concern. If your flows write to a database, that database must be configured with encryption at rest. If your flows transmit sensitive data over networks, you use MQTT over TLS or HTTPS. Node-RED supports these secure transport protocols, but enabling them is a flow design decision.
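As an illustration of that flow design decision, here is a sketch of building TLS options for the `mqtt` npm package, the client library Node-RED's MQTT nodes build on. The broker hostname and certificate paths are placeholders, and loading the certificate files is left to the caller:

```javascript
// Sketch: TLS options for the `mqtt` npm package. Certificate contents
// are passed in so the construction stays independent of the filesystem.
function tlsOptions(caPem, certPem, keyPem) {
    return {
        protocol: "mqtts",       // MQTT over TLS, default port 8883
        ca: caPem,               // broker's CA certificate
        cert: certPem,           // client cert/key for mutual TLS,
        key: keyPem,             // if the broker requires it
        rejectUnauthorized: true // refuse brokers with untrusted certificates
    };
}

// Hypothetical usage (broker URL and paths are placeholders):
//   const mqtt = require("mqtt");
//   const fs = require("fs");
//   const client = mqtt.connect("mqtts://broker.example.com:8883",
//       tlsOptions(fs.readFileSync("ca.pem"),
//                  fs.readFileSync("client.pem"),
//                  fs.readFileSync("client.key")));
```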
Think of Node-RED as one component in your security architecture. It integrates with other systems that each have their own security capabilities. The end-to-end security of your solution depends on properly configuring all components, not just Node-RED.
For industrial environments, network segmentation provides an important security layer. Your operational technology network with PLCs and control systems should be separated from your information technology network. Node-RED instances on the edge sit within the OT network and communicate with IT systems through controlled connection points.
A single Node-RED instance can handle substantial throughput. Many organizations run production systems on single instances for years without hitting performance limits. The throughput of individual instances is often underestimated.
When you do need to scale beyond single instances, the standard approach is horizontal scaling. Run multiple Node-RED instances and distribute work across them. For HTTP-based workloads where external clients make requests, load balancing is straightforward—put a load balancer in front of multiple instances and route requests.
An interesting challenge occurs with outbound connection workloads. When Node-RED instances initiate connections to external systems, traditional load balancing does not apply. MQTT 5 addresses this scenario with shared subscriptions: multiple subscribers share a single subscription, and the broker distributes messages among them.
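In client code, a shared subscription is just a topic prefix. A sketch, with illustrative group and topic names and a hypothetical broker URL:

```javascript
// Sketch: build an MQTT 5 shared-subscription topic. "$share/<group>/<topic>"
// tells the broker to distribute matching messages across the group's members
// instead of copying each message to every subscriber.
function sharedTopic(group, topic) {
    return "$share/" + group + "/" + topic;
}

// Hypothetical usage with the `mqtt` npm package (broker URL is a placeholder):
//   const mqtt = require("mqtt");
//   const client = mqtt.connect("mqtt://broker.example.com",
//       { protocolVersion: 5 });            // shared subscriptions need MQTT 5
//   client.on("connect", () =>
//       client.subscribe(sharedTopic("workers", "factory/telemetry/#")));
```

Every instance in the group subscribes to the same shared topic; the broker, not a load balancer, does the distribution.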
High availability requires careful architectural design. If you are running multiple instances for redundancy, how do they coordinate? If one instance is processing a message when it fails, does another instance reprocess that message? How do you avoid duplicate processing?
These challenges are not unique to Node-RED. They affect any distributed system. The solutions involve designing flows with idempotency in mind, using message queues that support exactly-once delivery semantics, and structuring your data flows to handle potential duplicates gracefully.
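A common building block for handling duplicates is remembering recently seen message IDs and skipping reprocessing. A sketch, with an illustrative time-to-live; in Node-RED this state would typically live in flow or global context:

```javascript
// Sketch of duplicate suppression for at-least-once delivery: track message
// IDs seen within a TTL window and report repeats. TTL is illustrative.
function makeDeduplicator(ttlMs) {
    const seen = new Map(); // message id -> timestamp first seen
    return function isDuplicate(id, now = Date.now()) {
        // Evict entries older than the TTL so memory stays bounded.
        for (const [key, ts] of seen) {
            if (now - ts > ttlMs) seen.delete(key);
        }
        if (seen.has(id)) return true; // already processed: skip
        seen.set(id, now);
        return false;                  // first sighting: process it
    };
}
```

Paired with flows whose side effects are themselves idempotent—an upsert rather than an insert, for example—this makes redelivery after a failover harmless.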
The hierarchical architecture approach helps with scalability. Rather than funneling all data through a single central system, distribute processing across edge instances. Each edge instance handles data from a limited number of devices. The load is naturally distributed because the processing happens near the data source.
Organizations across various industries use Node-RED for production systems, though many do not publicly discuss their implementations. One example that can be shared is Rapa Nui, a UK clothing company that automated their entire t-shirt printing factory using Node-RED running on Raspberry Pi devices.
They started as a small operation and hand-built their automation as they grew. Node-RED controls order processing, conveyor systems, routing, and station management. They have expanded to multiple factories, all using the same Node-RED-based automation. In their early days, a single Raspberry Pi served as the brain for the entire factory.
Large enterprises also use Node-RED extensively. Companies like Siemens and Hitachi have Node-RED-based solutions and actively support the project. Organizations use it for everything from equipment monitoring to complete manufacturing execution functions.
The diversity of use cases demonstrates Node-RED's flexibility. It works for small businesses building custom automation on minimal budgets. It works for large enterprises needing to integrate complex industrial systems. The common thread is the need to connect systems and process data flows without writing extensive custom code.
Node-RED adoption continues to grow in industrial environments. As more organizations successfully deploy it, others gain confidence in the approach. Each production deployment demonstrates that Node-RED can handle enterprise requirements when properly supported.
The open source nature and open governance structure contribute to its success. Node-RED is part of the OpenJS Foundation under the Linux Foundation, not owned by any single company. This open governance means no vendor controls the project direction or restricts functionality.
The extensibility model is crucial. Anyone can write nodes that add functionality. If Node-RED cannot do something you need, you write a node rather than waiting for the core team to add the feature. This has created an active community constantly expanding Node-RED's capabilities.
There will always be space for specialized tools focused on specific use cases. These closed-source solutions can optimize for narrow scenarios. However, the flexibility and extensibility of open platforms like Node-RED provide advantages for diverse and evolving requirements.
The challenge is not whether Node-RED is technically capable for industrial applications. The challenge is providing the surrounding infrastructure that enterprises need for production deployments. This includes DevOps capabilities, high availability features, enterprise support, and commercial backing.
As these enterprise capabilities mature through projects like FlowForge, organizations will face fewer barriers to adoption. The technical capabilities have always been there. The enterprise readiness is catching up, which will accelerate adoption in industrial environments.