November 10, 2025

Most manufacturers understand that connecting their equipment provides value. Data from machines enables better decisions, predictive maintenance, and process optimization. But connecting equipment at one facility is fundamentally different from managing connectivity across dozens or hundreds of facilities. The approaches that work for pilot projects break down at scale.
Peter Sorowka, CEO of Cybus, founded the Hamburg-based company in 2015 to address connectivity challenges in European manufacturing. His perspective comes from working with companies deploying smart factory initiatives across multiple sites, where managing configuration changes manually becomes impractical.
Software development has solved a difficult problem over the past 20 years: delivering robust, functional software while remaining flexible enough to adapt quickly to change. This balance once seemed impossible - you could have stability or flexibility, but not both.
Practices like version control, automated testing, and build pipelines made both possible. Developers can change their roadmap frequently while still delivering working software. Every change is tested, deployable, and reversible. This approach transformed how software gets built and deployed.
These same principles now apply to managing IT infrastructure. Twenty years ago, deploying antivirus software to 1,500 laptops meant logging into each one and clicking install. Tedious, time-consuming, error-prone, and effectively irreversible. Today, you write a script, test it on a few machines, then deploy it automatically to all 1,500. If something goes wrong, you roll back to the previous state.
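In outline, that staged-rollout pattern looks something like the sketch below. The install and rollback helpers are hypothetical stand-ins for whatever deployment mechanism an organization actually uses, not any specific vendor tool:

```python
# Staged rollout with rollback: test on a small canary group first,
# then deploy fleet-wide, reverting machines where deployment failed.
# install() and rollback() are hypothetical placeholders.

CANARY = ["laptop-001", "laptop-002", "laptop-003"]
FLEET = [f"laptop-{i:03d}" for i in range(1, 1501)]

def install(machine: str, package: str) -> bool:
    """Deploy the package to one machine; return True on success."""
    ...  # real implementation: SSH, MDM API, config management, etc.

def rollback(machine: str, package: str) -> None:
    """Restore the machine to its previous known-good state."""
    ...

def deploy(package: str) -> None:
    # Gate the full rollout on the canary group succeeding.
    if not all(install(m, package) for m in CANARY):
        for m in CANARY:
            rollback(m, package)
        raise RuntimeError("canary failed; rollout aborted")
    failed = [m for m in FLEET if not install(m, package)]
    for m in failed:
        rollback(m, package)  # only failures revert; the rest stay current
```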
Cloud infrastructure works the same way. When you need 12 databases and 500 servers on Azure or AWS, nobody configures them manually. You define what you need in code, test it, deploy it, and modify it as requirements change. Everything is auditable, versionable, and reversible.
Manufacturing faces similar challenges when connecting equipment. Connecting one machine to collect data works fine with manual configuration. But what happens when you need to connect 200 machines? Or 2,000 machines across 50 factories?
Manual configuration doesn't scale. Each connection requires someone who understands the equipment, the protocols, the network, and the destination systems. They configure settings, test connectivity, document what they did, and hope they remember the details when something needs changing.
When you multiply this across hundreds of machines and dozens of sites, several problems emerge. Configuration becomes inconsistent between sites. Changes take months to roll out. Nobody remembers exactly how things were configured. Upgrading software or changing data models requires touching every connection individually. The system becomes rigid and expensive to maintain.
This is where Infrastructure as Code principles help. Instead of configuring connections manually, you define them in code. The code describes what equipment connects to what systems, what data gets collected, how it gets transformed, and where it goes. You test this code, version it, and deploy it systematically.
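As a rough illustration of what "connections defined in code" can mean - the field names and file layout here are assumptions for the sketch, not any particular platform's schema - a single machine connection might be described like this:

```python
# A machine connection described as data rather than clicks: what it
# speaks, what to read, and where the values go. Field names are
# illustrative, not a specific product's schema.
from dataclasses import dataclass

@dataclass
class Connection:
    machine: str            # logical asset name
    protocol: str           # e.g. "opcua", "modbus-tcp", "s7"
    address: str            # network endpoint of the device
    datapoints: list[str]   # tags/nodes to collect
    target: str             # destination system or broker topic

# In practice this dict would be parsed from a YAML/JSON file kept in Git.
raw = {
    "machine": "press-04",
    "protocol": "opcua",
    "address": "opc.tcp://10.20.30.40:4840",
    "datapoints": ["Temperature", "CycleCount"],
    "target": "site1/hall2/press-04/telemetry",
}

conn = Connection(**raw)
print(f"{conn.machine}: {len(conn.datapoints)} datapoints -> {conn.target}")
```

Because the definition is just a file, it can be reviewed, tested, versioned, and deployed like any other code.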
Krone, a German manufacturer of truck trailers and agricultural machinery, built a new factory using event-driven architecture throughout. They designed the entire facility around events - everything that happens becomes a message on an MQTT broker.
The welding robots, AGVs, quality management system, MES, and intralogistics all produce or consume events. They defined a standard topic structure and event semantics for the entire factory. When planning the facility, they told all their suppliers: integrate with our event layer or provide your system as-is and we'll add the integration ourselves.
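As a hedged sketch of what plugging into such an event layer can look like - the broker address and topic convention below are illustrative assumptions, not Krone's actual schema - a producer publishes structured events to agreed topics:

```python
# Publishing a factory event onto a shared MQTT broker using a
# standardized topic structure: site/hall/asset/event-type.
# Broker address and topic layout are illustrative assumptions.
import json
import time
import paho.mqtt.client as mqtt

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)  # paho-mqtt >= 2.0
client.connect("broker.factory.local", 1883)
client.loop_start()

event = {
    "asset": "welding-robot-07",
    "event": "weld_completed",
    "seam_id": "A113",
    "timestamp": time.time(),
}
# Any system - MES, AGV fleet manager, quality system - can subscribe
# to this topic pattern without a point-to-point integration project.
client.publish("site1/hall2/welding-robot-07/weld_completed",
               json.dumps(event), qos=1)

client.loop_stop()
client.disconnect()
```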
This approach eliminates traditional IT integration projects. Each new system added to the factory plugs into the event layer. The interconnectivity is defined in code. Configuration changes don't require calling integrators or scheduling downtime. The factory team makes changes themselves using the Infrastructure as Code platform.
The benefit shows up in deployment speed and flexibility. As they commission new halls and add equipment, integration takes days instead of months. They maintain full control without dependency on external IT projects. The factory can evolve as business needs change.
Automotive manufacturing typically allows IT changes twice per year during planned shutdowns. This makes sense - you can't risk disrupting production when cars roll off the line every few minutes. But it also kills innovation. When you can only change things twice yearly and can't afford failures, you don't experiment.
One automotive manufacturer used Infrastructure as Code to break this constraint. They built their data integration layer using code-based configuration. This enables deploying new configurations, onboarding new assets, tweaking data models, and integrating new use cases multiple times per day during active production.
Small, incremental changes become safe. If something doesn't work, you roll it back immediately. This agility lets teams experiment and iterate. They can try new approaches, learn from results, and improve continuously rather than waiting six months between change windows.
They started with one production line and are now extending the approach across the entire factory. The IT infrastructure becomes manageable and flexible rather than rigid and risky.
Hyperscalers like Microsoft and Amazon offer compelling cloud infrastructure. But manufacturers face questions about data sovereignty - who ultimately controls factory data and the systems that use it.
Microsoft recently discontinued Azure IoT Edge with only about six months' notice, surprising customers who had built solutions on the platform. This illustrates a challenge with hyperscaler products: they're quick to launch new offerings and equally quick to discontinue products that don't scale to their expectations.
Factories are diverse. Each has unique equipment, protocols, network constraints, and requirements. This diversity makes it difficult for hyperscalers to create on-premise products that scale profitably. They excel at standardized cloud infrastructure but struggle with the variability of factory environments.
This creates an argument for European or regional independence in factory infrastructure. Everything that happens within factory walls represents the last area where manufacturers maintain full control of their data. Relying too heavily on US hyperscalers, even for on-premise infrastructure, creates dependencies that may not align with long-term manufacturing interests.
Independent platforms, particularly those from regions with strong data protection traditions, offer alternatives for manufacturers who want to maintain sovereignty over their local infrastructure and data.
Low-code platforms have become popular in recent years. They promise that non-programmers can build applications through visual interfaces and configuration rather than coding. This works well for certain use cases like configuring CRM workflows or building simple business applications.
However, defining robust infrastructure requires more rigor than low-code platforms typically provide. You need proper testing, version control, rollback capability, and the ability to handle edge cases. Code provides these capabilities better than visual configuration tools.
Think of it this way: low-code works for simple, well-defined tasks where the platform anticipates your needs. Infrastructure configuration involves complex logic, error handling, and integration with diverse systems. Code gives you the precision and control needed for production infrastructure.
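To make that rigor concrete, here is a sketch of the kind of automated check code-based configuration makes possible - a test that runs in CI before any definition reaches production. The file paths, required fields, and conventions are assumed for illustration:

```python
# A pre-deployment test for connection definitions: run before any
# configuration reaches a factory. Paths, fields, and conventions
# are illustrative assumptions, not a specific product's rules.
import glob
import json
import pytest

REQUIRED_FIELDS = {"machine", "protocol", "address", "datapoints", "target"}
ALLOWED_PROTOCOLS = {"opcua", "modbus-tcp", "s7", "mqtt"}

@pytest.mark.parametrize("path", glob.glob("connections/*.json"))
def test_connection_definition_is_valid(path):
    with open(path) as f:
        conn = json.load(f)
    missing = REQUIRED_FIELDS - conn.keys()
    assert not missing, f"{path}: missing fields {missing}"
    assert conn["protocol"] in ALLOWED_PROTOCOLS
    # Enforce the agreed topic convention: site/hall/asset/...
    assert len(conn["target"].split("/")) >= 3, "target must follow topic convention"
```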
This doesn't mean factory workers need to become programmers. It means infrastructure specialists use code-based tools, while operators use applications built on top of that infrastructure. The separation of concerns matters - the infrastructure layer needs robustness while the application layer needs ease of use.
Code-based configuration provides capabilities that manual configuration cannot match. When connectivity is defined in files, those files can be versioned using standard tools like Git. You can see exactly what changed, when it changed, and who changed it.
Need to understand why a machine stopped sending data? Check the configuration history. Want to roll back a change that caused problems? Revert to the previous version. Need to deploy the same configuration to ten more factories? Copy the configuration files and adjust site-specific parameters.
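The replication step can be as simple as merging a shared base configuration with a small per-site override file at deploy time. In this sketch the parameter names are invented for illustration:

```python
# Rolling one proven configuration out to many sites: a shared base
# plus per-site overrides, merged at deploy time. Parameter names
# are illustrative assumptions.
import copy

BASE = {
    "broker": {"port": 1883, "qos": 1},
    "sampling_interval_s": 5,
}

SITE_OVERRIDES = {
    "factory-a": {"broker": {"host": "10.1.0.10"}},
    "factory-b": {"broker": {"host": "10.2.0.10"}, "sampling_interval_s": 1},
}

def merge(base: dict, override: dict) -> dict:
    """Deep-merge an override dict onto a base dict, non-destructively."""
    out = copy.deepcopy(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = merge(out[key], value)
        else:
            out[key] = value
    return out

for site, override in SITE_OVERRIDES.items():
    cfg = merge(BASE, override)
    print(site, cfg)  # in practice: written to a file and committed to Git
```

Because every merged result lands in Git, rolling back a bad change is a revert rather than a site visit.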
This auditability becomes critical for regulated industries. When auditors ask about system changes, you can show exactly what was modified and when. When problems occur, you can trace back through configuration history to identify when issues started.
Change management also improves. Instead of one person who knows how everything is configured, the configuration exists as code that the team can review. Multiple people can propose changes. Changes can be reviewed before deployment. Testing happens in non-production environments before rollout to production.
Infrastructure as Code requires different skills than manual configuration. Instead of using vendor-specific configuration tools, teams work with code files, version control systems, and deployment pipelines. This creates a learning curve.
The challenge is real but manageable. Organizations don't need every factory technician to master these tools. They need small teams who understand both factory operations and code-based infrastructure management. These teams create and maintain the infrastructure definitions. Operators use applications built on top of this infrastructure without needing to understand the underlying code.
Training programs can develop these hybrid skills. Controls engineers often have programming backgrounds from PLC work. IT staff understand version control and deployment pipelines. Bringing these perspectives together creates the capability needed for Infrastructure as Code approaches.
Generative AI will likely reduce the learning curve for Infrastructure as Code. Instead of learning specific syntax and configuration patterns, users could describe what they need in natural language. The AI generates appropriate configuration code. A skilled reviewer checks the output, but the initial creation becomes much faster.
Imagine telling a system "route data from these three robots into SAP" and having it generate the necessary configuration automatically. Or asking it to "replicate the configuration from Factory A to Factory B with these modifications" and getting working code you can review and deploy.
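A deliberately hypothetical sketch of that workflow - neither function here is an existing API, and the point is simply that generated configuration passes the same validation and human review as hand-written configuration:

```python
# Hypothetical natural-language-to-configuration workflow. The model
# call is a placeholder; the essential step is that generated output
# is validated and human-reviewed before it is ever deployed.

def generate_config(request: str) -> str:
    """Placeholder for an LLM call that returns configuration text."""
    ...  # e.g. prompt a model with the factory's schema and examples

def validate(config_text: str) -> bool:
    """Run the same automated checks applied to hand-written configs."""
    ...  # schema validation, topic-convention tests, dry-run deploy

draft = generate_config(
    "Route cycle counts from robots R1, R2 and R3 into the SAP interface"
)
if draft and validate(draft):
    print("Draft ready for human review before commit and rollout.")
```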
This future isn't far off. The foundation work - standardized configuration formats, well-defined patterns, version-controlled repositories - enables AI to learn from examples and generate appropriate configurations. Combined with Infrastructure as Code principles, AI could make factory connectivity much more accessible.
Some manufacturers state openly that within ten years they want to operate factories the way they operate data centers. This makes sense when you consider the trajectory of manufacturing digitalization.
Data centers run on Infrastructure as Code principles. Changes happen frequently and safely. Systems are monitored continuously. Problems trigger automated responses. Redundancy at multiple levels ensures availability. Configuration is standardized across facilities. This operational maturity comes from treating infrastructure as code rather than as manually configured systems.
Factories are moving in this direction. Equipment generates events. Systems respond to conditions automatically. Monitoring becomes continuous rather than periodic. Configuration becomes standardized and automated. The same principles that made data centers reliable and flexible now apply to factory operations.
Organizations don't need to transform everything immediately. Start with one use case at one facility. Connect a subset of equipment using Infrastructure as Code principles. Learn what works, identify challenges, and iterate.
Common starting points include new factories or production lines where you can design connectivity from scratch. Or areas where manual configuration has become a bottleneck. Or use cases requiring frequent changes where traditional approaches are too slow.
Success requires commitment to the approach. You can't mix manual configuration and code-based configuration without creating confusion. Pick an area, commit to the Infrastructure as Code approach, and build expertise with that foundation.
The manufacturers succeeding with these approaches treat Infrastructure as Code as an operational capability rather than a project. They invest in skills, establish standards, and continuously improve their practices. That operational maturity delivers the speed, reliability, and control the approach promises.