November 9, 2025

How standardized data models and platform-based approaches enable agile digitalization and ecosystem collaboration
Manufacturing organizations have invested heavily in digitalization over the past decade. Enterprise resource planning systems manage business processes. SCADA systems control production equipment. Yet many manufacturers find themselves with data scattered across dozens of disconnected systems, unable to extract the value they expected from their technology investments.
This situation is common enough to have a name: data silos. Each application maintains its own database. Each system uses different data formats. Connecting systems requires custom integration work. When organizations try to implement analytics or artificial intelligence, they discover their data is unusable—disconnected, unstructured, and lacking context.
Sandeep Sreekumar, co-founder and COO of IndustryApps, spent over 20 years leading digital transformation programs across industries. As global head of digital operations for a German company with 140 factories worldwide, he experienced these challenges firsthand. His perspective is that the solution requires rethinking how manufacturing organizations approach technology adoption—moving from point solutions to platform strategies and from proprietary data formats to industrial data spaces built on open standards.
Industrial technology deployment traditionally operates on long timelines. A manufacturing execution system project runs for two years. A machine learning initiative requires five years of data collection. Organizations negotiate with fifteen vendors over six months before selecting one. These long cycles were accepted as normal in industrial environments.
This approach no longer works in a rapidly changing technology landscape. Generative AI emerged as a major technology in months, not years. Technologies considered cutting edge today become legacy within six months. Organizations planning multi-year implementations risk building systems that are outdated before they launch.
Speed of value delivery has become a competitive advantage. Organizations that can deploy technology in weeks rather than years learn faster, adapt quicker, and respond to changing business conditions more effectively. This requires a fundamental mindset shift, particularly for senior leaders with decades of experience in traditional industrial technology deployment.
The challenge is cultural as much as technical. A CEO with thirty years of operational experience has always seen technology deployed slowly. Changing this mindset requires demonstrating that faster approaches work reliably. It requires digital teams presenting alternatives to traditional implementation methods and proving that agility does not mean sacrificing reliability or quality.
The typical path to digitalization follows a point solution strategy. The organization needs a specific capability—say, overall equipment effectiveness (OEE) tracking. They evaluate products, select one, and implement it in a project. Six months later, they need predictive maintenance. They select another product and implement it in another project. This continues for years.
The result is a technology landscape with dozens of applications, each with its own database, user interface, and data format. These systems do not communicate with each other. Data that could provide insights when combined remains isolated. When the organization wants to implement advanced analytics, they face enormous integration challenges.
This creates what is often called a data swamp—massive amounts of data stored in incompatible formats with no unified structure. A data scientist wanting to analyze equipment performance might need to extract data from a historian, combine it with quality data from another system, and correlate it with production data from a third system. The data formats do not align. The timestamps might be in different time zones. Asset identifiers do not match across systems.
Even when organizations attempt to address this through data warehouse or data lake projects, they often simply move the problem to a new location. They migrate terabytes of data from various systems into a central repository. But without proper contextualization, this data remains unusable. A temperature tag labeled "ABT_17_XYZ_2" has no meaning without knowing which equipment it came from, what it measures, and what its normal operating range is.
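To make this concrete, here is a minimal sketch, using entirely hypothetical exports from a historian and a quality system, of the reconciliation work that has to happen before any analysis can start: timestamps normalized to a common time zone and asset identifiers mapped across systems by hand.

```python
from datetime import datetime, timezone, timedelta

# Hypothetical rows exported from two disconnected systems.
# The historian reports local plant time and uses PLC tag names;
# the quality system reports UTC and uses ERP equipment numbers.
historian_rows = [
    {"tag": "ABT_17_XYZ_2", "value": 81.4, "ts": "2025-03-03 14:05:00"},
]
quality_rows = [
    {"equipment_no": "EQ-000417", "defect_rate": 0.021, "ts": "2025-03-03T12:05:00Z"},
]

# Neither mapping exists in either system; someone has to reconstruct
# them from tribal knowledge before the data can be joined.
PLANT_TZ = timezone(timedelta(hours=2))           # assumption: historian uses local time (UTC+2)
TAG_TO_EQUIPMENT = {"ABT_17_XYZ_2": "EQ-000417"}  # assumption: hand-built mapping table

def normalize_historian(row):
    """Convert a historian row to UTC and attach the ERP equipment number."""
    local = datetime.strptime(row["ts"], "%Y-%m-%d %H:%M:%S").replace(tzinfo=PLANT_TZ)
    return {
        "equipment_no": TAG_TO_EQUIPMENT[row["tag"]],
        "value": row["value"],
        "ts": local.astimezone(timezone.utc),
    }

def normalize_quality(row):
    """Parse the quality system's ISO timestamps into aware UTC datetimes."""
    return {
        "equipment_no": row["equipment_no"],
        "defect_rate": row["defect_rate"],
        "ts": datetime.fromisoformat(row["ts"].replace("Z", "+00:00")),
    }

# Only after this manual reconciliation do the two records line up.
h = normalize_historian(historian_rows[0])
q = normalize_quality(quality_rows[0])
assert h["equipment_no"] == q["equipment_no"] and h["ts"] == q["ts"]
print(h, q)
```

Every factory and every pair of systems needs its own version of this glue code, which is exactly the burden that contextualized, standardized data models aim to remove.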
Consumer technology shows a different model. Smartphone users download applications from centralized app stores. They do not negotiate separately with each software vendor. They do not maintain separate accounts and payment methods for each application. Everything works through a unified platform.
A platform strategy for manufacturing applies similar principles. Instead of procuring and implementing each application separately, organizations access applications through a unified platform. The platform handles connectivity to factory equipment. It provides standardized data access. Applications integrate through common interfaces rather than custom connections.
This creates several benefits. Technology scouting becomes simpler—browse available applications in a catalog rather than searching independently. Deployment becomes faster—applications connect to data already available in the platform rather than requiring custom integration. Management becomes centralized—one login, one user management system, one security model across all applications.
For vendors, platforms provide access to markets they could not reach independently. A small company in Eastern Europe can sell their application to factories in China or Australia without establishing local sales presence. The platform handles the commercial relationship, data connectivity, and support infrastructure.
The platform also standardizes the vendor onboarding process. Rather than each potential customer requiring the vendor to complete questionnaires and undergo security reviews, the platform performs this due diligence once. Organizations can trust that applications available through the platform meet baseline quality, security, and compliance requirements.
Industrial data spaces represent an evolution beyond data lakes. A data lake collects raw data from multiple sources into central storage. In theory, this enables analytics across the organization. In practice, data lakes often fail to deliver value because the data lacks context and structure.
An industrial data space organizes data according to standardized models. Rather than storing raw sensor values, it creates digital twins of equipment with sensor data attached to specific assets. It uses semantic models that define what data means, not just what values it contains. This contextualization makes data usable for analysis and collaboration.
The concept extends beyond a single organization. Industrial data spaces enable secure data sharing across company boundaries. A manufacturer can share relevant production data with their suppliers or customers without exposing their entire data infrastructure. They control exactly what data is shared, with whom, and under what conditions.
This ecosystem collaboration becomes increasingly important as industry requirements tighten. Automotive companies need to track carbon footprint across their entire supply chain. Regulatory compliance requires traceability from raw materials through finished products. These requirements cannot be met within a single organization. They require standardized data exchange across the supply chain.
For industrial data spaces to enable collaboration, they require standardized data models that all participants understand. Proprietary formats do not work—you cannot assume your partners use the same cloud provider, ERP system, or data structure that you use.
The Asset Administration Shell provides a standardized digital twin format developed by the Industrial Digital Twin Association (IDTA). It defines how to represent industrial assets—equipment, products, processes—in a machine-readable format. It supports submodels covering different aspects such as maintenance, quality, carbon footprint, and technical specifications.
These submodels continue to evolve as industry needs change. Organizations like IDTA, Open Industry 4.0 Alliance, and Catena-X work on defining and maintaining these standards. For OPC UA users, companion specifications provide semantic models that can integrate with Asset Administration Shell structures.
The key point is that these are open standards, not proprietary formats controlled by a single company. Any organization can implement them. Any software vendor can build applications that consume and produce data in these formats. This openness enables the ecosystem collaboration that industrial data spaces require.
For manufacturers, following open standards means avoiding vendor lock-in. When data is stored in proprietary formats, switching vendors becomes prohibitively expensive. When data follows open standards, applications become interchangeable. If a vendor's service deteriorates or better alternatives emerge, switching is feasible.
Many organizations have already invested in data lake projects, often with disappointing results. Understanding why data lakes failed helps explain what data spaces do differently.
A typical data lake project migrates historical data from various sources into cloud storage. An organization might move years of historian data—millions of data points—to create a foundation for analytics and machine learning. The technical data migration succeeds, but the business value does not materialize.
The problem is context. A historian stores tag names and values with timestamps. But what does a tag name mean? If the engineer who programmed that PLC left the company five years ago, no one may remember what "TEMP_01_ZONE_A" represents. Is it an actual temperature or a setpoint? What equipment is it monitoring? What are normal operating ranges?
Without this context, data scientists cannot build meaningful models. They cannot even clean the data properly because they do not know which values are valid and which are errors. Terabytes of data sit unused because the necessary metadata and relationships were never captured.
Industrial data spaces address this by requiring contextualization before data enters the space. Temperature sensors are not standalone tags. They are properties of equipment digital twins. The digital twin includes semantic information about the equipment, its relationships to other assets, and what each data point represents. This contextualization happens at data collection time, not as a cleanup project years later.
The structure is not arbitrary. It follows standardized models like Asset Administration Shell. This means analytics tools can query the data space for information about specific equipment and receive structured responses. The tools do not need custom integration for each factory or data source because the data structure is standardized.
Organizations considering platform strategies often ask about quality control. If multiple vendors can offer applications through a platform, how do you ensure quality and reliability? The answer is standardized onboarding processes that vendors complete once rather than separately for each customer.
IndustryApps, for example, requires vendors to demonstrate their application is running in at least six factories before platform onboarding. This threshold ensures the product has matured through several years of real-world use. New, unproven applications are not offered until they demonstrate market success.
The onboarding process includes security reviews, penetration testing, architecture evaluation, and customer references. Vendors provide detailed information about data handling, security practices, and compliance capabilities. This information is collected in standardized formats rather than different questionnaires from each potential customer.
For vendors, this reduces the burden of responding to dozens of security reviews and compliance questionnaires from different customers. They complete the process once for platform onboarding rather than repeatedly for each opportunity.
For customers, this provides confidence that applications available through the platform meet professional standards. Small and medium manufacturers who lack extensive IT security teams benefit from quality assurance performed at the platform level. They get access to the same rigor that large enterprises apply to vendor selection.
Security concerns often arise when discussing data spaces and cloud platforms. Organizations worry about where data resides, who can access it, and how it is protected. Industrial data spaces address these concerns through several mechanisms.
First, data spaces do not dictate where organizations store data. Large enterprises with established cloud infrastructure and security teams can maintain data in their own environments. The data space provides transformation, modeling, and sharing capabilities without requiring data migration to centralized storage.
Smaller organizations without cloud expertise can use managed environments provided by the platform. These environments comply with relevant standards and regulations, providing enterprise-grade security without requiring organizations to build that capability internally.
Second, data spaces use granular access controls. Organizations control exactly what data they share and with whom. A manufacturer might share production capacity information with their customers but not detailed equipment performance data. They might share quality metrics with certain suppliers but not others. These decisions remain under the data owner's control.
Third, industrial data spaces separate data storage from data access. Applications do not receive direct database access. They receive data through standardized APIs that enforce access policies. This means organizations can change underlying storage or revoke access without modifying applications.
Organizations beginning platform and data space implementations should focus on several key principles. First, accept that agility is necessary. Multi-year digitalization programs that deliver value only after complete implementation no longer work effectively. Look for approaches that deliver incremental value quickly, allowing you to learn and adapt.
Second, prioritize standardization. Participating in industrial data spaces will become necessary for supply chain collaboration, regulatory compliance, and ecosystem participation. Following open standards like Asset Administration Shell positions your organization for these requirements. Engage with organizations like IDTA, Open Industry 4.0 Alliance, or industry-specific initiatives like Catena-X to understand where standards are heading.
Third, avoid vendor lock-in. Evaluate whether technologies you implement use open standards or proprietary formats. Consider subscription models that provide flexibility to change vendors over time rather than large upfront investments that create switching costs.
Fourth, move away from point solution strategies. Each new application that creates another data silo makes your overall data infrastructure more complex. Think about unified architectures where applications share common data access rather than each maintaining separate databases.
Fifth, consider platforms that provide multiple capabilities through a single architecture. Connectivity to equipment, data modeling, application deployment, security, and user management should work together cohesively rather than being assembled from separate products.
Finally, recognize that this transition takes time but must begin now. Organizations that wait for complete clarity before starting will find themselves unable to participate in digital supply chain ecosystems. Start with focused implementations that deliver value while building toward a more comprehensive platform strategy.