October 7, 2025

ISA-95 Masterclass: How To Standardise Your Industrial Data Architecture with ISA-95

Data in manufacturing exists everywhere, but shared understanding exists nowhere. Operations, engineering, quality, maintenance, and management teams each maintain separate data systems, creating isolated silos where the same metrics produce different results depending on who calculates them.


This fragmentation destroys AI initiatives before they begin. Machine learning models trained on data that means different things to different teams produce unreliable outputs. Analytics that work at one site fail when scaled to others because each location has implemented its own definitions and calculation rules. Cross-functional initiatives stall because teams can't agree on basic metrics like production efficiency or quality rates.


The root problem isn't technical, it's semantic. Without a shared ontology defining what manufacturing entities are, how they relate, and what context surrounds them, organizations cannot build scalable data infrastructure. Adding more dashboards, integration tools, or data lakes only amplifies the underlying inconsistency.


ISA-95 provides the solution: a proven 25-year-old standard that defines a comprehensive ontology for manufacturing operations. Rather than forcing rigid structures, it establishes shared vocabulary and relationships that enable incremental growth. Organizations can start with one use case, model only what's needed, then expand by filling in entity placeholders that already exist in the standard.

Jeron Jansen, an ISA-95 expert with 17 years of experience implementing manufacturing systems, has seen this pattern repeatedly: companies that adopt standardized ontologies early achieve AI scale, while those building custom models remain trapped in pilot purgatory. The difference isn't technical sophistication, it's whether everyone speaks the same data language.

The Real Cost of Data Fragmentation

The Excel spreadsheet problem (every team keeping its own spreadsheet of the numbers it trusts) represents a deeper issue: each team has created its own isolated version of manufacturing truth, and those versions don't align.

How fragmentation compounds:

  • Operations tracks performance metrics optimized for shift handoffs and real-time decisions
  • Process engineers analyze the same data but apply different filters and aggregations for root cause analysis
  • Quality teams maintain separate records with their own timestamps and batch associations
  • Maintenance logs downtime events without clear links to production context
  • Management receives reports aggregated from multiple sources, with calculations that don't match any operational team's numbers

Each group has built sophisticated tools over years. They trust their data because they built it. But when you try to create enterprise-wide analytics or train AI models, you discover these data sets don't interoperate. They use different definitions for "batch," different rules for calculating downtime, different ways of associating materials with production runs.

Why this destroys AI initiatives:

AI needs context to understand what data means. Show an AI agent the 100 metal detector alarms that fired on Line 3 yesterday and it might conclude there's a machine problem requiring immediate maintenance. But with proper context, you'd know those detections happened during scheduled maintenance, when engineers deliberately tested the metal detector with calibration samples.

Without a shared ontology (a common understanding of entities, relationships, and context), your AI is learning from noise, not signal. It's making decisions based on data it fundamentally misunderstands.
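
To make that concrete, here is a minimal sketch (plain Python, invented data and field names) of how the same alarm records read once equipment state is attached as context. The point is simply that the state field changes the conclusion.

```python
from dataclasses import dataclass

@dataclass
class Alarm:
    """A single metal detector alarm plus the equipment state at that moment."""
    line: str
    timestamp: str        # ISO 8601 string, kept simple for the sketch
    equipment_state: str  # e.g. "production", "maintenance", "startup"

# Invented data: 100 alarms raised on Line 3 while engineers tested the detector.
alarms = [Alarm("Line 3", f"2025-10-06T09:{i % 60:02d}:00", "maintenance") for i in range(100)]

# Without context, every alarm looks like a potential machine fault.
print("Raw alarm count:", len(alarms))  # 100

# With context, only alarms raised while the line was actually producing matter.
relevant = [a for a in alarms if a.equipment_state == "production"]
print("Alarms during production:", len(relevant))  # 0
```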

ISA-95 Is an Ontology, Not a Hierarchy

Most people think ISA-95 is just about the equipment hierarchy (Enterprise → Site → Area → Line → Equipment) or the functional levels (Level 0 through Level 4). That's the tip of the iceberg.

ISA-95 is a comprehensive ontology that defines:

What entities exist in manufacturing:

  • Equipment, materials, personnel (resources)
  • Job orders, schedules, work definitions (production management)
  • Material lots, product definitions, inventory (materials management)
  • Work requests, maintenance schedules (maintenance management)
  • Test specifications, test results (quality management)

How entities relate to each other:

  • Which equipment ran which job order
  • Which material lots were consumed by which production run
  • Which personnel were assigned to which shift on which line
  • How quality test results link to specific batches
  • When maintenance activities blocked production schedules

What context surrounds the data:

  • Time boundaries (when did this job start and end?)
  • Equipment state (was the line in production, maintenance, or startup?)
  • Material traceability (which supplier lot fed into which finished goods?)
  • Process parameters (what were the recipe setpoints vs. actual values?)

This is fundamentally different from just structuring data. Structure tells you where data lives. Ontology tells you what data means and how it relates to everything else.
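
To make the distinction tangible, here is a loose sketch in plain Python dataclasses. The names are simplified stand-ins, not the standard's full object model, but they show the essential idea: entities carry explicit relationships to each other, not just values.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Equipment:
    equipment_id: str
    hierarchy_path: str              # e.g. "Enterprise/Site A/Packing/Line 3"

@dataclass
class MaterialLot:
    lot_id: str
    material_definition: str         # what the material is
    supplier_lot: Optional[str] = None

@dataclass
class JobOrder:
    job_order_id: str
    product: str
    equipment: Equipment                        # which equipment ran this order
    consumed_lots: list = field(default_factory=list)
    start: Optional[str] = None                 # time boundaries (ISO 8601)
    end: Optional[str] = None

# The relationships are what turn isolated records into context:
line3 = Equipment("EQ-L3", "Enterprise/Site A/Packing/Line 3")
lot = MaterialLot("LOT-42", "Flour, grade A", supplier_lot="SUP-2025-118")
job = JobOrder("JO-1001", "Biscuit 250g", equipment=line3, consumed_lots=[lot],
               start="2025-10-06T06:00:00", end="2025-10-06T14:00:00")
print(job.equipment.hierarchy_path, [m.lot_id for m in job.consumed_lots])
```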

The anchor points:

ISA-95 provides multiple entry points for querying data based on what you're investigating:

  • Start from a job order to see everything that happened during that production run
  • Start from equipment to compare performance across different products or shifts
  • Start from a material lot to trace everywhere that lot was used and what quality results it produced
  • Start from a time period to see all activities (production, maintenance, quality) in that window

Every anchor point gives you access to the full context because the ontology defines how everything connects.
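
As a rough illustration (plain dictionaries and invented records, not a real ISA-95 data store), the same set of linked records can be entered from a job order, from a piece of equipment, or from a material lot:

```python
# Invented linked records; in a real system these live in an ISA-95-shaped data model.
job_orders = [
    {"job_order_id": "JO-1001", "equipment_id": "EQ-L3", "consumed_lots": ["LOT-42"],
     "start": "2025-10-06T06:00", "end": "2025-10-06T14:00"},
    {"job_order_id": "JO-1002", "equipment_id": "EQ-L3", "consumed_lots": ["LOT-43"],
     "start": "2025-10-06T14:00", "end": "2025-10-06T22:00"},
]
quality_results = [
    {"lot_id": "LOT-42", "test": "moisture", "result": 11.8, "spec_max": 12.0},
]

# Anchor on a job order: everything that happened during that run.
def context_for_job(job_order_id):
    return next(j for j in job_orders if j["job_order_id"] == job_order_id)

# Anchor on equipment: compare runs across products or shifts.
def jobs_on_equipment(equipment_id):
    return [j for j in job_orders if j["equipment_id"] == equipment_id]

# Anchor on a material lot: trace where it was used and how it tested.
def trace_lot(lot_id):
    used_in = [j["job_order_id"] for j in job_orders if lot_id in j["consumed_lots"]]
    tests = [q for q in quality_results if q["lot_id"] == lot_id]
    return {"used_in": used_in, "quality": tests}

print(context_for_job("JO-1001")["start"])
print(len(jobs_on_equipment("EQ-L3")), "runs on EQ-L3")
print(trace_lot("LOT-42"))
```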

Breaking Down ISA-95: What Each Part Actually Does

ISA-95 has multiple parts, each serving a specific purpose. Understanding which parts matter for your use case prevents the overwhelming feeling of "this is too much."

Part 1: Overview and fundamental concepts

Your starting point. Explains the manufacturing operations management (MOM) domain—what falls inside (production, maintenance, quality, inventory) and where boundaries exist with ERP above and controls below. Read this first to understand scope.

Part 3: Activity model

The most practical part for initial implementation. Describes eight activity categories that happen in manufacturing:

  1. Resource management (what equipment, materials, personnel exist?)
  2. Definition management (how do we make things—recipes, work masters)
  3. Scheduling and production planning
  4. Production execution (sending commands to equipment)
  5. Data collection (capturing what actually happened)
  6. Performance analysis (actual vs. planned)
  7. Tracking and tracing
  8. Production coordination

Use Part 3 to map your current state. Walk through each activity and identify: where do we do this today? Which system? Which team? What gaps exist? This becomes your roadmap for what needs integration.
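
One lightweight way to run that exercise is to capture the map as data rather than a slide. The sketch below uses the activity names from the list above; the systems, teams, and gaps are invented examples, not recommendations.

```python
# Current-state map of the Part 3 activities: who does this today, where, and what's missing.
activity_map = {
    "Resource management":     {"system": "Excel + CMMS",  "team": "Engineering", "gap": "no single equipment register"},
    "Definition management":   {"system": "Paper recipes", "team": "Process",     "gap": "versions not controlled"},
    "Scheduling and planning": {"system": "ERP",           "team": "Planning",    "gap": "plan not visible on the floor"},
    "Production execution":    {"system": "SCADA",         "team": "Operations",  "gap": "no link to job orders"},
    "Data collection":         {"system": "Historian",     "team": "Operations",  "gap": "tags lack context"},
    "Performance analysis":    {"system": "Excel",         "team": "Operations",  "gap": "five OEE definitions"},
    "Tracking and tracing":    {"system": "Manual logs",   "team": "Quality",     "gap": "lot genealogy incomplete"},
    "Production coordination": {"system": "None",          "team": "-",           "gap": "handled ad hoc"},
}

for activity, state in activity_map.items():
    print(f"{activity:26s} {state['system']:15s} gap: {state['gap']}")
```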

Parts 2 and 4: Data models

The technical depth—UML object models defining exactly what information exists in each entity and how they relate. These are reference documents for implementation:

  • Part 2: Information exchange between Level 4 (ERP) and Level 3 (MOM systems)
  • Part 4: Information exchange between different Level 3 systems (MES ↔ WMS, MES ↔ LIMS, etc.)

You don't read these cover-to-cover. You reference them when building specific integrations or data models.

Parts 5 and 6: Implementation

How to actually move data:

  • Part 5: Message structure, transactions, MQTT topics—the "envelope" for your data (a sketch of the envelope idea follows this list)
  • Part 6: Distributed systems—how to federate data across multiple sites while maintaining coherence
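
As a loose illustration of the envelope idea (this is not the normative Part 5 transaction schema; the topic layout and field names are assumptions), a message might wrap its payload in metadata and travel on a topic that mirrors the equipment hierarchy:

```python
import json

# Assumed topic layout that mirrors the equipment hierarchy (enterprise/site/area/line).
topic = "acme/site-a/packing/line-3/job-order"

# Assumed envelope: metadata wrapped around the payload so receivers know what they are getting.
envelope = {
    "message_id": "b7c1e3d0-0001",
    "verb": "Process",                  # the intent of the message
    "object_type": "JobOrder",          # which ISA-95-style entity the payload describes
    "sent_at": "2025-10-06T06:00:05Z",
    "payload": {
        "job_order_id": "JO-1001",
        "equipment_id": "EQ-L3",
        "product": "Biscuit 250g",
    },
}

# With a real broker you would publish json.dumps(envelope) to the topic
# (for example with an MQTT client library); here the sketch just prints it.
print(topic)
print(json.dumps(envelope, indent=2))
```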

Start with Parts 1 and 3. Reference Parts 2, 4, 5, 6 as needed during implementation. Don't try to absorb everything at once.

The "Too Restrictive" Myth: Why It's Actually Backwards

Common objection: "ISA-95 is too rigid and top-down. It doesn't fit our unique processes. We need flexibility."

The reality is inverted. It's harder to scale analytics without ISA-95 than with it. Here's why:

The flexibility paradox:

Without a standard ontology, you build custom data structures for each use case. That feels flexible initially—you can model exactly what you need today. But then:

  • A new use case arrives requiring slightly different context
  • You add columns to tables, extend schemas, create new integration points
  • Existing queries break or need updating
  • Documentation becomes critical (and immediately outdated)
  • Only the original builders understand the structure
  • Scaling requires explaining custom models to every new person
  • Every site implements differently, preventing cross-site analytics

With ISA-95's ontology, entities and relationships already have defined placeholders. You might not fill every field on day one, but the structure accommodates growth (see the sketch after this list):

  • New use case needs maintenance context? That entity exists—just start populating it
  • Want to add material traceability? The material lot entity and its relationships are already defined
  • Need to track personnel assignments? The personnel resource entity is waiting
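
Here is a simplified sketch of what placeholders look like in practice: the fields are defined from day one and simply stay empty until a later use case needs them. (Field names here are illustrative, not the standard's exact object model.)

```python
from dataclasses import dataclass, field

@dataclass
class JobOrderRecord:
    # Filled in by use case 1 (OEE tracking):
    job_order_id: str
    equipment_id: str
    start: str
    end: str
    # Placeholders defined up front, left empty until a later use case needs them:
    consumed_lots: list = field(default_factory=list)          # material traceability
    personnel: list = field(default_factory=list)              # shift / operator assignments
    maintenance_requests: list = field(default_factory=list)   # maintenance context
    quality_results: list = field(default_factory=list)        # quality context

# Day one: only what OEE needs.
job = JobOrderRecord("JO-1001", "EQ-L3", "2025-10-06T06:00", "2025-10-06T14:00")

# Use case 2 arrives: start populating a placeholder; nothing else has to change.
job.consumed_lots.append("LOT-42")
print(job)
```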

Why this enables scale:

Everyone trained in ISA-95 understands what a "material lot" means. When you say "job order," there's no ambiguity about what data that includes. When you expand from one site to ten, you're not re-explaining custom models—you're using a shared vocabulary that works everywhere.

The evolution proof:

ISA-95 has been evolving for 25 years. Part 1 just received a new release in April 2025, and an update to Part 2 is on the way. This isn't theoretical—it's battle-tested across thousands of implementations.

Edge cases that feel unique usually aren't. When you think "this doesn't fit ISA-95," it typically means you haven't investigated deeply enough. The standard has seen 25 years of edge cases and evolved to handle them.

Use Case Stacking: The Implementation Pattern That Actually Scales

The trap most manufacturers fall into: trying to model everything before getting any value. The solution: use case stacking.

The pattern:

Use Case 1: OEE Tracking

Start with one specific report or analytics need. Let's say OEE for Line 3. To calculate OEE, you need:

  • Equipment definition and runtime
  • Schedule (planned production time)
  • Materials (to identify which products)
  • Downtime events

Model only what you need. Leave personnel, quality test results, maintenance work orders, and inventory empty. They have placeholders in ISA-95, but you're not using them yet. Implement this first use case completely.

Effort: Significant, because you're building the foundation.
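
For reference, OEE (overall equipment effectiveness) is conventionally Availability × Performance × Quality. A minimal sketch of that calculation from the data above, with invented numbers and the caveat that your site's counting rules may differ:

```python
# Inputs the first use case models: schedule, runtime, downtime, and counts.
planned_time_min = 480          # scheduled production time for the shift
downtime_min = 60               # sum of downtime events
ideal_rate_per_min = 100        # ideal units per minute for this product
total_units = 38_000            # units produced
good_units = 36_900             # units that passed quality

run_time_min = planned_time_min - downtime_min

availability = run_time_min / planned_time_min
performance = total_units / (run_time_min * ideal_rate_per_min)
quality = good_units / total_units

oee = availability * performance * quality
print(f"Availability {availability:.1%}, Performance {performance:.1%}, "
      f"Quality {quality:.1%}, OEE {oee:.1%}")
```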

Use Case 2: Material Traceability

Now you want to trace which raw material lots went into which finished goods. You need:

  • Material lots (new)
  • Material consumption records (new)
  • Job orders (already modeled)
  • Equipment (already modeled)

Half the data model already exists from Use Case 1. You're only adding material-specific entities.

Effort: ~50% of Use Case 1, because the foundation is in place.
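
A rough sketch of the new traceability query, with invented records. The job orders already exist from use case 1; only the consumption records are new:

```python
# Already modeled in use case 1: job orders and the equipment that ran them.
job_orders = [
    {"job_order_id": "JO-1001", "equipment_id": "EQ-L3", "produced_lot": "FG-9001"},
    {"job_order_id": "JO-1002", "equipment_id": "EQ-L3", "produced_lot": "FG-9002"},
]
# New in use case 2: which raw material lots each job order consumed.
consumption = [
    {"job_order_id": "JO-1001", "raw_lot": "LOT-42"},
    {"job_order_id": "JO-1002", "raw_lot": "LOT-42"},
    {"job_order_id": "JO-1002", "raw_lot": "LOT-43"},
]

def finished_goods_containing(raw_lot):
    """Trace a raw material lot forward to every finished-goods lot it fed."""
    jobs = {c["job_order_id"] for c in consumption if c["raw_lot"] == raw_lot}
    return [j["produced_lot"] for j in job_orders if j["job_order_id"] in jobs]

print(finished_goods_containing("LOT-42"))  # ['FG-9001', 'FG-9002']
```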

Use Case 3: Downtime Root Cause Analysis

You want to correlate downtime with maintenance history and quality issues. You need:

  • Downtime events (already captured for OEE)
  • Equipment (already modeled)
  • Maintenance work orders (new)
  • Quality test results (new)

Effort: ~30% of Use Case 1, because most of the infrastructure already exists.
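
A minimal sketch of the kind of cross-domain question this unlocks, with invented records and time handling reduced to dates: which downtime events follow recent maintenance on the same equipment?

```python
from datetime import date, timedelta

downtime_events = [
    {"equipment_id": "EQ-L3", "date": date(2025, 10, 6), "minutes": 45, "reason": "jam"},
    {"equipment_id": "EQ-L7", "date": date(2025, 10, 6), "minutes": 20, "reason": "changeover"},
]
maintenance_orders = [
    {"equipment_id": "EQ-L3", "completed": date(2025, 10, 3), "work": "belt replacement"},
]

def recent_maintenance(event, window_days=7):
    """Maintenance completed on the same equipment within the window before the downtime."""
    return [m for m in maintenance_orders
            if m["equipment_id"] == event["equipment_id"]
            and timedelta(0) <= event["date"] - m["completed"] <= timedelta(days=window_days)]

for ev in downtime_events:
    print(ev["equipment_id"], ev["reason"], "->", recent_maintenance(ev))
```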

The compounding effect:

Each new use case requires less effort because you're building on existing models. By Use Case 5, you might only need to add one or two new entity types. The ontology makes this incremental approach possible because placeholders exist for everything.

Multi-domain convergence:

Eventually you have manufacturing, maintenance, quality, and inventory data in one place. Suddenly you can answer questions that were impossible before:

  • Why did Line 2 wait 45 minutes yesterday? (Inventory data shows you were waiting for a quality lab result)
  • Why do we have more downtime on certain products? (Maintenance history correlates with specific material suppliers)
  • Which shifts produce better quality? (Cross-reference personnel assignments with quality outcomes)

This convergence only works if all domains use the same ontology. Otherwise you're back to custom integrations between systems.

Conclusion

You can't build scalable analytics or reliable AI on fragmented data with inconsistent meanings. The Excel spreadsheet problem isn't about tools—it's about the absence of shared understanding across your organization.

ISA-95 provides that shared understanding. It's not a top-down structure you must follow rigidly. It's an ontology that defines what manufacturing entities exist, how they relate, and what context surrounds them. Every entity has a placeholder, even if you don't use it today.

The manufacturers succeeding with AI aren't building custom data models at each site. They're adopting proven standards that let them scale incrementally through use case stacking. They start with one report or one analytics need, model just what's required, then expand by filling in placeholders that already exist.

Your competitors are probably still building custom models for every project, creating technical debt that makes scaling impossible. That's your advantage window. Adopt a standard ontology, start with one use case, stack additional use cases on the same foundation, and build the shared vocabulary your AI initiatives actually need.

Because when your CEO asks "what's our production efficiency?", there should be one answer based on one model with one set of calculation rules. Not five different answers from five different Excel spreadsheets that all claim to be truth.

Start with Parts 1 and 3 of ISA-95. Map your current activities. Identify one valuable use case. Model just what that needs. Then stack the next use case on top. The ontology grows with you, not against you.

And 25 years of evolution across thousands of implementations has proven one thing clearly: the manufacturers who adopted standardized ontologies early are now deploying AI at scale, while those still building custom models are stuck in pilot purgatory.