October 7, 2025
Here's a fundamental problem: general-purpose foundation models were built for the internet, trained on text, images, and video. They excel at language tasks: summarization, content generation, conversation. But they weren't trained on physics, thermodynamics, chemical engineering, equipment degradation, or process control. Asking them to optimize a refinery is like asking a literature professor to design a jet engine. Impressive credentials, wrong domain.
Callum Adamson, CEO of Applied Computing (creators of Orbital, the first domain-specific foundation model for refining and petrochemicals), argues that process industries need fundamentally different AI architecture. Not because general models aren't sophisticated, but because the consequences of being wrong are catastrophic. When temperatures exceed thresholds by even a few degrees, equipment fails. When pressure control falters, explosions happen. When predictions miss by 10%, millions of dollars vanish.
You need AI you can trust with your life. That requires AI grounded in physics, trained on process data, and architected to never hallucinate, because in process industries, hallucination kills people and destroys assets worth billions.
The hype around large language models created dangerous misconceptions about what AI can do in industrial environments. Understanding why general-purpose models fail clarifies what's actually required.
What general foundation models understand:
GPT-4, Claude, and Gemini were trained on internet-scale text, images, and video. They understand language, context, and the conventions of human communication. Genuinely impressive for knowledge work, content creation, coding assistance, and customer service.
What they don't understand:
Physics, thermodynamics, chemical engineering, equipment degradation, process control. These aren't failures; these models weren't designed for this domain. They're literature professors being asked to design jet engines.
Why hallucination is unacceptable:
In knowledge work, if ChatGPT hallucinates a citation or generates slightly inaccurate content, you fact-check and move on. Minor inconvenience.
In process control, if AI recommends increasing reactor temperature when it should decrease pressure, you get damaged equipment, safety incidents, and millions of dollars in losses.
Process industries operate at extreme conditions: 1,000°F temperatures, 2,000 PSI pressures, explosive hydrocarbons flowing through pipes near open flames. Small errors compound catastrophically. There's no margin for plausible-but-wrong recommendations.
The computational complexity barrier:
Refineries are among the most computationally complex environments on Earth, with thousands of interacting variables coupled through thermodynamic, chemical, and mechanical constraints.
General-purpose AI trained on text can't reason about this complexity using physics. It can generate language describing processes, but it can't calculate how adjusting one variable propagates through thermodynamic constraints to affect downstream equipment.
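To make that concrete, here is a toy sketch of the kind of deterministic propagation a physics-grounded model must perform; this is an illustration of the idea, not Orbital's actual method. The Antoine constants are the published values for water, and the downstream pressure limit is an assumed number for the example.

```python
import math

# Antoine equation: log10(P_vap) = A - B / (C + T), with P in mmHg and
# T in deg C. These are the published constants for water (~1-100 C);
# a real model would use the facility's actual components and ranges.
A, B, C = 8.07131, 1730.63, 233.426

def vapor_pressure_mmHg(temp_c: float) -> float:
    """Deterministic vapor pressure from the Antoine correlation."""
    return 10 ** (A - B / (C + temp_c))

def downstream_effect(temp_c: float, drum_limit_mmHg: float = 1000.0) -> str:
    """Propagate a heater outlet temperature through to an (assumed)
    downstream pressure constraint: a calculation, not a text prediction."""
    p = vapor_pressure_mmHg(temp_c)
    margin = drum_limit_mmHg - p
    status = "OK" if margin > 0 else "CONSTRAINT VIOLATED"
    return f"T={temp_c:.1f} C -> P_vap={p:.0f} mmHg, margin={margin:.0f} mmHg [{status}]"

# A few degrees of adjustment has a calculable, nonlinear downstream cost:
for t in (95.0, 100.0, 105.0, 110.0):
    print(downstream_effect(t))
```

Even this one-equation toy shows the shape of the problem: the consequence of a setpoint change is computed, not guessed, and a model that can only pattern-match text has no way to produce that number.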
Domain-specific AI for process industries must meet standards that general-purpose models can't achieve. The architecture differs fundamentally because the requirements differ.
Zero hallucination requirement:
Not "low hallucination." Zero. Every recommendation must be verifiable against physical laws. This requires:
The Rodrigo Benchmark:
The Turing Test asks: can AI convince you it's human? That's not good enough anymore. GPT-4 passes the Turing Test easily but fails in refineries.
The higher standard: can AI convince a PhD-level domain specialist it understands what it's recommending? If a process engineer with 35 years of experience reads an AI recommendation, they should be able to follow the reasoning, validate it against their understanding of physics, and trust the conclusion.
This requires AI that explains itself using the language of the domain—not just natural language, but the mathematics, physics, and engineering principles that govern the process.
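As an illustration of what explaining in the language of the domain can look like, here is a minimal sketch of a recommendation whose predicted effect an engineer can rederive by hand from the Arrhenius relation. The activation energy and temperatures are assumed numbers for the example, not values from any real unit.

```python
import math

R = 8.314      # gas constant, J/(mol*K)
Ea = 2.1e5     # assumed activation energy for coke formation, J/mol
T0 = 811.0     # current coil outlet temperature, K (~1000 F)
T1 = T0 - 2.2  # proposed change: lower by roughly 4 F

# Arrhenius form: rate ~ exp(-Ea / (R*T)). The predicted effect is a
# ratio an engineer can check on paper, not a plausible-sounding claim.
ratio = math.exp(-Ea / (R * T1)) / math.exp(-Ea / (R * T0))
print(f"predicted coking-rate change: {100 * (ratio - 1):+.1f}%")
```

The point is auditability: every number in the output traces back to a named physical relation and stated inputs that a 35-year veteran can verify.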
Accuracy benchmarks:
Traditional coke accumulation prediction: 30% margin of error
Domain-specific AI: less than 1% margin of error
That's not incremental improvement. That's transformational. The difference between planning shutdowns with two-week buffers vs. scheduling maintenance within 8-hour windows. Between losing millions to conservative operating policies vs. optimizing every hour of production.
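A back-of-envelope calculation, assuming a 45-day run between decokes, shows how the error margin drives the scheduling window:

```python
# Assumed: coke buildup forces a decoke roughly every 45 days.
run_length_days = 45

for label, error in (("traditional model", 0.30), ("domain-specific model", 0.01)):
    # The uncertainty on the end-of-run date scales with forecast error,
    # so the maintenance buffer must cover the worst case.
    buffer_hours = run_length_days * error * 24
    print(f"{label}: +/-{buffer_hours:.0f} h scheduling buffer")
```

A 30% error forces a buffer on the order of two weeks; a sub-1% error shrinks it to roughly a single shift.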
Edge deployment for real-time decisions:
Process control happens in milliseconds, not seconds. Cloud round-trip latency (100-300ms) is too slow. Domain-specific AI must run at the edge—inside the facility, connected directly to equipment.
This also solves data sovereignty: proprietary process parameters, quality issues, operational patterns never leave the facility. No cloud vendor sees your data. Security teams approve deployment because there's no external attack surface.
The instinct when building domain-specific AI is: train one massive model on everything. Time-series data, process documents, equipment specs, historical incidents—throw it all into one foundation model.
That approach fails for several technical reasons:
Incompressibility:
Large unified models can't be compressed enough to run on edge devices. You're back to cloud deployment with its latency and security problems.
Opacity:
When a single model generates recommendations, you can't see which aspects of its training influenced the output. Is it reasoning from physics? From historical patterns? From process documents? The decision-making process is opaque.
Fine-tuning difficulty:
Updating one aspect of model behavior requires retraining the entire model—expensive and risky. If you want to improve time-series forecasting without affecting safety constraint validation, you can't do it cleanly.
The federated intelligence approach:
Build specialized foundation models:
Model 1: Time-Series Foundation Model, trained on process and sensor data to forecast how operating variables evolve
Model 2: Physics-Based Foundation Model, grounding every prediction in thermodynamic and engineering constraints
Model 3: Language Foundation Model, translating the analysis into the language of the domain for operators and engineers
Collaboration framework:
These models don't work independently—they collaborate:
If any model disagrees with others, the system flags uncertainty rather than outputting conflicting information. This eliminates hallucination—contradictions get caught before reaching users.
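A minimal sketch of that cross-check, with hypothetical outputs from the numeric models and an assumed agreement tolerance:

```python
from statistics import mean

def federated_answer(predictions: dict, tolerance: float = 0.02) -> dict:
    """Cross-check the specialized models; surface uncertainty instead of
    choosing a plausible-sounding answer when they disagree."""
    values = list(predictions.values())
    center = mean(values)
    spread = max(abs(v - center) / center for v in values)
    if spread > tolerance:
        return {"status": "UNCERTAIN", "spread": round(spread, 3), "models": predictions}
    return {"status": "CONSENSUS", "value": round(center, 3)}

# Hypothetical coke-thickness estimates (mm) from two of the models:
print(federated_answer({"time_series": 4.10, "physics": 4.12}))  # consensus
print(federated_answer({"time_series": 4.10, "physics": 5.60}))  # flagged
```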
The most common question about industrial AI: should it automatically control processes, or should humans make final decisions?
Why closed-loop is dangerous today:
Advocating for AI-driven closed-loop control in refineries shows fundamental disrespect for the industry and the consequences at stake. Consider what you're proposing:
We're barely five years into the generative AI epoch; GPT-3 launched in 2020. These systems are infants in industrial time, barely crawling. They haven't proven themselves over decades of operation. They haven't been tested through every edge case, every unusual operating condition, every equipment failure mode.
The open-loop value proposition:
AI as the most intelligent coworker, not an autonomous system: it analyzes, explains, and recommends, while operators decide and act.
This creates immediate value without the risks of autonomous control. Operators get superhuman analytical capability while retaining authority and accountability.
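A sketch of what that interaction might look like in advisory mode, with an invented recommendation and, crucially, no write path to the control system; any accepted change still goes through the plant's existing DCS workflow.

```python
import time

def advise(recommendation: str, rationale: str) -> None:
    """Open loop: the system explains, the human decides. This function
    has no connection to the control system; it only records intent."""
    print(f"RECOMMENDATION: {recommendation}")
    print(f"RATIONALE:      {rationale}")
    decision = input("Apply via normal operating procedures? [y/N] ")
    # Even a 'y' here only logs operator intent; the actual setpoint
    # change is made by the operator through the existing DCS.
    print("Logged:", "accepted" if decision.lower() == "y" else "declined",
          "at", time.strftime("%H:%M:%S"))

advise("Reduce furnace outlet target by 4 F over the next shift",
       "Forecast coke accumulation exceeds plan; the physics model bounds "
       "the yield impact at the lower setpoint.")
```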
The trust-building path:
Maybe in 5-10 years, after foundation models prove themselves across thousands of deployments, we consider limited closed-loop applications. But that requires years of accumulated evidence from advisory-mode operation, validated across every edge case, operating condition, and failure mode.
The right sequence: build AI you can trust, deploy in advisory mode, accumulate evidence over years, then consider limited automation. Not: deploy autonomous AI immediately because the technology exists.
General-purpose foundation models revolutionized knowledge work but fail catastrophically in process industries. The architecture mismatch isn't fixable—you can't fine-tune ChatGPT into refinery control because it wasn't trained on physics, time-series data, or process engineering.
Domain-specific AI solves this through multi-model architectures: specialized foundation models for time-series, physics, and language working together with collaboration frameworks that eliminate hallucination. The result: AI you can trust because it explains itself using the laws of physics, not plausible-sounding language.
This isn't about autonomous control—that's years away and possibly dangerous. It's about giving operators superhuman analytical capability: 300 million calculations per second, 100% of data in real-time, root cause analysis in seconds instead of weeks.
The value is measurable: a 30% margin of error becoming sub-1% accuracy, double-digit millions in optimization value per facility, a 10,000x improvement in diagnosis speed.
Your competitors are either deploying now and building advantages, or they're closing facilities because traditional operations can't compete. With 100% demand growth over 15 years and thinning margins, AI isn't a nice-to-have. It's survival.
The question isn't whether AI will transform process industries. It's whether you'll be using it or losing to someone who is. Start with open-loop advisory deployments that prove value in weeks. Build trust over months. Scale across facilities over years. But start now, because waiting until AI is "mature" means entering the race after winners are already determined.
Because in the end, you either operate the most efficient facilities on Earth using superintelligence, or you close. There's no middle ground when competitors have 2-5% cost advantages and demand is doubling while supply shrinks.