November 9, 2025

The conversation about AI in manufacturing often gets stuck between two extremes: either it's going to replace all workers, or it's just hype that won't affect the industry. Neither view matches what's actually happening on factory floors. AI is already improving quality control, helping workers do their jobs better, and making engineering tasks faster. The question isn't whether AI will impact manufacturing. It's how to deploy it effectively.
Roey Mechrez leads AI development at Tulip and manages their EMEA operations. With a PhD in computer vision and about ten years working with AI, including running his own company, he brings both technical depth and practical implementation experience to the discussion.
Two fundamentally different approaches exist for digital transformation in manufacturing. Understanding the difference helps explain why AI implementations succeed or fail.
Traditional MES implementations start by mapping the entire process. You plan everything from beginning to end, build it with conventional tools over a year or two, then deploy the complete system to production and change everything at once. This approach is rigid, slow to implement, hard to integrate, and difficult to modify.
The composable application approach works differently. You build one app to solve one problem. Work instructions for this workstation. Quality checks for that process. Kitting for this area. Each application connects to people, machines, data, devices, and other systems. Time to value is much shorter. You solve problems incrementally rather than attempting everything simultaneously.
This difference matters for AI deployment. Adding computer vision quality inspection to a monolithic MES might require vendor involvement, customization, and months of work. Adding it to a composable platform might take days or weeks because the platform expects extensions and integrations.
The composable approach also enables continuous improvement. You identify a problem, build a solution, deploy it, learn from it, and improve it. Then move to the next problem. This matches how manufacturing actually operates far better than a big-bang implementation does.
Traditional software development requires IT resources for every change. Want to modify a form? Need a developer. Add a button? Need a developer. Change a workflow? Need a developer.
No-code platforms flip this model. Manufacturing engineers and process engineers can build solutions themselves. They understand the shop floor better than IT does. They know which problems matter and what solutions would work. Giving them tools to build applications directly unleashes creativity and speeds implementation.
This doesn't eliminate IT. You still need IT for infrastructure, security, integration with enterprise systems, and governance. But daily operational improvements can happen without IT bottlenecks.
Three main groups benefit from this approach:
Engineers use the platform as their toolbox for continuous improvement. They identify problems and build solutions rather than requesting help from others or searching for external tools.
Workers use the applications engineers create. They get better tools customized for their specific needs rather than generic systems or paper-based processes. Better user experience means better adoption and better results.
Managers get visibility into what's happening on the shop floor. Data from connected applications provides the information needed for decisions. Full traceability becomes possible when systems connect to people, processes, and equipment.
Computer vision solves real problems in manufacturing today. The technology is mature enough for production use, though implementation requires understanding what works and what doesn't.
Quality inspection is the most common application. Cameras capture images of parts or assemblies. AI models identify defects. This works particularly well for visual defects that human inspectors find tedious or inconsistent to catch. Scratches on painted surfaces, missing components, incorrect assembly orientation - computer vision handles these reliably.
The key to success is starting with good data. You need images of good parts and images of defective parts, properly labeled. The more examples you have, the better your model performs. This seems obvious, but many organizations underestimate how much labeled data they need.
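To make this concrete, here is a minimal sketch of what inspection at a single workstation can look like in code: grab a frame from the station camera, run it through a trained classifier, and compare the result against a threshold. The model file name, camera index, and 0.5 threshold are placeholder assumptions for illustration, not something any particular platform prescribes.

```python
# Minimal sketch: run a trained binary defect classifier on one camera frame.
# Assumes a TorchScript model exported earlier to "defect_classifier.pt"
# (hypothetical file) that outputs a single defect logit for a 224x224 RGB image.
import cv2
import torch
import torchvision.transforms as T

preprocess = T.Compose([
    T.ToPILImage(),
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

model = torch.jit.load("defect_classifier.pt")  # hypothetical trained model
model.eval()

camera = cv2.VideoCapture(0)          # station camera; the index is site-specific
ok, frame_bgr = camera.read()
camera.release()
if not ok:
    raise RuntimeError("Could not read a frame from the camera")

frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
batch = preprocess(frame_rgb).unsqueeze(0)      # shape: (1, 3, 224, 224)

with torch.no_grad():
    defect_prob = torch.sigmoid(model(batch)).item()

# The pass/fail threshold is a process decision, not a model property.
print("DEFECT" if defect_prob > 0.5 else "PASS", round(defect_prob, 3))
```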
Work verification ensures operators perform steps correctly. Cameras confirm that workers picked the right parts, assembled components in the right order, or completed required actions. This reduces errors without making workers feel micromanaged, because the system provides immediate feedback rather than blame after the fact.
Safety monitoring watches for unsafe conditions or behaviors. Did the operator wear required safety equipment? Are hands near moving machinery? Is the work area clear of hazards? Computer vision can monitor continuously without fatigue.
The technology works best when deployed incrementally. Start with one use case at one workstation. Prove it works. Learn what challenges come up. Then expand based on what you learned.
Large language models are changing how people interact with manufacturing systems. Instead of navigating menus and forms, workers can ask questions in natural language and get helpful answers.
Think about a worker on the line who encounters an unusual situation. Traditionally, they might call a supervisor, consult a manual, or make their best guess. With a manufacturing copilot, they can describe the situation and ask what to do. The system understands the context - which product they're building, which step they're on, what data the sensors show - and provides relevant guidance.
This changes training dynamics. New workers can become productive faster because they have an expert assistant available immediately. Experienced workers can handle edge cases they haven't seen before without stopping production to find help.
The technology also helps engineers. When building applications or troubleshooting problems, they can describe what they want in plain language rather than learning specific technical interfaces. This lowers the barrier to building good solutions.
However, generative AI in manufacturing needs careful implementation. The system must understand manufacturing context, not just general knowledge. It needs access to your specific procedures, equipment data, and process information. Generic large language models don't know your facility.
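As a rough illustration of what "context" means in practice, here is a sketch that attaches station data and a procedure excerpt to an operator's question before calling an LLM. The station values and the procedure lookup are hypothetical placeholders, and the OpenAI Python client is just one example of an LLM API.

```python
# Sketch of a manufacturing copilot call: the value comes from the context you
# attach, not from the model alone. Station data and the procedure excerpt are
# hypothetical; in a real deployment the excerpt would come from a search over
# your own documents rather than being hard-coded.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Context the platform already knows about this station and unit.
station_context = {
    "product": "PCB assembly, variant B",                    # placeholder values
    "current_step": "Step 12: torque rear housing screws",
    "last_torque_reading": "2.1 Nm (spec: 2.4-2.6 Nm)",
}

procedure_excerpt = ("If torque is below spec, re-seat the screw and re-torque "
                     "once before flagging the unit for review.")

question = "The torque reading came out low on this unit. What should I do?"

messages = [
    {"role": "system",
     "content": "You are a shop-floor assistant. Answer only from the provided "
                "context and procedures. If unsure, tell the operator to escalate."},
    {"role": "user",
     "content": f"Station context: {station_context}\n"
                f"Relevant procedure: {procedure_excerpt}\n"
                f"Operator question: {question}"},
]

response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)
```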
Three main approaches exist for getting AI models working in your facility, each with different trade-offs.
Custom models built by your team or a partner specifically for your use case. This provides maximum control and optimization for your exact needs. But it requires significant data science expertise and time investment. You need labeled data, model training capability, and ongoing model maintenance.
Pretrained models that you adapt to your specific application through transfer learning. These models learned general patterns from large datasets, then you fine-tune them with your data. This approach requires less data and expertise than building from scratch while still providing good customization.
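A minimal sketch of this middle path, assuming a torchvision ResNet backbone and images sorted into good/defect folders; the epoch count and learning rate are illustrative assumptions, not recommendations.

```python
# Transfer-learning sketch: adapt a pretrained image model to a two-class
# good/defect problem by freezing the backbone and training a new head.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Expects labeled images in train/good/*.jpg and train/defect/*.jpg
train_data = datasets.ImageFolder("train", transform=transform)
loader = DataLoader(train_data, batch_size=16, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():                 # freeze the pretrained backbone
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)    # new head: good vs. defect

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):                           # a handful of epochs, for illustration
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

torch.save(model.state_dict(), "defect_classifier_finetuned.pt")
```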
Out-of-box models that work without customization for common use cases. These handle standard problems like OCR, barcode reading, or generic object detection. They're fastest to deploy but least customizable.
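For comparison, out-of-box capabilities such as barcode reading and OCR can run with no training at all. The sketch below assumes the pyzbar and pytesseract libraries and a photo of a part label; pytesseract also requires the Tesseract binary installed on the machine.

```python
# Out-of-box example: read a barcode and printed text from a label photo
# with off-the-shelf libraries, no model training required.
from PIL import Image
from pyzbar.pyzbar import decode
import pytesseract

label = Image.open("label_photo.jpg")   # hypothetical image of a part label

# Barcode / QR decoding
for code in decode(label):
    print(code.type, code.data.decode("utf-8"))

# OCR of printed text (lot numbers, dates, and so on)
print(pytesseract.image_to_string(label))
```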
Most organizations end up using all three approaches for different problems. Use out-of-box models where they work. Use pretrained models with transfer learning for common problems that need some customization. Build custom models only for unique problems where other approaches don't deliver adequate results.
Every AI project starts with data. Too little data, the wrong data, or poorly labeled data means a failed project, regardless of how sophisticated your models are.
For computer vision applications, you need hundreds or thousands of labeled images depending on use case complexity. Simple defect detection might work with a few hundred examples. Complex inspection with many defect types might need thousands.
The data must represent actual conditions. Images captured in perfect lighting won't help if production lighting varies. Examples from one machine won't fully prepare models for other machines with slightly different characteristics. Your training data should match your deployment environment.
Labeling data takes time. Someone needs to mark which images show good parts and which show defects. For complex problems, they need to outline exactly where defects appear. This labeling work often takes more time than actual model training.
Plan for data collection and labeling from the start. Don't assume you can train models with whatever images happen to be available. Successful projects treat data preparation as a critical workstream, not an afterthought.
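One way to treat data preparation as its own workstream from day one is to build a manifest of labeled images and hold out a validation set before any model training begins. The folder layout and 80/20 split below are assumptions for illustration.

```python
# Sketch: build a labeled manifest and a stratified train/validation split
# before training anything, so class balance and data volume are visible early.
import csv
from pathlib import Path
from sklearn.model_selection import train_test_split

paths, labels = [], []
for label in ("good", "defect"):                       # one folder per label
    for img in Path("labeled_images", label).glob("*.jpg"):
        paths.append(str(img))
        labels.append(label)

print({lbl: labels.count(lbl) for lbl in set(labels)})  # check class balance

# Stratified split so both classes appear in train and validation sets.
train_p, val_p, train_l, val_l = train_test_split(
    paths, labels, test_size=0.2, stratify=labels, random_state=42)

for name, rows in (("train.csv", zip(train_p, train_l)),
                   ("val.csv", zip(val_p, val_l))):
    with open(name, "w", newline="") as f:
        csv.writer(f).writerows(rows)
```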
AI applications need to fit into existing workflows and systems. A quality inspection model that works perfectly in isolation but can't connect to your MES provides limited value.
Integration requirements include:
Data flow from PLCs, sensors, and other equipment into AI systems. The model needs the right inputs at the right time to make predictions (a minimal connectivity sketch follows this list).
Results flow from AI systems back to MES, quality systems, or wherever decisions need to happen. Detecting a defect matters only if the system can route the part appropriately.
User interfaces that let workers interact with AI systems naturally. If operators need to switch between multiple disconnected systems, adoption suffers.
Configuration and monitoring tools that let engineers manage AI applications without data science expertise. Models need retraining as conditions change. Performance needs monitoring to catch drift before it causes problems.
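A minimal connectivity sketch tying these pieces together might read a trigger and serial number from a PLC over OPC UA, run the inspection, and post the result to an MES endpoint. The server address, node IDs, MES URL, and payload shape are all hypothetical and site-specific, and run_inspection() stands in for whatever model you actually deploy.

```python
# Connectivity sketch for the list above: PLC -> AI check -> MES.
import requests
from opcua import Client  # python-opcua package

def run_inspection() -> dict:
    # Placeholder for the real model call (e.g. the camera + classifier sketch above).
    return {"result": "PASS", "defect_probability": 0.03}

plc = Client("opc.tcp://192.168.0.10:4840")       # hypothetical PLC address
plc.connect()
try:
    serial_number = plc.get_node("ns=2;s=Station1.SerialNumber").get_value()
    part_present = plc.get_node("ns=2;s=Station1.PartPresent").get_value()
finally:
    plc.disconnect()

if part_present:
    inspection = run_inspection()
    # Route the result to where the decision happens (MES / quality system).
    response = requests.post(
        "https://mes.example.com/api/inspections",  # hypothetical endpoint
        json={"serial_number": serial_number, **inspection},
        timeout=5,
    )
    response.raise_for_status()
```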
Platforms designed for manufacturing understand these integration needs. They provide standard interfaces to common equipment and systems. They handle edge cases like network interruptions or sensor failures. They give engineers tools to manage AI applications alongside other shop floor systems.
Technology works only if people use it. Many AI projects fail not because the technology doesn't work but because workers resist adoption or management doesn't support the change.
Several factors improve adoption:
Involve workers early. Don't surprise them with new systems. Explain what you're doing and why. Get their feedback during development. Workers often identify problems or opportunities that engineers miss.
Start with problems workers care about. If your first AI project makes their jobs harder or only benefits management, expect resistance. If it solves a problem they've been complaining about, expect champions.
Provide clear value quickly. Don't spend months building something complicated. Deploy simple applications that work and provide obvious benefits. Build from those successes.
Train properly. Don't assume new systems are self-explanatory. Invest in training that helps workers understand not just how to use systems but why they're valuable.
Monitor and iterate. Systems that work in testing sometimes have problems in production. Pay attention to user feedback. Fix problems quickly. Show workers their input matters by acting on it.
AI will not replace factory workers in the near future. Manufacturing has been discussing lights-out factories for twenty years. It hasn't happened and won't happen soon.
People remain essential in manufacturing. Engineers design processes and solve problems. Workers handle variability and exceptions. Managers make decisions about priorities and resources. AI augments all these roles without replacing them.
What changes is the nature of work. Tasks that are repetitive, tedious, or require consistent attention become automated. This frees people to focus on work that requires judgment, problem-solving, and adaptability. Quality inspection by camera means inspectors can focus on complex issues rather than repetitive checks. Copilot assistance means workers can handle more complex products or process variations.
This shift creates opportunities for workers to develop higher-value skills. It makes manufacturing jobs more interesting and often safer. The challenge is helping workers transition to these new roles through training and support.
AI in manufacturing will continue advancing rapidly. Computer vision keeps improving. Large language models become more capable. New applications emerge regularly.
The organizations that succeed will treat AI as a tool for continuous improvement rather than a project with a defined end state. They'll deploy applications incrementally, learn from each implementation, and build on successes. They'll focus on augmenting people rather than replacing them. They'll invest in data quality and integration rather than just models.
Manufacturing isn't slow to adopt technology anymore. A new generation of engineers brings different expectations and approaches. Citizen developer platforms make change faster and less risky. Success comes from enabling these engineers with good tools and supporting them with good practices.