Introduction
The excitement around generative AI (GenAI) has been undeniable, promising revolutionary changes across industries. However, for those of us in the world of industrial control and automation, the realities of implementing these powerful technologies are a little more nuanced. While large language models (LLMs) demonstrate impressive text generation and knowledge synthesis capabilities, their inherent probabilistic nature often falls short of the levels of certainty required for critical industrial processes. The issue of “hallucinations”, where an LLM confidently produces an incorrect output, underscores the need for caution. An experienced professional is still needed to interpret and verify the results, which limits how far the technology can be trusted on its own. In short, a technology based on “likely outcomes” often lacks the robustness required for real-world applications.
For many potential industrial use cases, simple prompt-based interactions with GenAI models provide only limited value. The “prompt-engineering” paradigm (where you ask ChatGPT for an answer) is quickly reaching a point of diminishing returns. So, is the current hype cycle around GenAI fading, and are we approaching the trough of disillusionment? The answer lies not in discarding the technology, but in finding ways to adapt it to more specialised industrial needs. Here, AI agents may have an important role to play.
The Rise of Industrial AI Agents
AI agents are more than just enhanced LLMs. They can leverage the power of LLMs, but are designed with specific tasks and objectives in mind. The choice of underlying AI technology (or model) may be “right-sized” for the task at hand. Unlike a general-purpose LLM such as Gemini, Grok or ChatGPT that responds to a wide range of questions, an agent is designed to execute pre-defined tasks autonomously once initial parameters are provided. Agents represent a more practical way to use AI for specific industrial purposes, reducing the statistical uncertainty inherent in standard LLM prompting.
An AI agent narrows the focus of the LLM to well-defined tasks, and can therefore deliver more targeted and predictable outcomes. But to perform well, an agent needs context, and this is obtained from a “knowledge graph”.
The Importance of Context: Knowledge Graphs
The core limitation of relying solely on LLMs for industrial applications is that they lack deep contextual information about plant- or site-specific processes. A generic LLM has no idea how equipment is configured in your particular plant or site. This is why context is important. The solution lies in using “knowledge graphs”.
A knowledge graph is a structured way of organising information and the relationships between its elements. It’s like a map of your plant’s knowledge, going beyond simple databases by connecting information in a way that makes it meaningful to the agent. Knowledge graphs can represent not only the equipment in a facility and its relationships, but also plant-specific rules, procedures, and the historical performance data associated with the plant.
For instance, a knowledge graph might show that a specific pump (node) is connected to a particular line (edge) which is part of a specific process, that the pump was repaired a couple of months ago, and that its current operating pressure is outside a defined threshold. Another example might be a safety procedure that defines the PPE (Personal Protective Equipment) required when working on that line.
Rather than searching through isolated data points, the AI agent can follow the web of connections within the knowledge graph. This allows it to draw more accurate conclusions and to make decisions grounded in the relevant plant context.
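As a minimal sketch, the pump example above could be captured with a general-purpose graph library such as networkx; the node names, attributes and relationship labels below are purely illustrative, not a prescribed schema.

```python
import networkx as nx

# Build a tiny plant knowledge graph: nodes are assets, procedures and
# records; edges carry the relationship type.
kg = nx.DiGraph()
kg.add_node("pump_P101", type="pump", operating_pressure_bar=9.2, max_pressure_bar=8.5)
kg.add_node("line_L7", type="line")
kg.add_node("process_crude_feed", type="process")
kg.add_node("wo_2024_118", type="work_order", summary="Seal replacement", date="2024-11-03")
kg.add_node("proc_ppe_hot_work", type="procedure", required_ppe=["face shield", "heat-resistant gloves"])

kg.add_edge("pump_P101", "line_L7", relation="connected_to")
kg.add_edge("line_L7", "process_crude_feed", relation="part_of")
kg.add_edge("wo_2024_118", "pump_P101", relation="performed_on")
kg.add_edge("proc_ppe_hot_work", "line_L7", relation="applies_to")

def context_for(asset: str) -> dict:
    """Collect everything directly related to an asset by walking its edges."""
    related = [
        {"node": nbr, "relation": data["relation"], **kg.nodes[nbr]}
        for _, nbr, data in kg.out_edges(asset, data=True)
    ] + [
        {"node": src, "relation": data["relation"], **kg.nodes[src]}
        for src, _, data in kg.in_edges(asset, data=True)
    ]
    return {"asset": asset, **kg.nodes[asset], "related": related}

# The agent can now see the pressure excursion, the recent repair and the
# PPE procedure for the connected line in a single contextual view.
print(context_for("pump_P101"))
```

In practice the graph would live in a dedicated graph database rather than in memory, but the principle is the same: the agent asks about an asset and receives the connected facts rather than isolated records.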
Standardising Knowledge for Interoperability
Creating this context as a knowledge graph is obviously no small task. Just as industrial communications are standardised via protocols (such as Modbus or Profibus) to facilitate the exchange of data, the same concept needs to apply to higher-level information. We ideally need structured information models that define common industrial concepts.
Think about what occurred in the 1990s with the introduction of electronic data interchange (EDI). Back then, every large business was struggling with inconsistent business practices (particularly for invoicing, payments, and orders). With EDI, the format of each “business document” was standardised and codified using a universal, ubiquitous system. This solved a huge problem because companies could then exchange business-critical documents with one another and be confident that the content would be understood. This same concept is now being applied to industrial information and data.
There are many different industrial information model standards that could be used to define different aspects of industrial information. Examples include:
- ISA-95: Defines a hierarchical model for enterprise to control system integration.
- ISO 15926: A standard for representing data about engineering projects, with a particular focus on process plants.
- OPC UA: While primarily a communication standard, OPC UA also has an information modelling component.
- IEC 61131-3: Standardises programming languages for PLCs and related automation.
- Asset Administration Shell (AAS): Developed for Industrie 4.0, providing a digital representation of an asset and all its relevant data.
These data models, when serialised, can be stored as JSON key-value pairs in a graph database and therefore provide a common standard for the storage and exchange of information.
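As an illustration of that idea, the snippet below serialises a deliberately simplified, AAS-inspired description of a pump to JSON. The field names are hypothetical and are not the normative Asset Administration Shell metamodel; they are only meant to show how a structured model becomes an exchangeable document.

```python
import json

# A simplified, AAS-inspired description of a pump. The field names here
# are illustrative only; the real AAS metamodel defines its own structures
# (AssetAdministrationShell, Submodel, SubmodelElement, ...).
asset = {
    "idShort": "PumpP101",
    "assetKind": "Instance",
    "submodels": [
        {
            "idShort": "TechnicalData",
            "elements": {"manufacturer": "ExamplePumps", "ratedPressureBar": 8.5},
        },
        {
            "idShort": "OperationalData",
            "elements": {"operatingPressureBar": 9.2, "lastServiceDate": "2024-11-03"},
        },
    ],
}

# Serialised once, the same document can be stored as key-value properties
# on a node in a graph database and exchanged between systems.
print(json.dumps(asset, indent=2))
```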
Standards take a long time to be agreed and adopted. But the standardisation of industrial knowledge will certainly facilitate the application of specialised industrial agents, perhaps developed by third parties, directly to your own plant. In a typical factory, you probably don’t want to develop your own proprietary standard.
How AI Agents Work: Beyond Simple Prompts
An industrial AI agent is more than a model responding to a prompt. Instead, it’s a process involving specific stages, sketched in code after the list below:
- Task Specification: The agent is configured with a specific, well-defined task within a narrow scope. For example, an agent might be set up to monitor for a specific type of equipment fault or to generate a preventative maintenance report.
- Contextualisation: The agent accesses relevant contextual data from the knowledge graph to inform its task.
- LLM Execution: The agent uses an appropriate AI model, often an LLM, to perform the core task, but with this additional context.
- Output Formatting and Validation: An agent also allows you to specify the format and type of output permitted from the language model. Specific rules govern how the LLM’s results can be used, and a validation mechanism further reduces uncertainty.
- Learning and Feedback Loop: Crucially, agents can incorporate historical interaction data to further refine their operation. In theory, with each successful interaction, the agent’s effectiveness improves, creating a continuous self-learning feedback loop.
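A minimal sketch of these stages is shown below. The `llm_complete` and `context_for` functions are placeholders for whichever model endpoint and knowledge-graph lookup a real agent would use, and the maintenance scenario and action vocabulary are purely illustrative.

```python
from dataclasses import dataclass

def llm_complete(prompt: str) -> str:
    """Placeholder for whichever model endpoint the agent actually calls."""
    raise NotImplementedError("wire this up to your LLM provider")

def context_for(asset_id: str) -> dict:
    """Placeholder for the knowledge-graph lookup (see the earlier sketch)."""
    raise NotImplementedError("query your knowledge graph here")

@dataclass
class MaintenanceAgent:
    """A narrowly scoped agent: recommend an action for a single asset."""
    allowed_actions: tuple = ("inspect", "schedule_maintenance", "no_action")

    def run(self, asset_id: str) -> dict:
        # 1. Task specification: the task and its scope are fixed at design time.
        # 2. Contextualisation: pull only the facts relevant to this asset.
        context = context_for(asset_id)

        # 3. LLM execution: the model sees the task plus the retrieved context.
        prompt = (
            "You are a maintenance assistant. Using only the context below, "
            f"recommend exactly one of {self.allowed_actions} with a short reason.\n"
            f"Context: {context}"
        )
        raw = llm_complete(prompt)

        # 4. Output formatting and validation: reject anything outside the
        #    allowed vocabulary instead of passing it downstream unchecked.
        action = next((a for a in self.allowed_actions if a in raw), None)
        if action is None:
            raise ValueError(f"Model output failed validation: {raw!r}")

        # 5. Learning and feedback: a fuller implementation would log the
        #    validated result (and any operator correction) for later refinement.
        return {"asset": asset_id, "action": action, "rationale": raw}
```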
Specialisation and Orchestration
The real power of agents comes from their specialisation. In an industrial setting, you might have dedicated agents for safety compliance, quality control, predictive maintenance, inventory management, energy optimisation, and more. Instead of one general-purpose AI trying to manage all facets of a plant, many specialised agents can work together, each responsible for a defined area of expertise.
Provided the knowledge graph follows industry standards, these specialised agents could be supplied by third parties; a company that specialises in energy optimisation, for example, could develop and provide its proprietary agents for your plant.
This multi-agent approach needs a mechanism for coordination. An orchestration layer enables interaction between agents and ensures that data is passed smoothly between them. Orchestration allows complex tasks to be tackled by combining the capabilities of many agents, much as a multidisciplinary team of engineers might operate when solving complex problems in the real world.
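A bare-bones orchestrator might look like the sketch below: it simply routes a plant event to whichever specialised agents are registered for the relevant concerns and merges their findings. The agent names and the single-function interface are assumptions made for illustration, not a reference design.

```python
from typing import Callable

# Each specialised agent exposes the same minimal interface: it takes an
# asset identifier and returns a structured finding. The "safety" and
# "maintenance" specialists below are purely illustrative stand-ins.
AgentFn = Callable[[str], dict]

class Orchestrator:
    """Routes an event to the relevant specialised agents and merges results."""

    def __init__(self) -> None:
        self._agents: dict[str, AgentFn] = {}

    def register(self, name: str, agent: AgentFn) -> None:
        self._agents[name] = agent

    def handle_event(self, asset_id: str, concerns: list[str]) -> dict:
        findings = {}
        for concern in concerns:
            agent = self._agents.get(concern)
            if agent is None:
                continue  # no specialist registered for this concern
            findings[concern] = agent(asset_id)
        return {"asset": asset_id, "findings": findings}

# Usage sketch: a pressure excursion on pump_P101 is routed to the safety
# and maintenance specialists, each of which would work from the shared
# knowledge graph in a real deployment.
orchestrator = Orchestrator()
orchestrator.register("maintenance", lambda asset: {"action": "schedule_maintenance"})
orchestrator.register("safety", lambda asset: {"ppe": ["face shield", "gloves"]})
print(orchestrator.handle_event("pump_P101", ["safety", "maintenance"]))
```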
The Path Forward
Like much of the AI world, agent technology is still maturing, and there is much we still need to learn. There is limited real-world experience with industrial AI agents so far, but the potential for reliability, scalability, and true automation is undeniable.
Agents provide a viable path for moving beyond the limitations of simple prompting, enabling us to scale generative AI to real industrial use cases. The complexity of industrial environments requires robust and reliable solutions, and AI agents are an important piece of that puzzle. Platforms that can configure, manage and orchestrate multiple AI agents are still evolving. Together with a knowledge graph capability, these AI agent platforms might just be the catalyst that makes industrial AI a practical and widespread reality.
The challenge for technical decision-makers is to navigate the rapidly changing landscape and bet on the right platform. The next 12 months will be critical in shaping how AI will be deployed in the industrial space.
Conclusion
The initial enthusiasm for GenAI has given way to a more pragmatic understanding of what’s required for industrial applications. The focus now needs to shift away from “general purpose” LLMs and towards the development of task-specific AI agents. The combination of these agents, working in concert with a knowledge graph of plant-specific rules and supported by a robust orchestration engine, will finally make industrial AI a viable reality. As automation specialists, our role is to embrace these developments and steer our companies’ adoption of this technology in the right direction, ensuring that our industrial processes are more efficient, robust, reliable, and safe.