Using generative AI, specifically large language models (LLMs), predictive maintenance teams can query analytics projects in natural language and get quick, accurate answers to commonly asked questions in the moments that matter most. For example, teams can ask questions like the following and get immediate answers and visualizations:
- What is the maintenance plan for (X) site?
- Which three pieces of equipment have the highest chance of failure in the next two months?
- What maintenance activities are planned for the next month, and where?
Feature Highlights
- Respond at Critical Moments: Empower teams to give on-the-ground engineers instant answers to maintenance questions, all based on robust analytics and data.
- Improved Accuracy: The model’s responses are derived from the full set of available data, so answers stay relevant and complete regardless of how much data exists.
- Better Collaboration: Improve communication between teams and speed up on-the-ground collaboration.
- Increased Efficiency: Move from reactive to proactive maintenance to increase operational efficiency.
How It Works: Architecture
A full predictive maintenance application using traditional machine learning runs in Dataiku, fueled by past data on equipment failure. The predictive maintenance team and any on-the-ground technician can have a dynamic conversation with the output of the predictive maintenance project through a conversational box, which generates visuals and natural-language responses after calling an LLM via a public API.
The model works by interacting with the metadata (the data schema) of your available datasets rather than directly accessing the raw data. When a user submits a query, the AI model receives this metadata along with the user’s question.
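As a minimal sketch of this metadata-only step, the prompt sent to the LLM can be composed from dataset names and column definitions alone. The schema format and function names below are illustrative, not Dataiku's actual API:

```python
def build_metadata_prompt(schema, question):
    """Compose an LLM prompt from dataset metadata only.

    `schema` maps dataset names to column definitions; no row values
    are included, so raw data never leaves the local environment.
    """
    lines = ["You can see dataset schemas only, never row values.", ""]
    for dataset, columns in schema.items():
        cols = ", ".join(f"{c['name']} ({c['type']})" for c in columns)
        lines.append(f"Dataset '{dataset}': {cols}")
    lines.append("")
    lines.append(f"User question: {question}")
    return "\n".join(lines)

# Hypothetical schema for a predictive maintenance project.
schema = {
    "equipment_failures": [
        {"name": "equipment_id", "type": "string"},
        {"name": "failure_probability", "type": "double"},
        {"name": "site", "type": "string"},
    ],
}
prompt = build_metadata_prompt(
    schema, "Which three pieces of equipment are most likely to fail?"
)
```

The resulting string, together with the user's question, is what actually travels to the LLM.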
Upon receiving a query, the model generates a set of Dataiku instructions that is executed locally to build the required dashboard, giving the user a tailored response to their question.
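One way to picture this local execution step is a small dispatcher that interprets a structured instruction returned by the LLM against data that never leaves the environment. The instruction format and operation names here are hypothetical, chosen only to illustrate the pattern:

```python
def execute_instruction(instruction, datasets):
    """Run a model-generated instruction against local data.

    Only the instruction (a small dict) comes from the LLM; the data
    stays local. Operations are whitelisted, so the model cannot
    trigger arbitrary code.
    """
    rows = datasets[instruction["dataset"]]
    if instruction["op"] == "top_n":
        key = instruction["column"]
        ranked = sorted(rows, key=lambda r: r[key], reverse=True)
        return ranked[: instruction["n"]]
    raise ValueError(f"Unsupported operation: {instruction['op']}")

# Hypothetical local dataset and an instruction the LLM might return
# for "Which equipment has the highest chance of failure?"
datasets = {
    "equipment_failures": [
        {"equipment_id": "PUMP-1", "failure_probability": 0.82},
        {"equipment_id": "VALVE-7", "failure_probability": 0.35},
        {"equipment_id": "MOTOR-3", "failure_probability": 0.91},
    ],
}
instruction = {"op": "top_n", "dataset": "equipment_failures",
               "column": "failure_probability", "n": 2}
result = execute_instruction(instruction, datasets)
```

Because the whitelist bounds what an instruction can do, a malformed or malicious model response fails safely instead of running arbitrary logic.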
This approach keeps the model's responses relevant no matter the size or complexity of the underlying data, while preserving a high level of data privacy: no actual data values are transmitted to the model during the process. If tighter control over data and inputs is desired, this can be achieved by customizing the project to use a containerized version of the LLM.
Responsibility Considerations
This project uses an LLM to support the use of, and insights generated by, a predictive maintenance model. The model's output is delivered via a chatbot. It is important that the insights provided by the LLM are marked as AI generated and that end users know they are interacting with an AI system.
In addition to being transparent about the potential limitations of the LLM/chatbot’s responses, a panel titled “Data Sources Used for the Insights” shows users which columns from which datasets were used to generate the insights.
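A sketch of how a response payload might carry these transparency fields, assuming a hypothetical packaging function (field names are illustrative, not a documented Dataiku schema):

```python
def package_insight(text, sources):
    """Wrap an LLM-generated insight with transparency metadata.

    `sources` lists which columns from which datasets fed the insight,
    mirroring the "Data Sources Used for the Insights" panel.
    """
    return {
        "insight": text,
        "ai_generated": True,  # end users must know this is AI output
        "data_sources": sources,
    }

response = package_insight(
    "MOTOR-3 has the highest failure probability this quarter.",
    [{"dataset": "equipment_failures",
      "columns": ["equipment_id", "failure_probability"]}],
)
```

Keeping the flag and the source list in the payload itself, rather than only in the UI layer, means every downstream consumer of the insight inherits the same disclosure.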