Aug 01, 2022
By Marie Oldfield (CStat, CSci, FIScT)
Condition-Based Monitoring (CBM) can be used on platforms ranging from wind turbines to submarines. Recently there has been considerable interest in the academic community in using AI to monitor systems. The journals contain many technical articles discussing potential ways forward, but the emphasis is on simple systems. As we move towards technological developments that could see AI integrated into CBM, it is pertinent to examine where integrating AI into complex systems has succeeded and where it has failed.
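To make the contrast concrete, the sketch below shows CBM in its simplest form: comparing a sensor trend against a fixed alarm threshold. The readings and the alarm limit are illustrative rather than drawn from any real platform; an AI-driven approach would replace the fixed threshold with a learned model of the system's behaviour.

```python
import statistics

# A minimal sketch of threshold-based condition monitoring, using
# hypothetical vibration readings and an illustrative alarm limit.
# Real CBM systems would use validated sensor pipelines and limits
# taken from the platform's engineering documentation.

VIBRATION_ALARM_MM_S = 7.1  # illustrative velocity limit, mm/s RMS

def check_condition(readings_mm_s: list[float]) -> str:
    """Classify component health from recent vibration readings."""
    level = statistics.mean(readings_mm_s)
    if level >= VIBRATION_ALARM_MM_S:
        return "alarm: schedule maintenance"
    return "healthy"

print(check_condition([2.3, 2.5, 2.4]))  # healthy
print(check_condition([7.4, 8.1, 7.9]))  # alarm: schedule maintenance
```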
Implementation of AI systems in industry has seen mixed results, from success in the wind industry to failure in aerospace. The complexity of the platform becomes a significant issue once AI is intertwined with it. When AI has significant control over the platform or its systems, and the natural feel of the platform is removed from the human operator, the consequences can be unintended and fatal, as in the Boeing 737 MAX disasters, where the automation was operating independently of the pilot. This lack of interface between the AI and the pilot, and the inability to switch the automation off, highlighted the critical need for human-computer interfacing.
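As an illustration of the interfacing point, the sketch below shows a controller in which the operator's override always wins. The names and structure are hypothetical and not based on any real flight-control system; the point is the design principle, not the implementation.

```python
from dataclasses import dataclass

# A minimal sketch of override-capable automation: the automated
# command is used only while the system is engaged, and the operator
# can disengage it unconditionally. The failure mode described above
# arises when automation cannot be disengaged.

@dataclass
class AutoController:
    engaged: bool = True

    def command(self, auto_input: float, manual_input: float,
                operator_override: bool) -> float:
        # The operator's override must win unconditionally.
        if operator_override:
            self.engaged = False
        return auto_input if self.engaged else manual_input

ctrl = AutoController()
print(ctrl.command(auto_input=1.2, manual_input=0.0, operator_override=False))  # 1.2
print(ctrl.command(auto_input=1.2, manual_input=0.4, operator_override=True))   # 0.4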
Given the complexity of systems, it is more important than ever to promote interdisciplinary working so that the end user understands and can operate the system effectively. It is important to deploy robust systems that have been developed in context and with the user and society at the heart of them.
A problematic point here is that systems are often designed around regulatory limits. When working with a complex system, each component has several limits: the physical limit, the engineering limit and the regulatory limit.
The physical limit of a system is the point at which the component breaks. Operators such as pilots and submarine engineers are aware of this limit, and a pilot may have to draw on that knowledge in a sudden emergency where their own experience is crucial to survival. This may lead to manoeuvres outside the regulatory envelope that are nevertheless safe and necessary. Systems are often designed and regulated in isolation, within the scope of 'foreseeable operating conditions'; however, the true operating context has infinite possibilities, and the responsibility falls on the operator when working outside normal operating conditions. Wider considerations then need to be met, and some safety aspects may need to be prioritised over others. In the context of a nuclear submarine, this may require whole-boat safety (i.e. the threat of sinking) to be put above the safety of the nuclear plant, and recognising when to do so. The engineering limit sits below the physical limit, incorporating the designer's safety margins. The regulatory limit is almost always the most risk averse: regulators and system design authorities use limits that may be academically reasonable but can practically prevent normal operations.
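This hierarchy of limits can be sketched as follows, with purely illustrative numbers. What matters is the ordering: the regulatory limit is the most conservative envelope, and the physical limit is the point of actual failure.

```python
# A minimal sketch of the limit hierarchy described above, with
# hypothetical units and values chosen only to show the ordering.

PHYSICAL_LIMIT = 100.0    # component breaks
ENGINEERING_LIMIT = 80.0  # design margin below failure
REGULATORY_LIMIT = 60.0   # most risk-averse envelope

def assess_load(load: float) -> str:
    if load > PHYSICAL_LIMIT:
        return "failure: component breaks"
    if load > ENGINEERING_LIMIT:
        return "outside design margin: damage likely"
    if load > REGULATORY_LIMIT:
        return "outside regulatory envelope: operator judgement required"
    return "within normal operating conditions"

# An emergency manoeuvre may sit between the regulatory and physical
# limits: prohibited on paper, survivable in practice.
print(assess_load(70.0))  # outside regulatory envelope: operator judgement required
```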
Until the gap between research and users is closed, it is difficult to see what benefit AI-driven CBM will bring, and the kinds of risk faced if that gap is not closed robustly are clear. To facilitate this technological development, the future use of AI to predict Remaining Useful Life (RUL) in systems as a CBM technique would require extensive validation of any system in the actual operational environment.
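As a sketch of what RUL prediction involves, the example below fits a linear degradation trend to hypothetical wear data and extrapolates to a failure threshold. The data, threshold and linear model are all assumptions for illustration; real components degrade in far more complex ways, which is precisely why validation in the operational environment matters.

```python
import numpy as np

# A minimal sketch of Remaining Useful Life (RUL) estimation: fit a
# linear trend to a hypothetical degradation signal and extrapolate
# to an illustrative failure threshold.

hours = np.array([0, 100, 200, 300, 400], dtype=float)
wear = np.array([0.10, 0.14, 0.19, 0.23, 0.28])  # illustrative wear metric
FAILURE_THRESHOLD = 0.50

# Least-squares linear fit: wear = slope * hours + intercept
slope, intercept = np.polyfit(hours, wear, deg=1)

# Extrapolate to the threshold and subtract hours already accrued
hours_to_failure = (FAILURE_THRESHOLD - intercept) / slope
rul = hours_to_failure - hours[-1]
print(f"Estimated RUL: {rul:.0f} operating hours")
```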