Explainable Anomaly Detection Framework for Maritime Main Engine Sensor Data
In this study, we propose a data-driven approach to condition monitoring of the marine main engine. Although several unsupervised methods already exist in the maritime industry, they share a common limitation: they do not explain why the model classifies a specific data instance as an anomaly. This study combines explainable AI techniques with an anomaly detection algorithm to overcome that limitation. As the explainable AI method, we adopt Shapley Additive exPlanations (SHAP), which is theoretically well founded and compatible with any machine learning algorithm. SHAP measures the marginal contribution of each sensor variable to an anomaly, so one can readily identify which sensor is responsible for a given anomaly. To illustrate the framework, we analyzed an actual sensor stream collected from a cargo vessel over 10 months. In this analysis, we performed hierarchical clustering on transformed SHAP values to interpret and group common anomaly patterns. We show that anomaly interpretation and segmentation based on SHAP values yields more useful interpretations than the same analysis without SHAP values.
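The workflow summarized in the abstract, fitting an unsupervised detector to the sensor stream and then attributing each flagged instance to individual sensors with SHAP, can be sketched roughly as follows. This is a minimal illustration assuming scikit-learn's IsolationForest and the shap package's TreeExplainer; the file name, sensor columns, and hyperparameters are placeholders rather than details reported in the paper.

```python
# Minimal sketch (not the paper's code): Isolation Forest on main-engine
# sensor data, with per-sensor SHAP attributions for flagged instances.
import pandas as pd
import shap
from sklearn.ensemble import IsolationForest

# Each row is one timestamped snapshot of main-engine sensor readings;
# "main_engine_sensors.csv" is a placeholder file name.
sensors = pd.read_csv("main_engine_sensors.csv", index_col="timestamp")

# Unsupervised detector; the contamination rate is an illustrative guess.
iso = IsolationForest(n_estimators=200, contamination=0.01, random_state=0)
iso.fit(sensors)
labels = iso.predict(sensors)              # -1 = anomaly, 1 = normal
anomalies = sensors[labels == -1]

# TreeExplainer handles tree ensembles such as Isolation Forest and returns
# one attribution per sensor per instance.
explainer = shap.TreeExplainer(iso)
shap_values = explainer.shap_values(anomalies)

# For the first flagged instance, rank sensors by |SHAP value| to see which
# readings pushed the detector toward "anomaly".
first = pd.Series(shap_values[0], index=sensors.columns)
print(first.abs().sort_values(ascending=False).head())
```

Ranking sensors by the magnitude of their SHAP values is what lets an engineer point to a specific channel as the main driver of a particular alarm, which is the interpretability gap the paper addresses.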
Main Authors: Donghyun Kim, Gian Antariksa, Melia Putri Handayani, Sangbong Lee, Jihwan Lee
Format: Article
Language: English
Published: MDPI AG, 2021-07-01
Series: Sensors (ISSN 1424-8220), vol. 21, no. 15, art. 5200
DOI: 10.3390/s21155200
Subjects: explainable AI; anomaly detection; isolation forest; Shapley Additive exPlanations (SHAP); clustering
Online Access: https://www.mdpi.com/1424-8220/21/15/5200
DOAJ record id: doaj-17ef7c3aee0b4ef29b0e03f120ed16d6
Author Affiliations:
Donghyun Kim: Korea Marine Equipment Research Institute, Busan 49111, Korea
Gian Antariksa: Department of Industrial and Data Engineering, Major in Industrial Data Science and Engineering, Pukyong National University, Busan 48513, Korea
Melia Putri Handayani: Department of Industrial and Data Engineering, Major in Industrial Data Science and Engineering, Pukyong National University, Busan 48513, Korea
Sangbong Lee: Lab021 Shipping Analytics, Busan 48508, Korea
Jihwan Lee: Department of Industrial and Data Engineering, Major in Industrial Data Science and Engineering, Pukyong National University, Busan 48513, Korea
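The last analysis step in the abstract, hierarchical clustering of transformed SHAP values to group recurring anomaly patterns, might look roughly like the sketch below. It continues from the `shap_values` and `anomalies` objects of the previous sketch; the per-instance normalization and the cluster count are assumptions for illustration, not the authors' settings.

```python
# Minimal sketch (continuing the one above): hierarchical clustering of
# transformed SHAP vectors to group anomalies that share a similar
# sensor-attribution signature.
import pandas as pd
from scipy.cluster.hierarchy import fcluster, linkage

# Assumed transformation: scale each instance's attributions by their L1 norm
# so clustering compares the pattern of responsible sensors, not magnitudes.
shap_df = pd.DataFrame(shap_values, index=anomalies.index,
                       columns=anomalies.columns)
pattern = shap_df.div(shap_df.abs().sum(axis=1), axis=0).fillna(0.0)

# Ward linkage, then cut the dendrogram into a handful of anomaly groups;
# the cluster count of 4 is illustrative only.
Z = linkage(pattern.values, method="ward")
clusters = fcluster(Z, t=4, criterion="maxclust")

# Average attribution pattern per group, and the sensor that dominates it.
summary = pattern.groupby(clusters).mean()
print(summary.abs().idxmax(axis=1))
```

Clustering in SHAP space rather than raw sensor space is what groups anomalies by *why* they were flagged, which is the segmentation the abstract reports as more interpretable than clustering without SHAP values.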