Figure 2 depicts the proposed BiLSTM-XAI approach for efficiently classifying the intrusions present in the industrial network. The step-by-step procedure of the BiLSTM-XAI approach is described as follows.

Proposed BiLSTM-XAI approach

Step 1 The input datasets, namely Honeypot and NSL-KDD, are fed into the BiLSTM framework, which classifies the data features and detects any abnormal features present in the network (see the classifier sketch after this list).
Step 2 The features extracted by the BiLSTM framework may be affected by some loss during extraction.
Step 3 This loss diminishes the detection accuracy, thereby causing misclassification.
Step 4 Therefore, interpretable explanations with justifications for the misclassified results are required to protect the network against future attacks.
Step 5 This mechanism improves the transparency of the proposed intrusion detection system by making its prediction decisions interpretable.
Step 6 To achieve this, this paper introduces the explainable AI models LIME and SHAP (a sketch of applying both explainers is given after this list).
Step 7 These XAI approaches increase interpretation efficiency through their ability to capture the impact of the malicious data on the model's predictions.
Step 8 Thus, the BiLSTM-XAI approach determines the presence of any unauthorized and abnormal behavior (i.e., intrusions) in the network.
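
The following is a minimal sketch of the BiLSTM classification stage described in Step 1, written with Keras. The layer sizes, the feature count (41 NSL-KDD features after numeric encoding), the binary normal/intrusion labelling, and the array names `X_train`/`y_train` are assumptions for illustration, not the exact configuration used in this work.

```python
# Minimal BiLSTM classifier sketch (assumed feature count, class count, and
# layer sizes; the paper's exact architecture and preprocessing may differ).
from tensorflow import keras
from tensorflow.keras import layers

n_features = 41   # NSL-KDD records have 41 features after encoding (assumption)
n_classes = 2     # normal vs. intrusion (assumption; could be multi-class)

model = keras.Sequential([
    # Each record is treated as a length-one sequence of its feature vector
    layers.Input(shape=(1, n_features)),
    layers.Bidirectional(layers.LSTM(64)),
    layers.Dense(32, activation="relu"),
    layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# X_train: (num_records, n_features) numeric array; y_train: integer labels
# (assumed names). Training would then look like:
# model.fit(X_train.reshape(-1, 1, n_features), y_train,
#           epochs=10, batch_size=128, validation_split=0.1)
```

Treating each record as a length-one sequence is only one way to feed tabular intrusion records to a bidirectional LSTM; a sliding window over consecutive records is an equally valid arrangement.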
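
Continuing that sketch, the snippet below shows one way to attach the LIME and SHAP explanations mentioned in Steps 6 and 7 to the classifier's predictions. The dummy data arrays, class names, and background-sample size are placeholders standing in for the preprocessed NSL-KDD data; `model` and `n_features` are taken from the previous sketch.

```python
# Sketch of explaining the classifier's predictions with LIME and SHAP.
# The random arrays below are placeholders for the preprocessed NSL-KDD data.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, n_features)).astype("float32")  # placeholder
X_test = rng.normal(size=(50, n_features)).astype("float32")    # placeholder
feature_names = [f"f{i}" for i in range(n_features)]

def predict_proba(x_2d):
    """Both explainers expect a (samples, features) -> class-probability function."""
    return model.predict(x_2d.reshape(-1, 1, x_2d.shape[1]), verbose=0)

# LIME: local explanation of a single (possibly misclassified) record
lime_explainer = LimeTabularExplainer(
    X_train, feature_names=feature_names,
    class_names=["normal", "intrusion"], mode="classification")
lime_exp = lime_explainer.explain_instance(
    X_test[0], predict_proba, num_features=10)
print(lime_exp.as_list())   # feature contributions for this one prediction

# SHAP: model-agnostic feature attributions over a batch of records
shap_explainer = shap.KernelExplainer(predict_proba, shap.sample(X_train, 50))
shap_values = shap_explainer.shap_values(X_test[:10])
# shap_values holds per-feature attributions; positive values push a record
# toward the corresponding class (e.g. "intrusion"), negative values away.
```

LIME here justifies an individual decision, while SHAP summarizes which features drive the model across many records; together they provide the transparency described in Steps 4 and 5.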