Proposed BiLSTM-XAI approach
Step 2 The features extracted by the BiLSTM framework may be learned under an inappropriate loss function (a minimal training sketch follows this list).
Step 3 Such loss functions diminish detection accuracy and thereby produce misclassified results.
Step 4 This necessitates interpretable explanations that justify the misclassified results, so that the network can be protected against future attacks.
Step 5 This mechanism improves the transparency of the proposed intrusion detection system by making its prediction decisions interpretable.
Step 6 To achieve this, this paper introduces two explainable AI (XAI) models, namely LIME and SHAP (see the second sketch after this list).
Step 7 These XAI approaches increase interpretation efficiency through their ability to expose the impact of malicious data on the model's predictions.
Step 8 Thus, the BiLSTM-XAI approach detects unauthorized and abnormal behavior (i.e., intrusions) in the network.
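As a minimal sketch of the BiLSTM training setup behind Steps 2 and 3, assuming PyTorch; the 41-feature input (NSL-KDD-style), layer sizes, and cross-entropy loss are illustrative assumptions, not the paper's reported configuration:

```python
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    """Bidirectional LSTM that extracts features from traffic sequences
    and classifies each record as normal or intrusive."""
    def __init__(self, n_features=41, hidden=64, n_classes=2):
        super().__init__()
        self.bilstm = nn.LSTM(n_features, hidden, batch_first=True,
                              bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)  # forward + backward states

    def forward(self, x):                  # x: (batch, seq_len, n_features)
        out, _ = self.bilstm(x)            # concatenated bidirectional features
        return self.fc(out[:, -1, :])      # classify from the final time step

model = BiLSTMClassifier()
criterion = nn.CrossEntropyLoss()          # a poorly matched loss here is what
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # Step 3 warns about

x = torch.randn(8, 10, 41)                 # dummy batch: 8 sequences, 10 steps
y = torch.randint(0, 2, (8,))              # dummy labels: normal / intrusion
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```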
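And a sketch of how LIME and SHAP (Steps 6 and 7) could be attached to such a classifier to explain individual predictions, reusing `model` from the sketch above; the flat-vector wrapper, background data, and class names are illustrative assumptions:

```python
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer

def predict_proba(flat):
    """Wrap the BiLSTM so tabular explainers can probe it: each flat
    41-feature vector is treated as a one-step sequence."""
    x = torch.tensor(np.atleast_2d(flat), dtype=torch.float32).unsqueeze(1)
    with torch.no_grad():
        return torch.softmax(model(x), dim=1).numpy()

background = np.random.randn(50, 41)       # stand-in for training records
sample = np.random.randn(41)               # one record to be explained

# LIME: fits a local surrogate around this one prediction
lime_exp = LimeTabularExplainer(
    background, class_names=["normal", "intrusion"], mode="classification"
).explain_instance(sample, predict_proba, num_features=5)
print(lime_exp.as_list())                  # top features pushing the decision

# SHAP: additive per-feature attributions for the same prediction
shap_values = shap.KernelExplainer(
    predict_proba, background[:10]).shap_values(sample)
```

Both explainers return per-feature contributions, which is what lets an analyst see which traffic attributes drove a misclassification and judge whether the decision reflects genuinely malicious behavior.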