Explainable Intelligence for Comprehensive Interpretation of Cybersecurity Data in Incident Management


AfzaliSeresht, Neda (2022) Explainable Intelligence for Comprehensive Interpretation of Cybersecurity Data in Incident Management. PhD thesis, Victoria University.

Abstract

On a regular basis, a variety of events take place in computer systems: program launches, firewall updates, user logins, and so on. To secure information resources, modern organisations have established security management systems. In cyber incident management, reporting and awareness-raising are critical to identifying and responding to potential threats. Security equipment and operating systems record 'all' events or actions, and major abnormalities are signalled via alerts based on rules or patterns. These alerts are investigated by specialists in the incident response team, who rely on the information in alert messages to respond appropriately; log files are not audited or traced until an incident happens. Insufficient information in alert messages, presented in a machine-friendly rather than human-friendly format, causes cognitive overload on already limited cybersecurity human resources. As a result, only a small number of threat alerts are investigated by specialist staff, and security holes may be left open to potential attacks. Furthermore, incident response teams have to derive the context of incidents by applying prior knowledge, communicate with the right people to understand what has happened, and initiate the appropriate actions. Insufficient information in alert messages and limited stakeholder participation raise challenges for the incident management process, which may result in late responses. In other words, cybersecurity resources are overburdened because alert messages provide an incomplete picture of an incident and fail to support the necessary decision making. The need to identify and track local and global sources in order to process and understand the critical elements of threat information adds to the cognitive load on an organisation's limited cybersecurity professionals.
This problem can be overcome with a fully integrated report that clarifies the incident and thereby reduces the overall cognitive burden. Instead of spending additional time investigating each incident, an effort that depends on the analyst's expertise and available time, a detailed incident report can be used as input for the human analyst. If cyber experts' cognitive load can be reduced, their response-time efficiency may improve. The relationship between achieving incident management agility through contextual analytics with a comprehensive report and reducing human cognitive overload is still being studied. There is currently a research gap in determining the key relationships between explainable Artificial Intelligence (AI) models and other technologies used in security management, and in understanding how explainable contextual analytics can provide distinct response capabilities. When using an explainable AI model for event modelling, research is needed on how to improve self and shared insight into cyber data by gathering and interpreting security knowledge, so as to reduce the cognitive burden on analysts. Because the level of cybersecurity expertise depends on prior knowledge or on a thorough report as input, explainable intelligent models for understanding these inputs are proposed. By enriching and interpreting security data in a comprehensive human-readable report, analysts can gain a better understanding of the situation and make better decisions. Explainable intelligent models are proposed for cyber incident management, interpreting security logs and cybersecurity alerts, and include a model applicable to fraud detection, where the large number of financial transactions necessitates human involvement in the analysis process.
In the cyber incident management application, a large and diverse amount of data is digested, and a report in natural language is developed to assist cyber analysts' understanding of the situation. The proposed model produces easy-to-read reports/stories by presenting supplementary information in a novel narrative framework that communicates the context and root cause of the alert. It was confirmed that, compared to baseline reports, a more comprehensive report that answers core questions about the actor (who), riskiness (what), evidence (why), mechanism (how), time (when), and location (where) supports real-time decision making by providing incident awareness. Furthermore, a common understanding of an incident and its consequences was established through a graph, resulting in Shared Situation Awareness (SSA) capability (the acquisition of cognition through collaboration with others). A knowledge graph, also known as a semantic knowledge graph, is a data structure that represents properties of, and relationships between, objects; it has been widely researched and utilised in information processing and organisation. The knowledge graph depicts the various connections between the alert and relevant information from local and global knowledge bases, interpreting knowledge in a human-readable format to enable greater engagement in cyber incident management. The proposed models are described as explainable intelligence because they reduce the cognitive effort required to process a large amount of security data. As a result, both self-awareness and shared awareness of what is happening in cybersecurity incidents are achieved. The analyses and survey evaluation empirically demonstrated the models' success in reducing the significant overload on expert cognition, providing more comprehensive information about incidents, and interpreting knowledge in a human-readable format to enable greater participation in cyber incident management.
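The knowledge-graph idea above can be illustrated with a minimal sketch: directed, labelled edges linking an alert to contextual entities, rendered as human-readable statements. All entity and relation names below are hypothetical illustrations, not the thesis's actual schema.

```python
# Minimal sketch of a knowledge graph as subject-predicate-object triples.
# Entity names and relation labels are hypothetical, for illustration only.
from collections import defaultdict

class KnowledgeGraph:
    """Stores directed, labelled edges between entities."""

    def __init__(self):
        self.edges = defaultdict(list)  # subject -> [(predicate, object)]

    def add(self, subject, predicate, obj):
        self.edges[subject].append((predicate, obj))

    def describe(self, subject):
        """Render a subject's relations as human-readable sentences."""
        return [f"{subject} {pred} {obj}" for pred, obj in self.edges[subject]]

# Link an alert to contextual information from local and global sources.
kg = KnowledgeGraph()
kg.add("alert-1042", "was triggered by", "user jsmith")
kg.add("alert-1042", "originated from", "host 10.0.0.7")
kg.add("alert-1042", "matches threat pattern", "brute-force login")

for sentence in kg.describe("alert-1042"):
    print(sentence)
```

Rendering the triples as plain sentences mirrors the thesis's aim of interpreting machine-oriented alert data in a human-readable format.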
Finally, the intelligent knowledge graph model is applied to transaction visualisation for fraud detection, an important challenge in security research. As with incident management, fraud detection methods need to be more transparent, explaining their results in more detail. Although fraudulent practices are always evolving, investigating money laundering with an explainable AI that uses graph analysis assists in the comprehension of laundering schemes. A visual representation of the complex interactions in transactions between money senders and money receivers is provided, with human-readable explanations for easier digestion. The proposed model, applied to transaction visualisation and fraud detection, was highly regarded by domain experts, and it received first place at the Digital Defense Hackathon in December 2020, demonstrating that it is adaptable and widely applicable.
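As a hedged sketch of the graph-analysis idea described above: transactions can be modelled as a directed sender-to-receiver graph, and simple structural patterns such as high fan-in (many distinct senders funnelling money to one receiver) can be flagged with a plain-language explanation. The threshold, data, and function names here are hypothetical illustrations, not the thesis's actual detection method.

```python
# Hypothetical sketch: model transactions as a directed graph and flag
# receivers with unusually high fan-in, a simple money-laundering indicator.
# Threshold and sample data are illustrative only.
from collections import defaultdict

def build_graph(transactions):
    """transactions: iterable of (sender, receiver, amount) tuples."""
    incoming = defaultdict(list)  # receiver -> [(sender, amount)]
    for sender, receiver, amount in transactions:
        incoming[receiver].append((sender, amount))
    return incoming

def flag_fan_in(incoming, min_senders=3):
    """Return human-readable flags for receivers funded by many senders."""
    flags = {}
    for receiver, txns in incoming.items():
        senders = {s for s, _ in txns}
        if len(senders) >= min_senders:
            total = sum(a for _, a in txns)
            flags[receiver] = (f"{receiver} received {total} from "
                               f"{len(senders)} distinct senders")
    return flags

txns = [("A", "X", 900), ("B", "X", 950), ("C", "X", 980), ("A", "Y", 50)]
print(flag_fan_in(build_graph(txns)))
```

Pairing each flag with a short natural-language explanation reflects the thesis's emphasis on making detection results digestible for human analysts.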

Item type Thesis (PhD thesis)
URI https://vuir.vu.edu.au/id/eprint/44414
Subjects Current > FOR (2020) Classification > 4604 Cybersecurity and privacy
Current > Division/Research > Institute for Sustainable Industries and Liveable Cities
Keywords cyber security, logs, cyber alerts, fraud, incident management, money laundering, explainable intelligence models

