Mental Models for Cybersecurity: A Formal Methods Approach
Failures in complex systems often result from problematic interactions between different system components, including human users. Moreover, the complexity of these systems means that designers may fail to anticipate component interactions, particularly rare and undesirable human-automation interactions that can significantly impact safe and effective operation. These problematic and often unanticipated interactions may be driven in part by the mental models of system users: incongruity between user mental models---which drive the actions users take within a given system---and actual system behavior can lead to inappropriate or unsafe operational states.

This is particularly true with respect to cybersecurity vulnerabilities. Employees of most organizations are encouraged or required to use the complex digital resources now essential for modern business operations, including externally-facing email accounts, internet access, and filesharing capabilities. In response, threat actors have increasingly eschewed the highly technical exploits once used to ``hack'' these systems in favor of attacks that target users themselves, exploiting users' incomplete understanding of these tools to trick them into voluntarily executing malicious code inside their networks and bypassing sophisticated cybersecurity countermeasures. Because network defenders have traditionally relied on employee cybersecurity training and restricted network access privileges to counter this threat, these attacks and their consequences continue to grow. One potential explanation for this growth is that the increasing complexity of our digital tools makes it difficult for users to understand how the tools work and how certain uses of them can be harmful.
These tools can be hard to understand and interact with, particularly for users unfamiliar with a given tool or where the controls presented by the system interface do not tightly correspond to the underlying system actions. If so, attackers can use this lack of understanding to their advantage, particularly when system defenders have also failed to anticipate how a user's misunderstanding can inadvertently make a system more vulnerable to attack. We therefore need strategies for investigating these misunderstandings and their impacts on cybersecurity.

Decades of work in human factors engineering have produced powerful methods for exploring and improving the performance of complex systems, but using these methods to discover rare and unanticipated human-automation interactive effects is difficult, time-consuming, and may not be robust to the discovery of rare events. One potential solution is formal methods: a well-defined collection of mathematical languages and techniques used to prove whether system models do or do not satisfy desirable properties. Model checking is a formal methods technique that leverages exhaustive statespace search to mathematically prove the presence or absence of these properties, regardless of their rarity. This capability offers a powerful approach to discovering unanticipated human-automation interaction, and identifying these interactions can provide specific targets for improving the safety and reliability of complex digital systems. Human-automation interaction researchers have successfully applied formal methods and mental model concepts in safety-critical systems, using proof-based analyses to determine whether dangerous mismatches exist between human mental models and models of system automation. However, these techniques have never been applied to the cybersecurity domain.
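The core idea above can be illustrated with a minimal sketch. The toy model below is purely hypothetical (the state names, actions, and transition tables are invented for illustration and are not the thesis's models): a breadth-first search exhaustively explores the composed state space of a system model and a user's mental model, returning a counterexample trace whenever a state is reachable in which the system is in an unsafe configuration but the user believes otherwise.

```python
from collections import deque

# Hypothetical toy system: a cloud file-share. The system's actual
# transition function (illustrative assumption, not a real service's API).
SYSTEM = {
    ("private", "share_link"): "public",   # link sharing exposes the file broadly
    ("private", "revoke"): "private",
    ("public", "revoke"): "private",
}

# A user's flawed mental model of the same actions.
MENTAL = {
    ("private", "share_link"): "recipient_only",  # user believes access is scoped
    ("private", "revoke"): "private",
    ("recipient_only", "revoke"): "private",
    ("public", "revoke"): "private",
}

ACTIONS = ("share_link", "revoke")

def find_mismatch(init="private"):
    """Exhaustively search the composed statespace (system x mental model)
    for a reachable state where the system is 'public' but the user does
    not believe it is -- a mental-model mismatch. Returns the action
    sequence (counterexample) that reaches it, or None."""
    seen = set()
    queue = deque([((init, init), [])])  # ((system_state, belief_state), trace)
    while queue:
        (sys_s, ment_s), trace = queue.popleft()
        if (sys_s, ment_s) in seen:
            continue
        seen.add((sys_s, ment_s))
        if sys_s == "public" and ment_s != "public":
            return trace  # counterexample exposing the mismatch
        for action in ACTIONS:
            ns = SYSTEM.get((sys_s, action))
            nm = MENTAL.get((ment_s, action))
            if ns is not None and nm is not None:
                queue.append(((ns, nm), trace + [action]))
    return None

print(find_mismatch())  # -> ['share_link']
```

Because the search is exhaustive over the (finite) composed statespace, it finds the mismatch no matter how rare the triggering action sequence is, which is the property that makes model checking attractive here; production analyses would express the same check as a temporal-logic property in a model checker rather than hand-rolled search.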
This work shows that formal mental modeling techniques can be extended with cybersecurity-specific concepts and computer security folk models to discover unanticipated human-automation interactions, resulting from mismatches between mental models and system characteristics, that can lead to computer security failures. These findings could be used to improve system security by suggesting changes to system behavior or to components of the user interface. Two cases are used in this investigation. The first explores configuration errors committed by users of a popular cloud data storage service, using a generic formal modeling framework to examine the actions users take and the vulnerabilities that users with certain mental models introduce into the system. The second case extends the initial generic framework by integrating folk modeling concepts from cybersecurity applications. We investigate a phishing email attack example, then use our findings to describe how users with different folk models could commit potentially dangerous actions in response to threats posed by malicious emails. Results indicate that users can take actions leaving them vulnerable to cybersecurity risks, such as unauthorized access to data stored in the cloud or a computer compromised by an opened phishing email. While this may not surprise computer security professionals, our results offer a novel perspective on these problems by describing how unexpectedly dangerous consequences can follow from specific actions that appear innocuous to the user. Furthermore, we find that effects may seem disconnected from causes, making it harder for users to associate one with the other and to understand the origin of these dangers.
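The second case's logic can likewise be sketched in miniature. The folk-model names and belief tables below are invented for illustration (they are not the models analyzed in the thesis): each folk model assigns a perceived risk to candidate responses to a suspicious email, and the analysis asks which models admit an action the real system treats as dangerous while the user believes it is safe.

```python
# Hypothetical folk-model encodings (illustrative assumptions only).
# Each maps a candidate response to a suspicious email onto the
# risk the user perceives under that folk model.
FOLK_MODELS = {
    # belief: viruses only come from shady websites -> attachments feel safe
    "virus_from_websites": {"open_attachment": "safe", "click_link": "risky"},
    # belief: attackers only target important people -> nothing feels risky
    "targets_important_people": {"open_attachment": "safe", "click_link": "safe"},
    # a more accurate mental model of email-borne malware
    "email_can_carry_malware": {"open_attachment": "risky", "click_link": "risky"},
}

# Actions the system model treats as dangerous (can compromise the machine).
DANGEROUS = {"open_attachment", "click_link"}

def vulnerable_actions():
    """For each folk model, return the dangerous actions the user would
    take believing them safe -- i.e. mental-model/system mismatches."""
    return {
        name: sorted(a for a in DANGEROUS if beliefs.get(a) == "safe")
        for name, beliefs in FOLK_MODELS.items()
    }

print(vulnerable_actions())
```

The enumeration makes the abstract's point concrete: under the first two (inaccurate) folk models the user opens the attachment believing it safe, while the accurate model admits no dangerous action, so the mismatch itself, not user carelessness, is what creates the vulnerability.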
By rigorously characterizing these risks and their provenance, our method sheds significant light on the unintended cybersecurity consequences of real-world user actions, offering a way to anticipate and potentially remediate hazardous system conditions before threat actors can exploit them.