Gust: Game-theoretic user-centered security design techniques
The field of security has many theories that are both sound and complete, yet their implementation in modern-day systems remains a concern. The research challenges addressed by this thesis lie in translating sound and complete theories into practical implementations, and in ensuring the best possible match between theoretical properties and practical guarantees. This mismatch often arises because theoretical properties are established for a single component, while practical implementations involve interactions among many components, each of which may individually be sound but which collectively do not retain their properties. Most often, problems in practical implementations do not arise within a single component; rather, they arise at the interfaces between components. Thus the weakest link in the chain causes a breach in an entire system composed of otherwise strong components. This thesis addresses the weakest link in the security chain, viz., the human factor. A technically meaningful approach grounded in game-theoretic principles is presented to address the weak human factor. The proposed solutions focus on solving real-world problems; the game-theoretic models are therefore designed to match the semantics of practical situations. As a first step, users are broadly classified as ignorant, compliant, and non-compliant, based on their interaction with the security mechanism. The thesis then takes the position that 'The Non-compliant User is the Enemy' and 'The Ignorant User is a Vulnerability', thereby setting the stage for the mechanism design of the game-theoretic models. A trust model, based on the notion of compensatory transfers, is presented first; users are assigned trust levels based on their actions, and context-specific cues are provided to encourage adherence to the system's best practices.
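The idea of assigning trust levels through compensatory transfers can be sketched as a toy trust ledger: compliant actions earn a positive transfer, non-compliant actions pay a penalty, and ignorant actions incur a small cost. This is a minimal illustrative sketch, not the thesis's actual model; all names and numeric transfer values below are assumptions.

```python
# Toy trust ledger: a user's trust level is adjusted by compensatory
# transfers keyed on the user-class of each observed action.
# Transfer magnitudes (+1.0, -2.0, -0.5) are illustrative assumptions.

from dataclasses import dataclass, field

COMPLIANT, NON_COMPLIANT, IGNORANT = "compliant", "non-compliant", "ignorant"

@dataclass
class TrustLedger:
    trust: dict = field(default_factory=dict)  # user -> trust level

    def observe(self, user: str, action: str) -> float:
        """Apply a compensatory transfer for the observed action and
        return the user's updated trust level."""
        transfers = {COMPLIANT: +1.0, NON_COMPLIANT: -2.0, IGNORANT: -0.5}
        level = self.trust.get(user, 0.0) + transfers[action]
        self.trust[user] = level
        return level
```

Under this sketch, a user who alternates compliant and non-compliant behavior drifts toward lower trust, since the penalty transfer outweighs the reward.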
With this model as a basis, a theory of trust-based decisions is developed, giving systems a basis for initiating trust-based actions. These actions are modeled under two conditions: Blind Trust and Incentive Trust. Under the incentive-trust model, the system can offer incentives or penalties (negative incentives) to non-compliant users to elicit cooperation. Finally, a model for incentivizing or penalizing non-compliant users based on their trust level is proposed; here, the notion of a user's workflow is incorporated into the game-theoretic model to accurately reflect real-world scenarios. The game-theoretic models presented in this thesis take into account both the preferences of the users and the goals of the system or security mechanism; each model is tuned toward providing a technically meaningful solution by actively keeping the users in the loop. This thesis is a major step toward solving the decades-old problem of the weak human factor, which has received little technical attention beyond mere user education.
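The incentive-trust idea can be illustrated as a one-shot game in which a self-interested user complies only when the offered incentive, net of the cost of compliance, is at least the payoff of non-compliance. This is a deliberately simplified sketch under that assumption; the thesis's actual models (which also incorporate trust levels and user workflows) are richer, and all function names below are hypothetical.

```python
# Hedged sketch: incentive-trust as a one-shot best-response calculation.
# The system chooses an incentive i >= 0; a rational user complies when
# i - compliance_cost >= noncompliance_payoff.

def user_complies(incentive: float, compliance_cost: float,
                  noncompliance_payoff: float) -> bool:
    """Best response of a self-interested user to an offered incentive."""
    return incentive - compliance_cost >= noncompliance_payoff

def minimal_incentive(compliance_cost: float,
                      noncompliance_payoff: float) -> float:
    """Smallest incentive that makes compliance a best response."""
    return max(0.0, compliance_cost + noncompliance_payoff)
```

A penalty (negative incentive) works symmetrically in this sketch: lowering the non-compliance payoff reduces the incentive the system must offer.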