FORMAL METHODS FOR USER EXPERIENCE EVALUATION AND TESTING
Human-machine systems have been taking on expanding roles in industry and in society. However, the growing complexity of interactive systems demands ever greater effort to ensure reliability, usability, and a satisfying user experience. Testing is an effective approach for finding and correcting discrepancies between intended and actual system behavior, as well as for finding usability problems unanticipated at design time. Testing is often regarded as the most intellectually demanding, time-consuming, and expensive part of software and system development. It can be difficult (if not impossible) for testers to anticipate all of the system conditions that need to be evaluated. This is especially true of human-machine systems, because the human operator (who is attempting to achieve his or her task goals) is an additional concurrent component of the system, and one whose behavior is not governed by the implementation of designed system elements.

To address these issues, researchers have developed approaches for automating the generation of test cases. Among these are formal methods: mathematically based languages, techniques, and tools for the modeling, specification, and analysis of systems. These support model-based approaches for creating tests that are efficient and that come with guarantees of completeness with respect to the model. In particular, model checking can be used for automated test case generation: its efficient and exhaustive search techniques explore models of a system to satisfy specified coverage criteria (descriptions of the system conditions the test cases should encounter during execution).

In this dissertation, I introduce a new method for automatically generating test cases for human-machine systems. It uses a combination of human task behavior models, decision models, and system models. Given a formal coverage criterion (based on the system and task models), tools based on model checking automatically generate test sequences from the formal model. The resulting test cases are guaranteed to be complete with respect to the coverage criterion. By using different human decision models, based on utility theory, test cases can be designed to replicate interaction scenarios representative of different types of users. Tests generated with my approach can be used to validate that system implementations satisfy properties proved about the formal model. They can also be used to gain insights into system usability that could not have been captured in a formal system model. Finally, the novel inclusion of human decision modeling enables generated tests to represent interaction scenarios that give analysts and testers insight into different types of users' experiences.

I describe this method and demonstrate its capabilities with two applications: a pod-based coffee machine and a smart thermostat. I also discuss my research contributions and explore possibilities for future research.
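To make the model-checking-based generation idea concrete, the following is a minimal sketch in Python. The coffee-machine state machine, action names, and state-coverage criterion are illustrative assumptions, not the dissertation's actual formal models; approaches of this kind typically phrase each coverage obligation as a "trap property" whose counterexample trace becomes the test case, which the breadth-first search below stands in for.

    from collections import deque

    # Hypothetical model of a pod-based coffee machine as a finite state
    # machine: (state, action) -> next state. Illustrative only.
    TRANSITIONS = {
        ("idle", "insert_pod"): "pod_loaded",
        ("pod_loaded", "close_lid"): "ready",
        ("ready", "press_brew"): "brewing",
        ("brewing", "wait"): "done",
        ("done", "remove_cup"): "idle",
        ("pod_loaded", "remove_pod"): "idle",
    }

    def shortest_trace_to(goal_state, start="idle"):
        """Breadth-first search for the shortest action sequence that
        reaches goal_state, mirroring how a model checker's exhaustive
        search yields a witness trace for each coverage obligation."""
        frontier = deque([(start, [])])
        visited = {start}
        while frontier:
            state, trace = frontier.popleft()
            if state == goal_state:
                return trace
            for (src, action), dst in TRANSITIONS.items():
                if src == state and dst not in visited:
                    visited.add(dst)
                    frontier.append((dst, trace + [action]))
        return None  # the state is unreachable in the model

    # State-coverage criterion: one test per reachable state of the model.
    all_states = set()
    for (src, _action), dst in TRANSITIONS.items():
        all_states.update((src, dst))
    for target in sorted(all_states):
        print(target, "<-", shortest_trace_to(target))

Running the script prints one shortest action sequence per reachable state, corresponding to a test suite that is complete with respect to this (toy) state-coverage criterion.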
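Similarly, a utility-theoretic decision model can be sketched as a weighted scoring of candidate actions, where different weight vectors stand in for different user types. The attributes, weights, and profiles below are hypothetical illustrations, not the models used in the dissertation.

    # Hypothetical attributes of each candidate action at a decision
    # point: (speed, simplicity, feature_richness), each in [0, 1].
    ACTIONS = {
        "quick_brew":  (0.9, 0.8, 0.2),
        "custom_brew": (0.4, 0.3, 0.9),
        "read_manual": (0.1, 0.6, 0.5),
    }

    # Weight vectors encoding user types (assumed for illustration).
    USER_PROFILES = {
        "novice": (0.3, 0.6, 0.1),  # favors simple interactions
        "expert": (0.3, 0.1, 0.6),  # favors feature-rich interactions
    }

    def choose(profile):
        """Pick the action maximizing utility under the given profile,
        computed as a weighted sum of the action's attribute values."""
        weights = USER_PROFILES[profile]
        def utility(attrs):
            return sum(w * a for w, a in zip(weights, attrs))
        return max(ACTIONS, key=lambda act: utility(ACTIONS[act]))

    for user in USER_PROFILES:
        print(user, "->", choose(user))

Driving test generation with the "novice" profile rather than the "expert" profile would then steer the search toward different action choices, yielding test sequences that replicate the interaction scenarios of those user types.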