Sustained Mobile Visual Computing: A Human-Centered Perspective
Visual content is the most natural abstraction of the real world. Interacting with various types of visual content via mobile devices anytime and anywhere promises the future of mobile computing. Such sustained mobile visual computing will revolutionize cyber-physical systems, especially cyber-human interaction, in applications ranging from education and entertainment to infrastructure and healthcare monitoring. However, today's systems, highly optimized for simple graphical user interfaces or stationary devices, fail to achieve the energy and bandwidth efficiency this long-term vision requires. In this dissertation, we propose a human-centered perspective to bridge this gap by understanding human perception of dynamic visual content under the specific mobile context. In particular, this dissertation makes a simple yet fundamental switch in system design: exposing the subjective human perception of dynamic content in mobile contexts to the decision modules of mobile visual computing, instead of relying purely on objective performance metrics. The system can then perform intelligent adaptation and control to boost resource efficiency.

We have redesigned four typical mobile visual computing systems to demonstrate the benefits of the human-centered perspective. First, ShutPix leverages the unnecessarily high pixel density of smartphones and the limited visual acuity of human eyes, selectively shutting off redundant subpixels to save image display power without impacting the mobile viewing experience. Second, CrowdDBS shows that dynamic brightness scaling during mobile video playback can be visually acceptable across a range of scaling frequencies, magnitudes, and levels of temporal consistency, and presents a crowdsourced brightness scaling scheme to minimize mobile video display energy.
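The intuition behind ShutPix can be sketched with a back-of-the-envelope calculation. The following illustration (not the dissertation's actual model; the acuity constant and example numbers are assumptions) compares a display's angular pixel density at a given viewing distance against the roughly one-arcminute resolving limit of 20/20 vision, and estimates what fraction of the linear pixel density the eye cannot resolve:

```python
import math

# Assumption: 20/20 vision resolves about 1 arcminute, i.e. ~60 pixels
# per degree of visual angle; density beyond this is imperceptible.
ACUITY_PPD = 60.0

def pixels_per_degree(ppi: float, viewing_distance_in: float) -> float:
    """Pixels subtended by one degree of visual angle at the given
    viewing distance (inches), for a display with the given PPI."""
    return 2 * viewing_distance_in * math.tan(math.radians(0.5)) * ppi

def redundant_fraction(ppi: float, viewing_distance_in: float) -> float:
    """Fraction of linear pixel density beyond the eye's resolving limit;
    a rough proxy for how much subpixel shutoff might go unnoticed."""
    ppd = pixels_per_degree(ppi, viewing_distance_in)
    return max(0.0, 1.0 - ACUITY_PPD / ppd)

# A ~460 PPI phone held at 12 inches exceeds the acuity limit,
# leaving a sizeable margin of perceptually redundant density.
print(round(pixels_per_degree(460, 12), 1))
print(round(redundant_fraction(460, 12), 2))
```

At closer distances or lower densities the margin shrinks to zero, which is why any practical scheme would need to account for the viewing context rather than a fixed shutoff ratio.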
Third, RnB pushes brightness scaling into video streaming, introducing a joint rate and brightness adaptation framework for mobile video streaming that shifts the classic Rate-Distortion tradeoff to a fresh Rate-Distortion-Energy tradeoff tailored for mobile devices. Fourth, Prius targets multi-client mobile video streaming and develops a hybrid adaptation framework that overlays a layer of adaptation intelligence at the edge cloud to finalize the rate adaptation decisions initiated by the clients, thereby overcoming playback unfairness and bandwidth inefficiency.

The high-level contribution of this dissertation lies in building a strong connection between human vision theory and mobile system design. First, this work is a significant step toward showing that human vision characteristics can be accurately modeled and cleanly integrated into commercial off-the-shelf smartphones to deliver practical and measurable gains. Second, this dissertation presents novel mobile visual computing algorithms that enrich human vision theory, extending it to operate over subpixel shutoff, dynamic brightness scaling, joint bitrate and brightness adaptation, and multi-client video adaptation. Third, this dissertation makes a clear departure by blurring the border between applications and lower-layer/hardware support, allowing visual computing applications, the lower layers, and the hardware to collaborate on the common objective of enhancing user experience and resource efficiency. Finally, this work validates the feasibility and performance of the proposed designs through extensive analysis, simulation, and testbed implementation. The results show that human-centered mobile visual computing can achieve substantial efficiency improvements, from a few percent to several-fold, depending on the visual content, mobile device, and network environment.
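The Rate-Distortion-Energy tradeoff that RnB introduces can be sketched as a constrained selection problem. The following is a minimal illustration, not RnB's actual algorithm: the candidate table, quality scores, and energy costs are all invented for the example. Each candidate pairs a bitrate with a brightness level; the adapter picks the feasible pair that maximizes perceptual quality minus a weighted energy cost:

```python
# Hypothetical (bitrate kbps, brightness scale, quality score, energy cost)
# candidates; all numbers are illustrative, not measured values.
CANDIDATES = [
    (1000, 1.0, 70.0, 5.0),
    (2000, 1.0, 80.0, 7.0),
    (2000, 0.8, 76.0, 5.5),  # dimmer: slightly lower quality, less energy
    (4000, 0.8, 84.0, 8.0),
]

def choose(bandwidth_kbps: float, energy_weight: float):
    """Pick the (bitrate, brightness) pair maximizing quality minus a
    weighted energy cost, subject to the available bandwidth.
    energy_weight = 0 recovers classic rate-distortion adaptation."""
    feasible = [c for c in CANDIDATES if c[0] <= bandwidth_kbps]
    return max(feasible, key=lambda c: c[2] - energy_weight * c[3])

# With no energy concern the adapter picks full brightness; as the
# energy weight grows, it shifts toward the dimmer variant.
print(choose(2500, 0.0))   # -> (2000, 1.0, 80.0, 7.0)
print(choose(2500, 5.0))   # -> (2000, 0.8, 76.0, 5.5)
```

The design point this illustrates is that brightness becomes a first-class adaptation knob alongside bitrate, so the same bandwidth budget can be spent differently depending on how much the device currently values energy savings.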