An energy complexity model for algorithms
Energy consumption has emerged as a first-class computing resource for both server systems and personal computing devices. The growing importance of energy has led to a rethinking of hardware design, hypervisors, operating systems, and compilers. Algorithm design, however, remains relatively untouched by energy concerns, and standard algorithmic complexity models do not capture the energy consumed by an algorithm. In this work, we propose a new complexity model to account for the energy used by an algorithm. Based on an abstract memory model (which was inspired by the popular DDR3 memory model and is similar to the parallel disk I/O model of Vitter and Shriver), we present a simple energy model that is a (weighted) sum of the time complexity of the algorithm and the number of "parallel" I/O accesses made by the algorithm. We derive this simple model from a more complicated model that better matches the ground truth, and we present experimental justification for it. We believe that the simplicity (and applicability) of this energy model is one of the main contributions of this work.

We present sufficient conditions on algorithm behavior that allow us to bound the energy complexity of an algorithm in terms of its time complexity (in the RAM model) and its I/O complexity (in the I/O model). As corollaries, we obtain energy-optimal algorithms for sorting (and its special cases, such as permutation), matrix transpose, and (sparse) matrix-vector multiplication.

Next, we experimentally study the energy consumption of vector copy, vector write, vector addition, multiplication of a vector by a scalar, selection sort, quicksort, and matrix transpose. We observe that the energy consumed by a given algorithm depends on the minimum degree of memory parallelism the algorithm can exhibit for a given data layout in RAM, with variations of up to 100% for many popular algorithms. Our experiments validate our energy model but also expose limitations inherent to any asymptotic model. We show that reads can be more expensive in terms of energy than writes, and that different data types can lead to different energy consumption. One of our most important results is a theoretical and experimental quantification of the impact of parallel data sequences on energy consumption. We use insights from our experiments to propose algorithm engineering techniques for energy-efficient software.
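To make the "weighted sum" reading of the model concrete, a minimal sketch in our own notation (the symbols below are assumptions, not fixed by the abstract): if T(A) denotes the running time of algorithm A in the RAM model and P(A) the number of parallel I/O accesses A makes in the abstract memory model, the proposed energy complexity takes the form

    E(A) = c_1 \cdot T(A) + c_2 \cdot P(A)

where c_1 and c_2 are hardware-dependent weights. Under this reading, an algorithm whose time complexity and parallel I/O complexity are both optimal is energy-optimal, which is one way to interpret the corollaries for sorting, matrix transpose, and matrix-vector multiplication above.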
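The dependence on memory parallelism can be probed with a simple microbenchmark in the spirit of the vector-copy experiment. The C sketch below copies the same array with different strides: a unit stride keeps accesses contiguous (high memory parallelism), while large strides fragment the access pattern. The array size, stride values, and the use of wall-clock time in place of a hardware energy counter are our assumptions, not the paper's experimental setup.

    /* Hypothetical strided vector-copy microbenchmark (not the paper's code). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (1 << 24)   /* 16M doubles (~128 MB per array), well beyond cache */

    static double elapsed_ms(struct timespec a, struct timespec b) {
        return (b.tv_sec - a.tv_sec) * 1e3 + (b.tv_nsec - a.tv_nsec) / 1e6;
    }

    static void copy_strided(double *dst, const double *src, size_t n, size_t stride) {
        /* Visit every element exactly once, in stride-sized jumps; with
           stride 1 the accesses are contiguous and DRAM banks/rows can be
           exploited in parallel, while large strides reduce that parallelism. */
        for (size_t s = 0; s < stride; s++)
            for (size_t i = s; i < n; i += stride)
                dst[i] = src[i];
    }

    int main(void) {
        double *src = malloc(N * sizeof *src);
        double *dst = malloc(N * sizeof *dst);
        if (!src || !dst) return 1;
        for (size_t i = 0; i < N; i++) src[i] = (double)i;

        size_t strides[] = {1, 16, 4096};
        for (int k = 0; k < 3; k++) {
            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            copy_strided(dst, src, N, strides[k]);
            clock_gettime(CLOCK_MONOTONIC, &t1);
            /* On a real testbed one would read an energy counter (e.g. RAPL)
               around this region rather than relying on wall-clock time alone. */
            printf("stride %5zu: %.1f ms\n", strides[k], elapsed_ms(t0, t1));
        }
        free(src);
        free(dst);
        return 0;
    }

On typical DDR3-era hardware one would expect the unit-stride copy to finish faster and, to the extent energy tracks time times power plus per-access I/O cost, to consume less energy; the exact gap depends on the machine and data layout.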