HyperPower: Power‐ and Memory‐Constrained Hyper‐Parameter Optimization for Neural Networks

Dimitrios Stamoulis1, Ermao Cai1, Da‐Cheng Juan2, and Diana Marculescu1
1Department of ECE, Carnegie Mellon University, Pittsburgh, PA
dstamoul@andrew.cmu.edu, ermao@cmu.edu, dianam@cmu.edu
2Google Research, Mountain View, CA
dacheng@google.com

ABSTRACT


While selecting the hyper‐parameters of Neural Networks (NNs) has so far been treated as an art, the emergence of more complex, deeper architectures poses increasing challenges to designers and Machine Learning (ML) practitioners, especially when power and memory constraints need to be considered. In this work, we propose HyperPower, a framework that enables efficient Bayesian optimization and random search in the context of power‐ and memory‐constrained hyper‐parameter optimization for NNs running on a given hardware platform. HyperPower is the first work (i) to show that power consumption can be used as a low‐cost, a priori known constraint, and (ii) to propose predictive models for the power and memory of NNs executing on GPUs. Thanks to HyperPower, the number of function evaluations and the best test error achieved by a constraint‐unaware method are reached up to 112.99× and 30.12× faster, respectively, while never considering invalid configurations. HyperPower significantly speeds up hyper‐parameter optimization, achieving up to 57.20× more function evaluations than constraint‐unaware methods within a given time interval, yielding accuracy improvements of up to 67.6%.
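The key idea summarized above, that power and memory budgets can be checked a priori through cheap predictive models so that infeasible configurations are never trained, can be illustrated with a minimal random-search sketch. Note that this is not the paper's implementation: HyperPower also supports constrained Bayesian optimization, and the predictors, budgets, and hyper‐parameter space below are hypothetical placeholders.

```python
import random

# Hypothetical linear predictors for GPU power and memory.
# HyperPower fits such models from hardware measurements; the
# coefficients here are placeholders, not the paper's values.
def predict_power_watts(cfg):
    return 5.0 + 0.02 * cfg["width"] * cfg["depth"]

def predict_memory_mb(cfg):
    return 50.0 + 0.5 * cfg["width"] * cfg["depth"]

def train_and_evaluate(cfg):
    # Stand-in for training the NN and returning its test error.
    return random.random()

POWER_BUDGET_W = 40.0      # illustrative power budget
MEMORY_BUDGET_MB = 2048.0  # illustrative memory budget

def sample_config():
    # Toy hyper-parameter space: layer width, depth, and learning rate.
    return {"width": random.choice([64, 128, 256, 512]),
            "depth": random.randint(2, 10),
            "lr": 10 ** random.uniform(-4, -1)}

def constrained_random_search(n_evals=20, max_draws=1000, seed=0):
    random.seed(seed)
    best_cfg, best_err = None, float("inf")
    evals = 0
    for _ in range(max_draws):
        if evals >= n_evals:
            break
        cfg = sample_config()
        # Constraints are known a priori from the cheap predictors, so
        # infeasible configurations are rejected without ever being trained.
        if (predict_power_watts(cfg) > POWER_BUDGET_W or
                predict_memory_mb(cfg) > MEMORY_BUDGET_MB):
            continue
        err = train_and_evaluate(cfg)
        evals += 1
        if err < best_err:
            best_cfg, best_err = cfg, err
    return best_cfg, best_err

if __name__ == "__main__":
    cfg, err = constrained_random_search()
    print("best feasible config:", cfg, "test error: %.3f" % err)
```

Because the feasibility check costs only a model evaluation rather than a full training run, every function evaluation in the budget is spent on a valid configuration, which is the source of the speedups reported in the abstract.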


