Crayon: saving power through shape and color approximation on next-generation displays

Bibliographic Details
Main Authors: Estellers, Virginia; Stanley-Marbell, Phillip; Rinard, Martin C.
Other Authors: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
Format: Article
Language: English
Published: Association for Computing Machinery, 2018
Online Access: http://hdl.handle.net/1721.1/113653
https://orcid.org/0000-0001-7752-2083
https://orcid.org/0000-0001-8095-8523
Description
Summary: We present Crayon, a library and runtime system that reduces display power dissipation by acceptably approximating displayed images via shape and color transforms. Crayon can be inserted between an application and the display to optimize dynamically generated images before they appear on the screen. It can also be applied offline to optimize stored images before they are retrieved and displayed. Crayon exploits three fundamental properties: the acceptability of small changes in shape and color, the fact that the power dissipation of OLED displays and DLP pico-projectors is different for different colors, and the relatively small energy cost of computation in comparison to display energy usage. We implement and evaluate Crayon in three contexts: a hardware platform with detailed power measurement facilities and an OLED display, an Android tablet, and a set of cross-platform tools. Our results show that Crayon's color transforms can reduce display power dissipation by over 66% while producing images that remain visually acceptable to users. The measured whole-system power reduction is approximately 50%. We quantify the acceptability of Crayon's shape and color transforms with a user study involving over 400 participants and over 21,000 image evaluations.
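The abstract describes Crayon's central tradeoff: small, acceptable color changes in exchange for lower power on displays whose per-color power costs differ. As a rough illustration of that idea only (not Crayon's actual algorithm or API), the sketch below applies a closed-form per-pixel color transform under an assumed linear OLED power model; the channel weights `W` and the fidelity parameter `lam` are hypothetical values chosen for the example.

```python
# A minimal sketch (not Crayon's API) of a power-reducing color transform.
# Assumptions: display power is proportional to a weighted sum of linear
# RGB channel intensities, and fidelity is squared distance in that space.
import numpy as np

# Hypothetical per-channel power weights; on many OLED panels the blue
# subpixel dissipates the most power per unit intensity.
W = np.array([0.3, 0.3, 0.4])

def approximate_colors(image, lam=5.0):
    """Trade fidelity for power: minimize, per pixel,
        W . c' + lam * ||c' - c||^2   over c' in [0, 1]^3.
    The closed-form minimizer is c' = clip(c - W / (2 * lam), 0, 1).
    `image` is a float array of shape (height, width, 3) holding linear
    RGB in [0, 1]; larger `lam` preserves colors more faithfully,
    smaller `lam` saves more power.
    """
    return np.clip(image - W / (2.0 * lam), 0.0, 1.0)

def display_power(image):
    """Relative power of an image under the linear model (arbitrary units)."""
    return float(np.tensordot(image, W, axes=([2], [0])).sum())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((64, 64, 3))
    out = approximate_colors(img, lam=2.0)
    print(f"power before: {display_power(img):.1f}, after: {display_power(out):.1f}")
```

Under this toy model, shifting each channel down by W / (2 * lam) and clipping to the displayable range is the exact per-pixel optimum of the power-plus-fidelity objective; the paper's actual transforms, including the shape transforms, are validated against perceived acceptability in the user study rather than against a fixed distance metric.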