Autotuning algorithmic choice for input sensitivity

A daunting challenge faced by program performance autotuning is input sensitivity, where the best autotuned configuration may vary with different input sets. This paper presents a novel two-level input learning algorithm to tackle the challenge for an important class of autotuning problems, algorithmic autotuning. The new approach uses a two-level input clustering method to automatically refine input grouping, feature selection, and classifier construction. Its design solves a series of open issues that are particularly essential to algorithmic autotuning, including the enormous optimization space, complex influence by deep input features, high cost in feature extraction, and variable accuracy of algorithmic choices. Experimental results show that the new solution yields up to a 3x speedup over using a single configuration for all inputs, and a 34x speedup over a traditional one-level method for addressing input sensitivity in program optimizations.
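
To make the abstract's two-level scheme concrete, the sketch below shows one generic way such a system can be organized: a first level clusters training inputs on their feature vectors and autotunes one "landmark" configuration per cluster, and a second level relabels each input with the landmark configuration that actually runs fastest on it, then trains a classifier used to pick a configuration for new inputs. This is an illustrative sketch only, not the paper's implementation; extract_features, autotune_for, and run_time_with are hypothetical stand-ins for a feature extractor, a per-group autotuner, and a timing harness.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

def two_level_train(training_inputs, k=5):
    # Level 1: group training inputs by their (cheaply computed) feature vectors.
    feats = np.array([extract_features(x) for x in training_inputs])
    cluster = KMeans(n_clusters=k, n_init=10).fit_predict(feats)

    # Autotune one algorithmic configuration per input cluster ("landmark").
    configs = [autotune_for([x for x, c in zip(training_inputs, cluster) if c == g])
               for g in range(k)]

    # Level 2: relabel each input with the landmark configuration that is
    # actually fastest on it, then learn a classifier over the input features.
    # Inputs that prefer the same configuration end up in the same class,
    # which refines the initial feature-based grouping.
    best = np.array([int(np.argmin([run_time_with(cfg, x) for cfg in configs]))
                     for x in training_inputs])
    classifier = DecisionTreeClassifier(max_depth=4).fit(feats, best)
    return classifier, configs

def choose_config(classifier, configs, new_input):
    # Production time: extract the features, classify, and run the chosen configuration.
    return configs[int(classifier.predict([extract_features(new_input)])[0])]

In this toy form, the feature-extraction cost and the variable accuracy of algorithmic choices that the abstract highlights are ignored; the paper's approach also weighs those factors when selecting features and configurations.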

Bibliographic Details
Main Authors: Ding, Yufei; Ansel, Jason; Veeramachaneni, Kalyan; Shen, Xipeng; O'Reilly, Una-May; Amarasinghe, Saman P.
Other Authors: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
Format: Article
Language: en_US
Published: Association for Computing Machinery (ACM), 2015
Online Access: http://hdl.handle.net/1721.1/99720
https://orcid.org/0000-0002-7231-7643
author Ding, Yufei
Ansel, Jason
Veeramachaneni, Kalyan
Shen, Xipeng
O'Reilly, Una-May
Amarasinghe, Saman P.
author2 Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
author_sort Ding, Yufei
collection MIT
description A daunting challenge faced by program performance autotuning is input sensitivity, where the best autotuned configuration may vary with different input sets. This paper presents a novel two-level input learning algorithm to tackle the challenge for an important class of autotuning problems, algorithmic autotuning. The new approach uses a two-level input clustering method to automatically refine input grouping, feature selection, and classifier construction. Its design solves a series of open issues that are particularly essential to algorithmic autotuning, including the enormous optimization space, complex influence by deep input features, high cost in feature extraction, and variable accuracy of algorithmic choices. Experimental results show that the new solution yields up to a 3x speedup over using a single configuration for all inputs, and a 34x speedup over a traditional one-level method for addressing input sensitivity in program optimizations.
first_indexed 2024-09-23T10:17:52Z
format Article
id mit-1721.1/99720
institution Massachusetts Institute of Technology
language en_US
last_indexed 2024-09-23T10:17:52Z
publishDate 2015
publisher Association for Computing Machinery (ACM)
record_format dspace
spelling mit-1721.1/99720 2022-09-30T20:11:16Z Autotuning algorithmic choice for input sensitivity
contributor Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
contributor Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
sponsorship United States. Dept. of Energy (Award DE-SC0005288)
sponsorship United States. Dept. of Energy (Award DE-SC0008923)
sponsorship United States. Dept. of Energy (Early Career Award)
sponsorship National Science Foundation (U.S.) (Grant 1464216)
sponsorship National Science Foundation (U.S.) (Grant 1320796)
sponsorship National Science Foundation (U.S.) (CAREER Award)
date_accessioned 2015-11-04T17:53:23Z
date_available 2015-11-04T17:53:23Z
date_issued 2015-06
type Article; http://purl.org/eprint/type/ConferencePaper
isbn 9781450334686
citation Yufei Ding, Jason Ansel, Kalyan Veeramachaneni, Xipeng Shen, Una-May O’Reilly, and Saman Amarasinghe. 2015. Autotuning algorithmic choice for input sensitivity. In Proceedings of the 36th ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI 2015). ACM, New York, NY, USA, 379-390.
doi http://dx.doi.org/10.1145/2737924.2737969
journal Proceedings of the 36th ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI 2015)
rights Creative Commons Attribution-Noncommercial-Share Alike; http://creativecommons.org/licenses/by-nc-sa/4.0/
mimetype application/pdf
source MIT web domain
title Autotuning algorithmic choice for input sensitivity
url http://hdl.handle.net/1721.1/99720
https://orcid.org/0000-0002-7231-7643