Format abstraction for sparse tensor algebra compilers
This paper shows how to build a sparse tensor algebra compiler that is agnostic to tensor formats (data layouts). We develop an interface that describes formats in terms of their capabilities and properties, and show how to build a modular code generator where new formats can be added...
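The central idea previewed above, describing a tensor format as an ordered composition of per-dimension level (mode) formats that each advertise the capabilities and properties the code generator may rely on, can be pictured with a short sketch. The sketch below is illustrative only: the type, capability, and property names are hypothetical placeholders standing in for the interface the paper defines, not a reproduction of it. A second sketch, showing end-user taco code for the matrix-vector example mentioned in the abstract, follows the record fields at the end of this page.

```cpp
// Hypothetical sketch: a tensor format assembled from per-dimension level formats.
// Names (LevelFormat, Capability, etc.) are illustrative, not the paper's interface.
#include <string>
#include <vector>

enum class Capability { ValueIterate, PositionIterate, Locate, Append, Insert };

struct LevelFormat {
  std::string name;                      // e.g. "dense", "compressed", "singleton"
  std::vector<Capability> capabilities;  // which access/assembly code it can provide
  bool full, ordered, unique;            // properties the code generator may exploit
};

// A tensor format is an ordered composition of level formats, one per dimension.
struct TensorFormat {
  std::vector<LevelFormat> levels;
};

int main() {
  LevelFormat dense      {"dense",      {Capability::ValueIterate, Capability::Locate, Capability::Insert}, true,  true, true};
  LevelFormat compressed {"compressed", {Capability::PositionIterate, Capability::Append},                  false, true, true};
  LevelFormat singleton  {"singleton",  {Capability::PositionIterate, Capability::Append},                  false, true, false};

  TensorFormat csr{{dense, compressed}};       // CSR: dense rows over compressed columns

  LevelFormat compressedNU = compressed;       // a compressed level that allows duplicates
  compressedNU.unique = false;
  TensorFormat coo{{compressedNU, singleton}}; // a COO-like composition

  (void)csr; (void)coo;                        // the code generator would consume these
  return 0;
}
```

In this picture, adding a new storage scheme means supplying one more level-format plugin rather than touching the code generator, which is the modularity claim the abstract makes.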
Main Authors: | Chou, Stephen; Kjolstad, Fredrik; Amarasinghe, Saman |
---|---|
Other Authors: | Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory |
Format: | Article |
Language: | English |
Published: | Association for Computing Machinery (ACM), 2021 |
Online Access: | https://hdl.handle.net/1721.1/135077 |
author | Chou, Stephen Kjolstad, Fredrik Amarasinghe, Saman |
author2 | Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory |
collection | MIT |
description | This paper shows how to build a sparse tensor algebra compiler that is agnostic to tensor formats (data layouts). We develop an interface that describes formats in terms of their capabilities and properties, and show how to build a modular code generator where new formats can be added as plugins. We then describe six implementations of the interface that compose to form the dense, CSR/CSF, COO, DIA, ELL, and HASH tensor formats and countless variants thereof. With these implementations at hand, our code generator can generate code to compute any tensor algebra expression on any combination of the aforementioned formats.
To demonstrate our technique, we have implemented it in the taco tensor algebra compiler. Our modular code generator design makes it simple to add support for new tensor formats, and the performance of the generated code is competitive with hand-optimized implementations. Furthermore, by extending taco to support a wider range of formats specialized for different application and data characteristics, we can improve end-user application performance. For example, if input data is provided in the COO format, our technique allows computing a single matrix-vector multiplication directly with the data in COO, which is up to 3.6× faster than by first converting the data to CSR. |
format | Article |
id | mit-1721.1/135077 |
institution | Massachusetts Institute of Technology |
language | English |
publishDate | 2021 |
publisher | Association for Computing Machinery (ACM) |
record_format | dspace |
type | Conference paper (http://purl.org/eprint/type/ConferencePaper) |
date_issued | 2018 |
doi | 10.1145/3276493 |
journal | Proceedings of the ACM on Programming Languages |
rights | Creative Commons Attribution 4.0 International license (https://creativecommons.org/licenses/by/4.0/) |
file_format | application/pdf |
title | Format abstraction for sparse tensor algebra compilers |
url | https://hdl.handle.net/1721.1/135077 |
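As a usage-level companion to the sketch near the top of this record, here is roughly how the abstract's matrix-vector example looks in the taco C++ API. The CSR and dense-vector declarations and the compile/assemble/compute calls follow taco's documented API; the COO-style composition mentioned in the comments is hedged, since the exact mode-format spelling may differ in the released library.

```cpp
// Hedged usage sketch of the taco C++ API for y(i) = A(i,j) * x(j).
#include "taco.h"
using namespace taco;

int main() {
  // Formats are compositions of per-dimension mode formats.
  Format csr({Dense, Sparse});   // dense rows, compressed columns
  Format dv({Dense});            // dense vector

  Tensor<double> A("A", {3, 3}, csr);
  Tensor<double> x("x", {3}, dv);
  Tensor<double> y("y", {3}, dv);

  // Insert a few nonzeros and pack them into the chosen layout.
  A.insert({0, 1}, 2.0);
  A.insert({2, 0}, 3.0);
  A.pack();
  for (int j = 0; j < 3; ++j) x.insert({j}, 1.0);
  x.pack();

  // taco specializes the generated kernel to the declared formats, so (per the
  // paper) swapping csr for a COO-like composition (a non-unique compressed
  // level followed by a singleton level) lets the same expression run directly
  // on COO data instead of converting it to CSR first.
  IndexVar i, j;
  y(i) = A(i, j) * x(j);

  y.compile();
  y.assemble();
  y.compute();
  return 0;
}
```

Only the Format declarations change between layouts; the index expression and the compile/assemble/compute calls stay the same, which is what lets the computation run on the data in whatever format it arrived in.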