On-the-fly pipeline parallelism

Pipeline parallelism organizes a parallel program as a linear sequence of s stages. Each stage processes elements of a data stream, passing each processed data element to the next stage, and then taking on a new element before the subsequent stages have necessarily completed their processing. Pipeline parallelism is used especially in streaming applications that perform video, audio, and digital signal processing. Three out of 13 benchmarks in PARSEC, a popular software benchmark suite designed for shared-memory multiprocessors, can be expressed as pipeline parallelism. Whereas most concurrency platforms that support pipeline parallelism use a "construct-and-run" approach, this paper investigates "on-the-fly" pipeline parallelism, where the structure of the pipeline emerges as the program executes rather than being specified a priori. On-the-fly pipeline parallelism allows the number of stages to vary from iteration to iteration and dependencies to be data dependent. We propose simple linguistics for specifying on-the-fly pipeline parallelism and describe a provably efficient scheduling algorithm, the Piper algorithm, which integrates pipeline parallelism into a work-stealing scheduler, allowing pipeline and fork-join parallelism to be arbitrarily nested. The Piper algorithm automatically throttles the parallelism, precluding "runaway" pipelines. Given a pipeline computation with T_1 work and T_∞ span (critical-path length), Piper executes the computation on P processors in T_P ≤ T_1/P + O(T_∞ + lg P) expected time. Piper also limits stack space, ensuring that it does not grow unboundedly with running time. We have incorporated on-the-fly pipeline parallelism into a Cilk-based work-stealing runtime system. Our prototype Cilk-P implementation exploits optimizations such as lazy enabling and dependency folding. We have ported the three PARSEC benchmarks that exhibit pipeline parallelism to run on Cilk-P. One of these, x264, cannot readily be executed by systems that support only construct-and-run pipeline parallelism. Benchmark results indicate that Cilk-P has low serial overhead and good scalability. On x264, for example, Cilk-P exhibits a speedup of 13.87 over its respective serial counterpart when running on 16 processors.
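The dependency structure described in the abstract can be made concrete with a small sketch: each iteration executes its stages in order, a stage may additionally have to wait for the same stage of the previous iteration before it runs, and the number of stages is allowed to differ from iteration to iteration. The Python sketch below illustrates only that structure under simplifying assumptions; it is not Cilk-P, does not use the paper's linguistics, and replaces the Piper work-stealing scheduler with one plain thread per iteration. The names stages_for and process are hypothetical stand-ins, and every stage is conservatively treated as one that waits on its predecessor iteration.

# Minimal sketch of the pipeline dependency structure (not Cilk-P and not the
# Piper scheduler): iteration i runs its stages in order, and stage j of
# iteration i additionally waits for stage j of iteration i-1 to finish.
# stages_for is a hypothetical stand-in for a data-dependent stage count.
import threading

NUM_ITERATIONS = 4

def stages_for(i):
    # Hypothetical: odd-numbered iterations happen to need one extra stage.
    return 3 + (i % 2)

finished = {}                  # (iteration, stage) -> Event set when that cell is done
finished_lock = threading.Lock()

def cell_event(i, j):
    # Create the event on first use so waiters and setters share one object.
    with finished_lock:
        return finished.setdefault((i, j), threading.Event())

def process(i, j):
    # Hypothetical per-stage work on the i-th element of the data stream.
    print(f"iteration {i}, stage {j}")

def run_iteration(i):
    for j in range(stages_for(i)):
        # Cross edge: wait for the same stage of the previous iteration,
        # if that iteration has a stage j at all.
        if i > 0 and j < stages_for(i - 1):
            cell_event(i - 1, j).wait()
        process(i, j)          # within-iteration order comes from the loop itself
        cell_event(i, j).set()

threads = [threading.Thread(target=run_iteration, args=(i,)) for i in range(NUM_ITERATIONS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

In contrast to this sketch, which simply spawns a thread per iteration, the Piper algorithm runs the pipeline inside a work-stealing scheduler and throttles it so the pipeline cannot "run away" and stack space stays bounded. The stated bound T_P ≤ T_1/P + O(T_∞ + lg P) implies near-linear speedup whenever the parallelism T_1/T_∞ substantially exceeds the processor count P.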

Full details

Bibliographic details
Main authors: Lee, I-Ting Angelina; Leiserson, Charles E.; Sukha, Jim; Zhang, Zhunping; Schardl, Tao Benjamin
Other authors: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
Material type: Article
Language: en_US
Published: Association for Computing Machinery (ACM), 2014
Links: http://hdl.handle.net/1721.1/90258
https://orcid.org/0000-0003-0198-3283
Other departments: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Sponsorship: National Science Foundation (U.S.) (Grant CNS-1017058); National Science Foundation (U.S.) (Grant CCF-1162148); National Science Foundation (U.S.). Graduate Research Fellowship
Date issued: 2013-07
Document type: Journal article (http://purl.org/eprint/type/JournalArticle)
ISBN: 9781450315722
Citation: I-Ting Angelina Lee, Charles E. Leiserson, Tao B. Schardl, Jim Sukha, and Zhunping Zhang. 2013. On-the-fly pipeline parallelism. In Proceedings of the twenty-fifth annual ACM symposium on Parallelism in algorithms and architectures (SPAA '13). ACM, New York, NY, USA, 140-151.
DOI: http://dx.doi.org/10.1145/2486159.2486174
Published in: Proceedings of the 25th ACM symposium on Parallelism in algorithms and architectures (SPAA '13)
License: Creative Commons Attribution-Noncommercial-Share Alike (http://creativecommons.org/licenses/by-nc-sa/4.0/)