Tailbench: a benchmark suite and evaluation methodology for latency-critical applications

Latency-critical applications, common in datacenters, must achieve small and predictable tail (e.g., 95th or 99th percentile) latencies. Their strict performance requirements limit utilization and efficiency in current datacenters. These problems have sparked research in hardware and software techniques that target tail latency. However, research in this area is hampered by the lack of a comprehensive suite of latency-critical benchmarks. We present TailBench, a benchmark suite and evaluation methodology that makes latency-critical workloads as easy to run and characterize as conventional, throughput-oriented ones. TailBench includes eight applications that span a wide range of latency requirements and domains, and a harness that implements a robust and statistically sound load-testing methodology. The modular design of the TailBench harness facilitates multiple load-testing scenarios, ranging from multi-node configurations that capture network overheads, to simplified single-node configurations that allow measuring tail latency in simulation. Validation results show that the simplified configurations are accurate for most applications. This flexibility enables rapid prototyping of hardware and software techniques for latency-critical workloads.
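The tail-latency metric named in the abstract above (the 95th or 99th percentile of per-request latency) can be made concrete with a short sketch. This example is illustrative only and is not part of TailBench; the function name and the nearest-rank method are assumptions chosen for simplicity.

```python
# Illustrative sketch (not from the TailBench harness): computing a tail
# latency percentile from a list of per-request latencies, in milliseconds,
# using the nearest-rank method.

def tail_latency(latencies_ms, percentile):
    """Return the given percentile (e.g., 95 or 99) of the latency samples:
    the smallest sample such that at least `percentile`% of all samples
    are less than or equal to it."""
    if not latencies_ms:
        raise ValueError("no latency samples")
    ordered = sorted(latencies_ms)
    # Nearest rank = ceil(len * percentile / 100), computed without imports.
    rank = max(1, -(-len(ordered) * percentile // 100))
    return ordered[rank - 1]

# Example: 100 requests, 95 fast ones plus 5 slow stragglers. The stragglers
# barely move the median, but they dominate the 99th-percentile latency.
samples = [1.0] * 95 + [10.0, 20.0, 30.0, 40.0, 50.0]
p95 = tail_latency(samples, 95)   # -> 1.0
p99 = tail_latency(samples, 99)   # -> 40.0
```

The example shows why the abstract emphasizes tails rather than averages: a handful of slow requests leaves the 95th percentile untouched while inflating the 99th percentile by an order of magnitude.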

Bibliographic Details
Main Authors: Kasture, Harshad, Sanchez, Daniel
Other Authors: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
Format: Article
Language: en_US
Published: Institute of Electrical and Electronics Engineers (IEEE) 2017
Online Access: http://hdl.handle.net/1721.1/112803
https://orcid.org/0000-0002-3964-9064
Other Authors: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Sponsors: National Science Foundation (U.S.) (CCF-1318384); Qatar Computing Research Institute; Google (Firm) (Google Research Award)
Published in: 2016 IEEE International Symposium on Workload Characterization (IISWC), IEEE, 2016, pp. 1–10
ISBN: 978-1-5090-3896-1; 978-1-5090-3895-4; 978-1-5090-3897-8
DOI: http://dx.doi.org/10.1109/IISWC.2016.7581261
Citation: Kasture, Harshad, and Daniel Sanchez. “Tailbench: a Benchmark Suite and Evaluation Methodology for Latency-Critical Applications.” 2016 IEEE International Symposium on Workload Characterization (IISWC) (September 2016). IEEE, 2016, pp. 1–10.
Rights: Creative Commons Attribution-Noncommercial-Share Alike (http://creativecommons.org/licenses/by-nc-sa/4.0/)