Advancing throughput of HEP analysis work-flows using caching concepts

High throughput and short turnaround cycles are core requirements for efficient processing of data-intense end-user analyses in High Energy Physics (HEP). Together with the tremendously increasing amount of data to be processed, this leads to enormous challenges for HEP storage systems, networks and the data distribution to computing resources for end-user analyses. Bringing data close to the computing resource is a very promising approach to solve throughput limitations and improve the overall performance. However, achieving data locality by placing multiple conventional caches inside a distributed computing infrastructure leads to redundant data placement and inefficient usage of the limited cache volume. The solution is a coordinated placement of critical data on computing resources, which enables matching each process of an analysis work-flow to its most suitable worker node in terms of data locality and, thus, reduces the overall processing time. This coordinated distributed caching concept was realized at KIT by developing the coordination service NaviX, which connects an XRootD cache proxy infrastructure with an HTCondor batch system. We give an overview of the coordinated distributed caching concept and of experiences collected on a prototype system based on NaviX.

Bibliographic Details
Main Authors: Caspart Rene, Fischer Max, Giffels Manuel, Heidecker Christoph, Kühn Eileen, Quast Günter, Sauter Martin, Schnepf Matthias J., von Cube R. Florian
Format: Article
Language: English
Published: EDP Sciences, 2019-01-01
Series: EPJ Web of Conferences
Online Access: https://www.epj-conferences.org/articles/epjconf/pdf/2019/19/epjconf_chep2018_04007.pdf
ISSN: 2100-014X
DOI: 10.1051/epjconf/201921404007