Materials science optimization benchmark dataset for multi-objective, multi-fidelity optimization of hard-sphere packing simulations

In scientific disciplines, benchmarks play a vital role in driving progress. For a benchmark to be effective, it must closely resemble real-world tasks; if its difficulty or relevance is inadequate, it can impede progress in the field. Moreover, benchmarks should have low computational overhead to ensure accessibility and repeatability. The objective is to achieve a kind of "Turing test" by creating a surrogate model that is practically indistinguishable from the ground truth observations, at least within the dataset's explored boundaries. This objective necessitates a large quantity of data encompassing numerous features characteristic of industrially relevant chemistry and materials science optimization tasks: high levels of noise, multiple fidelities, multiple objectives, linear constraints, non-linear correlations, and failure regions. We performed 494,498 random hard-sphere packing simulations representing 206 CPU days' worth of computational overhead. Simulations required nine input parameters with linear constraints and two discrete fidelities, each with continuous fidelity parameters. The data were logged in a free-tier shared MongoDB Atlas database, producing two core tabular datasets: a failure probability dataset and a regression dataset. The failure probability dataset maps unique input parameter sets to the estimated probability that the simulation will fail. The regression dataset maps input parameter sets (including repeats) to particle packing fractions and computational runtimes for each of the two steps. These two datasets were used to create a surrogate model that is as close as possible to running the actual simulations by incorporating simulation failure and heteroskedastic noise. In the regression dataset, percentile ranks were calculated for each group of identical parameter sets to account for heteroskedastic noise, thereby ensuring reliable and accurate data. This differs from the conventional approach, which imposes a priori assumptions such as Gaussian noise by specifying a mean and standard deviation. This technique can be extended to other benchmark datasets to bridge the gap between optimization benchmarks with low computational overhead and the complex optimization scenarios encountered in the real world.
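For concreteness, the bookkeeping described above can be sketched in a few lines of pandas. The snippet below is a minimal illustration, not the authors' released code: the column names (param_id, packing_fraction, runtime_s) and the definition of a failed run (a missing packing fraction) are assumptions made here for the example. It derives a failure-probability table from repeated runs of identical parameter sets, assigns each successful repeat its within-group percentile rank rather than fitting a Gaussian noise model, and shows one way a surrogate could replay both failures and heteroskedastic noise by resampling the empirical results.

import numpy as np
import pandas as pd

# Hypothetical table of repeated runs of identical parameter sets.
# Column names are placeholders; consult the published dataset for the schema.
df = pd.DataFrame({
    "param_id": [0, 0, 0, 1, 1, 1],              # identifier for a unique parameter set
    "packing_fraction": [0.61, 0.63, 0.60, 0.58, np.nan, 0.59],
    "runtime_s": [12.1, 11.8, 12.5, 30.2, 2.0, 29.7],
})

# Assumption for this sketch: a run counts as failed if no packing fraction was produced.
df["failed"] = df["packing_fraction"].isna()

# Failure-probability dataset: fraction of failed repeats per unique parameter set.
failure_prob = df.groupby("param_id")["failed"].mean().rename("failure_probability")

# Regression dataset: percentile rank of each successful repeat within its group,
# capturing heteroskedastic noise without assuming a Gaussian form.
ok = df[~df["failed"]].copy()
ok["packing_fraction_pct"] = ok.groupby("param_id")["packing_fraction"].rank(pct=True)

# Surrogate-style draw for one parameter set: sample failure first, then a noisy
# packing fraction from the empirical quantiles of the observed repeats.
rng = np.random.default_rng(0)
pid = 1
if rng.random() < failure_prob.loc[pid]:
    result = None  # simulated failure
else:
    q = rng.random()
    result = ok.loc[ok["param_id"] == pid, "packing_fraction"].quantile(q)

print(failure_prob)
print(ok)
print("surrogate draw for parameter set", pid, "->", result)

Sampling a uniform percentile and inverting it through a group's empirical quantiles preserves the observed noise shape for each parameter set, which is the practical benefit of storing percentile ranks instead of a single mean and standard deviation.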

Bibliographic Details
Main Authors: Sterling G. Baird, Ramsey Issa, Taylor D. Sparks
Author Affiliations: Materials Science & Engineering, 122 S. Central Campus Drive, #304, Salt Lake City, UT 84112-0056, United States (Baird, corresponding author; Issa; Sparks); Chemistry Department, University of Liverpool, Liverpool, L7 3NY, United Kingdom (Sparks)
Format: Article
Language: English
Published: Elsevier, 2023-10-01
Series: Data in Brief, Vol. 50, Article 109487
ISSN: 2352-3409
Collection: Directory of Open Access Journals (DOAJ), record doaj.art-9f35dd7862024c88957c5b6194967c41
Subjects: Adaptive design; Physics-based; Lubachevsky–Stillinger; Force-biased algorithms; Particle packing; Packing generation
Online Access: http://www.sciencedirect.com/science/article/pii/S2352340923005875