Technical note: A prototype transparent-middle-layer data management and analysis infrastructure for cosmogenic-nuclide exposure dating

Bibliographic Details
Main Author: G. Balco
Format: Article
Language: English
Published: Copernicus Publications, 2020-07-01
Series: Geochronology
Online Access: https://gchron.copernicus.org/articles/2/169/2020/gchron-2-169-2020.pdf
Description

Geologic dating methods for the most part do not directly measure ages. Instead, interpreting a geochemical observation as a geologically useful parameter – an age or a rate – requires an interpretive middle layer of calculations and supporting data sets. These are the subject of active research and evolve rapidly, so any synoptic analysis requires repeated recalculation of large numbers of ages from a growing data set of raw observations, using a constantly improving calculation method. Many important applications of geochronology involve regional or global analyses of large and growing data sets, so this characteristic is an obstacle to progress in these applications. This paper describes the ICE-D (Informal Cosmogenic-Nuclide Exposure-age Database) database project, a prototype computational infrastructure for dealing with this obstacle in one geochronological application – cosmogenic-nuclide exposure dating – that aims to enable visualization or analysis of diverse data sets by making middle-layer calculations dynamic and transparent to the user. An important aspect of this concept is that it is designed as a forward-looking research tool rather than a backward-looking archive: only observational data (which do not become obsolete) are stored, and derived data (which become obsolete as soon as the middle-layer calculations are improved) are not stored but instead calculated dynamically at the time data are needed by an analysis application. This minimizes “lock-in” effects associated with archiving derived results subject to rapid obsolescence and allows assimilation of both new observational data and improvements to middle-layer calculations without creating additional overhead at the level of the analysis application.
ISSN: 2628-3719
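
To make the "transparent middle layer" concept from the summary concrete, the following is a minimal, hypothetical Python sketch, not the actual ICE-D implementation. All names, the crude production-rate function, and the sample values are illustrative assumptions. The point it demonstrates is the architecture the paper describes: only raw observations are stored, and derived exposure ages are computed on demand by a swappable middle-layer calculation, so improved methods never require re-archiving derived results.

```python
from dataclasses import dataclass
from typing import Callable
import math

@dataclass(frozen=True)
class Observation:
    """Raw, non-obsolescing measurement for one sample (field names hypothetical)."""
    sample_id: str
    latitude: float        # degrees
    elevation: float       # meters above sea level
    concentration: float   # measured nuclide concentration, atoms/g (e.g., Be-10)
    decay_constant: float  # radioactive decay constant, 1/yr

def exposure_age(obs: Observation,
                 production_rate: Callable[[Observation], float]) -> float:
    """Derive an exposure age (yr) dynamically from a stored observation.

    Assumes steady production P (atoms/g/yr) and no erosion, so
        N = (P / lambda) * (1 - exp(-lambda * t)),
    solved here for t. The production_rate callable stands in for the
    middle-layer scaling scheme, which is expected to change over time.
    """
    p = production_rate(obs)
    lam = obs.decay_constant
    return -math.log(1.0 - obs.concentration * lam / p) / lam

def crude_production_rate(obs: Observation) -> float:
    """Placeholder scaling scheme (illustrative only); a real one would use
    a calibrated reference production rate and latitude/elevation scaling."""
    return 4.0 * (1.0 + obs.elevation / 1000.0)  # atoms/g/yr

if __name__ == "__main__":
    obs = Observation("SAMPLE-01", latitude=-77.5, elevation=1200.0,
                      concentration=150000.0, decay_constant=4.99e-7)
    # The age is recomputed whenever an analysis needs it, using whatever
    # middle-layer function is current; no derived value is ever archived.
    print(f"{obs.sample_id}: {exposure_age(obs, crude_production_rate):.0f} yr")
```

In this sketch, replacing crude_production_rate with an improved scaling scheme changes every age seen by an analysis application immediately, with no change to the stored observations, which is the "no lock-in" behavior the summary describes.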