Summary: | <p>The skill of weather forecasts has improved dramatically over the past 30 years. This improvement has depended in large part on developments in supercomputing, which have allowed models to increase in complexity and resolution with minimal technical effort. However, the nature of supercomputing is undergoing a significant change with the advent of massively parallel and heterogeneous architectures. This paradigm shift threatens the continued increase in forecast skill and prompts a re-evaluation of how Earth-system models are developed. In this thesis we explore the use of reduced-precision arithmetic to accelerate Earth-system models, specifically those used in data assimilation and numerical weather prediction.</p> <p>We first conduct data assimilation experiments with the Lorenz '96 toy atmospheric system, using the ensemble Kalman filter. We reduce precision in the forecast and analysis steps of the ensemble Kalman filter and measure how this affects the quality of the data assimilation product, the analysis. We find that the optimal choice of precision is intimately linked to the degree of uncertainty arising from noisy observations and infrequent assimilation. We also find that precision can be traded for additional ensemble members, and that this trade-off delivers a more accurate analysis than running a smaller ensemble at higher precision.</p> <p>We then consider the SPEEDY intermediate-complexity atmospheric general circulation model, again with the ensemble Kalman filter. In this case we find that, in a perfect-model setting, reducing precision in the forecast model leads to an unacceptable degradation of the data assimilation product. However, we then show that even a modest degree of model error can mask the errors introduced by reducing precision.</p> <p>We also consider reducing precision in the 4D-Var data assimilation scheme. We find that reducing precision increases the asymmetry between the tangent-linear and adjoint models, and that this slows the convergence of the minimisation scheme. However, with a standard reorthogonalisation procedure we are able to use single precision, and even lower levels of precision, successfully.</p> <p>Finally, we consider the use of reduced-precision arithmetic to accelerate the Legendre transforms of an operational global weather forecasting model. We find that, with careful attention to the algorithmic structure of the transforms and to the physical meaning of their components, we are able to use even half precision without affecting the forecast skill of the model.</p> <p>In conclusion, we find that the errors introduced by reducing precision are negligible compared with the errors inherent in the forecasting system. Reduced-precision arithmetic will be key to making optimal use of future supercomputers.</p>
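<p>To make the central technique concrete, the minimal Python sketch below shows one way of emulating reduced-precision arithmetic in software and applying it to an integration of the Lorenz '96 system. It is illustrative only: the function names, the forward-Euler time stepping, and the parameter values are assumptions made for this sketch, not details taken from the thesis, whose experiments embed the precision reduction within a full ensemble Kalman filter.</p>
<pre><code>
import numpy as np

def reduce_precision(x, significand_bits):
    """Round each element of x to a reduced number of significand bits.

    A simple software emulation of low-precision arithmetic: decompose
    x = m * 2**e with m in [0.5, 1), round m to significand_bits bits,
    and reassemble. Roughly, 24 bits mimics single precision, 11 bits
    mimics half precision, and 53 bits leaves double precision unchanged.
    """
    m, e = np.frexp(np.asarray(x, dtype=np.float64))
    scale = 2.0 ** significand_bits
    return np.ldexp(np.round(m * scale) / scale, e)

def lorenz96_step(x, dt=0.005, forcing=8.0, significand_bits=53):
    """One forward-Euler step of the Lorenz '96 system,
    dX_i/dt = (X_{i+1} - X_{i-2}) X_{i-1} - X_i + F,
    with every intermediate result rounded to the chosen precision."""
    rp = lambda v: reduce_precision(v, significand_bits)
    dxdt = rp((np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing)
    return rp(x + rp(dt * dxdt))

# Usage: compare a double-precision trajectory with a half-precision-like one.
x0 = 8.0 * np.ones(40)
x0[0] += 0.01                                         # perturb the equilibrium
x_hi, x_lo = x0.copy(), x0.copy()
for _ in range(400):
    x_hi = lorenz96_step(x_hi)                        # double precision
    x_lo = lorenz96_step(x_lo, significand_bits=11)   # roughly half precision
print(np.max(np.abs(x_hi - x_lo)))                    # divergence due to rounding
</code></pre>
<p>In the thesis experiments the question is not this raw divergence itself, but whether it remains small relative to the uncertainty already present in the assimilation system, for example from noisy observations and infrequent assimilation.</p>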
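<p>Similarly, the asymmetry at the heart of the 4D-Var result can be sketched with the standard adjoint (dot-product) test: for a tangent-linear operator M and its adjoint, the inner products (M dx, y) and (dx, M^T y) should agree to within the rounding error of the working precision, so lowering precision widens the gap. The random-matrix stand-in, the function names, and the element-wise rounding below are assumptions made for illustration and are not the tangent-linear and adjoint codes used in the thesis.</p>
<pre><code>
import numpy as np

def reduce_precision(x, significand_bits):
    # Same significand-rounding emulation as in the previous sketch,
    # repeated so that this example is self-contained.
    m, e = np.frexp(np.asarray(x, dtype=np.float64))
    scale = 2.0 ** significand_bits
    return np.ldexp(np.round(m * scale) / scale, e)

def matvec_reduced(A, v, significand_bits):
    """Matrix-vector product with every multiplication and running sum
    rounded, mimicking a tangent-linear or adjoint code run at low
    precision."""
    out = np.zeros(A.shape[0])
    for j in range(A.shape[1]):
        term = reduce_precision(A[:, j] * v[j], significand_bits)
        out = reduce_precision(out + term, significand_bits)
    return out

rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))      # stand-in tangent-linear operator
dx = rng.standard_normal(50)
y = rng.standard_normal(50)

for bits in (53, 24, 11):              # double, roughly single, roughly half
    lhs = np.dot(matvec_reduced(M, dx, bits), y)    # (M dx) . y
    rhs = np.dot(dx, matvec_reduced(M.T, y, bits))  # dx . (M^T y)
    print(bits, abs(lhs - rhs) / abs(lhs))          # relative asymmetry grows
</code></pre>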