Multilevel-multiscale ensembles for uncertainty quantification with application to geophysical models
Stefanescu, Elena Ramona
Disaster response managers routinely use numerical modeling to assist in hazard response and mitigation, making the reliability and accuracy of these models crucial to the decision-making process. Dealing with uncertainties and multiple scenarios is an important challenge for decision making and is usually addressed by using ensembles representative of the uncertainty. In geophysical mass flow problems (e.g. landslides, volcanic debris flows), many flow characteristics, such as material properties, the size or location of the failing mass, or the terrain over which the flow occurs at every point in the domain, are difficult, if not impossible, to characterize because of their large dimensionality. The goal of the present work is to characterize the uncertainties introduced by large-dimensional inputs such as terrain and wind fields, and to construct hazard maps in a computationally efficient manner. To do so, this work first explores existing methods based on standard statistical techniques and then introduces a novel methodology based on multilevel and multiscale approximations. The novel methodology for constructing digital terrain ensembles, and thus characterizing the uncertainty in Digital Elevation Models (DEMs), is illustrated by propagating the ensembles through a numerical model of dry block-and-ash flows over natural terrain (TITAN2D). The basic approach is also applied to ensembles of a model of volcanic ash transport (PUFF) used in the construction of a hazard map. There are large uncertainties associated with the construction of DEMs. For surface elevation, data at any given pixel in a DEM tends to be similar to data from nearby pixels. If multiple DEMs of the same location, obtained through different techniques, are available, then error maps can be constructed. Most error maps are spatially autocorrelated, and random fields can be used to represent spatially autocorrelated error.
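The idea of representing spatially autocorrelated DEM error by a random field can be sketched as follows. This is a minimal illustration, not the dissertation's implementation: the grid size, error standard deviation, correlation length, and the exponential covariance form are all illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: perturb a toy DEM with a spatially autocorrelated
# Gaussian random field built from an exponential covariance over pixel
# distances. All parameter values here are assumptions for demonstration.
rng = np.random.default_rng(0)
n = 16                                  # pixels per side of a toy DEM
xs, ys = np.meshgrid(np.arange(n), np.arange(n))
pts = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)

# Pairwise distances and exponential covariance C(d) = sigma^2 * exp(-d / L)
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
sigma, corr_len = 2.0, 4.0              # error std (m) and correlation length (px)
C = sigma**2 * np.exp(-d / corr_len)

# Sample one correlated error field via Cholesky factorization of C
L = np.linalg.cholesky(C + 1e-8 * np.eye(n * n))
error_field = (L @ rng.standard_normal(n * n)).reshape(n, n)

dem = 100.0 + 0.5 * xs                  # toy terrain: a gentle slope
dem_realization = dem + error_field     # one member of a DEM ensemble
print(error_field.std())
```

Repeating the sampling step with fresh Gaussian draws yields an ensemble of terrain realizations consistent with the assumed error covariance.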
We show that, using graph-based algorithms and low-rank approximations of the adjacency matrix, we obtain representative data points in the space of interest. This approach can be used successfully to characterize the uncertainty in DEMs and to create accurate and efficient hazard maps, with minimal changes to the algorithm or implementation. Connecting data points according to local similarities, in order to obtain reliable scale-dependent global properties arising from those similarities, requires algorithms that find coherent regions displaying similar features. On the coarsened version of the original graph, multilevel hierarchies can be formed, which allows rapid calculation of low-rank approximations. It is further shown that the resulting hierarchies can be used to accelerate the DEM ensemble creation process. The benefits of using randomized projection algorithms for computing low-rank matrix approximations are their simple implementation, their applicability to large-scale problems, and the existence of theoretical bounds on the approximation errors. The process required to construct a simulation-based probabilistic hazard map for volcanoes often leads to large amounts of data and intensive computational cost. Here, we present a novel approach, Multilevel Approximation (MLA), for creating a fast surrogate of the simulator that improves the speed of hazard map creation. Multilevel-multiscale methods are successfully applied to develop a complete probabilistic forecast for the ash concentration at a given time and location. Randomized low-rank approximation methods are used to efficiently find a sparse representation in the space of interest. We represent both the parameter space (sample points at which the numerical model is evaluated) and the physical space (ash concentration covering a parcel) by a weighted graph. We then generate a sequence of approximations of the given function (e.g. 
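A randomized projection for low-rank approximation can be sketched in a few lines. This is a generic two-stage range-finder sketch under stated assumptions (a test matrix of exact rank 10, a target rank with small oversampling), not the dissertation's code:

```python
import numpy as np

# Hedged sketch of a randomized projection for low-rank approximation.
# The matrix A, target rank k, and oversampling are illustrative choices.
rng = np.random.default_rng(1)
m, n_cols, k = 200, 150, 10

# Build a test matrix with exact rank k
U = np.linalg.qr(rng.standard_normal((m, k)))[0]
V = np.linalg.qr(rng.standard_normal((n_cols, k)))[0]
A = U @ np.diag(np.linspace(1.0, 0.01, k)) @ V.T

# Stage 1: sketch the range of A with a Gaussian test matrix (+ oversampling)
Omega = rng.standard_normal((n_cols, k + 5))
Q = np.linalg.qr(A @ Omega)[0]          # orthonormal basis for the sampled range

# Stage 2: project onto that basis and form the low-rank approximation
B = Q.T @ A                             # small (k+5) x n_cols matrix
A_approx = Q @ B

err = np.linalg.norm(A - A_approx) / np.linalg.norm(A)
print(err)
```

Because the sketch only requires matrix-vector products with A and a QR factorization of a thin matrix, it scales to large problems, which is the appeal noted above.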
ash concentration) on the data, as well as their extensions to any newly arrived data point. The subsampling is done by an interpolative decomposition of the associated Gaussian kernel matrix at each scale of the hierarchical procedure. The results obtained show significant computational advantages over standard Monte Carlo sampling while preserving the output quality. Compared to other weighted sampling methods, the cost of computation is similar, but this approach does not suffer the disadvantage of having to compute weights, nor does it fail if a sample is lost.
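The subsampling step described above can be illustrated with SciPy's interpolative decomposition routines. This is a small sketch, not the dissertation's implementation: the sample points, kernel bandwidth, and skeleton rank are made-up values.

```python
import numpy as np
from scipy.linalg import interpolative as sli

# Illustrative sketch: an interpolative decomposition (ID) of a Gaussian
# kernel matrix selects a representative "skeleton" subset of sample points.
rng = np.random.default_rng(2)
X = rng.uniform(size=(100, 2))          # 100 sample points in 2-D (assumed)

# Gaussian kernel matrix K_ij = exp(-||x_i - x_j||^2 / (2 h^2))
h = 0.2                                 # bandwidth (assumed)
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-d2 / (2 * h**2))

# Rank-k ID: idx[:k] indexes the skeleton columns (representative points);
# proj expresses the remaining columns in terms of the skeleton.
k = 20
idx, proj = sli.interp_decomp(K, k)
skeleton = X[idx[:k]]                   # representative subset of the data

# Reconstruct K from its skeleton columns to check the approximation quality
K_approx = sli.reconstruct_matrix_from_id(K[:, idx[:k]], idx, proj)
rel_err = np.linalg.norm(K - K_approx) / np.linalg.norm(K)
print(rel_err)
```

Unlike a general low-rank factorization, the ID's factors are actual columns of the kernel matrix, so the selected indices correspond directly to physical sample points, which is what makes it a natural subsampling tool.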