Computational Math

Hybrid collocation-perturbation

Consider a linear elliptic PDE defined over a stochastic geometry that is a function of N random variables. In many applications, quantifying the uncertainty propagated to a quantity of interest (QoI) is an important problem. The random domain is split into large and small variation contributions. The large variations are approximated by applying a sparse grid stochastic collocation method. The small variations are approximated with a stochastic collocation-perturbation method and added as a correction term to the large variation sparse grid component. Convergence rates for the variance of the QoI are derived and compared to those obtained in numerical experiments. Our approach significantly reduces the dimensionality of the stochastic problem, making it suitable for high-dimensional problems. The computational cost of the correction term increases at most quadratically with respect to the number of dimensions of the small variations. Moreover, in the case that the small and large variations are independent, the cost increases linearly.
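The splitting above can be illustrated on a toy problem. The sketch below is a minimal, hypothetical example (the QoI, the variable names `y` and `z`, and the quadrature sizes are all illustrative assumptions, not the paper's model): the "large" variable is handled by collocation, and the "small" variable by a second-order perturbation correction about its mean.

```python
import numpy as np

# Hypothetical QoI depending on a "large" variable y and a "small"
# variable z, both uniform on [-1, 1] (illustrative assumption).
def qoi(y, z):
    return np.exp(0.5 * y) * (1.0 + 0.1 * z + 0.05 * z**2)

# Gauss-Legendre collocation in y, normalized to a probability measure.
nodes, weights = np.polynomial.legendre.leggauss(5)
weights = weights / 2.0

# Large-variation term: collocation with the small variable frozen at 0.
mean_large = sum(w * qoi(y, 0.0) for y, w in zip(nodes, weights))

# Perturbation correction: second-order Taylor expansion in z about 0,
# with d^2 Q / dz^2 estimated by a central finite difference per node.
h = 1e-3
var_z = 1.0 / 3.0  # variance of a uniform variable on [-1, 1]
correction = sum(
    w * 0.5 * (qoi(y, h) - 2.0 * qoi(y, 0.0) + qoi(y, -h)) / h**2 * var_z
    for y, w in zip(nodes, weights)
)

mean_estimate = mean_large + correction
```

Because the toy QoI is quadratic in `z`, the second-order correction is exact here; in general it contributes the leading term of the small-variation expansion while the collocation grid only spans the large variations.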

Stochastic collocation approach

In this article we analyze a linear parabolic partial differential equation with a stochastic domain deformation. In particular, we concentrate on the problem of numerically approximating the statistical moments of a given quantity of interest (QoI). The geometry is assumed to be random. The parabolic problem is remapped to a fixed deterministic domain with random coefficients and shown to admit an extension on a well-defined region embedded in the complex hyperplane. The stochastic moments of the QoI are computed by employing a collocation method in conjunction with an isotropic Smolyak sparse grid. Theoretical sub-exponential convergence rates as a function of the number of collocation interpolation knots are derived. Numerical experiments are performed, and they confirm the theoretical error estimates.
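An isotropic Smolyak sparse grid can be written via the standard combination technique, as in the minimal sketch below. This is a generic illustration, not the paper's implementation: it uses non-nested Gauss-Legendre rules (the names `smolyak_mean`, `gauss_1d`, and the level-to-points map are assumptions) to estimate the first moment of a QoI over a d-dimensional uniform parameter box.

```python
import numpy as np
from itertools import product
from math import comb

def gauss_1d(m):
    # Gauss-Legendre rule on [-1, 1], normalized to a probability measure
    x, w = np.polynomial.legendre.leggauss(m)
    return x, w / 2.0

def smolyak_mean(f, d, q):
    """Isotropic Smolyak (combination-technique) estimate of E[f] over
    [-1, 1]^d with uniform measure; the level-i 1D rule uses i points."""
    total = 0.0
    for idx in product(range(1, q + 1), repeat=d):
        s = sum(idx)
        # Combination technique: only levels with q-d+1 <= |i| <= q enter
        if q - d + 1 <= s <= q:
            coeff = (-1) ** (q - s) * comb(d - 1, q - s)
            rules = [gauss_1d(m) for m in idx]
            for combo in product(*(range(m) for m in idx)):
                pt = np.array([rules[k][0][combo[k]] for k in range(d)])
                w = np.prod([rules[k][1][combo[k]] for k in range(d)])
                total += coeff * w * f(pt)
    return total
```

For example, `smolyak_mean(lambda x: x[0]**2 * x[1]**2, 2, 5)` recovers the exact moment 1/9, while using far fewer points than the full tensor grid as the dimension grows.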

Massive Medical Data Records

It has long been recognized that many datasets contain significant levels of missing numerical data. Addressing this problem is a potentially critical prerequisite for applying machine learning methods to such datasets; however, it is a challenging task. In this paper, we apply a recently developed multi-level stochastic optimization approach to the problem of imputation in massive medical records. The approach is based on computational applied mathematics techniques and is highly accurate. In particular, for the Best Linear Unbiased Predictor (BLUP), this multi-level formulation is exact, and it is significantly faster and more numerically stable. This permits practical application of Kriging methods to data imputation problems for massive datasets. We test this approach on data from the National Inpatient Sample (NIS) of the Healthcare Cost and Utilization Project (HCUP), Agency for Healthcare Research and Quality. Numerical results show that the multi-level method significantly outperforms current approaches and is numerically robust. It has superior accuracy compared with the methods recommended in the recent HCUP report, with benchmark tests showing up to 75% reductions in error. Furthermore, the results are also superior to recent state-of-the-art methods such as discriminative deep learning.
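To make the BLUP/Kriging connection concrete, here is a minimal simple-kriging imputation sketch under an assumed squared-exponential covariance. Everything here (the function name, the length scale, the 1D coordinates) is illustrative; it is not the paper's multi-level formulation, only the baseline predictor that the multi-level method computes exactly and at scale.

```python
import numpy as np

def kriging_impute(x_obs, y_obs, x_miss, length_scale=1.0, nugget=1e-8):
    """Simple kriging (BLUP with known zero mean): impute values at
    x_miss from observations (x_obs, y_obs) in 1D."""
    def cov(a, b):
        # Squared-exponential (Gaussian) covariance, an assumed model
        d = a[:, None] - b[None, :]
        return np.exp(-(d ** 2) / (2.0 * length_scale ** 2))

    # Covariance among observed sites; the nugget regularizes the solve
    K = cov(x_obs, x_obs) + nugget * np.eye(len(x_obs))
    k = cov(x_miss, x_obs)  # cross-covariance missing-to-observed
    # BLUP: conditional mean of the Gaussian field at the missing sites
    return k @ np.linalg.solve(K, y_obs)
```

The direct solve above costs O(n^3) in the number of observations, which is exactly the bottleneck that motivates a multi-level reformulation for massive records.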

Wavelet Matrix Operations and Quantum Transforms

The currently studied version of the quantum wavelet transform implements the Mallat pyramid algorithm, calculating wavelet and scaling coefficients at lower resolutions from higher ones via quantum computations. However, the pyramid algorithm cannot replace wavelet transform algorithms, which obtain wavelet coefficients directly from signals. The barrier to implementing quantum versions of wavelet transforms has been the fact that the mapping from sampled signals to wavelet coefficients does not have a canonical matrix representation. To solve this problem, we introduce new inner products and norms into the sequence space, based on wavelet sampling theory. We then show that wavelet transform algorithms using inner product operations can be implemented in infinite matrix forms, directly mapping discrete function samples to wavelet coefficients. These infinite matrix operators are then converted into finite forms for computational implementation. Thus, via singular value decompositions of these finite matrices, our work allows implementation of the standard wavelet transform with a quantum circuit. Finally, we validate these wavelet matrix algorithms on multiresolution analyses (MRAs) involving spline and Coiflet wavelets, illustrating some of our theorems.
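A finite matrix form of a one-level wavelet analysis can be sketched as follows. This toy uses the Haar filters purely because they are the shortest (the paper's examples use spline and Coiflet wavelets); the matrix maps samples directly to scaling and wavelet coefficients, and its SVD exposes the unitarity needed for a quantum circuit.

```python
import numpy as np

n = 8
h = np.array([1.0, 1.0]) / np.sqrt(2.0)   # Haar scaling (low-pass) filter
g = np.array([1.0, -1.0]) / np.sqrt(2.0)  # Haar wavelet (high-pass) filter

# One-level analysis matrix: top half produces scaling coefficients,
# bottom half produces wavelet coefficients.
W = np.zeros((n, n))
for k in range(n // 2):
    W[k, 2 * k : 2 * k + 2] = h
    W[n // 2 + k, 2 * k : 2 * k + 2] = g

x = np.arange(n, dtype=float)
coeffs = W @ x  # maps samples directly to (scaling, wavelet) coefficients

# The Haar analysis matrix is orthogonal: all singular values equal 1,
# so the transform is exactly invertible by its transpose.
s = np.linalg.svd(W, compute_uv=False)
```

For longer filters (splines, Coiflets) the finite truncation is generally not exactly orthogonal at the boundaries, which is where the SVD-based treatment in the text becomes essential rather than a formality.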

CONTACT

Stochastic Machine Learning Group

© 2024 – 2025, Stochastic Machine Learning Group
