Bayesian Inversion and Adaptive Low-Rank Tensor Decomposition
AGUQ Dortmund, 12.3.–14.3.2018
Martin Eigel, Manuel Marschall
Research Group 4 "Nonlinear Optimization and Inverse Problems", Head: Prof. Dr. Dietmar Hömberg

We present a Bayesian inversion method with functional representations of all quantities. The posterior density is given in terms of a polynomial basis, based on an adaptive stochastic Galerkin discretization. The sampling-free approach, using tensor trains, alleviates the curse of dimensionality by hierarchical subspace approximations of the respective low-rank manifolds. All computations are adjusted adaptively based on a posteriori error estimators or indicators. Convergence of the posterior can be shown with respect to the discretization parameters.

Explicit and parametric Bayesian inversion
- applies to parameter identification and upscaling
- parametrized model operator $\Xi \ni y \mapsto G(y) = \sum_{\mu \in \Lambda} g_\mu(x) P_\mu(y)$ with coefficients $g_\mu = \langle G, P_\mu \rangle$
- finite measurements $\delta \in \mathbb{R}^K$ of an indirect quantity, observed through a linear operator $\mathcal{O}$
- prior measure $\pi_0$ on the parameters $y$ and noise measure $\mathcal{N}(0, \Gamma)$ on the measurement error $\eta$

Statistical inverse problem: find $y \in \Xi$ from $\delta$ such that
$\delta = (\mathcal{O} \circ G)(y) + \eta, \qquad \eta \sim \mathcal{N}(0, \Gamma).$
Bayes' theorem yields existence of the posterior measure $\pi^\delta$ in functional representation [1]:
$\frac{\mathrm{d}\pi^\delta}{\mathrm{d}\pi_0}(y) = Z^{-1} \exp\left( -\tfrac{1}{2} \left\langle \delta - (\mathcal{O} \circ G)(y),\, \Gamma^{-1}\big(\delta - (\mathcal{O} \circ G)(y)\big) \right\rangle \right) = \sum_{\mu \in \Lambda} \alpha_\mu P_\mu(y),$
where $Z$ is the normalization constant.
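For intuition, the unnormalized density defined by this Gaussian potential can be evaluated directly. The following sketch uses an illustrative composed linear map $\mathcal{O} \circ G$ and noise covariance; all sizes and values are invented for the example and are not taken from the poster.

```python
import numpy as np

# Toy linear-Gaussian setup; operators and dimensions are illustrative only.
rng = np.random.default_rng(42)
K, P = 4, 2                                # number of measurements, parameter dimension
O_G = rng.standard_normal((K, P))          # composed linear observation/forward map O ∘ G
Gamma = 0.05 * np.eye(K)                   # noise covariance
Gamma_inv = np.linalg.inv(Gamma)

y_true = np.array([0.3, -0.7])
delta = O_G @ y_true                       # noise-free synthetic measurements

def potential(y):
    """Bayesian potential Phi(y) = 1/2 <delta - OG(y), Gamma^{-1} (delta - OG(y))>."""
    r = delta - O_G @ y
    return 0.5 * r @ Gamma_inv @ r

def unnormalized_density(y):
    """Radon-Nikodym derivative d pi^delta / d pi_0, up to the constant Z."""
    return np.exp(-potential(y))
```

Since the synthetic data here are noise-free, the potential vanishes at `y_true`, so the unnormalized density attains its maximum value 1 there and decays away from it.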
Model reduction: Tensor formats
- high-dimensional problem, curse of dimensionality: storage $\mathcal{O}(n^M)$
- HT/TT formats allow for polynomial complexity $\mathcal{O}(r^2 M n)$ via
$U[x_1, \dots, x_M] = \sum_{k} \prod_{m=1}^{M} U_m[k_{m-1}, x_m, k_m]$
- features: separation of variables and closedness of the rank-$r$ manifold (+); only indirect access to tensor entries through the hierarchical basis (−)
- creation by tensor recovery/reconstruction [2] or cross-approximation

Figure: dimension partition tree (nodes $B_{\{1,\dots,5\}}, B_{\{1,2,3\}}, B_{\{4,5\}}, B_{\{1,2\}}$; leaves $U_1, \dots, U_5$)
Figure: schematic tensor train (TT) of order 5 with cores $U_1, \dots, U_5$, mode dimensions $n_1, \dots, n_5$ and ranks $r_1, \dots, r_4$

Adaptive Stochastic Galerkin FEM using Tensor Trains
- random coefficient, parametrized and represented in the functional/extended tensor train format
$a(x, y) = \sum_{k} \sum_{i=1}^{N_a} A_0[i, k_0]\, \varphi_i(x) \prod_{m=1}^{M} \sum_{\mu_m=1}^{n_m} A_m[k_{m-1}, \mu_m, k_m]\, P_{\mu_m}(y_m),$
with $(P_{\mu_m})_{\mu_m}$ a polynomial basis
- weak PDE formulation obtained with tensor train operators; the system is solved by a preconditioned ALS:
$A(u_N, v) := \mathbb{E}\left[ \langle u_N, v \rangle_a \right] = \mathbb{E}\left[ \langle f, v \rangle \right] \quad \forall v \in \mathcal{V}_N,$ with Galerkin solution $u_N$
- a posteriori adaptivity in the physical mesh, the stochastic polynomial space and the choice of the rank $r$, see [3]
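The storage gain of the TT format is easy to see in code: a full order-$M$ tensor needs $n^M$ entries, while its TT cores need only $\mathcal{O}(r^2 M n)$. A minimal sketch of the TT contraction formula above, with random illustrative cores (not the Galerkin coefficients of the poster):

```python
import numpy as np

def tt_entry(cores, idx):
    """Evaluate U[x_1,...,x_M] = sum_k prod_m U_m[k_{m-1}, x_m, k_m] at one multi-index."""
    v = cores[0][:, idx[0], :]             # boundary rank 1: shape (1, r_1)
    for m in range(1, len(cores)):
        v = v @ cores[m][:, idx[m], :]     # contract over the rank index k_m
    return v[0, 0]

def tt_full(cores):
    """Reconstruct the full tensor (for validation on small examples only)."""
    T = cores[0]
    for c in cores[1:]:
        T = np.tensordot(T, c, axes=([T.ndim - 1], [0]))
    return T[0, ..., 0]                    # strip the two boundary rank-1 axes

# order M = 4, mode size n = 3, TT rank r = 2
rng = np.random.default_rng(0)
M, n, r = 4, 3, 2
ranks = [1, r, r, r, 1]
cores = [rng.standard_normal((ranks[m], n, ranks[m + 1])) for m in range(M)]

storage_tt = sum(c.size for c in cores)    # O(r^2 * M * n) entries
storage_full = n ** M                      # curse of dimensionality
```

The "indirect access" drawback from the feature list is visible here: reading a single entry requires a sweep over all cores rather than an array lookup.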
The energy error is controlled by the a posteriori estimate
$\|u - w_N\|_A^2 \lesssim \mathrm{est}_{\mathrm{all}}(w_N)^2 := \big( \mathrm{est}_{\mathrm{det}}(w_N) + \mathrm{est}_{\mathrm{param}}(w_N) + \mathrm{est}_{\mathrm{disc}}(w_N) \big)^2.$

Figure: ALS micro-iteration error with and without preconditioner
Figure: realisation of the coefficient (forward Darcy flow model)
Figure: realisation of the solution; measurement fit to the model
Figure: marginal density estimation of a parameter for various measurements
Figure: adaptively assigned polynomial degree and TT rank per stochastic dimension
Figure: convergence studies over the number of degrees of freedom: stochastic dimension M, maximal rank r and energy error/estimate for problems p1–p3

Sampling-free Bayesian inversion using Tensor Trains
- the explicit forward solver yields a surrogate model in TT format
$G_{N,M}(x, y) = \sum_{i=1}^{N} \sum_{\mu \in \Lambda_M} U[i, \mu]\, \varphi_i(x) P_\mu(y)$
- approximation of the Bayesian potential in closed TT form by exact and anisotropic interpolation

Figure: rank dependency of the mean square error of the Bayesian potential over adaptive refinement steps (r = 4, 8, 16, 32, 64)

- the exponential of a TT tensor is computed by a Runge–Kutta method
- convergence in Hellinger distance:
$d_{\mathrm{Hell}}(\pi^\delta, \pi^{N,M}_{\delta,L,\tau}) = \mathcal{E}(N, M, L, \sigma, \tau) \to 0,$
with contributions from the FEM error ($N$), the truncation error ($M$), the interpolation nodes ($L$), the tensor approximation ($\sigma$) and the Runge–Kutta time step ($\tau$)
- functional representation of the posterior density
$\frac{\mathrm{d}\pi^{N,M}_{\delta,L,\tau}}{\mathrm{d}\pi_0}(y) = \sum_{\mu \in \Lambda_M} \Pi[\mu] P_\mu(y)$
- fast access to quantities of interest, e.g. the mean; the posterior can serve as a new prior

Inverse scattering: Helmholtz problem
Consider two random media $D_1(\omega), D_2(\omega)$ with $D_1(\omega) \cup D_2(\omega) = \mathbb{R}^d$, separated by an interface $\Gamma(\omega)$.
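The coefficient representation $\Pi[\mu]$ of the posterior density is what makes quantities of interest cheap: with a uniform prior on $[-1,1]^M$ and a Legendre basis, moments follow from orthogonality alone, without sampling. A sketch for $M = 2$ with a small hypothetical coefficient array (the values are invented for illustration):

```python
import numpy as np
from numpy.polynomial.legendre import legval2d, leggauss

# Hypothetical posterior coefficient tensor Pi[mu] for M = 2 parameters,
# Legendre basis L_mu, uniform prior on [-1, 1]^2 (values are made up).
C = np.array([[1.0, 0.2, 0.05],
              [0.4, 0.1, 0.0 ],
              [0.1, 0.0, 0.0 ]])

def density(y1, y2):
    """d pi / d pi_0 (y) = sum_mu Pi[mu] L_{mu_1}(y1) L_{mu_2}(y2)."""
    return legval2d(y1, y2, C)

# Closed-form posterior mean of y1: under the uniform prior, E[L_0] = 1,
# E[y * L_1(y)] = 1/3, and all other basis terms integrate to zero.
mean_closed = C[1, 0] / (3.0 * C[0, 0])

# Cross-check by tensorized Gauss-Legendre quadrature.
x, w = leggauss(8)
Y1, Y2 = np.meshgrid(x, x, indexing="ij")
W = np.outer(w, w) / 4.0                 # uniform prior density 1/4 on [-1, 1]^2
Z = np.sum(W * density(Y1, Y2))          # normalization, equals C[0, 0]
mean_quad = np.sum(W * Y1 * density(Y1, Y2)) / Z
```

This is the "fast access to quantities of interest" claim in miniature: the posterior mean reduces to reading two coefficients of the expansion.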
The transmission and reflection problem for plane-wave incidence and known material parameters is given by the transformed Helmholtz equation
$-\nabla \cdot \big( a(\Gamma(\omega), \cdot) \nabla q \big) - \kappa^2(\Gamma(\omega), \cdot)\, q = 0 \quad \text{in } \mathbb{R}^d,$
together with boundary and radiation conditions.

Figure: triangulation of the 2D interface
Figure: reflected efficiency depending on incident angle and wavelength

Outlook and references
Adaptive functional representation combined with hierarchical model reduction:
- statistical parameter identification for shape reconstruction in scattering applications
- reconstruction of the shapes of blood cells from measured reflection intensities (with PTB)

[1] M. Eigel, M. Marschall, R. Schneider, "Sampling-free Bayesian inversion with adaptive hierarchical tensor representations", Inverse Problems, February 2018.
[2] M. Eigel, J. Neumann, R. Schneider, S. Wolf, "Non-intrusive tensor reconstruction for high dimensional random PDEs", WIAS Preprint 2444.
[3] M. Eigel, M. Marschall, M. Pfeffer, R. Schneider, "Adaptive stochastic Galerkin FEM for lognormal coefficients in hierarchical tensor representation", in preparation.

Contact: Manuel Marschall, WIAS, Mohrenstr. 39, 10117 Berlin; T +49 30 20372 0; E [email protected]
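As a toy version of the scattering model above, a 1D Helmholtz problem with a first-order radiation condition can be solved in a few lines. This is a finite-difference sketch for a homogeneous medium ($a = 1$, constant $\kappa$), not the poster's FEM transmission solver; the exact solution is the outgoing plane wave $e^{i\kappa x}$.

```python
import numpy as np

# 1D sketch: -u'' - kappa^2 u = 0 on (0, 1), u(0) = 1 (incident wave),
# first-order radiation condition u' = i*kappa*u at x = 1.
kappa, m = 5.0, 1001
h = 1.0 / (m - 1)
A = np.zeros((m, m), dtype=complex)
b = np.zeros(m, dtype=complex)

A[0, 0] = 1.0
b[0] = 1.0                                 # Dirichlet: u(0) = 1
for j in range(1, m - 1):                  # (-u_{j-1} + 2 u_j - u_{j+1})/h^2 - kappa^2 u_j = 0
    A[j, j - 1] = A[j, j + 1] = -1.0 / h**2
    A[j, j] = 2.0 / h**2 - kappa**2
A[-1, -2] = -1.0 / h                       # one-sided radiation condition at x = 1:
A[-1, -1] = 1.0 / h - 1j * kappa           # (u_m - u_{m-1})/h - i*kappa*u_m = 0

u = np.linalg.solve(A, b)
exact = np.exp(1j * kappa * np.linspace(0.0, 1.0, m))
rel_err = np.linalg.norm(u - exact) / np.linalg.norm(exact)
```

The complex-valued radiation condition makes the system uniquely solvable (no real resonances); the first-order boundary discretization limits the accuracy to $\mathcal{O}(h)$, which is sufficient for this illustration.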