First name:
Last name:
Department:
Computer Name:
Legi No.:
Grade:
Date: 12.08.2016

Points:  1 | 2 | 3 | 4 | 5 | Total

• Fill in this cover sheet first.
• Always keep your Legi visible on the table.
• Keep your phones, tablets and computers turned off in your bag.
• Start each handwritten problem on a new sheet.
• Put your name on each sheet.
• Do not write with red/green/pencil.
• Write your solutions clearly and work carefully.
• Write all your solutions only in the folder results!
• Any other location will not be backed-up and will be discarded.
• Files in resources may be overridden at any time.
• Make sure to regularly save your solutions.
• Time spent on restroom breaks is considered examination time.
• Never turn off or log off from your computer!
Final exam
Numerical Methods for CSE
R. Hiptmair, G. Alberti, F. Leonardi
August 12, 2016
A “CMake” file is provided with the templates. To generate a “Makefile” for all problems, type “cmake .” in the folder “~/results”. Compile your programs with “make”.
In order to compile and run the C++ code related to a single problem, e.g. Problem 3, type “make problem3”. Execute the program using “./problem3”.
If you want to manually compile your code, use:
    g++ -I/usr/include/eigen3 -std=c++11 -Wno-deprecated-declarations -Wno-ignored-attributes filename.cpp -o programname
or
    clang++ -I/usr/include/eigen3 -std=c++11 -Wno-deprecated-declarations -Wno-ignored-attributes filename.cpp -o programname
The flags -Wno-deprecated-declarations -Wno-ignored-attributes suppress some unwanted EIGEN warnings.
For each problem requiring C++ implementation, a template file named problemX.cpp is provided. For your own convenience, there is a marker TODO in the places where you are supposed to write your own code. All templates should compile even if left unchanged.
Problem 1 Symmetric Gauss-Seidel iteration (20 pts.)

For a square matrix A ∈ Rn,n, define DA, LA, UA ∈ Rn,n by:

    (DA)i,j := ⎧ (A)i,j, i = j,    (LA)i,j := ⎧ (A)i,j, i > j,    (UA)i,j := ⎧ (A)i,j, i < j,    (1)
               ⎩ 0,      i ≠ j,               ⎩ 0,      i ≤ j,               ⎩ 0,      i ≥ j.
The symmetric Gauss-Seidel iteration associated with the linear system of equations Ax = b is

    x(k+1) := (UA + DA)−1 (b − LA(LA + DA)−1(b − UAx(k))),    (2)

where x(k), x, b ∈ Rn (k ≥ 0) and x(0) is a given initial guess.
(1a) Give a necessary and sufficient condition on A such that the iteration (2) is well-defined.
Solution: The matrices UA + DA and LA + DA must be invertible, which is equivalent to DA being invertible. This is equivalent to the matrix A having no zeros on the diagonal.
(1b) Assume that (2) is well-defined. Show that a fixed point x ∈ Rn of the iteration (2) is a solution of the linear system of equations Ax = b.
Solution: Let x be such a fixed point; then
x = (UA +DA)−1b − (UA +DA)−1LA(LA +DA)−1(b −UAx) (3)
Left-multiply by UA + DA = A − LA (invertible!):
    (UA + DA)x = b − LA(LA + DA)−1(b − UAx)                (4)
               = b − (LA + DA − DA)(LA + DA)−1(b − UAx)    (5)
               = UAx + DA(LA + DA)−1(b − UAx)              (6)
Hence,
    DAx = DA(LA + DA)−1(b − UAx)    (7)
    x = (LA + DA)−1(b − UAx)        (8)
(LA +DA)x = b −UAx, (9)
which is equivalent to Ax = b, since LA + DA + UA = A by construction.
(1c) Implement a C++ function (in problem1.cpp)

    using Matrix = Eigen::MatrixXd;
    using Vector = Eigen::VectorXd;

    void GSIt(const Matrix & A, const Vector & b, Vector & x, double rtol);
solving the linear system Ax = b using the iterative scheme (2). To that end, apply the iterative scheme to an initial guess x(0) provided in x. The approximated solution given by the final iterate is then stored in x as an output.
Use a correction-based termination criterion with relative tolerance rtol using the Euclidean vector norm.
Solution: See implementation in problem1_solution.cpp.
(1d) Test your implementation (use n = 9 and rtol = 10e-8) with the linear system given by

        ⎡ 3  1  0  ⋯  0 ⎤
        ⎢ 2  ⋱  ⋱  ⋱  ⋮ ⎥                ⎡ 1 ⎤
    A = ⎢ 0  ⋱  ⋱  ⋱  0 ⎥ ∈ Rn,n ,   b = ⎢ ⋮ ⎥ ∈ Rn.    (10)
        ⎢ ⋮  ⋱  ⋱  ⋱  1 ⎥                ⎣ 1 ⎦
        ⎣ 0  ⋯  0  2  3 ⎦
Output the l2-norm of the residual of the approximated solution. Use b as your initial guess.
Solution: See implementation in problem1_solution.cpp.
(1e) Using the same matrix A and the same r.h.s. vector b as above (1d), in Table 1 we have tabulated the quantity ∥x(k) − A−1b∥2 for k = 1, …, 20.
Describe qualitatively and quantitatively the convergence of the iterative scheme with respect to the number of iterations k.
Solution: Asymptotic linear convergence can be expected after an initial phase where the error decreases much faster. The quotient (rate of convergence) ∥x(k) − x*∥ / ∥x(k−1) − x*∥ approaches 0.679048.
Table 1: Euclidean norms of error vectors for 20 iterates
    #include <iostream>
    #include <utility>

    #include <Eigen/Dense>

    using Matrix = Eigen::MatrixXd;
    using Vector = Eigen::VectorXd;

    //! \brief Use symmetric Gauss-Seidel iterations to approximate the solution of the system Ax = b
    //! \param[in] A system matrix (A = L + D + U), must have an invertible diagonal
    //! \param[in] b r.h.s. vector
    //! \param[in,out] x initial guess as input and last value of iteration as output
    //! \param[in] rtol relative tolerance for termination criterion
    void GSIt(const Matrix & A, const Vector & b, Vector & x, double rtol) {
        // Extract strictly triangular parts
        auto U = A.triangularView<Eigen::StrictlyUpper>();
        auto L = A.triangularView<Eigen::StrictlyLower>();

        // Extract non-strictly triangular parts (we only require L+D and U+D)
        auto UpD = A.triangularView<Eigen::Upper>();
        auto LpD = A.triangularView<Eigen::Lower>();

        // Temporary storage
        Vector temp(x.size());
        Vector* xold = &x;
        Vector* xnew = &temp;

        // Norm of the correction of this iteration
        double err;
        do {
            // Apply iteration scheme (2)
            *xnew = UpD.solve(b) - UpD.solve(L * LpD.solve(b - U * (*xold)));
            // Compute the norm of the correction
            err = (*xold - *xnew).norm();
            // Output error and rate for problem 1e
            // std::cout << (A * (*xnew) - b).norm() << std::endl;
            // Proceed to next step
            std::swap(xold, xnew);
        } while (err > rtol * (*xold).norm()); // Correction-based termination criterion

        x = *xold; // After the final swap, xold points to the last iterate
        return;
    }

    int main(int, char**) {
        // Build matrix and r.h.s.
        unsigned int n = 9;

        Matrix A = Matrix::Zero(n, n);
        for (unsigned int i = 0; i < n; ++i) {
            if (i > 0)   A(i, i-1) = 2;
            A(i, i) = 3;
            if (i < n-1) A(i, i+1) = 1;
        }
        Vector b = Vector::Constant(n, 1);

        //// PROBLEM 1d
        std::cout << "*** PROBLEM 1d:" << std::endl;

        Vector x = b;
        GSIt(A, b, x, 10e-8);

        double residual = (A*x - b).norm();

        std::cout << "Residual = " << residual << std::endl;
    }
Problem 2 Efficient sparse solver (24 pts.)

Fix n ∈ N, n ≥ 2, let 1 ≤ i0, j0 ≤ n with i0 > j0, and let ci ∈ R for i = 1, …, n − 1. We consider an n × n linear system of equations Ax = b, where A ∈ Rn,n is given by
             ⎧ 1    if i = j,
    (A)i,j = ⎪ ci   if i = j − 1,
             ⎨ 1    if i = i0, j = j0,
             ⎩ 0    otherwise,

for 1 ≤ i, j ≤ n.
(2a) Efficiently construct the matrix A defined above as an EIGEN matrix of type Eigen::SparseMatrix<double> using an intermediate triplet format. To that end, implement a function
    using Vector = Eigen::VectorXd;
    using Matrix = Eigen::SparseMatrix<double>;

    Matrix buildA(const Vector & c, unsigned int i0, unsigned int j0);
that, given the coefficients ci in c and the indices i0 and j0, returns the sparse matrix A.
Solution: See implementation in problem2_solution.cpp.
(2b) Implement a function
    Vector solveLSE(const Vector & c, const Vector & b,
                    unsigned int i0, unsigned int j0);
that computes the solution x ∈ Rn of the system Ax = b with optimal asymptotic complexity O(n).
HINT: If (A)i0,j0 were zero, A would be a banded upper triangular matrix and the task would be easy. Regard A as a small modification of such a matrix.
Solution: We can either employ a rank-1 modification technique using the Sherman-Morrison-Woodbury formula, or use an elementary Gaussian elimination. See the implementation in problem2_solution.cpp.
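Written out, the rank-1 route splits A = A0 + u vᵀ, where A0 is the banded upper triangular part, u = e_{i0} and v = e_{j0} are unit basis vectors. The Sherman-Morrison-Woodbury formula then gives

```latex
x = A^{-1}\mathbf{b}
  = A_0^{-1}\mathbf{b}
  - \frac{A_0^{-1}\mathbf{u}\,\bigl(\mathbf{v}^{\top}A_0^{-1}\mathbf{b}\bigr)}
         {1 + \mathbf{v}^{\top}A_0^{-1}\mathbf{u}},
```

so only the two triangular solves A0⁻¹b and A0⁻¹u are needed, each a back substitution in O(n).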
    #include <iostream>
    #include <vector>
    #include <cassert>

    #include <Eigen/Sparse>
    #include <Eigen/SparseLU>

    using Triplet = Eigen::Triplet<double>;
    using Triplets = std::vector<Triplet>;

    using Vector = Eigen::VectorXd;
    using Matrix = Eigen::SparseMatrix<double>;

    //! \brief Efficiently construct the sparse matrix A given c, i_0 and j_0
    //! \param[in] c contains entries c_i for matrix A
    //! \param[in] i0 row index i_0
    //! \param[in] j0 column index j_0
    //! \return Sparse matrix A
    Matrix buildA(const Vector & c, unsigned int i0, unsigned int j0) {
        assert(i0 > j0);

        unsigned int n = c.size() + 1;
        Matrix A(n, n);
        Triplets triplets;

        // At most 2n entries: diagonal, superdiagonal and the (i0,j0) entry
        unsigned int ntriplets = 2*n;

        // Reserve space
        triplets.reserve(ntriplets);

        // Build triplets vector
        for (unsigned int i = 0; i < n; ++i) {
            triplets.push_back(Triplet(i, i, 1));
            if (i < n-1) triplets.push_back(Triplet(i, i+1, c[i]));
        }
        triplets.push_back(Triplet(i0, j0, 1));

        // Construct sparse matrix from its triplets
        A.setFromTriplets(triplets.begin(), triplets.end());
        return A;
    }

    //! \brief Solve system Ax = b with optimal complexity O(n)
    //! \param[in] c Entries for matrix A
    //! \param[in] b r.h.s. vector
    //! \param[in] i0 index
    //! \param[in] j0 index
    //! \return Solution x, s.t. Ax = b
    Vector solveLSE(const Vector & c, const Vector & b, unsigned int i0, unsigned int j0) {
        assert(c.size() == b.size() - 1 && "Size mismatch!");
        assert(i0 > j0);

        //// PROBLEM 2b

        // Allocate solution vector
        Vector ret(b.size());

        unsigned int n = b.size();

        Triplets triplets;

        unsigned int ntriplets = 2*n;

        // Reserve space
        triplets.reserve(ntriplets);

        // Build triplets vector of the banded part (without the (i0,j0) entry)
        for (unsigned int i = 0; i < n; ++i) {
            triplets.push_back(Triplet(i, i, 1));
            if (i < n-1) triplets.push_back(Triplet(i, i+1, c[i]));
        }

        // Construct sparse matrix from its triplets
        Matrix A(n, n);
        A.setFromTriplets(triplets.begin(), triplets.end());

        // Vectors u and v of the rank-1 modification
        Vector u = Vector::Zero(n);
        Eigen::MatrixXd v = Eigen::MatrixXd::Zero(1, n);
        u(i0) = 1;
        v(0, j0) = 1;

        // Apply SMW formula
        Vector Ainv_b = A.triangularView<Eigen::Upper>().solve(b);
        ret = Ainv_b
            - A.triangularView<Eigen::Upper>().solve(u * (v * Ainv_b)(0))
              / (1. + (v * A.triangularView<Eigen::Upper>().solve(u))(0));

        return ret;
    }

    int main(int, char**) {
        // Setup data for problem
        unsigned int n = 150;

        unsigned int i0 = 5, j0 = 4;

        Vector b = Vector::Random(n);   // Random vector for b
        Vector c = Vector::Random(n-1); // Random vector for c

        //// PROBLEM 2a
        std::cout << "*** PROBLEM 2a:" << std::endl;

        Matrix A = buildA(c, i0, j0);

        // Solve sparse system using sparse LU and compare with our own routine
        A.makeCompressed();
        Eigen::SparseLU<Matrix> splu;
        splu.analyzePattern(A);
        splu.factorize(A);

        std::cout << "Error:" << std::endl
                  << (solveLSE(c, b, i0, j0) - splu.solve(b)).norm() << std::endl;
    }
Problem 3 Not-a-knot cubic spline interpolation (24 pts.)

We are given an interval [a, b] ⊂ R and a knot set

    M = {a = t0 < t1 < ⋯ < tn−1 < tn = b}

for some n ∈ N ∖ {0}. Let S3,M denote the space of cubic spline functions on M, and define

    S̃3,M = {s ∈ S3,M : s′′′ is continuous at t1 and at tn−1}.
(3a) Derive a linear system of equations whose solution yields the slopes s′(tj), j = 0, …, n, of s ∈ S̃3,M satisfying the interpolation conditions s(tj) = yj for given yj ∈ R, j = 0, …, n.
Solution: We know that (see (3.5.9) of the lecture notes), for general spline interpolants, the slopes cj := s′(tj) satisfy

    ⎡ b0   a1   b1   0    ⋯    0   ⎤ ⎡ c0 ⎤     ⎡ 3((y1 − y0)/h1² + (y2 − y1)/h2²)        ⎤
    ⎢ 0    b1   a2   b2   ⋱    0   ⎥ ⎢    ⎥     ⎢                                          ⎥
    ⎢ ⋮     ⋱    ⋱    ⋱         ⋮  ⎥ ⎢ ⋮  ⎥  =  ⎢                  ⋮                       ⎥
    ⎢ ⋮         bn−3 an−2 bn−2 0   ⎥ ⎢    ⎥     ⎢                                          ⎥
    ⎣ 0    ⋯    0    bn−2 an−1 bn−1⎦ ⎣ cn ⎦     ⎣ 3((yn−1 − yn−2)/hn−1² + (yn − yn−1)/hn²) ⎦

where

    hi = ti − ti−1,   ai = 2/hi + 2/hi+1   and   bi = 1/hi+1.
This is an underdetermined linear system with n + 1 unknowns and n − 1 equations. The two missing equations determining the splines in S̃3,M will be obtained by imposing the continuity conditions on s′′′ at t1 and at tn−1.
By differentiating three times the expression in (3.5.5) of the lecture notes for s∣[tj−1,tj], we obtain that s′′′ is constant on each knot interval. Equating its values on the two intervals meeting at t1, and analogously at tn−1, yields the two additional equations

    c0/h1² + (1/h1² − 1/h2²) c1 − c2/h2² = 2((y1 − y0)/h1³ − (y2 − y1)/h2³),
    cn−2/hn−1² + (1/hn−1² − 1/hn²) cn−1 − cn/hn² = 2((yn−1 − yn−2)/hn−1³ − (yn − yn−1)/hn³).

Implement a C++ function (in problem3.cpp)

    void natsplineslopes(const Vector & t, const Vector & y, Vector & c)

that computes the slopes cj := s′(tj), j = 0, …, n, of the natural cubic spline interpolant s ∈ S3,M through the data points (tj, yj), j = 0, …, n.
Based on natsplineslopes, implement a corresponding function (in problem3.cpp)
    void notknotslopes(const Vector & t, const Vector & y, Vector & c)
that computes the slopes cj := s′(tj), j = 0, …, n, of s ∈ S̃3,M satisfying s(tj) = yj for given yj ∈ R, j = 0, …, n.
Solution: See the implementation in problem3_solution.cpp.
    #include <vector>
    #include <iostream>
    #include <cassert>
    #include <Eigen/Dense>
    #include <Eigen/Sparse>

    using Vector = Eigen::VectorXd;
    using Matrix = Eigen::MatrixXd;

    //! \brief Computes the slopes for natural cubic spline interpolation
    //! \param[in] t Vector of nodes
    //! \param[in] y Vector of values at the nodes
    //! \param[out] c Vector of slopes at the nodes
    void natsplineslopes(const Vector & t, const Vector & y, Vector & c) {
        // Size check
        assert((t.size() == y.size()) && "Error: mismatched size of t and y!");

        // n+1 is the number of conditions (t goes from t_0 to t_n)
        int n = t.size() - 1;

        Vector h(n);
        // Vector containing increments (from the right)
        for (int i = 0; i < n; ++i) {
            h(i) = t(i+1) - t(i);
            // Check that t is sorted
            assert((h(i) > 0) && "Error: array t must be sorted!");
        }

        // System matrix and r.h.s. as in (3.5.9); first and last row are
        // replaced by the natural conditions
        Eigen::SparseMatrix<double> A(n+1, n+1);
        Vector b(n+1);

        // Reserve space: at most 3 nonzeros per row
        A.reserve(Eigen::VectorXi::Constant(n+1, 3));

        // Fill in natural conditions (3.5.10) for the matrix
        A.coeffRef(0, 0) = 2. / h(0);
        A.coeffRef(0, 1) = 1. / h(0);
        A.coeffRef(n, n-1) = 1. / h(n-1);
        A.coeffRef(n, n)   = 2. / h(n-1);

        // Reuse computation for the r.h.s.
        double bold = (y(1) - y(0)) / (h(0)*h(0));
        b(0) = 3.*bold; // Natural condition (3.5.10)
        // Fill matrix A and r.h.s. b
        for (int i = 1; i < n; ++i) {
            A.coeffRef(i, i-1) = 1. / h(i-1);
            A.coeffRef(i, i)   = 2. / h(i-1) + 2. / h(i);
            A.coeffRef(i, i+1) = 1. / h(i);

            // Reuse computation for the r.h.s. b
            double bnew = (y(i+1) - y(i)) / (h(i)*h(i));
            b(i) = 3.*(bnew + bold);
            bold = bnew;
        }
        b(n) = 3.*bold; // Natural condition (3.5.10)
        // Compress the matrix
        A.makeCompressed();

        // Factorize A and solve the system A*c = b
        Eigen::SparseLU<Eigen::SparseMatrix<double>> lu;
        lu.compute(A);
        c = lu.solve(b);
    }

    //! \brief Computes the slopes for not-a-knot cubic spline interpolation
    //! \param[in] t Vector of nodes
    //! \param[in] y Vector of values at the nodes
    //! \param[out] c Vector of slopes at the nodes
    void notknotslopes(const Vector & t, const Vector & y, Vector & c) {
        // Size check
        assert((t.size() == y.size()) && "Error: mismatched size of t and y!");

        // n+1 is the number of conditions (t goes from t_0 to t_n)
        int n = t.size() - 1;

        Vector h(n);
        // Vector containing increments (from the right)
        for (int i = 0; i < n; ++i) {
            h(i) = t(i+1) - t(i);
            // Check that t is sorted
            assert((h(i) > 0) && "Error: array t must be sorted!");
        }

        // System matrix and r.h.s. as in (3.5.9), adding the two remaining conditions
        Eigen::SparseMatrix<double> A(n+1, n+1);
        Vector b(n+1);

        // Reserve space: at most 3 nonzeros per row
        A.reserve(Eigen::VectorXi::Constant(n+1, 3));

        // Fill in the additional equations (continuity of s''' at t_1 and t_{n-1})
        A.coeffRef(0, 0) = 1. / (h(0)*h(0));
        A.coeffRef(0, 1) = 1. / (h(0)*h(0)) - 1. / (h(1)*h(1));
        A.coeffRef(0, 2) = -1. / (h(1)*h(1));
        A.coeffRef(n, n-2) = 1. / (h(n-2)*h(n-2));
        A.coeffRef(n, n-1) = 1. / (h(n-2)*h(n-2)) - 1. / (h(n-1)*h(n-1));
        A.coeffRef(n, n)   = -1. / (h(n-1)*h(n-1));

        // Reuse computation for the r.h.s.
        double bold = (y(1) - y(0)) / (h(0)*h(0));
        b(0) = 2.*((y(1)-y(0)) / (h(0)*h(0)*h(0)) + (y(1)-y(2)) / (h(1)*h(1)*h(1)));
        // Fill matrix A and r.h.s. b
        for (int i = 1; i < n; ++i) {
            A.coeffRef(i, i-1) = 1. / h(i-1);
            A.coeffRef(i, i)   = 2. / h(i-1) + 2. / h(i);
            A.coeffRef(i, i+1) = 1. / h(i);

            // Reuse computation for the r.h.s. b
            double bnew = (y(i+1) - y(i)) / (h(i)*h(i));
            b(i) = 3.*(bnew + bold);
            bold = bnew;
        }
        b(n) = 2.*((y(n-1)-y(n-2)) / (h(n-2)*h(n-2)*h(n-2))
                 + (y(n-1)-y(n)) / (h(n-1)*h(n-1)*h(n-1)));
        // Compress the matrix
        A.makeCompressed();

        // Factorize A and solve the system A*c = b
        Eigen::SparseLU<Eigen::SparseMatrix<double>> lu;
        lu.compute(A);
        c = lu.solve(b);
    }

    int main() {
        std::cout << "Nothing to do!" << std::endl;
    }
Problem 4 On the Gauss points (24 pts.)

Given f ∈ C0([0,1]), the integral

    g(x) := ∫₀¹ e^{|x−y|} f(y) dy

defines a function g ∈ C0([0,1]). Throughout this problem, we approximate the integral by means of n-point Gauss quadrature, which yields a function gn(x).
(4a) Let {ξ^n_j}_{j=1}^n be the nodes for the Gauss quadrature on the interval [0,1] and write w^n_j for the corresponding weights. Assume that the nodes are ordered, namely ξ^n_j < ξ^n_{j+1} for every j = 1, …, n − 1. We can write

    (g_n(ξ^n_l))_{l=1}^n = M (f(ξ^n_j))_{j=1}^n,

for a suitable matrix M ∈ Rn,n. Give a formula for the entries of M.
Solution: Applying Gauss quadrature to the expression for g(ξ^n_l) we obtain

    g_n(ξ^n_l) = Σ_{j=1}^n w^n_j e^{|ξ^n_l − ξ^n_j|} f(ξ^n_j),

whence M_{lj} = w^n_j e^{|ξ^n_l − ξ^n_j|} for l, j = 1, …, n.
(4b) Implement a function (in problem4.cpp)

    using Vector = Eigen::VectorXd;

    template <typename Function>
    Vector comp_g_gausspts(Function f, unsigned int n)

that computes the n-vector (g_n(ξ^n_l))_{l=1}^n with optimal complexity O(n) (excluding the computation of the Gauss nodes and weights), where f is an object with an evaluation operator double operator()(double x), e.g. a lambda function, that represents the function f.
HINT: You may use the provided function (in problem4.cpp)

    void gaussrule(int n, Vector & w, Vector & xi)

that computes the weights w and the ordered nodes xi relative to the n-point Gauss quadrature on the interval [−1,1].
Solution: In order to write a function with optimal complexity, we need to use the structure of the matrix M. Since the nodes are ordered, we can write

    g_n(ξ^n_l) = Σ_{j=1}^n w^n_j e^{|ξ^n_l − ξ^n_j|} f(ξ^n_j)
               = Σ_{j=1}^{l−1} w^n_j e^{ξ^n_l} e^{−ξ^n_j} f(ξ^n_j) + Σ_{j=l}^n w^n_j e^{−ξ^n_l} e^{ξ^n_j} f(ξ^n_j).

Setting p^n_l = Σ_{j=1}^{l−1} w^n_j e^{−ξ^n_j} f(ξ^n_j) and q^n_l = Σ_{j=l}^n w^n_j e^{ξ^n_j} f(ξ^n_j), this identity can be rewritten as

    g_n(ξ^n_l) = e^{ξ^n_l} p^n_l + e^{−ξ^n_l} q^n_l.

This expression for l = 1, …, n can be computed with O(n) operations. It remains to compute the p^n_l and q^n_l in O(n) operations. This can be done by using these recursive formulas, which immediately follow from the definitions: we have

    p^n_1 = 0,   q^n_n = w^n_n e^{ξ^n_n} f(ξ^n_n),

and, for every l = 1, …, n − 1,

    p^n_{l+1} = p^n_l + w^n_l e^{−ξ^n_l} f(ξ^n_l),   q^n_l = q^n_{l+1} + w^n_l e^{ξ^n_l} f(ξ^n_l).
See the implementation in problem4_solution.cpp.
(4c) Test your implementation by computing g_21(ξ^21_11) (note that ξ^21_11 = 1/2) for f(y) = e^{−|0.5−y|}. What result do you expect?

Solution: For reasons of symmetry, ξ^21_11 = 1/2, so the integrand becomes e^{|1/2−y|} e^{−|1/2−y|} ≡ 1. Hence g_21(ξ^21_11) should be equal to 1, since Gauss quadrature is exact for constant functions.
    #include <vector>
    #include <iostream>
    #include <algorithm>
    #include <cmath>
    #include <utility>
    #include <Eigen/Dense>

    using Vector = Eigen::VectorXd;

    //! \brief Golub-Welsch implementation 5.3.35
    //! \param[in] n number of Gauss nodes
    //! \param[out] w weights for interval [-1,1]
    //! \param[out] xi ordered nodes for interval [-1,1]
    void gaussrule(int n, Eigen::VectorXd & w, Eigen::VectorXd & xi) {
        w.resize(n);
        xi.resize(n);
        if (n == 1) {
            xi(0) = 0;
            w(0) = 2;
        } else {
            // Tridiagonal Jacobi matrix of the Legendre three-term recursion
            Eigen::MatrixXd J = Eigen::MatrixXd::Zero(n, n);

            for (int i = 1; i < n; ++i) {
                double d = i / std::sqrt(4.*i*i - 1.);
                J(i, i-1) = d;
                J(i-1, i) = d;
            }

            Eigen::EigenSolver<Eigen::MatrixXd> eig(J);

            // Nodes: eigenvalues; weights: 2 * (first eigenvector component)^2
            xi = eig.eigenvalues().real();
            w = 2 * eig.eigenvectors().real().topRows<1>()
                    .cwiseProduct(eig.eigenvectors().real().topRows<1>())
                    .transpose();
        }

        // Sort the nodes (and their weights) in ascending order
        std::vector<std::pair<double, double>> P;
        P.reserve(n);
        for (int i = 0; i < n; ++i)
            P.push_back(std::pair<double, double>(xi(i), w(i)));
        std::sort(P.begin(), P.end());
        for (int i = 0; i < n; ++i) {
            xi(i) = std::get<0>(P[i]);
            w(i) = std::get<1>(P[i]);
        }
    }

    //! \brief Compute the function g at the Gauss nodes
    //! \param[in] f object with an evaluation operator (e.g. a lambda function) representing the function f
    //! \param[in] n number of nodes
    //! \return Eigen::VectorXd containing the function g evaluated at the Gauss nodes
    template <typename Function>
    Eigen::VectorXd comp_g_gausspts(Function f, unsigned int n) {
        Vector g(n);
        Vector w(n), xi(n), p(n), q(n), expxi(n), fxi(n);
        gaussrule(n, w, xi);                    // Gauss nodes and weights on [-1,1]
        w = w / 2.;                             // Rescale the weights to [0,1]
        xi = (xi + Vector::Ones(n)) / 2.;       // Rescale the nodes to [0,1]

        for (unsigned int l = 0; l < n; ++l) {
            fxi(l) = f(xi(l));
            expxi(l) = std::exp(xi(l));
        }

        // Initialize the vectors p and q
        p(0) = 0;
        q(n-1) = w(n-1) * expxi(n-1) * fxi(n-1);

        // Fill in the vectors p and q (O(n) complexity)
        for (unsigned int l = 0; l < n-1; ++l) {
            p(l+1) = p(l) + w(l) / expxi(l) * fxi(l);
            q(n-2-l) = q(n-1-l) + w(n-2-l) * expxi(n-2-l) * fxi(n-2-l);
        }

        // Finally, construct the output (O(n) complexity)
        g = (p.array() * expxi.array() + q.array() / expxi.array()).matrix();
        return g;
    }

    // Test the implementation by calculating g(xi^21_11) for f(y) = exp(-|0.5-y|)
    int main() {
        int n = 21;
        auto f = [](double y) { return std::exp(-std::abs(.5 - y)); };
        Eigen::VectorXd g = comp_g_gausspts(f, n);
        std::cout << "g(xi^" << n << "_" << (n+1)/2 << ") = "
                  << g((n+1)/2 - 1) << std::endl;
    }
Problem 5 Construction of an evolution operator (28 pts.)

Let Ψh define the discrete evolution of an order-p Runge-Kutta single step method for the autonomous ODE y′ = f(y), f : D ⊆ Rd → Rd. We define a new evolution operator:

    Ψ̃h := 1/(1 − 2^p) · (Ψh − 2^p · (Ψh/2 ∘ Ψh/2)),    (11)

where ∘ denotes the composition of mappings.
(5a) Let Ψh be the evolution operator of the explicit Euler method. Give the explicit formula for Ψ̃h.
Solution: Writing the forward Euler method as Ψh(y0) := y0 + h f(y0) (and p = 1), we obtain:

    Ψ̃h(y0) = 1/(1 − 2) · (y0 + h f(y0) − 2 · (y0 + (h/2) f(y0) + (h/2) f(y0 + (h/2) f(y0))))
            = y0 − (h(1 − 1) f(y0) − h f(y0 + (h/2) f(y0)))
            = y0 + h f(y0 + (h/2) f(y0)).
Remark 1. This is the evolution operator of the explicit midpoint method, a 2-stage explicit Runge-Kutta method.
(5b) Implement a C++ function (in problem5.cpp)

    using Vector = Eigen::VectorXd;

    template <class Operator>
    Vector psitilde(const Operator & Psi, unsigned int p,
                    double h, const Vector & y0);

that returns Ψ̃h y0 when given the underlying Ψ.

Objects of type Operator (here and in the following) must provide an evaluation operator with signature:

    Vector operator()(double h, const Vector & y);

providing the evaluation of Ψh(y). A suitable C++ lambda function satisfies this requirement.
Solution: See implementation in problem5_solution.cpp.
(5c) Implement a C++ function (in problem5.cpp)

    template <class Operator>
    std::vector<Vector> odeintequi(const Operator & Psi,
                                   double T, const Vector & y0, unsigned int N);

for the approximation of the solution of the initial value problem y′ = f(y), y(0) = y0. The function uses a single step method defined by the discrete evolution operator Ψ (given as Psi) on N equidistant timesteps with final time T > 0. The function returns the approximated value at each step (including y0: y0, y1, …) in a std::vector<Vector>.

HINT: You can use std::vector<Vector>::push_back to add elements to the end of a std::vector<Vector>.
Solution: See implementation in problem5_solution.cpp.
(5d) In the case of the IVP

    y′ = 1 + y²,   y(0) = 0,    (12)

with exact solution yex(t) = tan(t), determine empirically (using odeintequi) the order of the single step method induced by Ψ̃h, when Ψ is the discrete evolution operator of the explicit Euler method. Monitor the error |yN(1) − yex(1)| at final time T = 1, where yN is the solution approximated through Ψ̃h using N uniform steps with N = 2^q, q = 2, …, 12.
Solution: The experimental convergence order is 2 (O(h²)), while the order of convergence of forward Euler is p = 1. See implementation in problem5_solution.cpp.
(5e) In general, the method defined by Ψ̃h has order p + 1. Thus, it can be used for adaptive timestep control and prediction.
Complete the implementation of a function (found in problem5.cpp)

    template <class Operator>
    std::vector<Vector> odeintssctrl(const Operator & Psi,
                                     double T, const Vector & y0,
                                     double h0, unsigned int p,
                                     double reltol, double abstol,
                                     double hmin);

for the approximation of the solution of the IVP by means of adaptive timestepping based on Ψ and Ψ̃, where Ψ is passed through the argument Psi. Step rejection and stepsize correction and prediction have to be employed. The argument T supplies the final time, y0 the initial state, h0 an initial stepsize, p the order of the discrete evolution Ψ, reltol and abstol the respective tolerances, and hmin a minimal stepsize that will trigger premature termination. Compute the solution obtained using this function applied to the IVP (12) up to time T = 1 with the following data: h0 = 1/100, reltol = 10e-8, abstol = 10e-8, hmin = 10e-6. Output the approximated solution at time T = 1.
Solution: See implementation in problem5_solution.cpp.

    #include <iostream>
    #include <vector>
    #include <algorithm>

    #include <Eigen/Dense>

    using Vector = Eigen::VectorXd;
    using Matrix = Eigen::MatrixXd;

    //! \brief Evolves the vector y0 using the evolution operator \tilde{\Psi} with step-size h
    //! \tparam Operator type for evolution operator (e.g. lambda function type)
    //! \param[in] Psi original evolution operator, must have operator()(double, const Vector&)
    //! \param[in] p order p for the construction of Psi tilde
    //! \param[in] h step-size
    //! \param[in] y0 previous step
    //! \return Evolved step \tilde{\Psi}^h y0
    template <class Operator>
    Vector psitilde(const Operator& Psi, unsigned int p, double h, const Vector & y0) {
        const double twop = double(1 << p); // 2^p as in (11)
        return (Psi(h, y0) - twop * Psi(h / 2., Psi(h / 2., y0))) / (1. - twop);
    }

    //! \brief Evolves the vector y0 using the evolution operator from time 0 to T using equidistant steps
    //! \tparam Operator type for evolution operator (e.g. lambda function type)
    //! \param[in] Psi original evolution operator, must have operator()(double, const Vector&)
    //! \param[in] T final time
    //! \param[in] y0 initial data
    //! \param[in] N number of steps
    //! \return Vector of all steps y_0, y_1, ...
    template <class Operator>
    std::vector<Vector> odeintequi(const Operator& Psi, double T, const Vector & y0, int N) {
        double h = T / N;
        double t = 0.;
        std::vector<Vector> Y;
        Y.reserve(N+1);
        Y.push_back(y0);
        Vector y = y0;

        while (t < T) {
            y = Psi(h, Y.back());
            Y.push_back(y);
            t += std::min(T - t, h);
        }

        return Y;
    }

    //! \brief Evolves the vector y0 using the evolution operator from time 0 to T using adaptive error control
    //! \tparam Operator type for evolution operator (e.g. lambda function type)
    //! \param[in] Psi original evolution operator, must have operator()(double, const Vector&)
    //! \param[in] T final time
    //! \param[in] y0 initial data
    //! \param[in] h0 initial step size
    //! \param[in] p order p for the construction of Psi tilde
    //! \param[in] reltol relative tolerance for error control
    //! \param[in] abstol absolute tolerance for error control
    //! \param[in] hmin minimal step size
    //! \return Vector of all steps y_0, y_1, ...
    template <class Operator>
    std::vector<Vector> odeintssctrl(const Operator& Psi, double T, const Vector & y0, double h0,
                                     unsigned int p, double reltol, double abstol, double hmin) {
        // Time tracker
        double t = 0.;
        // Step size tracker
        double h = h0;
        // Vector of snapshots
        std::vector<Vector> Y;
        // Save initial data
        Y.push_back(y0);
        // Start with y0 as value
        Vector y = y0;

        // Stop when the final time is reached or the minimal step size is underrun
        while (t < T && h > hmin) {
            // Compute with both methods
            Vector y_high = psitilde(Psi, p, h, y);
            Vector y_low = Psi(h, y);
            // Estimate error
            double err_est = (y_high - y_low).norm();

            // Compute tolerance (max of scaled reltol and abstol)
            double tol = std::max(reltol * y_high.norm(), abstol);

            // Accept the step if the error estimate is below the tolerance
            if (err_est < tol) {
                t += h;
89 y = y_h igh ;90 Y. push_back ( y_h igh ) ;91
92 }93 // Predict step-size94 h *= s t d : : max ( . 5 , s t d : : min ( 2 . , s t d : : pow ( t o l / e r r _ e s t ,
1 . / ( p + 1 . ) ) ) ) ;95 // Force time if needed96 h = s t d : : min ( T− t , h ) ;97 }98 i f ( t < T ) s t d : : c e r r << " F a i l e d t o r e a c h f i n a l t ime ! " <<
s t d : : e n d l ;99
100 re turn Y;101 }102
103 i n t main ( i n t , ch a r * * ) {104
105 auto f = [ ] ( c o n s t V ec to r &y ) −> Ve c t o r { re turnV ec to r : : Ones ( 1 ) + y*y ; } ;
106 V ec to r y0 = V ec to r : : Zero ( 1 ) ;107 double T = 1 . ;108 auto y_ex = [ ] ( double t ) −> Ve c t o r { Ve c t o r y ( 1 ) ; y <<
t a n ( t ) ; re turn y ; } ;109
110 unsigned p = 1 ;111
112 auto P s i = [& f ] ( double h , c o n s t V ec to r & y0 ) −> Ve c t o r {re turn y0 + h*f ( y0 ) ; } ;
113 auto P s i T i l d e = [& P s i , &f , &p ] ( double h , c o n s t V ec to r &
24
y0 ) −> Ve c t o r { re turn p s i t i l d e ( P s i , p , h , y0 ) ; } ;114
115 //// PROBLEM 5d116 s t d : : c o u t << " *** PROBLEM 5d : " << s t d : : e n d l ;117
118 s t d : : c o u t << " E r r o r t a b l e f o r e q u i d i s t a n t s t e p s : " <<s t d : : e n d l ;
119 s t d : : c o u t << "N" << " \ t " << " E r r o r " << s t d : : e n d l ;120 f o r ( i n t N = 4 ; N < 4096 ; N=N<<1 ) {121 s t d : : v e c t o r < Vector > Y = o d e i n t e q u i ( P s i T i l d e , T , y 0 , N) ;122 double e r r = (Y. back ( ) − y_ex ( 1 ) ) . norm ( ) ;123 s t d : : c o u t << N << " \ t " << e r r << s t d : : e n d l ;124 }125
126 //// PROBLEM 5e127 s t d : : c o u t << " *** PROBLEM 5 e : " << s t d : : e n d l ;128
129 double h0 = 1 . /100 . ;130 s t d : : v e c t o r < Vector > Y;131 double e r r = 0 ;132 Y = o d e i n t s s c t r l ( P s i , T , y 0 , h 0 , p , 10e−8 , 10e−8 , 10e −6) ;133 e r r = (Y. back ( ) − y_ex ( 1 ) ) . norm ( ) ;134 s t d : : c o u t << " A d a p t i v e e r r o r c o n t r o l r e s u l t s : " <<
s t d : : e n d l ;135 s t d : : c o u t << " E r r o r " << " \ t " << "No . S t e p s " << " \ t " <<
" y ( 1 ) " << " \ t " << " y ( ex ) " << s t d : : e n d l ;136 s t d : : c o u t << e r r << " \ t " << Y. s i z e ( ) << " \ t " << Y. back ( )
<< " \ t " << y_ex ( 1 ) << s t d : : e n d l ;137 }