
    Honey, I Shrunk the Grids!

A New Approach to CFD Verification Studies

Shelley Hebert∗, E. A. Luke†

Computational Simulation and Design Center, Mississippi State, MS 39762, USA

∗Graduate Research Assistant, Engineering Research Center, Mississippi State University.
†Assistant Professor, Department of Computer Science and Engineering, Mississippi State University, AIAA Member.

There has been recent interest in the use of the method of manufactured solutions as a technique for verification that computational codes solve the specified partial differential equations. Such verification techniques provide a mechanism to verify that a particular code solves the specified equations to a specific order of accuracy. The technique requires solving for the manufactured solution on a sequence of progressively finer grids and measuring the exact error (difference from the exact solution) to establish that this error decreases consistently with the expected order of accuracy. There are some shortcomings of the approach in that it can become expensive to show that an asymptotic result has been found, particularly for 3D problems where each level of refinement is 8 times larger. In addition, it is difficult, and in some cases impossible, to apply the technique to generalized grids where self-similar refinements may not be available. We present an alternative method to achieve this goal that samples the solution space at different random locations using different scalings of the same grid (shrinking grids). A statistical sample is used to estimate the error at that level. This approach has advantages in terms of both cost savings and the evaluation of generalized solvers and meshes that are not possible with grid refinement methodologies.

    I. Introduction

Software verification, in the context of numerical software for computational mechanics, consists of testing numerical solver software to confirm that it indeed solves the set of partial differential equations that it was intended to solve. Many methods1–5 have been advanced for the verification of solvers such as those used in computational fluid dynamics (CFD). Typically these approaches involve grid convergence studies where the software is tested to determine how error is reduced as the grid (mesh) is refined on problems with known analytic solutions. This approach can be used to confirm not only that the numerical solver does indeed converge to the specified equations, but also that it converges with the specified order of accuracy.

The difficult part of grid convergence studies is knowing the exact error, defined by the difference between the approximate discrete solution and the actual continuum solution. This is not the same as estimating the error (verification of calculations2) or determining how well the solution compares with experimental results (validation), but determining the actual exact error. Here, the Method of Manufactured Solutions (MMS)2,6 has become an important tool because it allows one to know the exact difference between a numerical solution and its analytic counterpart.

In most real flow problems, there is no known analytic solution. If there were, a numerical solution wouldn't be required. As a result, determining the error E = q_real − q_numeric is difficult. The MMS allows one to specify the analytic solution before a code is run. The solution is "filtered" through the governing equations, which are usually of the form Dq = 0, where D is the differential operator and q is the solution variable. When the solution is passed through the equations, a leftover source term f is obtained. The manufactured solution q_mms is the real analytic solution to Dq = f. The source term is added to the code and, when run, q_mms should be approximately recovered. The difference is the error.
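As a concrete illustration of this procedure (a minimal sketch in Python; the 1-D advection-diffusion operator and the manufactured solution below are illustrative choices, not the equations or solution used in this paper), the source term f can be generated symbolically:

    # Sketch of MMS source-term generation: pick q_mms, apply the operator D,
    # and keep the leftover as the source term f.  Illustrative only.
    import sympy as sp

    x = sp.symbols('x')
    a, nu = sp.symbols('a nu', positive=True)   # assumed advection speed and viscosity

    q_mms = 2 + sp.cos(x)                       # a smooth, periodic manufactured solution

    def D(q):
        # Steady 1-D advection-diffusion operator: D(q) = a q' - nu q''
        return a*sp.diff(q, x) - nu*sp.diff(q, x, 2)

    f = sp.simplify(D(q_mms))                   # source term such that D(q_mms) = f
    print(f)                                    # -> -a*sin(x) + nu*cos(x)

Adding f to the discrete equations and solving should then reproduce q_mms up to the discretization error, which is exactly the quantity the verification study measures.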

Convergence of the numerical method is demonstrated by showing diminishing error as a function of grid spacing (usually defined in terms of some h or ∆x). The usual approach to controlling h is through grid refinement, whereby the density of points in the mesh is increased. For structured grids, there is a straightforward approach to grid refinement. However, for unstructured grids the task can become more difficult. For example, the refined grid should maintain similarity to the original grid, otherwise bias is introduced in the sampling for error. In addition, for three-dimensional problems, refined grids become large quickly. However, it is desirable that the code be verified using grids that are representative of what will be applied in practice, as this will provide an estimate of the actual error observed in practice.


These problems can be made even more severe when considering generalized grids (grids consisting of arbitrary polyhedral elements) such as illustrated in figure 1. For generalized grids, there may not be a suitable self-similar approach to grid refinement. We desire an approach that will facilitate evaluating a particular grid and its suitability for solving a given problem by a given solver. The traditional verification approach falls short of this goal.

    Figure 1. Example of the symmetry plane of a typical adapted generalized grid

The motivation for this work is to demonstrate a new methodology for performing a grid convergence study using shrinking grids instead of refined grids. That is, we will take the same grid but scale it to smaller and smaller dimensions instead of refining the grid. However, in order to make this approach feasible, we have to estimate error using a statistical technique whereby we translate the grid to different regions of a periodic solution space to determine a mean error at the given grid scaling. This approach is potentially more economical than traditional grid convergence studies where grid doubling is used since, for 3-D problems, each refinement increases the number of grid points by a factor of 8. Additionally, it may be difficult to determine if the grids are in the asymptotic range for 3-D problems because of the rapid increase in the number of grid points. Further, this method should be applicable to structured and unstructured grids alike, even unstructured grids for which there is no self-similar method of refinement that preserves grid quality.

In this work we apply the verification technique to the CHEM solver,7,8 a chemically reacting compressible flow solver that is used for modeling combustion and hypersonic problems. The CHEM solver uses a second-order finite-volume algorithm for both steady-state and time-accurate solutions to multi-component reacting flows on generalized three-dimensional meshes.

    II. Grid Convergence Studies and Theoretical Order of Accuracy

Most grid convergence studies involve several successively finer grids where some measure of discretization h (or ∆x, ∆y, ∆z) is reduced by adding more and more points. If the code is found to be, say, second-order accurate, then E will reduce as h^2; if h is reduced by half, then E should be reduced by a factor of four.

If E1 and E2 are some measure of error on two grids, with ∆x1 = r ∆x2, then the order of accuracy is

    p = log(E1/E2) / log(∆x1/∆x2)                                    (1)
      = log(E1/E2) / log(r).                                         (2)
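Given two error measurements, the observed order follows directly from Eq. (2); a minimal sketch (the error values below are invented purely to show the calculation):

    import math

    def observed_order(E1, E2, r):
        # Observed order of accuracy from the error E1 on the finer grid
        # (spacing r * dx2) and E2 on the coarser grid (spacing dx2), Eq. (2).
        return math.log(E1 / E2) / math.log(r)

    # A hypothetical grid-doubling pair (r = 1/2):
    print(observed_order(E1=2.5e-4, E2=1.0e-3, r=0.5))   # -> ~2.0, i.e. second order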


For, say, grid doubling, r = 1/2. In three dimensions, this means 8× the number of grid points in the finer grid. For unstructured grids, r can be determined using the ratio of the number of grid points or cells in the two grids:

    r = (N2/N1)^(1/d)                                                (3)

where N1 and N2 are the number of cells in the fine and coarse grids, respectively, and d is the number of dimensions. So for a 2-D refinement, r = 1/2 means 4× the number of grid points, and 8× in 3-D as mentioned above.
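A short sketch of Eq. (3), with invented cell counts for a 3-D pair of grids:

    def refinement_ratio(N_fine, N_coarse, d=3):
        # Effective refinement ratio r = (N2 / N1)**(1/d) of Eq. (3),
        # where N1 is the fine-grid and N2 the coarse-grid cell count.
        return (N_coarse / N_fine) ** (1.0 / d)

    print(refinement_ratio(N_fine=800000, N_coarse=100000))   # -> 0.5, i.e. grid doubling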

Now typically, the finer grids are produced by using more points in the same space. For a 1-D grid, the coarsest grid may divide the interval [0,1] into 5 intervals using 6 points at 0, 0.2, . . . , 1.0. The next finer grid may have 10 intervals with 11 points spaced at 0, 0.1, . . . , 1.0, and the next 20 intervals with 21 points at 0, 0.05, 0.10, 0.15, . . . , 1.0. In this study, instead of doubling the number of grid points to halve h, we simply halve h. Instead of dividing the interval [0,1] with 6 points, 11 points, 21 points, . . . , we divide [0,1] with 6 points, [0,0.5] with 6 points, [0,0.25] with 6 points, . . . . Using this grid scaling, we still get a sequence of ∆x values of 0.2, 0.1, 0.05, . . . . Note that this eliminates portions of the solution space with each shrinking: the second grid does not include error in the interval [0.5,1], and the third does not include error in the interval [0.25,1]. If the manufactured solution were, say, e^(cx), then there would naturally be more error at larger values of x, simply because the function values are larger there (for positive c).
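The distinction between the two sequences can be sketched in a few lines (the point counts follow the 1-D example above):

    import numpy as np

    # Refinement: the same interval [0,1] with more and more points.
    refined = [np.linspace(0.0, 1.0, n + 1) for n in (5, 10, 20)]

    # Scaling: the same number of intervals (5) on a shrinking interval.
    scaled = [np.linspace(0.0, L, 6) for L in (1.0, 0.5, 0.25)]

    # Both give the spacing sequence h = 0.2, 0.1, 0.05, but the scaled grids
    # cover a shrinking portion of the solution space.
    print([g[1] - g[0] for g in refined])
    print([g[1] - g[0] for g in scaled])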

Since this method eliminates portions of the solution space at each step, portions that may have high error (e.g., high-gradient regions), we may see unrepresentative errors in the smaller grids. To correct this problem, we sample the solution space at N random points and compute an average error that is representative of the error over the entire domain. To make sure we sample the same space for all grid scales, we choose a periodic function for the manufactured solution, and then we translate the grid to random points within the periodic interval. Due to the selection of the periodic function, we don't have to concern ourselves with the location of the outer boundary of the grid, since if the grid extends past a periodic boundary we can consider it as if the grid wrapped around automatically to the same sampling domain.

For example, if we wish to estimate what the L2 norm of error would be for a grid that spanned the entire periodic space with a grid spacing similar to our sample grid, we could compute the average error as in

    error = (1/N) Σ_{i=1}^{N} Ei                                     (4)

where the mean error would be computed from the results of random samples taken from within the periodic space.
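Schematically, the estimate of Eq. (4) amounts to the following loop, where run_solver_error is a stand-in for whatever returns the exact error of a single solver run on the scaled, translated grid (it is not part of the CHEM code, and the uniform sampling below is simply one reasonable choice):

    import math
    import random

    def mean_error_at_scale(base_grid, scale, n_samples, run_solver_error,
                            period=2.0 * math.pi):
        # Estimate the mean exact error at one grid scaling by translating the
        # scaled grid to random locations in the periodic solution space, Eq. (4).
        errors = []
        for _ in range(n_samples):
            # Random translation inside the periodic interval; because the
            # manufactured solution is periodic, any offset samples the same space.
            offset = [random.uniform(0.0, period) for _ in range(3)]
            errors.append(run_solver_error(base_grid, scale, offset))
        return sum(errors) / n_samples

The observed order then follows from the mean errors at two successive scalings exactly as in Eq. (2).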

This technique has several advantages. First, in the 3-D case, refining the grid means doubling the number of points in each of the three directions, so each finer grid is eight times larger than the previous grid; with grid scaling, every level uses a grid of the same size. The downside is that, while the grids are not any larger, each one is run N times at different locations in the solution space. This is easily parallelizable, though, and may even be run on a workstation, while the larger, refined grids may require a cluster or other large system to run.

Further, the technique is applicable to grids for which there is no self-similar way to refine them, such as mixed-element unstructured or generalized grids. The finer grids are simply scaled copies of the coarser grids. Traditional grid convergence studies on unstructured grids must use "refinement" metrics like the ratio of the number of points in each grid, since there may not be anything like ∆x to halve. The problem is that the grids may no longer be similar unless special care was taken to generate the finer grids.

Unstructured grids often have bad cells. In some cases, they are unavoidable. In a traditional grid convergence study, these cells may be refined away or, more likely, in the finer grid they will encompass a smaller proportion of the total volume or of the total cells. As a result of the increase in grid quality, the accuracy may go up more than the true order would indicate. One could consider such effects under the umbrella of the "asymptotic range"; however, they make the traditional grid refinement approach for unstructured grids more difficult, particularly in three dimensions.

    III. Manufactured Solution

The equations solved were the viscous steady-state Navier-Stokes equations for NS species:

    div(ρn ũ) = 0        for n = 1, . . . , NS
    div(ρ ũ ũ + Ĩ p) = div σ̃
    div((ρ e0 + p) ũ) = div(σ̃ · ũ) + div(k grad T)


where

    σ̃ = λ (div ũ) Ĩ + µ (grad ũ + (grad ũ)^T)                        (5)

and λ = −(2/3) µ. For the inviscid cases, λ = µ = k = 0. ρn is the density of species n and ρ = Σ_{n=1}^{NS} ρn.

The solution used was for three species:

    ρ1(x, y, z) = ρ0 [1 + (1/10) ((2 + cos x)/2) ((2 + cos y)/3) (2 + cos z)]
    ρ2(x, y, z) = ρ0 [1 + (1/10) ((2 + cos x)/3) (2 + cos y) ((2 + cos z)/2)]
    ρ3(x, y, z) = ρ0 [1 + (1/10) (2 + cos(x + 1)) ((2 + cos y)/2) ((2 + cos z)/3)]
    ρ(x, y, z)  = ρ1(x, y, z) + ρ2(x, y, z) + ρ3(x, y, z)
    u(x, y, z)  = u0 [1 + (1/10) ((2 + cos x)/2) ((2 + sin y)/3) (2 + sin z)]
    v(x, y, z)  = u0 [1 + (1/10) ((2 + sin x)/2) ((2 + cos y)/3) (2 + sin z)]
    w(x, y, z)  = u0 [1 + (1/10) ((2 + sin x)/2) ((2 + sin y)/3) (2 + cos z)]
    T(x, y, z)  = T0 [1 + (1/10) ((2 + sin x)/2) ((2 + sin y)/3) (2 + sin z)]

where ρ0, u0, T0 are base values; we used ρ0 = 2, u0 = 150, and T0 = 300. The solution is simply some perturbation about the base value. Temperature T was related to pressure p through the equation of state p = ρ Rs T, and the total energy e0 consisted of two parts, internal energy ei = ns Rs T and kinetic energy:

    e0 = (1/2) u^2 + (1/2) v^2 + (1/2) w^2 + ei.                     (6)
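For reference, the manufactured fields above translate directly into code; a sketch using the base values quoted in the text (the gas constant Rs and the internal energy are left as arguments since their values are not listed here):

    import numpy as np

    RHO0, U0, T0 = 2.0, 150.0, 300.0   # base values used in this study

    def rho1(x, y, z):
        return RHO0 * (1 + 0.1 * (2 + np.cos(x)) / 2 * (2 + np.cos(y)) / 3 * (2 + np.cos(z)))

    def rho2(x, y, z):
        return RHO0 * (1 + 0.1 * (2 + np.cos(x)) / 3 * (2 + np.cos(y)) * (2 + np.cos(z)) / 2)

    def rho3(x, y, z):
        return RHO0 * (1 + 0.1 * (2 + np.cos(x + 1)) * (2 + np.cos(y)) / 2 * (2 + np.cos(z)) / 3)

    def rho(x, y, z):
        return rho1(x, y, z) + rho2(x, y, z) + rho3(x, y, z)

    def u(x, y, z):
        return U0 * (1 + 0.1 * (2 + np.cos(x)) / 2 * (2 + np.sin(y)) / 3 * (2 + np.sin(z)))

    def v(x, y, z):
        return U0 * (1 + 0.1 * (2 + np.sin(x)) / 2 * (2 + np.cos(y)) / 3 * (2 + np.sin(z)))

    def w(x, y, z):
        return U0 * (1 + 0.1 * (2 + np.sin(x)) / 2 * (2 + np.sin(y)) / 3 * (2 + np.cos(z)))

    def T(x, y, z):
        return T0 * (1 + 0.1 * (2 + np.sin(x)) / 2 * (2 + np.sin(y)) / 3 * (2 + np.sin(z)))

    def p(x, y, z, Rs):
        # Equation of state p = rho * Rs * T
        return rho(x, y, z) * Rs * T(x, y, z)

    def e0(x, y, z, e_internal):
        # Total energy, Eq. (6): kinetic part plus internal energy e_i
        return 0.5 * (u(x, y, z)**2 + v(x, y, z)**2 + w(x, y, z)**2) + e_internal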

The source terms were added to the code as an arbitrary distributed source. Dirichlet boundary conditions were imposed, so the value of the manufactured solution was used at the boundary points. The initial conditions were set to the solution values to speed convergence. Error was evaluated and quantified using the L1 norm. Both a traditional grid convergence study and the shrinking grid study were performed. Additionally, the traditional grid convergence study included the L2 norm.

Several different grids, both structured and unstructured, were tested. For the structured grid tests we used a simple open cube with the same number of points on each side and a curvilinear donut grid made by revolving a rectangular section around the y-axis. For the unstructured grids we considered an unstructured hexahedral element grid that was created by refining, through centroid splitting, an isotropic tetrahedral mesh on a sphere. In the refinements of this hexahedral grid used in the traditional study, the resulting hexahedra were each further split into 8 similar hexahedra using centroid splitting. Finally, for the grid shrinking study, a mixed-element unstructured grid was used that contains prisms, pyramids, and tetrahedra.

    IV. Traditional Method

    A. Cube Grid

Five grids, from 5 nodes on a side to 80 nodes on a side, were used, with a side length of 0.125. The results are given in table 1.

The order of accuracy is at least 2 in both cases and seems to be converging to 2. This agrees with the theoretical order of the algorithms in the CHEM code.

    B. Donut Grid

Here, four grids were used, with 10, 20, 40, and 80 points in each of the radial, θ, and z directions. The inner radius of the donut was 0.25, the outer radius 0.5. The results are given in table 2. Again, the order of accuracy is at least 2 and appears to be converging to 2.


. . . higher than the order demonstrated by the scaled grid studies. The traditional study, in the case of the unstructured sphere grid, was inconclusive. It certainly did not appear to be second order like the other two grids, and in the density case, looked like it would converge to first order. Nevertheless, with that data, it does not appear to be second order. The scaled grid study confirms that; the code is more likely to be first order than second in this case.

•   Pro: We believe that this method does the right thing for unstructured grids. Since there is no mesh "length" like ∆x for unstructured grids, refinement of unstructured grids is taken from the ratio of the number of cells in the grid. But how good a measure of refinement is that? Even if the grid could be refined self-similarly (which is not always the case), one could imagine a case where half of the grid is refined, say, twice, while the other half is not refined at all. Simply using the ratio of the number of cells does not give a good indication of how much finer the second grid is than the first. Here, there is a clear refinement ratio since there is only one grid used in the study and it is simply scaled in three dimensions by the refinement ratio.

•   Pro: This method is easily parallelizable, even if the code to be verified is not. Each level of scaling could be run simultaneously, even each sample at each level. Of course, there's no reason why the grids used in a traditional convergence study could not be run simultaneously, but there is significantly more opportunity for parallelization using this method.

•   Pro: The grid used does not have to be that big. The grids used here were quite small compared to the largest of the grids used in the traditional grid convergence study. The 5×5×5 cube grid runs in a matter of seconds on a workstation (indeed, about half of the time required was startup time), so even with 400 samples, a single workstation is more than adequate to do the scaling grid study. For the 80×80×80 cube grid, a cluster was required to get results at all, much less in a reasonable amount of time. In the time required on the cluster for the largest grid, thousands of samples of the smallest grid could be run.

•   Pro: Real grids can be used. It may not be feasible to refine real production grids due to their size (coarsening is an option, to an extent), and as a result, many traditional grid convergence studies use contrived grids or real grids with very simple geometries (e.g., a nozzle or a compression ramp) rather than a grid for an engineering application.

We found two items of interest during this study. First, our code had limit-cycle problems for grids composed of only tetrahedral elements. A cube and a sphere composed of tetrahedral elements were also studied, and in certain cases limit cycles would prevent the code from converging sufficiently to a solution to perform the grid convergence study. As a result, a new method for computing gradients was implemented, which seems to have solved the problem. While the order of accuracy did not change as a result, the code is now more robust.

Second, we found that our code does not have the same order of accuracy for structured and unstructured grids, even though it is a solver for generalized grids. We created a set of 10×10×10 cube grids where the interior points were randomly offset by some ε times ∆x/4. As the magnitude of the offset increased, the order dropped from second to first. For a 5% maximum offset (ε ∈ [0, 0.05]), the order of accuracy was approximately 1.5, and for a 10% maximum offset the order of accuracy was approximately 1.3. Therefore, we suspect that unrealistically ideal mesh quality may be required to achieve second-order accuracy on unstructured meshes for the algorithms used in the CHEM solver.
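A sketch of how such a perturbed cube grid can be generated (the uniform distribution of the offsets is assumed here; the description above only states that interior points were offset by up to ε ∆x/4):

    import numpy as np

    def perturbed_cube_grid(n=10, eps=0.05, seed=0):
        # n x n x n unit-cube grid whose interior nodes are randomly offset by
        # up to eps * dx / 4 in each direction; boundary nodes stay in place.
        rng = np.random.default_rng(seed)
        dx = 1.0 / (n - 1)
        axis = np.linspace(0.0, 1.0, n)
        x, y, z = np.meshgrid(axis, axis, axis, indexing='ij')
        pts = np.stack([x, y, z], axis=-1)
        offsets = rng.uniform(-eps, eps, size=pts.shape) * dx / 4.0
        interior = np.zeros((n, n, n), dtype=bool)
        interior[1:-1, 1:-1, 1:-1] = True
        pts[interior] += offsets[interior]
        return pts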

    VIII. Conclusion

We have developed a new methodology that can determine the consistency and order of accuracy of a solver. The technique combines the method of manufactured solutions with a statistical approach that estimates how error is reduced as mesh spacing is reduced. Since this technique uses the same mesh in all samples, it significantly simplifies the process of refining unstructured grids and, in some cases, makes it possible to evaluate a solver on grids for which no appropriate refinement method exists. In addition, we find that the methodology is more economical than the traditional approach to mesh refinement, particularly for three-dimensional problem domains, due to the excessive cost of grid refinement. Finally, we suggest that the methodology can be used to validate grid generation techniques as well, as it can evaluate the impact of various grid configurations on solution accuracy without the requirement of a refinement or coarsening strategy.


    References

1. Stern, F., Wilson, R. V., Coleman, H. W., and Paterson, E. G., "Verification and Validation of CFD Simulations–Part 1: Methodologies and Procedures," Journal of Fluids Engineering, No. 4, 2001, pp. 737–958.
2. Roache, P. J., Verification and Validation in Computational Science and Engineering, Hermosa Publishers, Albuquerque, New Mexico, 1998.
3. Oberkampf, W. L. and Blottner, F. G., "Issues in Computational Fluid Dynamics Code Verification and Validation," AIAA Journal, Vol. 36, No. 5, May 1998, pp. 687–695.
4. Rizzi, A. and Vos, J., "Toward Establishing Credibility in Computational Fluid Dynamics Simulations," AIAA Journal, Vol. 36, No. 5, May 1998, pp. 668–675.
5. Marvin, J. G., "Perspective on Computational Fluid Dynamics Validation," AIAA Journal, Vol. 33, No. 10, October 1995, pp. 1778–1787.
6. Salari, K. and Knupp, P., "Code Verification by the Method of Manufactured Solutions," Tech. Rep. SAND2000-1444, Sandia National Laboratories, 2000.
7. Luke, E. A., Tong, X.-L., Wu, J., and Cinnella, P., "CHEM 2: A Finite-Rate Viscous Chemistry Solver – The User Guide," Tech. Rep. MSSU-COE-ERC-04-07, Mississippi State University, 2004.
8. Luke, E. A., Tong, X., Wu, J., Tang, L., and Cinnella, P., "A Step Towards 'Shape-Shifting' Algorithms: Reacting Flow Simulations Using Generalized Grids," Proceedings of the 39th AIAA Aerospace Sciences Meeting and Exhibit, AIAA, January 2001, AIAA-2001-0897.
9. Senguttuvan, V., Chalasani, S., Luke, E., and Thompson, D., "Adaptive Mesh Refinement Using General Elements," 43rd AIAA Aerospace Sciences Meeting and Exhibit, 2005.
10. Burg, C. O. E. and Murali, V. K., "Efficient Code Verification Using the Residual Formulation of the Method of Manufactured Solutions," 34th AIAA Fluid Dynamics Conference and Exhibit, 2004.
