Chapter 4: Curve Fitting


1

Chapter 4: Curve Fitting

Dr. Asaf Varol

2

4.1 Introduction

• Many engineering problems require a functional relationship (curve) among one or more independent variables and a dependent variable for a given set of data points, say (xi, yi) for i = 1, 2, 3, ...

• Determining the coefficients of such a function is the subject matter of this chapter

• The most commonly used functions for curve fitting are polynomials:

 

Pn(x) = b0 + b1x + b2x^2 + ... + bnx^n   (4.1.1)

• In the first category, the number of data points is exactly (n+1), that is, one more than the degree of the polynomial to be fitted

3

4.1 Introduction (Cont`d)

• It is imperative in this procedure that there be no error in the given data points, since we want the curve to pass through them "exactly"; this is exact polynomial fitting

• The second category is such that the number of data points which are to be represented by the curve is greater than (n+1). Then it is impossible to pass a single polynomial of degree n through all of the data points “exactly.”

• There are two alternatives: one is the so-called regression analysis, which minimizes some norm of the error vector, ei, where the error is defined as the difference between each data point and the corresponding point on the curve

• The second alternative is to pass more than one polynomial through subsets of the given data points by imposing additional conditions at the points joining subsequent curves. (For example, the derivatives of the two adjacent polynomials can be set equal to each other at the common point.) This concept leads to the so-called spline curve fitting, which is presented in Section 4.6.

4

4.2 Exact Polynomial Fitting

Given (n+1) points: (x0, y0), (x1, y1), (x2, y2), … (xn, yn), there is a unique polynomial of the nth degree which passes through these points.

 

y = Pn(x) = b0 + b1x + b2x^2 + … + bnx^n   (4.2.1)

The coefficients b0, b1, b2, … bn can be determined by solving the following linear system of equations.

b0 + b1x0 + b2x0^2 + … + bnx0^n = y0
b0 + b1x1 + b2x1^2 + … + bnx1^n = y1
b0 + b1x2 + b2x2^2 + … + bnx2^n = y2   (4.2.2)
. . .
b0 + b1xn + b2xn^2 + … + bnxn^n = yn

In matrix form these equations can be written as

[A]{B} = {Y}   (4.2.3)

where the elements of [A] are given by:

aij = (x_{i-1})^(j-1) ;  i = 1, 2, 3, …, n + 1 ;  j = 1, 2, 3, …, n + 1   (4.2.4)
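As a concrete illustration of Eqs. 4.2.3 & 4.2.4, the sketch below (Python with NumPy; added to this transcript for illustration, the function name is arbitrary) assembles the coefficient matrix and solves for the coefficients {B}:

```python
import numpy as np

def exact_poly_fit(x, y):
    """Fit the unique degree-n polynomial through n+1 points by
    solving [A]{B} = {Y} with aij = x_{i-1}^{j-1} (Eqs. 4.2.3-4.2.4)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    A = np.vander(x, increasing=True)   # row i: [1, xi, xi^2, ..., xi^n]
    return np.linalg.solve(A, y)        # coefficients b0, b1, ..., bn

# Points of Example E4.2.1 below: (0,1), (1,0), (2,1), (3,0)
print(exact_poly_fit([0, 1, 2, 3], [1, 0, 1, 0]))
# ~[1.0, -3.3333, 3.0, -0.6667]
```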

5

Example E4.2.1

Determine the 3rd degree polynomial which passes through the following points:

x: 0  1  2  3
y: 1  0  1  0

Solution: Substitute these values in Eq. 4.2.2 to obtain:

b0 + b1(0) + b2(0) + b3(0) = 1

b0 + b1(1) + b2(1) + b3(1) = 0

b0 + b1(2) + b2(4) + b3(8) = 1

b0 + b1(3) + b2(9) + b3(27) = 0

This system of linear equations was solved using LU decomposition (program ludecom.for) to determine:

b0 = 1.0 , b1 = -3.3333 (= -10/3) , b2 = 3.0 , b3 = -0.66666 (= -2/3)

The polynomial sought is: y = 1 - (10/3)x + 3x^2 - (2/3)x^3

Substituting the numbers, [A]{B} = {Y} reads:

| 1  0  0   0 |  { b0 }   { 1 }
| 1  1  1   1 |  { b1 } = { 0 }
| 1  2  4   8 |  { b2 }   { 1 }
| 1  3  9  27 |  { b3 }   { 0 }

6

Lagrange Polynomials

Let

Li^n(x) = Π_{j=0, j≠i}^{n} (x - xj) / (xi - xj)   (4.2.5)

Note that:

Li^n(xj) = 1 if j = i, and 0 if j ≠ i

For example, when n = 2,

L0^2(x) = [(x - x1)(x - x2)] / [(x0 - x1)(x0 - x2)]   (4.2.6a)

L1^2(x) = [(x - x0)(x - x2)] / [(x1 - x0)(x1 - x2)]   (4.2.6b)

L2^2(x) = [(x - x0)(x - x1)] / [(x2 - x0)(x2 - x1)]   (4.2.6c)

The nth degree Lagrange polynomial that passes through (n+1) points is then given by:

L^n(x) = y0 L0^n(x) + y1 L1^n(x) + … + yn Ln^n(x) = Σ_{i=0}^{n} yi Li^n(x)   (4.2.7)

where Li^n(x) is given by Eq. 4.2.5 and yi = f(xi) are the given function values corresponding to x = xi.


8

Example E4.2.2

Determine the third degree Lagrange polynomial that passes through the points (0, 1); (1, 0); (2, 1); (3, 0), that is, the same points as in Ex. E4.2.1.

Solution:

L0^3(x) = [(x - 1)(x - 2)(x - 3)] / [(0 - 1)(0 - 2)(0 - 3)] = -(1/6)(x - 1)(x - 2)(x - 3)

L1^3(x) = [x(x - 2)(x - 3)] / [(1 - 0)(1 - 2)(1 - 3)] = (1/2) x(x - 2)(x - 3)

L2^3(x) = [x(x - 1)(x - 3)] / [(2 - 0)(2 - 1)(2 - 3)] = -(1/2) x(x - 1)(x - 3)

L3^3(x) = [x(x - 1)(x - 2)] / [(3 - 0)(3 - 1)(3 - 2)] = (1/6) x(x - 1)(x - 2)

9

Cont'd

The third degree polynomial is then found from

L^3(x) = y0 L0^3(x) + y1 L1^3(x) + y2 L2^3(x) + y3 L3^3(x)
       = (1)(-1/6)(x - 1)(x - 2)(x - 3) + 0 + (1)(-1/2) x(x - 1)(x - 3) + 0

or

L^3(x) = -(1/6)(x - 1)(x - 2)(x - 3) - (1/2) x(x - 1)(x - 3)

It is left to the reader to show that this polynomial is the same as that found in Example E4.2.1 using the direct method.
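A minimal sketch of Eqs. 4.2.5 & 4.2.7 (Python; added for illustration, names are arbitrary), checked on the points of Example E4.2.2:

```python
def lagrange_eval(x_nodes, y_nodes, x):
    """Evaluate the Lagrange polynomial of Eq. 4.2.7 at x."""
    n = len(x_nodes)
    total = 0.0
    for i in range(n):
        # Li^n(x) of Eq. 4.2.5: product over all j != i
        L = 1.0
        for j in range(n):
            if j != i:
                L *= (x - x_nodes[j]) / (x_nodes[i] - x_nodes[j])
        total += y_nodes[i] * L
    return total

xs, ys = [0, 1, 2, 3], [1, 0, 1, 0]
# Agrees with y = 1 - (10/3)x + 3x^2 - (2/3)x^3 of Example E4.2.1:
print([lagrange_eval(xs, ys, x) for x in (0.5, 1.5, 2.5)])
```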

10

Newton's Polynomials

Another way of constructing simple polynomials for a small number of data points is to use Newton's divided difference polynomials (also called Newton's interpolating polynomials). These have the following form:

 

y = f(x) = b0 + b1(x - x0) + b2(x - x0)(x - x1) +…+ bn(x - x0)(x - x1)…(x - xn-1) (4.2.8) 

with the given (n+1) data points (x0, y0), (x1, y1)…(xn, yn). The coefficients bi (i = 0, 1, 2, … , n) are calculated using the finite divided difference formulae.

  Let us define divided differences as: 

y[x0, x1] = (y1 - y0)/(x1 - x0) (4.2.9a) 

y[x1, x2] = (y2 - y1)/(x2 - x1)   (4.2.9b)

y[x0, x1, x2] = (y[x1, x2] - y[x0, x1]) / (x2 - x0)

y[x1, x2, x3] = (y[x2, x3] - y[x1, x2]) / (x3 - x1)

11

Cont'd

y[x0, x1, x2, x3] = (y[x1, x2, x3] - y[x0, x1, x2]) / (x3 - x0)   (4.2.9c)

y[x1, x2, x3, x4] is obtained similarly, and so on.

The coefficients of the Newton polynomials are given by:

b0 = y0
b1 = y[x0, x1]
b2 = y[x0, x1, x2]   (4.2.10)
. . .
bn = y[x0, x1, …, xn]

The calculations can be simplified greatly by preparing a divided difference table, as shown in the following example.

12

Example E4.2.3

Determine the third degree Newton polynomial which passes through the data points (0, 1); (1, 0); (2, 1); (3, 0), again the same points as in Ex. E4.2.1 & E4.2.2.

 

Solution:

 

Construct the table given below using Eq. 4.2.9. The coefficients are equal to the divided differences given at the top row. Hence,

 

f(x) = 1 - (x - 0) + (x - 0)(x - 1) - (2/3)(x - 0)(x - 1)(x - 2)

or

f(x) = 1 - x + x(x - 1) - (2/3) x(x - 1)(x - 2)

Interested readers may show that this is the same as the polynomial found in Example E4.2.2.

 

13

Cont'd

The divided difference table for this data set (the table image did not survive in this transcript; the entries follow from Eq. 4.2.9):

x: 0, 1, 2, 3
y: 1, 0, 1, 0
First divided differences: -1, 1, -1
Second divided differences: 1, -1
Third divided difference: -2/3

The top row gives b0 = 1, b1 = -1, b2 = 1, b3 = -2/3.
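The divided-difference recursion of Eqs. 4.2.9 & 4.2.10 is short to code. A sketch (Python; added for illustration):

```python
def newton_coeffs(x, y):
    """Return b0, b1, ..., bn: the top edge of the divided-difference
    table (Eq. 4.2.10)."""
    n = len(x)
    col = list(y)                 # zeroth-order differences
    b = [col[0]]
    for k in range(1, n):
        # k-th order divided differences, Eq. 4.2.9
        col = [(col[i + 1] - col[i]) / (x[i + k] - x[i])
               for i in range(n - k)]
        b.append(col[0])
    return b

print(newton_coeffs([0, 1, 2, 3], [1, 0, 1, 0]))   # [1, -1.0, 1.0, -0.666...]
```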

14

4.3 Least Squares Regression

Linear Regression

 

Consider, for example, the data given in Table 4.3.1, which was collected from a survey of randomly selected people between the ages of 10 and 50 in a small town. (Table 4.3.1 is not reproduced in this transcript; its summary sums appear in Example E4.3.1 below.)

15

Cont'd

h = a + bw   (4.3.1)

w = c + dh   (4.3.2)

Figure 4.3.1 Weight vs. height of a group of people in a small town (scatter plots of height vs. weight and weight vs. height with the two fitted lines; figure not reproduced)

Least squares regression is a method that calls for minimization of the sum of the squares of the components of the error vector Ei.

16

Cont'd

(i) Assume that there is no error in w; then the function to be minimized is defined by:

S_h(a, b) = Σ_{i=1}^{N} [hi - (a + bwi)]^2   (4.3.3a)

where the error in h at each point is given by:

Ehi = hi - (a + bwi) ;  i = 1, 2, 3, ..., N   (4.3.3b)

(ii) Assume that there is no error in h; then the function to be minimized is defined by:

S_w(c, d) = Σ_{i=1}^{N} [wi - (c + dhi)]^2   (4.3.4a)

where the error in w at each point is given by:

Ewi = wi - (c + dhi) ;  i = 1, 2, 3, ..., N   (4.3.4b)

17

Cont'd

(iii) Assume that both variables may involve error. In this case one can minimize the sum of the combined error in h and w simultaneously. That is, if we solve Eq. 4.3.1 for w, we obtain:

w = -(a/b) + (1/b)h   (4.3.5)

 

If the errors in h and w are given equal weights (or importance), then it follows intuitively that the function to be minimized should be:

 

S_hw(a, b) = Σ_{i=1}^{N} { [hi - (a + bwi)]^2 + [wi - (-(a/b) + (1/b)hi)]^2 }   (4.3.6)

The minimization of S_h or S_w given by Eqs. 4.3.3 & 4.3.4, respectively, is straightforward, as will be shown shortly. The minimization of S_hw, on the other hand, is not trivial and is left as an advanced topic.

18

Cont'd

But first, we formulate the problem of linear regression in terms of generic variables, x and y. That is,

y = a0 + a1x   (4.3.7)

and

S(a0, a1) = Σ_{i=1}^{N} [yi - (a0 + a1xi)]^2   (4.3.8)

To find the minimum of S we take its derivative with respect to first a0, then with respect to a1, and set them equal to zero. Thus

∂S/∂a0 = Σ_{i=1}^{N} { 2[yi - (a0 + a1xi)](-1) } = 0   (4.3.9a)

∂S/∂a1 = Σ_{i=1}^{N} { 2[yi - (a0 + a1xi)](-xi) } = 0   (4.3.9b)

Comment: It can be shown that, due to the nature of the function S, at the point where Eqs. 4.3.9a & b are satisfied S has a minimum (not a maximum). This can also be shown a posteriori by substituting the values of a0 and a1 into Eq. 4.3.8 and observing that the solution set indeed corresponds to the minimum of S.

19

Cont'd

By rearranging 4.3.9a & b and simplifying, we arrive at:

N a0 + (Σxi) a1 = Σyi   (4.3.10a)

(Σxi) a0 + (Σxi^2) a1 = Σxi yi   (4.3.10b)

where all summations are implied to be from i = 1 to N. These two equations are sufficient for determining the coefficients a0 and a1.

For example, for a data set with N = 7 ; Σxi = -2.0 ; Σyi = -8.0 ; Σxi^2 = 56.0 ; Σxi yi = 30.0, the set of equations to be solved is:

7 a0 - 2.0 a1 = -8.0
-2.0 a0 + 56.0 a1 = 30.0

which gives a0 = -1.0 and a1 = 0.5.
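The normal equations 4.3.10a & b involve only five sums, so they can be solved directly even from quoted summary statistics. A minimal sketch in Python/NumPy (added for illustration; not part of the original slides):

```python
import numpy as np

def linefit(x, y):
    """Solve the normal equations (4.3.10a & b) for [a0, a1]."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    A = np.array([[len(x), x.sum()],
                  [x.sum(), (x**2).sum()]])
    return np.linalg.solve(A, [y.sum(), (x * y).sum()])

# The same system solved from the summary sums quoted above
# (the raw 7-point data set itself is not shown in the transcript):
N, Sx, Sy, Sxx, Sxy = 7, -2.0, -8.0, 56.0, 30.0
a0, a1 = np.linalg.solve([[N, Sx], [Sx, Sxx]], [Sy, Sxy])
print(a0, a1)   # -1.0  0.5
```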

20

Example E4.3.1

Fit a straight line to the data given in Table 4.3.1, first assuming that there is no error in the w (weight) values, and then assuming that there is no error in the h (height) values. Compare the results from both equations.

Solution:

From Table 4.3.1, we see that:

N = 11 ; Σhi = 18.75 ; Σwi = 775 ; Σhi^2 = 32.0853 ; Σwi^2 = 55,131 ; Σhi wi = 1,328.05

The equations to be solved for h = a + bw are obtained by letting x = w, y = h, a0 = a, and a1 = b in Eqs. 4.3.10a & b:

11a + 775b = 18.75
775a + 55,131b = 1,328.05

The solution is: a = 0.768; b = 0.0133. Hence, h = 0.768 + 0.0133w

21

Cont'd

The equations to be solved for w = c + dh are obtained by letting x = h, y = w, a0 = c, and a1 = d in Eqs. 4.3.10a & b:

11c + 18.75d = 775.0
18.75c + 32.0853d = 1,328.05

The solution is: c = -25.32; d = 56.19. Hence, w = -25.32 + 56.19h

To compare the results from the two cases we solve h = a + bw for w, which gives:

w = -(a/b) + (1/b)h

If there were no errors in w we would have to have c = -(a/b) = -57.74 and d = (1/b) = 75.19, which are significantly different from what we found by assuming that the w values had errors but the h values did not. The true answer must lie somewhere between these.

22

Cont'd

Here we suggest that equal weight is given to both sets of values (a, b) and (c, d). Subtracting Eq. 4.3.2 from Eq. 4.3.1 side by side and solving for h yields:

h = [(a - c)/(1 + d)] + [(1 + b)/(1 + d)]w

This yields:

h = 0.456 + 0.018w  or  w = -25.33 + 55.56h

(Interested readers may solve the problem using non-linear analysis, which requires minimization of the sum of the squares of the combined error Ei = [Ehi^2 + Ewi^2]^(1/2).)
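Both fits of this example, and the compromise line above, can be reproduced from the quoted sums alone, since the normal equations need nothing else. A sketch (Python/NumPy, added for illustration):

```python
import numpy as np

# Summary sums of Table 4.3.1 as quoted in Example E4.3.1
N, Sh, Sw = 11, 18.75, 775.0
Shh, Sww, Shw = 32.0853, 55131.0, 1328.05

# h = a + bw (no error assumed in w)
a, b = np.linalg.solve([[N, Sw], [Sw, Sww]], [Sh, Shw])
# w = c + dh (no error assumed in h)
c, d = np.linalg.solve([[N, Sh], [Sh, Shh]], [Sw, Shw])

print(a, b)   # ~0.768, ~0.0133
print(c, d)   # ~-25.32, ~56.19
# Compromise line combining the two fits, as derived above:
print((a - c) / (1 + d), (1 + b) / (1 + d))   # ~0.456, ~0.018
```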

23

Pseudo Non-Linear Regression

Exponential and power law functions of the form

y = a e^(bx) ;  y = a x^b   (4.3.11)

are commonly used in engineering to approximate measured data. For example, fitting a simple exponential curve given by Equation (4.3.11) requires the minimization of the error function given by

S(a, b) = Σ [yi - a e^(b xi)]^2   (4.3.13)

Taking the derivatives of S with respect to a and b and setting them equal to zero yields the following non-linear equations to be solved to determine the two parameters a and b:

Σ [yi - a e^(b xi)] (-2 e^(b xi)) = 0   (4.3.14a)

Σ [yi - a e^(b xi)] (-2 a xi e^(b xi)) = 0   (4.3.14b)

These equations can be simplified to obtain

a Σ zi^2 = Σ zi yi   (4.3.15a)

a Σ xi zi^2 = Σ xi zi yi   (4.3.15b)

where

zi = e^(b xi)   (4.3.15c)

24

Cont`d

We can simplify the problem by observing that the exponential function given by Equation (4.3.11) can be linearized by taking the natural logarithm of both sides, that is

 

ln(y) = ln(a) + bx   (4.3.16a)

 

We then define new, starred variables:

 

y* = ln(y); a* = ln(a); b* = b; x* = x

Hence

y* = a* + b* x* (4.3.16b)

 

This approach is called pseudo non-linear regression. Note that it minimizes the error in y* = ln(y) rather than in y itself, so its result differs slightly from the fully non-linear fit (see Example E4.3.2).

25

Example E4.3.2

Fit an exponential function of the form of Equation (4.3.11) to the data given below using the pseudo non-linear method. Compare the results with the direct non-linear method, which obtains the parameters a and b from Equations 4.3.15a & b.

y = a e^(bx)

i:  1     2     3     4     5     6     7
x:  0.05  0.40  0.80  1.20  1.60  2.00  2.40
y:  0.55  0.75  1.00  1.40  2.00  2.70  3.75

Taking the natural logarithm of both sides of the above equation yields

ln(y) = ln(a) + bx

Let y* = ln(y); a* = ln(a); b* = b, and x* = x, then prepare a modified table involving the transformed variables:

i:   1      2      3     4     5     6     7
x*:  0.05   0.40   0.80  1.20  1.60  2.00  2.40
y*: -0.598 -0.288  0.00  0.336 0.693 0.993 1.322

N = 7; Σx*i = 8.45; Σ(x*i)^2 = 14.5625; Σy*i = 2.4580; Σ(x*i y*i) = 6.5266

26

Cont'd

The equations to be solved are

7a* + 8.45b* = 2.4580
8.45a* + 14.5625b* = 6.5266

the solution of which is a* = -0.63334; b* = 0.81568. Now we need to calculate the original parameters by back transformation, i.e.

a = exp(a*) = 0.5308 ,  b = b* = 0.81568

The desired curve is:

y = 0.5308 exp(0.81568x)

The fully non-linear regression solution can be obtained by solving Equations 4.3.15a, b, & c using the methods (such as the bisection method) surveyed in Chapter 2. This solution to the non-linear regression is:

a = 0.5335; b = 0.81284

27

Cont'd

Table 4.3.2 compares the two solutions (table not reproduced). Sum of the squares of the errors:

ΣEi^2 = 2.75E-03 for pseudo non-linear regression

ΣEi^2 = 2.70E-03 for fully non-linear regression
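The pseudo non-linear solution is easy to reproduce. A sketch (Python/NumPy, added for illustration; np.polyfit performs the straight-line fit in the transformed variables):

```python
import numpy as np

x = np.array([0.05, 0.40, 0.80, 1.20, 1.60, 2.00, 2.40])
y = np.array([0.55, 0.75, 1.00, 1.40, 2.00, 2.70, 3.75])

# Pseudo non-linear regression: fit a straight line to (x, ln y)
b_star, a_star = np.polyfit(x, np.log(y), 1)   # slope, intercept
a, b = np.exp(a_star), b_star
print(a, b)   # ~0.5308, ~0.8157
```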

28

Hints on Linearization

It is not possible to give a general procedure for linearizing an arbitrarily given function, but a few functions commonly used in engineering practice can be tabulated together with the transformations that turn them into linear functions (the table itself is not reproduced in this transcript).

* Note that ln(A + B) ≠ ln(A) + ln(B), but ln(AB) = ln(A) + ln(B) and ln(A^b) = b ln(A)

29

Iterative Regression

Consider

y = a + b x^m   (4.3.18)

The linearization can be achieved as

ln(y - a) = ln(b) + m ln(x)   (4.3.19)

y* = ln(y - a), a0 = ln(b), a1 = m, x* = ln(x)

Here "a" is the intercept of the function to be found (i.e., y = a at x = 0 for m > 0). This situation is best handled by the following iterative procedure: first make a wise (educated) guess for the value of "a", then determine b & m. Then check whether the sum of the squares of the errors is indeed minimal. If not, change the value of "a" and repeat the process until the error function is minimized.
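A sketch of this iterative procedure (Python/NumPy; the scan-over-guesses strategy and all names are illustrative, not the original author's code):

```python
import numpy as np

def iterative_fit(x, y, a_guesses):
    """Fit y = a + b*x^m (Eq. 4.3.18) by scanning candidate intercepts a.
    For each guess, ln(y - a) = ln(b) + m*ln(x) (Eq. 4.3.19) is linear in
    ln(x), so b and m follow from a straight-line fit; keep the guess with
    the smallest sum of squared errors. Requires x > 0 and y > a."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    best = None
    for a in a_guesses:
        if np.any(y - a <= 0):
            continue                      # log transform undefined; skip
        m, ln_b = np.polyfit(np.log(x), np.log(y - a), 1)
        b = np.exp(ln_b)
        sse = np.sum((y - (a + b * x**m))**2)
        if best is None or sse < best[0]:
            best = (sse, a, b, m)
    return best                           # (sse, a, b, m)
```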

30

4.4 Polynomial Regression

In this section we generalize the procedure presented in Section 4.3 for linear regression to any polynomial of degree m given by

y = a0 + a1x + a2x^2 + ... + am x^m   (4.4.1)

For brevity let us derive the equations for a 2nd order polynomial and then extend them to the general case. A second order polynomial is given by

y = a0 + a1x + a2x^2   (4.4.2)

The sum of the squares of the errors is defined as

S = Σ_{i=1}^{N} [yi - (a0 + a1xi + a2xi^2)]^2   (4.4.3)

where N is the total number of data points.

31

Cont'd

The minimum of this error function S is determined by solving the following equations.

 

∂S/∂a0 = -2 Σ_{i=1}^{N} [yi - (a0 + a1xi + a2xi^2)] = 0   (4.4.4a)

∂S/∂a1 = -2 Σ_{i=1}^{N} [yi - (a0 + a1xi + a2xi^2)] xi = 0   (4.4.4b)

∂S/∂a2 = -2 Σ_{i=1}^{N} [yi - (a0 + a1xi + a2xi^2)] xi^2 = 0   (4.4.4c)

32

Cont'd

For a polynomial of degree m we have (m+1) such equations, and they can be written in compact matrix form as

N a0 + (Σxi) a1 + (Σxi^2) a2 + ... + (Σxi^m) am = Σyi
(Σxi) a0 + (Σxi^2) a1 + (Σxi^3) a2 + ... + (Σxi^(m+1)) am = Σxi yi
(Σxi^2) a0 + (Σxi^3) a1 + (Σxi^4) a2 + ... + (Σxi^(m+2)) am = Σxi^2 yi   (4.4.5)
. . .
(Σxi^m) a0 + (Σxi^(m+1)) a1 + (Σxi^(m+2)) a2 + ... + (Σxi^(2m)) am = Σxi^m yi

In Equation (4.4.5) all summations are performed from i = 1 to N. It should be noted that the coefficient matrix is a symmetric matrix.
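The normal equations 4.4.5 can be assembled and solved in a few lines. A sketch (Python/NumPy, added for illustration; np.polyfit would give the same answer):

```python
import numpy as np

def poly_regression(x, y, m):
    """Assemble and solve the (m+1)x(m+1) normal equations of Eq. 4.4.5."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    A = np.array([[np.sum(x**(i + j)) for j in range(m + 1)]
                  for i in range(m + 1)])          # symmetric, as noted above
    rhs = np.array([np.sum(y * x**i) for i in range(m + 1)])
    return np.linalg.solve(A, rhs)                 # [a0, a1, ..., am]

# Cubic fit to the data of Example E4.3.2 (cf. Example E4.4.1):
x = [0.05, 0.40, 0.80, 1.20, 1.60, 2.00, 2.40]
y = [0.55, 0.75, 1.00, 1.40, 2.00, 2.70, 3.75]
print(poly_regression(x, y, 3))   # ~[0.5297, 0.4676, 0.0794, 0.1182]
```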

33

Example E4.4.1

Fit a cubic polynomial to the data from Ex. E4.3.2.

Solution: Using the program POLYREG, the following set of equations is obtained:

  7.000000 a0 + 8.450001 a1 + 14.562500 a2 + 28.224130 a3 = 12.150000 

8.450001 a0 + 14.562500 a1 + 28.224130 a2 + 58.240010 a3 = 20.407500 

14.562500 a0 + 28.224130 a1 + 58.240010 a2 + 124.938300 a3 = 40.297380 

28.224130 a0 + 58.240010 a1 + 124.938300 a2 + 275.132500 a3 = 84.611270 

The solution of this system is

a0 = 0.5297; a1 = 0.4676; a2 = 0.07936; a3 = 0.1182 

The results from the cubic polynomial were compared with the actual data (comparison table not reproduced); the agreement is very good. The sum of the squares of the errors is significantly less than that of the exponential fits (see Example E4.3.2). Hence a cubic polynomial would be a good choice for this problem.


35

4.5 Multidimensional Linear Regression

For brevity we shall consider only two independent variables, e.g.

F = F(x, y); e.g. F(x, y) = a + bx + cy   (4.5.1)

 

To fit a least squares function of the form (4.5.1) to a given set of data points we need to minimize the error function given by

S(a, b, c) = Σ_{i=1}^{N} [Fi - (a + bxi + cyi)]^2   (4.5.2)

36

Cont'd

We set the derivatives of S with respect to a, b and c to zero to obtain the following equations:

∂S/∂a = -2 Σ [Fi - (a + bxi + cyi)] = 0
∂S/∂b = -2 Σ [Fi - (a + bxi + cyi)] xi = 0   (4.5.3)
∂S/∂c = -2 Σ [Fi - (a + bxi + cyi)] yi = 0

After rearranging and simplifying we obtain:

N a + (Σxi) b + (Σyi) c = ΣFi
(Σxi) a + (Σxi^2) b + (Σxi yi) c = Σxi Fi   (4.5.4)
(Σyi) a + (Σxi yi) b + (Σyi^2) c = Σyi Fi

which are the equations for determining the three unknowns a, b and c.

37

Cont'd

We note that power functions in multidimensional problems can be reduced to a linear form as well. Let:

 

F = a x^b y^c   (4.5.5)

 

ln(F) = ln(a) + b ln(x) + c ln(y)   (4.5.6)

 

Let F* = ln(F); x* = ln(x); y* = ln(y); a* = ln(a); b* = b; and c* = c to obtain

 

F* = a* + b*x* + c*y* (4.5.7)

 

which is in the form of the linear function given by Equation 4.5.1
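A sketch of Eq. 4.5.4 (Python/NumPy, added for illustration; the function name is arbitrary):

```python
import numpy as np

def fit_two_vars(x, y, F):
    """Solve Eq. 4.5.4 for [a, b, c] in F = a + b*x + c*y."""
    x, y, F = (np.asarray(v, float) for v in (x, y, F))
    A = np.array([[len(F),  x.sum(),      y.sum()],
                  [x.sum(), (x**2).sum(), (x*y).sum()],
                  [y.sum(), (x*y).sum(),  (y**2).sum()]])
    rhs = np.array([F.sum(), (x*F).sum(), (y*F).sum()])
    return np.linalg.solve(A, rhs)

# For the power law F = a*x^b*y^c (Eq. 4.5.5), apply the same routine
# to the logarithms: fit_two_vars(np.log(x), np.log(y), np.log(F)).
```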

38

Example E4.5.1

The following data is obtained from measurements of pressure, P, temperature, T, and density, ρ, of an unknown gas. Use multidimensional regression analysis of the state equation P = α ρ^β T^γ to determine what this species could be and whether it is an ideal gas.

Solution

The power law expression given above can be linearized by taking the logarithm of both sides of the equation to yield

ln(P) = ln(α) + β ln(ρ) + γ ln(T)

Let

F = ln P, x = ln ρ, y = ln T, a = ln α, b = β, c = γ.

Hence we have a linear equation with two independent variables,

F = a + bx + cy

which is the same as Equation (4.5.1). The set of equations to be solved is given by Equation (4.5.4). Inserting N = 20 and the values of the sums given in Table E4.5.1 into Equation (4.5.4) gives

20a + 27.08282b + 135.605975c = 267.5132
27.08282a + 42.95078b + 181.5625c = 366.4602
135.605975a + 181.5625b + 921.51621c = 1813.819

39

Cont'd

The solution, obtained using the Gaussian elimination method, is

a = 5.2438, b = 0.99994, c = 0.99964

Hence

α = e^a = 189.39 ;  β = b = 0.99994 ;  γ = c = 0.99964

P = 189.39 ρ^0.99994 T^0.99964

α ≈ R = Gas Constant ;  R = 189.39 J/(kg·K) ,  Ru = 8314 J/(kmol·K)

The molecular weight is given by

M = Ru/R = 8314/189.39 = 43.90

This gas is most probably CO2, which has a molecular weight of 44, and it is an ideal gas because, with β ≈ 1.0 and γ ≈ 1.0, the equation of state closely approximates P = ρRT.


42

4.6 Spline Curve Fitting and Interpolation

Imagine a given data set containing 50 points: what would a polynomial of 49th degree look like, and how difficult would it be to determine its coefficients? An alternative method is to fit piece-wise polynomials to subsets of the whole data (say two, three or four points at a time), then patch these at the point that is common to two adjacent intervals, as illustrated in Figure 4.6.1. The constraints at the points joining the two adjacent intervals can be selected as desired, but our purpose is to obtain "smooth" curves.

43

Cont'd

The degree of smoothness can be improved by making the derivatives of the functions at the common (junction) point equal. The name "splines" refers to the smoothness of the curves; it is derived from the elastic rods, called splines, which were long used by engineers to fit "smooth" curves through given points on a plane.

44

Quadratic Splines

Here, the polynomials to be determined for each interval are 2nd degree and given by

Pi(x) = ai + bix + cix^2 ;  i = 1, 2, 3, …, n   (4.6.1)

where n is the number of intervals.

First we require that each polynomial must pass through the two end points of that interval, that is, for each ith interval.

 

yi-1 = ai + bi xi-1 + ci xi-1^2   (4.6.2a)

yi = ai + bi xi + ci xi^2   (4.6.2b)

  For i = 1, 2, 3, …, n

 

The two end points of the ith interval are (xi-1, yi-1) and (xi, yi) as seen in Figure 4.6.1 .

45

Cont`d

Equation 4.6.2 constitutes 2n equations, but for n quadratic polynomials (3 coefficients each) the number of unknown coefficients is 3n. Hence we need to find n more equations.

 

Next, to make the combined curve smooth, we require that the first derivatives of two neighboring polynomials be equal at the connecting points, that is,

bi + 2ci xi = bi+1 + 2ci+1 xi ;  i = 1, 2, …, n-1   (4.6.2c)

This gives us an additional n-1 equations. The last equation needed is usually derived by requiring the 2nd derivative of either the first polynomial or the last polynomial to be zero at the first or last point, respectively. This means, of course, that either the first or the last curve is a straight line. Hence we take either c1 = 0 or cn = 0, whichever is more appropriate for our problem.

46

Example E4.6.1

Fit a quadratic spline curve to the 3 intervals of the data given below, which is generated from f(x) = 1/(1 + x^2).

i:  0    1    2    3
x:  0.0  0.5  1.0  1.5
y:  1.0  0.8  0.5  4/13

Solution:

Equations 4.6.2a and 4.6.2b yield (note that for the first polynomial the second derivative is taken to be zero, that is, c1 = 0):

i = 1:  a1 + b1x0 = y0
        a1 + b1x1 = y1

i = 2:  a2 + b2x1 + c2x1^2 = y1
        a2 + b2x2 + c2x2^2 = y2

i = 3:  a3 + b3x2 + c3x2^2 = y2
        a3 + b3x3 + c3x3^2 = y3

47

Cont'd

Further, Equation 4.6.2c yields

i = 1:  b1 + 0 = b2 + 2c2x1
i = 2:  b2 + 2c2x2 = b3 + 2c3x2

These are the 8 equations to be solved to determine the remaining 8 unknown coefficients.

Substituting the x and the y values from the table into the above equations and rearranging, we obtain the following matrix equation

[A]{C} = {R}

where

48

Cont'd

{C}^T = {a1, b1, a2, b2, c2, a3, b3, c3}

{R}^T = {1.0, 0.8, 0.8, 0.5, 0.5, 4/13, 0, 0}

(The 8x8 coefficient matrix [A], which did not survive in this transcript, follows from substituting the x values into the eight equations above.)

The solution is obtained using MATLAB 4.2 as

a1 = 1.0, b1 = -0.4, a2 = 0.9, b2 = 0.0, c2 = -0.4, a3 = 2.1308, b3 = -2.4615, c3 = 0.8308

The polynomials for each interval are

i = 1:  y = 1.0 - 0.4x = p1(x)

i = 2:  y = 0.9 + 0.0x - 0.4x^2 = p2(x)

i = 3:  y = 2.1308 - 2.4615x + 0.8308x^2 = p3(x)
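The 8x8 system of Example E4.6.1 can be written out and solved in a few lines; a sketch (Python/NumPy, added for illustration):

```python
import numpy as np

# Quadratic spline of Example E4.6.1 with c1 = 0.
# Unknown vector {C} = {a1, b1, a2, b2, c2, a3, b3, c3}
x = [0.0, 0.5, 1.0, 1.5]
A = np.array([
    [1, x[0], 0, 0,    0,       0, 0,    0      ],  # a1 + b1*x0           = y0
    [1, x[1], 0, 0,    0,       0, 0,    0      ],  # a1 + b1*x1           = y1
    [0, 0,    1, x[1], x[1]**2, 0, 0,    0      ],  # a2 + b2*x1 + c2*x1^2 = y1
    [0, 0,    1, x[2], x[2]**2, 0, 0,    0      ],  # a2 + b2*x2 + c2*x2^2 = y2
    [0, 0,    0, 0,    0,       1, x[2], x[2]**2],  # a3 + b3*x2 + c3*x2^2 = y2
    [0, 0,    0, 0,    0,       1, x[3], x[3]**2],  # a3 + b3*x3 + c3*x3^2 = y3
    [0, 1,    0, -1,   -2*x[1], 0, 0,    0      ],  # slope continuity at x1
    [0, 0,    0, 1,    2*x[2],  0, -1,   -2*x[2]],  # slope continuity at x2
])
R = np.array([1.0, 0.8, 0.8, 0.5, 0.5, 4/13, 0.0, 0.0])
print(np.linalg.solve(A, R))
# ~[1.0, -0.4, 0.9, 0.0, -0.4, 2.1308, -2.4615, 0.8308]
```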

