Classical Optimization Theory: Unconstrained Problem
B S V P Surya Teja, K Rohit, B Surya Tej, Mrudul Nekkanti
Aug 06, 2015

Transcript
Page 1: Classical optimization theory Unconstrained Problem

Classical Optimization Theory

B S V P Surya Teja, K Rohit

B Surya Tej, Mrudul Nekkanti

Page 2:

Find out the type of extreme points in the following figures.

Page 3:

Now what about this one?

• This function has no minimum or maximum over its whole domain.

• A minimum or maximum may exist only within a specific region; problems restricted in this way are what we call constrained problems.

Page 4:

What is minimum or maximum?

▪ Minimum, in mathematical terms, can be defined for a function as

▪ f (X0 + h) > f (X0) for all sufficiently small h ≠ 0

▪ Similarly, maximum can be defined for a function as

▪ f (X0 + h) < f (X0) for all sufficiently small h ≠ 0

▪ Here “h” is a small perturbation that tends to zero.

▪ These are local maxima and minima, because the comparison is not made over the whole domain.
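The bullets above can be stated precisely; this is a standard restatement of the definition, not taken from the slides:

```latex
% X_0 is a (strict) local minimum of f if there exists \varepsilon > 0 such that
f(X_0 + h) > f(X_0) \quad \text{for all } h \text{ with } 0 < \lVert h \rVert < \varepsilon,
% and a (strict) local maximum if the inequality is reversed:
f(X_0 + h) < f(X_0) \quad \text{for all } h \text{ with } 0 < \lVert h \rVert < \varepsilon.
```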

Page 5:

▪ But if we take the smallest of all the local minima, that value is called the global minimum.

▪ If we take the largest of all the local maxima, it is called the global maximum.

Page 6:

Necessary Conditions

▪ We are going to develop necessary and sufficient conditions for an n-variable function f(X) to have extrema.

▪ It is assumed that the first- and second-order partial derivatives of f(X) are continuous for all X.

▪ Theorem 1: A necessary condition for X0 to be an extreme point of f(X) is that ∇f(X0) = 0.

Page 7:

Sufficient conditions

▪ Theorem: A sufficient condition for a stationary point X0 to be an extreme point is that the Hessian matrix H evaluated at X0 satisfies one of the following conditions:

i. H is positive definite, in which case X0 is a minimum point

ii. H is negative definite, in which case X0 is a maximum point

Page 8:

Hessian Matrix
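The matrix shown on this slide is an image that the transcript does not capture; as a standard reference point, the Hessian of an n-variable function f is the matrix of second-order partial derivatives:

```latex
H(X) =
\begin{pmatrix}
\dfrac{\partial^2 f}{\partial x_1^2} & \cdots & \dfrac{\partial^2 f}{\partial x_1 \partial x_n} \\
\vdots & \ddots & \vdots \\
\dfrac{\partial^2 f}{\partial x_n \partial x_1} & \cdots & \dfrac{\partial^2 f}{\partial x_n^2}
\end{pmatrix}
```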

Page 9:

The Newton Raphson Method

▪ The necessary condition ∇f(X) = 0 can sometimes be difficult to solve numerically.

▪ So we use an iterative method called the Newton-Raphson method, which helps solve simultaneous nonlinear equations.

▪ The method is described in the next slides.

Page 10:

▪ Consider the simultaneous equations fi(X) = 0, i = 1, 2, …, m

▪ By Taylor's expansion about a given point Xk, we can approximate each equation as

fi(X) ≈ fi(Xk) + ∇fi(Xk)(X − Xk)

Setting the right-hand side to zero gives

fi(Xk) + ∇fi(Xk)(X − Xk) = 0

This can be written as Ak + Bk(X − Xk) = 0

OR X = Xk − Bk⁻¹Ak (Bk is nonsingular)
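A minimal Python sketch of this update, where Ak = F(Xk) is the vector of residuals and Bk = J(Xk) is the Jacobian. The two-equation system, starting point, and function names here are illustrative assumptions, not from the slides; solving the linear system avoids forming Bk⁻¹ explicitly:

```python
import numpy as np

def F(X):
    """Residual vector A_k: a hypothetical system f1 = f2 = 0."""
    x, y = X
    return np.array([x**2 + y**2 - 4, x - y])

def J(X):
    """Jacobian B_k of the system above."""
    x, y = X
    return np.array([[2*x, 2*y], [1.0, -1.0]])

def newton(X, tol=1e-10, max_iter=50):
    """Iterate X_{k+1} = X_k - B_k^{-1} A_k until successive points agree."""
    for _ in range(max_iter):
        step = np.linalg.solve(J(X), F(X))  # solves B_k * step = A_k
        X_next = X - step
        if np.linalg.norm(X_next - X) < tol:
            return X_next
        X = X_next
    return X

root = newton(np.array([2.0, 1.0]))
print(root)  # converges to (sqrt(2), sqrt(2))
```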

Page 11:

▪ The whole idea of this method is to start from an initial point and repeatedly apply the above equation until the iterates converge.

▪ The process stops when two successive points are almost equal.

▪ For a single-variable function f(x) this can be written as

xk+1 = xk − f(xk)/f′(xk)

Page 12:

Example

Demonstrate the Newton-Raphson method on the following

g(x) = (3x − 2)²(2x − 3)²

First find f(x) = g′(x)/2 = 72x³ − 234x² + 241x − 78 (the constant factor 2 is dropped, since it does not change the roots)

Then follow the Newton-Raphson equation for a single variable:

xk+1 = xk − f(xk)/f′(xk)

Page 13:

Solving

k    xk            f(xk)/f′(xk)   xk+1
0    10            2.967892314    7.032107686
1    7.032107686   1.97642875     5.055678936
2    5.055678936   1.314367243    3.741311693
3    3.741311693   0.871358025    2.869953668
4    2.869953668   0.573547408    2.29640626
5    2.29640626    0.371251989    1.925154272
6    1.925154272   0.230702166    1.694452106
7    1.694452106   0.128999578    1.565452528
8    1.565452528   0.054156405    1.511296123
9    1.511296123   0.010864068    1.500432055
10   1.500432055   0.000431385    1.50000067
11   1.50000067    6.70394E-07    1.5
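The iteration in the table can be reproduced with a short Python sketch; the polynomial is the f(x) from the Example slide, while the function names and stopping tolerance are my own choices:

```python
# f(x) = g'(x)/2 for g(x) = (3x - 2)^2 (2x - 3)^2, as on the Example slide
def f(x):
    return 72*x**3 - 234*x**2 + 241*x - 78

# f'(x), needed for the Newton-Raphson step
def f_prime(x):
    return 216*x**2 - 468*x + 241

def newton_raphson(x, tol=1e-9, max_iter=100):
    """Iterate x_{k+1} = x_k - f(x_k)/f'(x_k) until successive points agree."""
    for _ in range(max_iter):
        x_next = x - f(x) / f_prime(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

print(newton_raphson(10.0))  # converges to 1.5, matching the table
```

Starting from other initial points picks out the other roots of f, i.e. the other stationary points of g.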

Page 14:

▪ It converges to 1.5

▪ Taking other initial values, we can converge to the other points. Initial values of 1 and 0.5 should give the other two extreme points.

Page 15:

Questions?

Page 16:

Thank You!