An archive of Mark's Spring 2018 Numerical Analysis course.

Optimize your personal function

mark

(5 pts)

Referring back to your personalized function, apply both the scipy.optimize.minimize command and the gss command from our class optimization notebook to your personalized function. If all goes well, plot your function with the minimum marked. If not, try to explain why.

theoldernoah

Optimizing my function

First we plot my function to see, once again, what it looks like.

from numpy import sin, exp, cos
from scipy.optimize import minimize, newton
def f(x): return x**3 + exp(x)+sin(x**3)
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
x = np.arange(-2,2,0.1)
y = f(x)
plt.plot(x,y)

opt1

The graph seems to have a positive slope everywhere, which is not a good sign (though it is a positive one).
Applying the minimize command to this function gives us the following output:

minimize(lambda x: f(x), -1.4)

# Out:
# fun: -9.061091807076984e+26
# hess_inv: array([[ 1.34396778]])
# jac: array([ 0.])
# message: 'Optimization terminated successfully.'
# nfev: 60
# nit: 2
# njev: 20
# status: 0
# success: True
# x: array([ -9.67669034e+08])

We got some output, and the optimization "terminated successfully", but the numbers are disconcerting: x \approx -9.7 \times 10^{8} means the minimizer ran far off to the left. That actually makes sense, because f(x) \to -\infty as x \to -\infty, so there is no global minimum to find.
We trudge ahead with our unsettledness and apply the gss command to the function.
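As an aside, scipy can also be told to search only inside the window we plotted. Here is a minimal sketch using minimize_scalar with the 'bounded' method (an addition, not part of the class notebook):

from scipy.optimize import minimize_scalar
# constrain the search to the plotted interval [-2, 2]
res = minimize_scalar(f, bounds=(-2, 2), method='bounded')
print((res.x, res.fun))  # should land near the left endpoint, x = -2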

gr = (np.sqrt(5) + 1) / 2  # the golden ratio
def gss(f, a, b, tol=10**(-10)):
    # golden section search: keep two interior points c < d inside [a, b]
    c = b - (b - a) / gr
    d = a + (b - a) / gr
    while abs(c - d) > tol:
        if f(c) < f(d):
            b = d  # the minimum lies in [a, d]
        else:
            a = c  # the minimum lies in [c, b]
        c = b - (b - a) / gr
        d = a + (b - a) / gr
    return (b + a) / 2  # midpoint of the final bracket
xmin = gss(f,-2,0)
ymin = f(xmin)
print((xmin,ymin))
plt.plot(x,y)
plt.plot(xmin,ymin,'ok')

# Out:
# (-1.9999999998494846, -8.8540229618230146)

opt2

As we may have expected, the minimum of our function on this interval is at the left endpoint of the graph. We could have guessed this given that the graph has a positive slope everywhere.
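We can also back up the positive-slope claim numerically. Here is a quick check (an addition, using the hand-computed derivative f'(x) = 3x^2 + e^x + 3x^2\cos(x^3)) that the slope stays positive on the interval:

# hand-computed derivative of f(x) = x**3 + exp(x) + sin(x**3)
def fprime(x): return 3*x**2 + exp(x) + 3*x**2*cos(x**3)
xs = np.linspace(-2, 2, 10001)
print(fprime(xs).min())  # positive at every sample, so f is increasing here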

anonymous_user

Let’s take a look at my anonymously personalized function, f(x)=2x^3 + \cos(x).

We first produce a few graphs to visualize the behavior of the function. At most viewing scales, the cubic component of the function dominates, as seen here:

import matplotlib.pyplot as plt
import numpy as np
from scipy.optimize import minimize
def f(x): return 2*x**3 + np.cos(x)
x = np.linspace(-5,5,5000)
y=f(x)
plt.plot(x,y)
plt.show()

cubicdom

If, however, we look at the portion very close to zero, we can see an interval in which the cosine term dominates. We've got no hope of finding a minimum or maximum where the cubic term is the more significant, but it looks like there's a local minimum somewhere between 0 and 1.

x2 = np.linspace(-0.05,0.2,5000)
plt.plot(x2,f(x2))
plt.show()

cosdom

We’ll start with the scipy.optimize.minimize function, for which we need to provide a starting value. It looks like there is a maximum at x=0, so we pick a starting point a little to the right to avoid issues involving f'(x)=0. Let’s try x=0.1.
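(As a quick aside, not part of the assignment: if we start exactly at the critical point x = 0, the gradient scipy estimates there is essentially zero, so the search may simply stop at the maximum it was handed. That is precisely what the offset starting point avoids.)

res0 = minimize(lambda x: f(x), 0.0)
print(res0.x)  # expect the search to stall at (or extremely near) 0.0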

minimize(lambda x: f(x), 0.1)
# Out:
#      fun: 0.9954021971219735
# hess_inv: array([[ 0.99123218]])
#      jac: array([ -8.19563866e-08])
#  message: 'Optimization terminated successfully.'
#     nfev: 21
#      nit: 5
#     njev: 7
#   status: 0
#  success: True
#        x: array([ 0.16590308])

We now plot this point along with the function, in a window where we can see the cosine-dominant shape.

plt.plot(x2,f(x2))
plt.plot(0.16590308,f(0.16590308),'ok')
plt.show()

Min

We now wish to repeat the same task, but with the gss function instead.

gr = (np.sqrt(5) + 1) / 2
def gss(f, a, b, tol=10**(-10)):
    c = b - (b - a) / gr
    d = a + (b - a) / gr
    while abs(c - d) > tol:
        if f(c) < f(d):
            b = d
        else:
            a = c
        c = b - (b - a) / gr
        d = a + (b - a) / gr
    return (b + a) / 2
xmin = gss(f,0,2)
ymin = f(xmin)
print((xmin,ymin))
xg = np.linspace(-0.25,0.4,5000)
plt.plot(xg,f(xg))
plt.plot(xmin,ymin,'ok')
plt.show()

#(0.16590316905821892, 0.99540219712196987)

Success! Here’s the plot:

gss

Above this x value, f(x) is monotonically increasing. For x<0, f(x) is also increasing, so f decreases without bound as x \to -\infty: there is no global minimum to be found, nor any additional local minima outside of the studied interval.

jorho85

My personalized function is f(x) = x^9(x^3 + e^{x^3}). Our goal is to locate the minimum of this function using the two techniques we recently learned in class. Well, first let's graph the function to get an idea of where this minimum might be.

%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy.optimize import minimize
from numpy import exp
def f(x): return x**9*(x**3 + exp(x**3))
x = np.linspace(-1,0,5000)
y=f(x)
plt.plot(x,y) 

image1

From this image, it looks like the minimum might be around -0.8, so let’s zoom in and see if we can get a better idea.

x2 = np.linspace(-0.8, -0.6, 5000)
y2 = f(x2)
plt.plot(x2, y2)

image2

Fantastic: from this we see that there is a minimum between -0.8 and -0.6. Now we can use the minimize function to find the local minimum.

minimize(lambda x: f(x), -0.8)
# Out:
#      fun: -0.017565538022913132
# hess_inv: array([[ 0.31390026]])
#      jac: array([ -3.18372622e-06])
#  message: 'Optimization terminated successfully.'
#     nfev: 24
#      nit: 6
#     njev: 8
#   status: 0
#  success: True
#        x: array([-0.75030676])

The minimize function found that x = -0.75030676, so let's plot this point on the graph and see if we agree.

plt.plot(x2,y2)
plt.plot(-0.75030676, f(-0.75030676), 'ok')

image3

Looks good to me. Now let's see if we get the same result using the golden section search.

gr = (np.sqrt(5) + 1) / 2
def gss(f, a, b, tol=10**(-10)):
    c = b - (b - a) / gr
    d = a + (b - a) / gr
    while abs(c - d) > tol:
        if f(c) < f(d):
            b = d
        else:
            a = c
        c = b - (b - a) / gr
        d = a + (b - a) / gr
    return (b + a) / 2

xmin = gss(f, -0.8, -0.6)
ymin = f(xmin)
print((xmin,ymin))
plt.plot(x2,y2)
plt.plot(xmin,ymin,'ok')
# Out: (-0.75030576506340152, -0.017565538024506337)

image3

As expected, this looks almost identical to the other minimum, so we can safely say that (-0.75030576506340152, -0.017565538024506337) is a good numerical approximation of the local minimum of the function f(x) = x^9(x^3 + e^{x^3}).

CestBeau

My personalized function was 2\cos^3(x). It is symmetric about the y-axis, and when restricted to the interval [-2,2] it achieves its minimum at both endpoints. Let's check it out.

%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from scipy.optimize import minimize 
def f(x):  return (2*(np.cos(x))**3)
x = np.linspace(-2,2,1000)
y = f(x)
plt.plot(x,y)

image

Now we minimize the function with scipy optimize. This gives us the following results,

minimize(lambda x: f(x), 0.1)
minimum = -1.9999999999999987
print(f(minimum),f(-1*minimum))
plt.plot(x,y)
plt.plot(minimum,f(minimum),'ro')

image

As expected we get at least one of the endpoints, -2 in this case. (One caveat: the -1.999... above is almost certainly the minimum value fun \approx -2 that minimize found at x = \pi, not an x location; it only coincidentally sits near the endpoint -2.) Now let's see if gss gives us anything different.

gr = (np.sqrt(5) + 1) / 2
def gss(f, a, b, tol=10**(-10)):
    c = b - (b - a) / gr
    d = a + (b - a) / gr
    while abs(c - d) > tol:
        if f(c) < f(d):
            b = d
        else:
            a = c
        c = b - (b - a) / gr
        d = a + (b - a) / gr
    return (b + a) / 2
xmin = gss(f,-2,2)
ymin = f(xmin)
print(xmin,ymin)
plt.plot(x,f(x))
plt.plot(xmin,ymin,'ro')

image

This yields -2 as the minimum. Interesting! Since the function is even, f(c) and f(d) are equal in exact arithmetic on the symmetric bracket [-2, 2], so which half gss keeps on the first step comes down to the tie-breaking in the comparison f(c) < f(d) (and floating-point rounding); after that first cut, the search is committed to one endpoint.
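We can watch the tie-break happen by computing the first pair of interior points by hand (a quick check, not part of the assignment):

a, b = -2, 2
c = b - (b - a) / gr  # first left interior point
d = a + (b - a) / gr  # first right interior point
# by symmetry f(c) and f(d) are equal in exact arithmetic, so this
# comparison is a tie, and the branch taken commits the search to one half
print((c, d, f(c), f(d), f(c) < f(d)))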

nathan

We will need the following packages:

import numpy as np
from numpy import cos, sin
from matplotlib import pyplot as plt
from scipy.optimize import minimize

Let’s see what my personal function looks like.

def f(x):
    return cos(x**9)
x = np.arange(-1.6, 1.6, 0.001) #xmin, xmax, stepsize
y = f(x)
plt.plot(x,y)
plt.show()

plot2

Quite the chaotic function. Now, let’s find a minimum using gss and scipy.optimize.minimize. For simplicity, we will aim for the minimum around x = 1.1 since there are countless minima even in this small domain.

gss is defined here:

gr = (np.sqrt(5) + 1) / 2 #golden ratio
def gss(f, a, b, tol=10**(-10)):
    c = b - (b - a) / gr
    d = a + (b - a) / gr 
    while abs(c - d) > tol:
        if f(c) < f(d):
            b = d
        else:
            a = c
        c = b - (b - a) / gr
        d = a + (b - a) / gr
    return (b + a) / 2

And if we run this on our function…

xmin = gss(f, 0, 1.2)
ymin = f(xmin)
print((xmin,ymin))
# Out: (1.1356352771830514, -0.99999999999999989)

This seems about right. Next, compare it to scipy.optimize.minimize.

minimize(lambda x: f(x), 1.2)
# Out:
# fun: -0.999999999999995
# hess_inv: array([[ 0.00161112]])
# jac: array([  2.13831663e-06])
# message: 'Optimization terminated successfully.'
# nfev: 30
# nit: 7
# njev: 10
# status: 0
# success: True
# x: array([ 1.13563527])

Fantastic, they agree. Now to plot it.

plt.plot(x,y)
plt.plot(xmin,ymin,'ok')
plt.show()

opt

It appears that both methods have successfully found a minimum.

brian

My randomly generated function was

f(x)=\sin^{9}(x)

Here is how it was defined

from numpy import sin
def f(x): return sin(x)**9

and here is its graph:

import numpy as np
from matplotlib import pyplot as plt
x = np.arange(-2, 2, 0.01) 
y = f(x)
plt.plot(x,y)
plt.savefig('plot.png')
plt.show()

output_4_0

Let's optimize this function! To begin, let's use the scipy.optimize.minimize command. Keep in mind that this technique works by driving the derivative of f to zero (scipy's default here is BFGS, a quasi-Newton method).

%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy.optimize import minimize, newton

minimize(lambda x: f(x), -1.4)
      fun: -0.9999999999967696
 hess_inv: array([[ 0.11116889]])
      jac: array([  7.71135092e-06])
  message: 'Optimization terminated successfully.'
     nfev: 15
      nit: 3
     njev: 5
   status: 0
  success: True
        x: array([-1.57079548])

The output -1.57079548 seems reasonable (it is approximately -\pi/2); however, before we celebrate, let's see if this result is consistent with the golden section search method.

gr = (np.sqrt(5) + 1) / 2
def gss(f, a, b, tol=10**(-10)):
    c = b - (b - a) / gr
    d = a + (b - a) / gr 
    while abs(c - d) > tol:
        if f(c) < f(d):
            b = d
        else:
            a = c
        c = b - (b - a) / gr
        d = a + (b - a) / gr
    return (b + a) / 2

xmin = gss(f, -2, 2)
ymin = f(xmin)
print((xmin,ymin))
plt.plot(x,y)
plt.plot(xmin,ymin,'ok')
# Out: (-1.5707963162746252, -1.0)

output_8_2

In this case, the gss method returns the location of the minimum of f within [-2, 2] as x = -1.5707963162746252. This result is approximately equal to the minimizer determined through optimization via Newton's method, and both values are consistent with the location of the minimum as depicted on the graph of f. Success!
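Since the imports above already include newton, we can also check the answer the way the Newton framing suggests: apply it to the hand-computed derivative f'(x) = 9\sin^8(x)\cos(x). (This check, and the extra cos import it needs, are an addition to the original post.)

from numpy import cos
# find a zero of f' near the reported minimizer
def fprime(x): return 9*sin(x)**8*cos(x)
print(newton(fprime, -1.4))  # should converge to about -pi/2 = -1.5707963...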

funmanbobyjo
from numpy import sin, cos, exp
def f(x): return (x**3+exp(x**3))*(exp(x))
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
x = np.linspace(-5,1.2,100)
y = f(x)
plt.plot(x,y)
plt.plot(x,x)
plt.savefig('my_pic.png')

my_pic

This code was used to define and plot my personalized function. It seems to have a minimum around -3.

Using the minimize command,

minimize(lambda x: f(x), -1.1)

# Out:
#      fun: -1.3442508458864701
# hess_inv: array([[ 2.21019973]])
#      jac: array([ 6.40749931e-06])
#  message: 'Optimization terminated successfully.'
#     nfev: 18
#      nit: 5
#     njev: 6
#   status: 0
#  success: True
#        x: array([-2.99998571])

So there is a local minimum at, or very close to, x = -3.
Using the gss command,

gr = (np.sqrt(5) + 1) / 2
def gss(f, a, b, tol=10**(-10)):
    c = b - (b - a) / gr
    d = a + (b - a) / gr
    while abs(c - d) > tol:
        if f(c) < f(d):
            b = d
        else:
            a = c
        c = b - (b - a) / gr
        d = a + (b - a) / gr
    return (b + a) / 2
xmin = gss(f,-2,0)
ymin = f(xmin)
print((xmin,ymin))
plt.plot(x,y)
plt.plot(xmin,ymin,'ok')

#(-1.9999999998494846, -1.0826368658815702)

This says that the minimum is at -2, which looks a little off: the search interval [-2, 0] doesn't actually contain the minimizer near -3, so gss just returns the left endpoint of the bracket.
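The fix is simply to hand gss a bracket that actually contains the minimizer. A rerun with a wider interval (an adjustment, not in the original post):

xmin = gss(f, -5, 0)  # this bracket contains the minimizer near -3
ymin = f(xmin)
print((xmin, ymin))   # should land near (-3.0, -1.3443), matching minimize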

Lorentz

Here was my personalized function. You'll notice that it is an odd function that is strictly increasing, so there are no local minima.
my_pic
Using the minimize command, this becomes even more apparent:
minimize(lambda x: f(x),-0.5)

fun: -2.4709920916279373e+26
hess_inv: array([[1]])
jac: array([0.])
message: 'Optimization terminated successfully.'
nfev: 51
nit: 1
njev: 17
status: 0
success: True
x: array([-6.27514528e+08])
Essentially f(-\infty)=-\infty. Using the golden section search further supports the existence of zero local minima.
gss

poster
x^{3} + \sin{\left (\sin{\left (x \right )} \right )}

Graphed:

import numpy as np
from numpy import sin, exp, cos
def f(x): return x**3 + sin(sin(x))
from matplotlib import pyplot as plt
x = np.arange(-2,2,0.1)
y = f(x)
plt.plot(x,y)

download (1)

Since the slope of this function is always positive, the smallest value on any interval always occurs at the left endpoint.

from scipy.optimize import minimize, newton
minimize(lambda x: f(x), 0)

#Out: 
# fun: -4.584963095654135e+25
# hess_inv: array([[1]])
# jac: array([ 0.])
# message: 'Optimization terminated successfully.'
# nfev: 48
# nit: 1
# njev: 16
# status: 0
# success: True
# x: array([ -3.57913941e+08])

Applying gss:

gr = (np.sqrt(5) + 1) / 2
def gss(f, a, b, tol=10**(-10)):
    c = b - (b - a) / gr
    d = a + (b - a) / gr
    while abs(c - d) > tol:
        if f(c) < f(d):
            b = d
        else:
            a = c
        c = b - (b - a) / gr
        d = a + (b - a) / gr
    return (b + a) / 2
xmin = gss(f,-2,0)
ymin = f(xmin)
print((xmin,ymin))
plt.plot(x,y)
plt.plot(xmin,ymin,'ok')

#Out:
# (-1.9999999998494846, -8.7890723418051806)

download (2)

Cornelius

My personalized function is:

f(x) = \cos^{2}(x^{3})

We can define this function in Python like so:

from numpy import sin, cos, exp
def f(x):
    return cos(x**3)**2

Graphing it:

import numpy as np
from matplotlib import pyplot as plt
x = np.arange(-2,2,0.01)
y = f(x)
plt.plot(x,y)
plt.savefig('plot.png')
plt.show()

MyImage

It looks like the function has five local maxima and six local minima on this interval. Let us apply the minimize command to the above function:

from scipy.optimize import minimize, newton
minimize(lambda x: f(x), -1.4)

Here is the output:

     fun: 1.4514996708201764e-16
hess_inv: array([[ 0.03045031]])
     jac: array([  1.47201245e-07])
 message: 'Optimization terminated successfully.'
    nfev: 24
     nit: 6
    njev: 8
  status: 0
 success: True
       x: array([-1.16244735])

Now let us apply the gss command to the function.

gr = (np.sqrt(5) + 1) / 2
def gss(f, a, b, tol=10**(-10)):
    c = b - (b - a) / gr
    d = a + (b - a) / gr 
    while abs(c - d) > tol:
        if f(c) < f(d):
            b = d
        else:
            a = c
        c = b - (b - a) / gr
        d = a + (b - a) / gr
    return (b + a) / 2

xmin = gss(f,-2,0)
ymin = f(xmin)
print((xmin,ymin))
plt.plot(x,y)
plt.plot(xmin,ymin,'ok')

# Out: (-1.1624473515406879, 1.5855424720944678e-20)

It looks like all went well, so here is the function with the minimum marked:
MyImageII

Spin

To recap, my personal function is e^{x} \cos{\left (x \right )} + \sin{\left (x \right )} and it looks like this over the range [-2,2].

plainFunction

On initial inspection, there appears to be one local minimum somewhere around -1.75. If you’re having trouble seeing it, zoom in and pay attention to the aliasing in the image.

First, I tried to find the minimum of the function using the minimize command from the scipy library.

import numpy as np
from matplotlib import pyplot as plt
from scipy.optimize import minimize

def f(x): return np.e**x * np.cos(x) + np.sin(x)
minimize(lambda x: f(x), -1.75)

# output:
# fun: -1.0155092910337085
# hess_inv: array([[ 0.7435323]])
# jac: array([ -1.63912773e-07])
# message: 'Optimization terminated successfully.'
# nfev: 12
# nit: 3
# njev: 4
# status: 0
# success: True
# x: array([-1.72134646])

The only output values we care about here are “fun” (the y value of the minimum) and “x” (the x value of the minimum).
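Equivalently, rather than retyping the numbers, we can pull them straight off the result object that minimize returns; its x and fun attributes match the printout above. A small sketch:

res = minimize(lambda x: f(x), -1.75)
minimizeX = res.x[0]  # the minimizer, returned as a length-1 array
minimizeY = res.fun   # the minimum value
print((minimizeX, minimizeY))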

x = np.linspace(-2,2,100)
y = f(x)
minimizeX = -1.72134646
minimizeY = -1.0155092910337085
plt.plot(x,y)
plt.plot(minimizeX,minimizeY,'ok')
plt.savefig('minimizeFunction.png')  

minimizeFunction

Plotting the function confirms what we already know; the minimum is exactly where we expect it to be.

We can also find the location of the minimum by using several golden ratio cuts. This “gss” function provided on the course page does just that.

gr = (np.sqrt(5) + 1) / 2
def gss(f, a, b, tol=10**(-10)):
    c = b - (b - a) / gr
    d = a + (b - a) / gr
    while abs(c - d) > tol:
        if f(c) < f(d):
            b = d
        else:
            a = c
        c = b - (b - a) / gr
        d = a + (b - a) / gr
    return (b + a) / 2

goldenX = gss(f,-2.0,-1.5)
goldenY = f(goldenX)
print((goldenX,goldenY))

# output:
# (-1.7213463252668175, -1.01550929103372)

Voila! The gss function gives us the exact same value for x, and plugging that x into f(x) gives a similarly familiar y value.
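One closing aside on cost: each golden cut shrinks the bracket by a factor of 1/gr \approx 0.618, so reaching a tolerance tol from an initial width |b-a| takes roughly \log(tol/|b-a|)/\log(1/gr) iterations. A back-of-envelope check for the call above (width 0.5, tol 10^{-10}):

# rough pass count for the while loop (a mild overestimate, since the
# loop actually tests |c - d| rather than the full bracket width)
print(np.log(1e-10 / 0.5) / np.log(1 / gr))  # about 46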

opernie

My function is f(x)=x^3 + x + e^x + \cos(x).
First, we plot the function:

from numpy import cos, exp
def f(x): return x**3 + x + exp(x) + cos(x)
import numpy as np
from matplotlib import pyplot as plt
x = np.arange(-2,2,.001)
y = f(x) 
plt.plot(x,y)
plt.plot(x,x)
plt.savefig('myplot.png') 

image

Note that the function seems to have a positive slope everywhere, so this should be fun.
Applying the code:

from scipy.optimize import minimize
minimize(lambda x: f(x), -0.5)

I received the following output:

fun: -4.723892082017134e+25
hess_inv: array([[1]])
jac: array([0.])
message: 'Optimization terminated successfully.'
nfev: 48
nit: 1
njev: 16
status: 0
success: True
x: array([-3.61493081e+08])

Note that although the optimization terminated successfully, the returned value is not a real minimum: our slope is always positive, so f just decreases without bound as x \to -\infty, and the minimizer ran a long way down that slope before quitting. In other words, the reported minimum is a load of crap.
Let's see what the gss command has to say about this.

gr = (np.sqrt(5) + 1) / 2
def gss(f, a, b, tol=10**(-10)):
    c = b - (b - a) / gr
    d = a + (b - a) / gr 
    while abs(c - d) > tol:
        if f(c) < f(d):
            b = d
        else:
            a = c
        c = b - (b - a) / gr
        d = a + (b - a) / gr
    return (b + a) / 2
xmin = gss(f,-2,0)
ymin = f(xmin)
print((xmin,ymin))
plt.plot(x,y)
plt.plot(xmin,ymin,'ok')

This bit of code yields:

(-1.9999999998494846, -10.280811551196598)

image

Unfortunately, although gss produces the minimum over our plot range, that value is just the left endpoint of the search interval rather than a true local minimum, so this solution is still disconcerting.
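For what it's worth, scipy can also be told to respect the interval. Here is a sketch using the bound-constrained L-BFGS-B method (this call is an addition to the original post):

res = minimize(lambda x: f(x), -0.5, method='L-BFGS-B', bounds=[(-2.0, 0.0)])
print((res.x[0], res.fun))  # should land at the left endpoint, x = -2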

dumptruckman

My random function is:

x^{3} + \sin{\left (x^{3} \right )} + \cos{\left (x^{3} \right )}

This is defined in Python as:

from numpy import sin, cos
def f(x):
    return x**3 + sin(x**3) + cos(x**3)

A plot of the function over the interval [-2,2] using matplotlib looks like:

import numpy as np
%matplotlib inline
from matplotlib import pyplot as plt

x = np.arange(-2, 2, 0.001)
y = f(x)
plt.plot(x,y)
plt.show()

image

To optimize this function we’ll first try scipy’s minimize. We’ll use an initial value of -1.5 which seems like a reasonable guess.

from scipy.optimize import minimize

minimize(lambda x: f(x), -1.5)

# OUTPUT
#      fun: -4.141592653589793
# hess_inv: array([[ 0.02416632]])
#      jac: array([  2.38418579e-07])
#  message: 'Optimization terminated successfully.'
#     nfev: 18
#      nit: 4
#     njev: 6
#   status: 0
#  success: True
#        x: array([-1.46459189])

The single x value of -1.46459189 looks promising: it is approximately -\pi^{1/3}, and at x^3 = -\pi the function value is -\pi + \sin(-\pi) + \cos(-\pi) = -(\pi + 1) \approx -4.14159, exactly the fun reported above. Let's just double check using the golden section search function.

gr = (np.sqrt(5) + 1) / 2
def gss(f, a, b, tol=10**(-10)):
    c = b - (b - a) / gr
    d = a + (b - a) / gr 
    while abs(c - d) > tol:
        if f(c) < f(d):
            b = d
        else:
            a = c
        c = b - (b - a) / gr
        d = a + (b - a) / gr
    return (b + a) / 2

gss(f, -2, 2)

# OUTPUT
# -1.4645918827794868

It looks like the gss method confirms the previous minimum found. Let’s graph it!

x_min = gss(f, -2, 2)
y_min = f(x_min)
x = np.arange(-2, 2, 0.001)
y = f(x)
plt.plot(x,y)
plt.plot(x_min, y_min, 'o')
plt.show()

image

Sampson

My personalized function is \left(e^{x} + \sin{\left (x \right )}\right) \sin{\left (x \right )}. I then used the following Python code to define my function and plot it.

from numpy import sin, exp
from scipy.optimize import minimize, newton
from matplotlib import pyplot as plt
def f(x): return (exp(x)+sin(x))*sin(x)
import numpy as np
x = np.arange(-2,2,.01)
y = f(x)
plt.ylim(-1,3)
plt.plot(x,y)
plt.grid(True)
plt.show()

Personal_Function

Then I used scipy’s minimize function to find the relative minimum of the function. My initial point was -1.

minimize(lambda x: f(x), -1)
# Out:
#      fun: -0.13252243074682216
# hess_inv: array([[ 0.31530841]])
#      jac: array([ -3.07336450e-07])
#  message: 'Optimization terminated successfully.'
#     nfev: 24
#      nit: 5
#     njev: 8
#   status: 0
#  success: True
#        x: array([-0.27565859])

We can see that this says there is a minimum value of y = -0.13252243074682216 at x = -0.27565859. We can back up this conclusion by using the gss command:

gr = (np.sqrt(5) + 1) / 2
def gss(f, a, b, tol=10**(-10)):
    c = b - (b - a) / gr
    d = a + (b - a) / gr 
    while abs(c - d) > tol:
        if f(c) < f(d):
            b = d
        else:
            a = c
        c = b - (b - a) / gr
        d = a + (b - a) / gr
    return (b + a) / 2
xmin = gss(f, -1, 1)
ymin = f(xmin)
print((xmin,ymin))
plt.plot(x,y)
plt.plot(xmin,ymin,'ok')
plt.grid(True)
fig = plt.gcf()
plt.show()
fig.savefig('Personal_Function_2')
# Out: (-0.27565848211975663, -0.13252243074683936)

Personal_Function_2

Yay! Looks like it worked.

dakota

First, let us recall my original personal function and what it looked like over the interval [-2,2].

from numpy import cos, exp
from scipy.optimize import minimize, newton
import numpy as np
def f(x): return exp(cos(x))*(cos(x))**3
%matplotlib inline
from matplotlib import pyplot as plt
x = np.linspace(-2,2,100)
y = f(x)
plt.plot(x,y)
plt.plot(x,x)

my_pic

Looking at my function, it appears that the candidates for local minima are the endpoints x = \{-2, 2\}. I decided to zoom in for a better look around the right endpoint x = 2.

x2 = np.linspace(1.8, 2.0, 5000)
y2 = f(x2)
plt.plot(x2, y2)

zoom

This image supports the claim that a minimum occurs around the endpoints. The next step is to use the minimize command to see where the relative minimum occurs.

minimize(lambda x: f(x), 1.9)

# Out:
#      fun: -0.3678794411713762
# hess_inv: array([[ 1.36051517]])
#      jac: array([  3.16649675e-07])
#  message: 'Optimization terminated successfully.'
#     nfev: 24
#      nit: 3
#     njev: 8
#   status: 0
#  success: True
#        x: array([ 3.14159308])

This output tells us that the relative minimum lies outside of the original interval on which we looked at this function, specifically at x = \pi; indeed f(\pi) = e^{\cos\pi}\cos^3(\pi) = -e^{-1} \approx -0.36788, matching the fun value above. Expanding the right endpoint of the graph gives us a better visual of this.

new

Now, we can use the gss command to take a look at our minima. The first block runs over the interval [0, 2].

gr = (np.sqrt(5) + 1) / 2
def gss(f, a, b, tol=10**(-10)):
    c = b - (b - a) / gr
    d = a + (b - a) / gr
    while abs(c - d) > tol:
        if f(c) < f(d):
            b = d
        else:
            a = c
        c = b - (b - a) / gr
        d = a + (b - a) / gr
    return (b + a) / 2
xmin = gss(f,0,2)
ymin = f(xmin)
print((xmin,ymin))
xg = np.linspace(1.8,2.0,5000)
plt.plot(xg,f(xg))
plt.plot(xmin,ymin,'ok')
plt.show()

# Out: (1.9999999998494846, -0.047534564304671401)

Unknown

This confirms that on the original interval, the smallest value occurs at the endpoint x = 2. However, we know from the minimize command that this is not the true relative minimum of the function. After running the code a second time with the appropriate changes (searching over an interval containing \pi), we get the following:

# Out: (3.1415926640364682, -0.36787944117144233)

Unknown

MatheMagician

My personalized function was

f(x)=x^{3}+x+2\cos(x)

I am going to graph it to get an idea of the function's general behavior using this code:

import matplotlib.pyplot as plt
import numpy as np
from scipy.optimize import minimize
def f(x): return x**3+x +2* np.cos(x)
x = np.linspace(-5,5,5000)
y=f(x)
plt.plot(x,y)
plt.show()

image

With this function we can't find a global max or min, but we can search for local minima by zooming in on the interval [-2.5, 2.5].

x2 = np.linspace(-2.5,2.5,5000)
plt.plot(x2,f(x2))
plt.show()

image
This function is monotonically increasing everywhere. As a result, there are no candidates for local maxima or minima.
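We can back this up with a quick numerical check (an addition), using the hand-computed derivative f'(x) = 3x^2 + 1 - 2\sin(x):

def fprime(x): return 3*x**2 + 1 - 2*np.sin(x)
xs = np.linspace(-5, 5, 10001)
print(fprime(xs).min())  # stays well above zero, so f is strictly increasing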