
Here is a Python class I wrote to implement Newton's method for optimization, for the case where you are trying to optimize a function that takes a vector input and gives a scalar output. I use numdifftools to approximate the Hessian and the gradient of the given function, then perform the Newton method iteration.
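The update implemented below is the damped Newton step

$$x_{t+1} = x_t - \alpha \, H(x_t)^{-1} \nabla f(x_t),$$

where $\alpha$ is the step size and $H$ and $\nabla f$ are numerical approximations to the Hessian and gradient.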

import numpy as np
import numdifftools as nd

class multivariate_newton(object):
    def __init__(self, func, start_point, step_size=0.8, num_iter=100, tol=1e-6):
        '''
        func: function to be optimized. Takes a vector argument as input
              and returns a scalar output
        start_point: initial guess for the iteration
        step_size: step size in the Newton method update step
        num_iter: maximum number of iterations for the Newton method to run
        tol: tolerance to determine convergence
        '''
        self.func = func
        self.start_point = np.array(start_point)
        self.num_iter = num_iter
        self.step_size = step_size
        self.tol = tol

    def newton_method(self):
        '''
        Perform the multivariate Newton method for a function with
        vector input and scalar output.
        '''
        x_t = self.start_point
        # Get an approximation to the Hessian of the function
        H = nd.Hessian(self.func)
        # Get an approximation to the gradient of the function
        g = nd.Gradient(self.func)
        for i in range(self.num_iter):
            x_tplus1 = x_t - self.step_size * np.dot(np.linalg.inv(H(x_t)), g(x_t))
            # Check for convergence: largest componentwise change below tol
            if np.max(np.abs(x_tplus1 - x_t)) < self.tol:
                break
            x_t = x_tplus1
        self.crit_point = x_tplus1
        self.max_min = self.func(self.crit_point)
        return self

    def critical_point(self):
        '''
        Print the critical point found by newton_method. newton_method
        must be called first.
        '''
        print(self.crit_point)
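For reference, a minimal usage sketch; the quadratic test function f here is purely illustrative:

def f(x):
    # Convex quadratic with its minimum at (1, -2)
    return (x[0] - 1.0)**2 + (x[1] + 2.0)**2

opt = multivariate_newton(f, start_point=[0.0, 0.0]).newton_method()
opt.critical_point()  # prints a point close to [1, -2]
print(opt.max_min)    # function value at that point, close to 0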
asked Jan 2, 2014 at 17:42
  • Can you explain what was wrong with the functions in scipy.optimize? Commented Jan 2, 2014 at 17:43
  • I could not find one for the case of a function with vector input and scalar output. I tried fsolve, but it only works with vector input/output functions. Now that I look again, root may work. I will give it a try. Commented Jan 2, 2014 at 17:50
  • Never mind; root requires the input and output dimensions to be the same, which is not what I want. Commented Jan 2, 2014 at 17:56
  • This would be better as a function instead of a class. Commented Jan 2, 2014 at 19:26
  • I'm not sure what is commonly done here, but I think you can avoid computing the inverse of the Hessian directly, which may be more costly and less numerically accurate (see here); a sketch follows this list. Commented May 21, 2018 at 0:48
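Following the last comment, the explicit inverse can be replaced with a linear solve; a minimal sketch of the changed update, using the same variables as in the question:

# Solve H(x_t) d = g(x_t) for the Newton direction d
# instead of forming the inverse of the Hessian explicitly
d = np.linalg.solve(H(x_t), g(x_t))
x_tplus1 = x_t - self.step_size * d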

1 Answer


Pathological cases exist where Newton's method will not converge on a solution, yet there is no obvious way for the caller to tell whether convergence was achieved. You should distinguish between the loop terminating via the break (success) and the iteration count being exhausted (failure). Python's for/else construct handles this: the else clause runs only when the loop completes without hitting break.

for _ in range(self.num_iter):
    x_tplus1 = x_t - self.step_size * np.dot(np.linalg.inv(H(x_t)), g(x_t))
    # Check for convergence
    if np.max(np.abs(x_tplus1 - x_t)) < self.tol:
        break
    x_t = x_tplus1
else:
    # Reached only if the loop finished without break
    raise SolutionNotFound("No convergence after %d iterations" % self.num_iter)
answered Jan 20, 2014 at 11:16
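SolutionNotFound is not a built-in exception; a minimal definition, and a caller-side check for convergence, might look like this (the objective my_func is a placeholder):

class SolutionNotFound(Exception):
    '''Raised when the iteration budget is exhausted without converging.'''
    pass

try:
    opt = multivariate_newton(my_func, [1.0, 1.0]).newton_method()
    opt.critical_point()
except SolutionNotFound as e:
    print('Newton iteration failed: %s' % e)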