

The secant equation

Let us define a general polynomial of degree 2:

$\displaystyle q(x)=q(0)+<g(0),x>+\frac{1}{2}<x,H(0) x>$ (13.27)

where $ H(0),g(0),q(0)$ are constant. From the rule for differentiating a product, it can be verified that:

$\displaystyle \nabla(<u,v>)=<\nabla u, v>+<\nabla v,u> $

if $ u$ and $ v$ both depend on $ x$. It therefore follows from 13.27 (taking $ u=x$ and $ v= H(0)x$) that

$\displaystyle \nabla q(x)= g(x) = H(0)\, x + g(0)$ (13.28)

$\displaystyle \nabla^2 q(x)= H(0) $
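In detail, the gradient in 13.28 is obtained by applying the above product rule term by term (a short expansion of the step, assuming as usual for such a model that $ H(0)$ is symmetric): with $ u=x$ and $ v=H(0)x$ one has $ \nabla u = I$ and $ \nabla v = H(0)^t$, so that

$\displaystyle \nabla\Big(\frac{1}{2}<x,H(0)\,x>\Big) = \frac{1}{2}\big(H(0)\,x + H(0)^t\,x\big) = H(0)\,x, \qquad \nabla(<g(0),x>) = g(0). $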

A consequence of 13.28 is that if $ x_{(1)}$ and $ x_{(2)}$ are two given points and if $ g_{(1)}=\nabla q(x_{(1)})$ and $ g_{(2)}=\nabla q(x_{(2)})$ (writing $ H:=H(0)$ to lighten the notation), then

$\displaystyle g_{(2)}-g_{(1)}=H(x_{(2)}-x_{(1)})$ (13.29)

This is called the ``Secant Equation'': the Hessian matrix maps differences in position into differences in gradient.
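As a quick numerical illustration (a minimal sketch in Python, assuming NumPy; the matrix $ H$, the vector $ g(0)$ and the two points are arbitrary example values, not taken from the text), one can check that the difference of gradients of a quadratic model indeed equals $ H$ applied to the difference of positions:

import numpy as np

# Arbitrary illustrative data for the model q(x) = q(0) + <g0,x> + 0.5*<x,H x>
H  = np.array([[4.0, 1.0],
               [1.0, 3.0]])   # symmetric model Hessian
g0 = np.array([1.0, -2.0])

def grad_q(x):
    # Gradient of the quadratic model, equation 13.28: g(x) = H x + g(0)
    return H @ x + g0

x1 = np.array([0.5, 1.0])
x2 = np.array([2.0, -1.0])

lhs = grad_q(x2) - grad_q(x1)   # difference of gradients
rhs = H @ (x2 - x1)             # Hessian applied to the difference of positions
print(np.allclose(lhs, rhs))    # prints True: the secant equation 13.29 holds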
