

The CONDOR unconstrained algorithm.

I strongly suggest that the reader first read Section 2.4, which presents a global, simplified view of the algorithm, and then read this section, disregarding the parallel extensions, which are not needed to understand the algorithm.

Let $ n$ be the dimension of the search space.
Let $ f(x)$ be the objective function to minimize.
Let $ x_{start}$ be the starting point of the algorithm.
Let $\rho_{start}$ and $\rho_{end}$ be the initial and final values of the global trust region radius.
Let $noise_a$ and $noise_r$ be the absolute and relative errors on the evaluation of the objective function.
  1. Set $\rho = \rho_{start}$ and $\Delta = \rho$, and generate a first interpolation set $\{ \boldsymbol{x}_{(1)}, \ldots, \boldsymbol{x}_{(N)} \}$ around $x_{start}$ (with $N=(n+1)(n+2)/2$), using the technique described in Section 3.4.5. Evaluate the objective function at these points.

    Parallel extension: perform the $N$ evaluations in parallel on a cluster of computers.
  2. Choose the index $k$ of the best (lowest) point of the set ${\cal J} = \{ \boldsymbol{x}_{(1)}, \ldots, \boldsymbol{x}_{(N)} \}$. Let $x_{(base)} := \boldsymbol{x}_{(k)}$. Set $F_{old} := f(x_{(base)})$. Apply a translation of $-x_{(base)}$ to the whole dataset $\{ \boldsymbol{x}_{(1)}, \ldots, \boldsymbol{x}_{(N)} \}$ and generate the polynomial $q(x)$ of degree 2 which interpolates all the points of the dataset (using the technique described in Section 3.3.2).
  3. Parallel extension: start the parallel process: make a local copy $q_{copy}(x)$ of $q(x)$ and use it to choose good sampling sites, using Equation 3.38 applied to $q_{copy}(x)$.
  4. Parallel extension: check the results of the computations made by the parallel process. Update $q(x)$ using all these evaluations, possibly updating the index $k$ of the best point in the dataset and $F_{old}$. Replace $q_{copy}(x)$ with a fresh copy of $q(x)$.
  5. Find the trust region step $ s^*$, the solution of $ \displaystyle \min_{s \in \Re^n} q(\boldsymbol{x}_{(k)}+s) $ subject to $ \Vert s \Vert _2 < \Delta$, using the technique described in Chapter 4.
    In the constrained case, the trust region step $ s^*$ is the solution of:

    \begin{displaymath}
    \begin{array}{l}
    \displaystyle \min_{s \in \Re^n} q(\boldsymbol{x}_{(k)}+s) \\
    \text{subject to } \begin{cases}
    b_l \leq \boldsymbol{x}_{(k)}+s \leq b_u \\
    A (\boldsymbol{x}_{(k)}+s) \geq b \\
    c_i(\boldsymbol{x}_{(k)}+s) \geq 0, \quad i=1,\ldots,l \\
    \Vert s \Vert_2 < \Delta
    \end{cases}
    \end{array}
    \end{displaymath} (6.1)

    where $b_l, b_u$ are the box constraints, $A x \geq b$ are the linear constraints and $c_i(x) \geq 0$ are the non-linear constraints.
  6. If $\displaystyle \Vert s^* \Vert < \frac{\rho}{2}$, go to step 16: we must be sure that the model is valid before taking a step this small.
  7. Let $R := q(\boldsymbol{x}_{(k)}) - q(\boldsymbol{x}_{(k)}+s^*) \geq 0$ be the predicted reduction of the objective function.
  8. Let $noise := \frac{1}{2} \max [ noise_a (1+noise_r), noise_r \vert f(\boldsymbol{x}_{(k)}) \vert ]$. If $R < noise$, go to step 16.
  9. Evaluate the objective function $f(x)$ at the point $x_{(base)}+\boldsymbol{x}_{(k)}+s^*$ and store the result in the variable $F_{new}$.
  10. Compute the agreement $ r$ between $ f(x)$ and the model $ q(x)$:

    $\displaystyle r=\frac{F_{old}-F_{new}}{R}$ (6.2)

  11. Update the local trust region radius: change $ \Delta$ to:

    \begin{displaymath}
    \begin{cases}
    \max[\Delta, \frac{5}{4} \Vert s \Vert, \rho + \Vert s \Vert] & \text{ if } 0.7 \leq r \\
    \max[\frac{1}{2} \Delta, \Vert s \Vert] & \text{ if } 0.1 \leq r < 0.7 \\
    \frac{1}{2} \Vert s \Vert & \text{ if } r < 0.1
    \end{cases}
    \end{displaymath} (6.3)

    If $\displaystyle \Delta < \frac{1}{2}\rho$, set $\Delta := \rho$. (A small numerical sketch of this update rule is given after the listing.)
  12. Store $\boldsymbol{x}_{(k)}+s^*$ inside the interpolation dataset: choose the point $\boldsymbol{x}_{(t)}$ to remove using the technique of Section 3.4.3 and replace it by $\boldsymbol{x}_{(k)}+s^*$ using the technique of Section 3.4.4. Define $ModelStep := \Vert \boldsymbol{x}_{(t)} - (\boldsymbol{x}_{(k)}+s^*) \Vert$.
  13. Update the index $k$ of the best point in the dataset. Set $F_{old} := \min [ F_{old}, F_{new} ]$.
  14. Update the value of $M$, which is used to check the validity of the polynomial around $\boldsymbol{x}_{(k)}$ (see Section 3.4.1, and more precisely Equation 3.34).
  15. If there was an improvement in the quality of the solution, OR if $\Vert s^* \Vert > 2 \rho$, OR if $ModelStep > 2 \rho$, go back to step 4.
  16. Parallel extension: same as step 4.
  17. We must now check the validity of the model using the technique of Section 3.4.2. This check requires a parameter $\epsilon$: see Section 6.1 for how to compute it.
  18. If $\rho = \rho_{end}$, the algorithm is nearly finished: go to step 21. Otherwise, continue to the next step.
  19. Update the global trust region radius:

    $\displaystyle \rho_{new} = \begin{cases}
    \rho_{end} & \text{ if } \rho_{end} < \rho \leq 16 \rho_{end} \\
    \sqrt{\rho \, \rho_{end}} & \text{ if } 16 \rho_{end} < \rho \leq 250 \rho_{end} \\
    0.1 \, \rho & \text{ if } 250 \rho_{end} < \rho
    \end{cases}$ (6.4)

    Set $ \displaystyle \Delta:=\max[\frac{\rho}{2}, \rho_{new}]$. Set $ \rho:=\rho_{new}$.
  20. Set $x_{(base)} := x_{(base)} + \boldsymbol{x}_{(k)}$. Apply a translation of $-\boldsymbol{x}_{(k)}$ to $q(x)$, to the set of Newton polynomials $P_i$ which define $q(x)$ (see Equation 3.26) and to the whole dataset $\{ \boldsymbol{x}_{(1)}, \ldots, \boldsymbol{x}_{(N)} \}$. Go back to step 4.
  21. The iterations are now complete, but one more value of $f(x)$ may be required before termination. Indeed, we recall from steps 6 and 8 that the value of $f(x_{(base)}+\boldsymbol{x}_{(k)}+s^*)$ may not have been computed. Compute $F_{new} := f(x_{(base)}+\boldsymbol{x}_{(k)}+s^*)$.
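To make steps 8, 10 and 11 concrete, here is a minimal Python sketch of the noise threshold and of the local radius update of Equation 6.3. The helper names (noise_threshold, update_local_radius) are hypothetical; this is an illustration of the formulas above, not CONDOR's actual implementation:

    def noise_threshold(f_k, noise_a, noise_r):
        # Step 8: smallest predicted reduction R that is still
        # distinguishable from the evaluation noise of f.
        return 0.5 * max(noise_a * (1.0 + noise_r), noise_r * abs(f_k))

    def update_local_radius(delta, s_norm, rho, r):
        # Step 11 (Equation 6.3): update the local trust-region radius
        # Delta from the agreement ratio r = (F_old - F_new) / R.
        if r >= 0.7:
            delta = max(delta, 1.25 * s_norm, rho + s_norm)
        elif r >= 0.1:
            delta = max(0.5 * delta, s_norm)
        else:
            delta = 0.5 * s_norm
        if delta < 0.5 * rho:  # keep Delta from falling far below rho
            delta = rho
        return delta

For example, with $\Delta=1$, $\Vert s^* \Vert=0.8$, $\rho=0.1$ and $r=0.9$, the first case applies and $\Delta$ becomes $\max[1, 1, 0.9] = 1$.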
Notice the simplified nature of the trust-region update mechanism of $\rho$ (step 19). This is the formal consequence of the observation that the trust-region radius should not be reduced unless the model has been guaranteed to be valid in the trust region $\delta_k \leq \Delta_k$.
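As an illustration of Equation 6.4 and of the coupling between $\rho$ and $\Delta$ in step 19, here is a small Python sketch (the function name update_global_radius is hypothetical; this is a sketch of the schedule above, not CONDOR's actual code):

    import math

    def update_global_radius(rho, rho_end):
        # Step 19 (Equation 6.4): shrink the global trust-region
        # radius rho towards rho_end.
        if rho <= 16.0 * rho_end:
            rho_new = rho_end
        elif rho <= 250.0 * rho_end:
            rho_new = math.sqrt(rho * rho_end)
        else:
            rho_new = 0.1 * rho
        delta = max(0.5 * rho, rho_new)  # Delta := max[rho/2, rho_new]
        return rho_new, delta

With this schedule, $\rho$ decreases quickly: the $0.1\rho$ branch divides it by ten, and the square-root branch halves its logarithmic distance to $\rho_{end}$.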
