Multivariate Lagrange interpolation.

We want to interpolate a hypersurface $ y=f(x), \; x \in \Re^n$, living in dimension $ n+1$. We have a set of interpolation points $ (\boldsymbol{x}_{(i)}, f(\boldsymbol{x}_{(i)}) ), \quad i=1, \ldots, N, \quad \boldsymbol{x}_{(i)} \in \Re^n$ on the surface. We must be sure that the problem is well poised: the number $ N$ of points must be $ N=r_n^d$ and the Vandermonde determinant must be non-null.
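For example, recalling from the previous section that $ r_n^d=\binom{n+d}{n}$ counts the monomials of degree at most $ d$ in $ n$ variables, a quadratic model ($ d=2$) in dimension $ n=2$ requires

$\displaystyle N = r_2^2 = \binom{4}{2} = 6$

interpolation points.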

The Lagrange polynomial basis $ \{P_1(x), \ldots , P_N(x)\}$.

We will construct our polynomial basis $ P_i(x), \quad i=1,\ldots,N$ iteratively. Assuming that we already have a polynomial $ Q_k = \sum_{i=1}^k f(\boldsymbol{x}_{(i)}) P_i(x)$ interpolating $ k$ points, we will add to it a new polynomial $ P_{k+1}$ which does not destroy what we have already built. That is, the value of $ P_{k+1} (x)$ must be zero for $ x= \boldsymbol{x}_{(1)}, \boldsymbol{x}_{(2)}, \ldots, \boldsymbol{x}_{(k)}$. In other words, we want $ P_i (\boldsymbol{x}_{(j)})= \delta_{(i,j)}$. This is easily done in the univariate case: $ P_{k+1}(x) \propto (x-\boldsymbol{x}_{(1)}) \ldots (x-\boldsymbol{x}_{(k)})$; in the multivariate case, it becomes difficult.
We must find a new polynomial $ P_{k+1}$ which is somewhat ``perpendicular'' to the $ P_i, \quad i=1,\ldots,k$ with respect to the $ k$ points $ \boldsymbol{x}_{(1)}, \ldots, \boldsymbol{x}_{(k)}$: any multiple of $ P_{k+1}$ added to a previous $ P_i$ must leave the value of this $ P_i$ unchanged at these $ k$ points. If we view the polynomials $ P_i$ as ``vectors'', we are searching for a new ``vector'' $ P_{k+1}$ which is ``perpendicular'' to all the $ P_i$. We will use a version of the Gram-Schmidt orthogonalisation procedure adapted to polynomials. The original Gram-Schmidt procedure for vectors is described in the annexes in Section 13.2.
We define the scalar product, with respect to the dataset $ \cal K$ of points $ \{ \boldsymbol{x}_{(1)}, \ldots, \boldsymbol{x}_{(k)} \} $, between two polynomials $ P$ and $ Q$ to be:

$\displaystyle <P,Q>_{\cal K}=\sum_{j=1}^k P(\boldsymbol{x}_{(j)}) Q(\boldsymbol{x}_{(j)})$ (3.19)
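As a minimal sketch (in Python; the representation of a polynomial as a callable and the name scalar_product are illustrative choices, not the thesis program), Equation 3.19 is:

    def scalar_product(P, Q, points):
        # <P,Q>_K : sum of P(x)*Q(x) over the points x of the dataset K (Eq. 3.19)
        return sum(P(x) * Q(x) for x in points)

Two polynomials are then ``perpendicular'' with respect to the dataset when this sum is zero.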

We have a set of independent polynomials $ \{ P_{1\_old}, P_{2\_old}, \ldots, P_{N\_old} \}$. We want to convert this set into a set of polynomials $ \{ P_1, P_2 , \ldots, P_N \}$ which are orthonormal with respect to the dataset of points $ \{ \boldsymbol {x}_{(1)}, \ldots , \boldsymbol {x}_{(N)} \}$, by the Gram-Schmidt process:
  1. Initialization: set $ k=1$;
  2. Normalisation

    $\displaystyle P_k(x)=\frac{ P_{k\_old}(x)}{ P_{k\_old}( \boldsymbol{x}_{(k)}) }$ (3.20)

  3. Orthogonalisation
    For $ j=1$ to $ N$ , $ j \neq k $ do:

    $\displaystyle P_{j\_old}(x) := P_{j\_old}(x) - P_{j\_old}(\boldsymbol{x}_{(k)}) P_k(x)$ (3.21)

    We take each $ P_{j\_old}$ and remove from it the component ``parallel'' to the current polynomial $ P_k$.
  4. Loop: increment $ k$; if $ k \leq N$, go to step 2.
After completion of the algorithm, we discard all the $ P_{j\_old}$'s and replace them with the $ P_{j}$'s for the next iteration of the global optimization algorithm.
The initial set of polynomials $ \{ P_{1\_old}, \ldots, P_{N\_old} \}$ can simply be initialized with the monomials of a polynomial of dimension $ n$ and degree $ d$. For example, if $ n=2$, we obtain: $ P_1(x)=1$, $ P_2(x)=x_1$, $ P_3(x)=x_2$, $ P_4(x)=x_1^2$, $ P_5(x)=x_1 \; x_2$, $ P_6(x)=x_2^2$, $ P_7(x)=x_1^3$, $ P_8(x)=x_1^2 \; x_2$, $ \ldots$
In Equation 3.20, there is a division. To improve the stability of the algorithm, we must do ``pivoting'', that is, select a suitable pivot element for the division in Equation 3.20. We should choose the $ \boldsymbol {x}_{(k)}$ (among the points which are still left inside the dataset) so that the denominator of Equation 3.20 is far from zero:

$\displaystyle \vert P_{k\_old}(\boldsymbol{x}_{(k)})\vert$    as large as possible. (3.22)

If we don't manage to find a point $ \boldsymbol{x}$ such that $ P_{k\_old}(\boldsymbol{x}) \neq 0 $, it means the dataset is NOT poised and the algorithm fails.
After completion of the algorithm, we have:

$\displaystyle P_i(\boldsymbol{x}_{(j)})= \delta_{(i,j)} \quad i,j=1, \ldots, N$ (3.23)
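The whole construction fits in a short Python sketch. Here a polynomial is represented by its coefficient vector over the monomial basis ordered by degree; all names are illustrative choices (this is not the thesis program), the pivoting of Equation 3.22 is included, and we assume a dense polynomial of degree at most $ d$ with exactly $ N=r_n^d$ points:

    import itertools, math

    def monomials_by_degree(n, d):
        # exponent tuples of all monomials of degree <= d, ``order by degree''
        out = []
        for deg in range(d + 1):
            exps = [e for e in itertools.product(range(deg + 1), repeat=n)
                    if sum(e) == deg]
            out.extend(sorted(exps, reverse=True))  # (2,0),(1,1),(0,2) for deg=2
        return out

    def eval_poly(coeffs, exps, x):
        # value at x of the polynomial with coefficients 'coeffs' over basis 'exps'
        return sum(c * math.prod(xj ** a for xj, a in zip(x, e))
                   for c, e in zip(coeffs, exps))

    def lagrange_basis(points, n, d):
        # pivoted Gram-Schmidt of Equations 3.20-3.22; returns the coefficient
        # vectors of P_1..P_N and the pivot order of the points
        exps = monomials_by_degree(n, d)
        N = len(exps)
        assert len(points) == N          # the problem must be well poised
        P = [[1.0 if m == i else 0.0 for m in range(N)] for i in range(N)]
        left = list(points)              # points still left inside the dataset
        order = []
        for k in range(N):
            # pivoting: pick the remaining point where |P_k_old| is largest (Eq. 3.22)
            vals = [abs(eval_poly(P[k], exps, x)) for x in left]
            best = max(range(len(left)), key=vals.__getitem__)
            if vals[best] == 0.0:
                raise ValueError("the dataset is NOT poised")
            xk = left.pop(best)
            order.append(xk)
            # normalisation (Eq. 3.20)
            v = eval_poly(P[k], exps, xk)
            P[k] = [c / v for c in P[k]]
            # orthogonalisation (Eq. 3.21)
            for j in range(N):
                if j != k:
                    w = eval_poly(P[j], exps, xk)
                    P[j] = [cj - w * ck for cj, ck in zip(P[j], P[k])]
        return P, order

After the loop, eval_poly(P[i], exps, order[j]) is 1 when $ i=j$ and 0 otherwise, which is exactly Equation 3.23 (with the points renumbered in pivot order).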


The Lagrange interpolation polynomial $ L(x)$.

Using Equation 3.23, we can write:

$\displaystyle L(x)=\sum_{j=1}^N f(\boldsymbol{x}_{(j)}) P_j(x)$ (3.24)
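Continuing the sketch above, the interpolant is a weighted sum of the basis coefficient vectors (values_f are the $ f(\boldsymbol{x}_{(i)})$, given in the same order as the points, which are tuples):

    def lagrange_interpolant(points, values_f, n, d):
        # L(x) = sum_j f(x_(j)) P_j(x)   (Equation 3.24)
        P, order = lagrange_basis(points, n, d)
        f = dict(zip(points, values_f))
        N = len(P)
        return [sum(f[order[j]] * P[j][m] for j in range(N)) for m in range(N)]

eval_poly applied to the returned coefficient vector then reproduces $ f$ exactly at the $ N$ interpolation points.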

The multivariate Horner scheme

Let us rewrite a polynomial of degree $ d$ and dimension $ n$ using the following notation (with $ N=r_n^d$):

$\displaystyle P(x)= \sum_{i=1}^N c_i \prod_{j=1}^n x_j^{\alpha(i,j)}$    with $\displaystyle \max_{i} \sum_{j=1}^n \alpha(i,j)=d$ (3.25)

$ \alpha $ is a matrix which represents the way the monomials are ordered inside the polynomial. Inside our program, we always use the ``order by degree'' type. For example, for $ n=2, d=2$, we have: $ P(x)= c_1 +c_2 x_1 + c_3 x_2 + c_4 x_1^2 + c_5 x_1 x_2 + c_6 x_2^2$. We have the following matrix $ \alpha $:

$\displaystyle \alpha= \begin{pmatrix} 0 & 0 \\ 1 & 0 \\ 0 & 1 \\ 2 & 0 \\ 1 & 1 \\ 0 & 2 \end{pmatrix} $


We can also use the ``inverse lexical order''. For example, for $ n=2, d=2$, we have: $ P(x)= c_1 x_1^2 + c_2 x_1 x_2 + c_3 x_1 + c_4 x_2^2 + c_5 x_2 + c_6 $. We have the following matrix $ \alpha'$ (the $ '$ indicates that we are in ``inverse lexical order''):

$\displaystyle \alpha' \; (\text{for } n=2, d=2) = \begin{pmatrix} 2 & 0 \\ 1 & 1 \\ 1 & 0 \\ 0 & 2 \\ 0 & 1 \\ 0 & 0 \end{pmatrix} \qquad \alpha' \; (\text{for } n=3, d=3) = \begin{pmatrix} 3 & 0 & 0 \\ 2 & 1 & 0 \\ 2 & 0 & 1 \\ 2 & 0 & 0 \\ 1 & 2 & 0 \\ 1 & 1 & 1 \\ 1 & 1 & 0 \\ 1 & 0 & 2 \\ 1 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 2 & 1 \\ 0 & 2 & 0 \\ 0 & 1 & 2 \\ 0 & 1 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 3 \\ 0 & 0 & 2 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix} $

This matrix is constructed using the following property of the ``inverse lexical order'':

$\displaystyle \exists j : \alpha'(i+1,j)< \alpha'(i, j)$    and $\displaystyle \forall k<j \quad \alpha'(i+1,k) = \alpha'(i, k) $


The ``inverse lexical order'' is easier to transform into a multivariate Horner scheme. For example, the polynomial $ P(x)= c_1 x_1^2 + c_2 x_1 x_2 + c_3 x_1 + c_4 x_2^2 + c_5 x_2 + c_6 $ becomes $ P(x)= ( c_1 x_1 + c_2 x_2 + c_3 ) x_1 + ( c_4 x_2 + c_5 ) x_2 + c_6 $. You can see that, inside this decomposition of the algorithm for the evaluation of the polynomial $ P(x)$, the coefficients $ c_i$ appear in the same order as when they are ordered in ``inverse lexical order''.

Let us define the function $ TR( i' )= i $. This function takes, as input, the index of a monomial inside a polynomial ordered by ``inverse lexical order'' and gives, as output, the index of the same monomial inside a polynomial ordered ``by degree''. In other words, this function makes the TRansformation from an index in inverse lexical order to an index in order by degree.
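Both orderings, and the TR table, are easy to generate (an illustrative sketch, reusing monomials_by_degree from above; indices are 0-based here):

    def monomials_inverse_lexical(n, d):
        # exponent tuples of degree <= d in ``inverse lexical order''
        # (each row is lexicographically smaller than the previous one)
        return sorted(monomials_by_degree(n, d), reverse=True)

    def tr_table(n, d):
        # TR(i') = i : position, in the order by degree, of the monomial which
        # sits at position i' in the inverse lexical order
        index = {e: i for i, e in enumerate(monomials_by_degree(n, d))}
        return [index[e] for e in monomials_inverse_lexical(n, d)]

For $ n=2$ and $ d=2$, tr_table returns [3, 4, 1, 5, 2, 0]: the first monomial in inverse lexical order, $ x_1^2$, is the fourth one in the order by degree.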
We can now define an algorithm which computes the value of a multivariate polynomial ordered by degree, by the multivariate Horner scheme:
  1. Declaration
    $ n$ : dimension of the space
    $ N=r_n^{d}$ : number of monomials inside a polynomial of dimension $ n$ and degree $ d$.
    $ r_0, \ldots, r_n$ : registers for summation ($ \in \Re$)
    $ a_1, \ldots, a_n$ : counters ($ \in N_0$)
    $ c_i, \quad i=1,\ldots,N$ : the coefficients of the polynomial ordered by degree.
  2. Initialization
    Set $ r_0:=c_{TR(1)}$
    Set $ a_j:=\alpha'(1,j) \quad j=1, \ldots, n$
  3. For $ i=2, \ldots, N$
    1. Determine $ \displaystyle k=\min \{ 1\leq j \leq n: \alpha'(i,j) \neq \alpha'(i-1, j) \} $
    2. Set $ a_k:=a_k-1$
    3. Set $ r_k:=x_k (r_0+r_k+r_{k+1}+ \ldots +r_n)$
    4. Set $ r_0:=c_{TR(i)}, \; r_{k+1}:= \ldots :=r_n:= 0 $
    5. Set $ a_j:=\alpha'(i,j) \quad j=k+1, \ldots, n$
  4. Return $ r_0+ r_1 + \ldots + r_n$
In the program, we cache the values of $ k$ and of the function $ TR$ for a given $ n$ and $ d$: we compute these values once and use the pre-computed values the rest of the time. This leads to great efficiency, in speed and in precision, of the polynomial evaluations.
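A sketch of the whole evaluator (illustrative Python again, not the thesis program; the $ k$ of step 3.1 and the TR table are pre-computed once per $ (n, d)$, as described above):

    def horner_tables(n, d):
        # cached per (n, d): the TR table and the k value of step 3.1 for every i
        inv_lex = monomials_inverse_lexical(n, d)
        ks = [0]                                     # unused entry for i = 1
        for i in range(1, len(inv_lex)):
            prev, cur = inv_lex[i - 1], inv_lex[i]
            ks.append(min(j for j in range(n) if cur[j] != prev[j]))
        return tr_table(n, d), ks

    def horner_eval(c, x, n, d):
        # c : coefficients ordered by degree; x : point of R^n
        tr, ks = horner_tables(n, d)
        r = [0.0] * (n + 1)                          # registers r_0 ... r_n
        r[0] = c[tr[0]]
        for i in range(1, len(tr)):
            k = ks[i] + 1                            # 1-based register index
            r[k] = x[k - 1] * (r[0] + sum(r[k:]))    # step 3.3
            r[0] = c[tr[i]]                          # step 3.4
            for j in range(k + 1, n + 1):
                r[j] = 0.0
        return sum(r)

For $ n=2$, $ d=2$ and c = [1, 2, 3, 4, 5, 6] (ordered by degree), horner_eval(c, (2.0, 3.0), 2, 2) returns 114, the value of $ 1 + 2 x_1 + 3 x_2 + 4 x_1^2 + 5 x_1 x_2 + 6 x_2^2$ at $ (2, 3)$.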