newton(.tex) (as of 29oct00) TEX’ed at 21:14 on 29 October 2000

A quick intro to the Newton form

The Newton form

(1)   p(x) = a1 + (x − c1)(a2 + (x − c2)(a3 + · · ·)) = Σ_j aj ∏_{k=1}^{j−1} (x − ck)

is a marvellously versatile tool for polynomial interpolation. As a special case, it includes the power form

(2)   p(x) = a1 + a2x + a3x^2 + · · ·

in which all the centers cj are 0, and the shifted power form

(3)   p(x) = a1 + a2(x − c) + a3(x − c)^2 + · · ·

in which all the centers are the same.
It costs just 2n additions and n multiplications to evaluate the Newton form of a polynomial of degree
n at some point z by Horner’s Method or Nested Multiplication, as suggested by its nested version
(the first version in (1)):

v = a(n+1);
for j=n:-1:1
  v = (z-c(j))*v + a(j);
end
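A Python rendering of this loop may make the indexing concrete. This is a sketch in Python rather than the notes’ MATLAB-style code, with 0-based arrays, and the function name is ours:

```python
def newton_eval(a, c, z):
    """Evaluate a1 + (z-c1)(a2 + (z-c2)(a3 + ...)) by nested multiplication.

    a holds the n+1 coefficients a1..a(n+1); c holds the n centers c1..cn
    (one fewer than a). Each pass through the loop costs one multiplication
    and two additions, matching the 2n additions / n multiplications count.
    """
    v = a[-1]
    for aj, cj in zip(reversed(a[:-1]), reversed(c)):
        v = (z - cj) * v + aj
    return v

# Example: p(x) = 1 + 2(x-1) + 3(x-1)(x-2), so p(3) = 1 + 4 + 6 = 11.
print(newton_eval([1, 2, 3], [1, 2], 3))  # -> 11
```

With all centers 0, the same call evaluates the power form, e.g. `newton_eval([1, 2, 3], [0, 0], 2)` gives 1 + 2·2 + 3·4 = 17.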

Further, its coefficients are entirely determined by the values of p ‘at’ its centers cj, in a sense we now
make precise.

Directly from (1), we deduce that
a1 = p(c1).

As to a2, we observe that

p(x) = a1 + a2(x − c1) + (x − c1)(x − c2)q(x)

for some polynomial q, hence

pc1:2 (x) := a1 + a2(x − c1)

is a straight line (or linear polynomial) that agrees with p ‘at’ c1 and c2 in the sense that p − pc1:2 vanishes
at c1 and c2. If c1 ≠ c2, this simply means that pc1:2 matches p at those two points; it is the straight line
that interpolates p at those two points. In particular, from the high-school point-slope formula or directly,

a2 = (p(c2) − p(c1)) / (c2 − c1),

the difference quotient or divided difference of p at c1 and c2.

What if c1 = c2? Now p(x) = a1 + a2(x − c1) + (x − c1)^2 q(x),

hence

p′(x) = a2 + 2(x − c1)q(x) + (x − c1)^2 q′(x) = a2 + (x − c1)r(x)

for some polynomial r and, in particular, now

a2 = p′(c1) = lim_{c2→c1} (p(c2) − p(c1)) / (c2 − c1).


In either case, a2 depends only on p and the two numbers, c1 and c2, leading us to the definition

p[c1, c2] := { (p(c2) − p(c1)) / (c2 − c1),  if c1 ≠ c2;
               p′(c1),                       if c1 = c2.

Note that, as we observed before, a1 only depends on p and c1, so, in analogy, we also define

p[c1] := p(c1).
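The two-case definition can be computed directly for a concrete p. A minimal Python sketch (our own names; the derivative is passed in explicitly rather than computed):

```python
def divdif2(p, dp, c1, c2):
    """First-order divided difference p[c1, c2]: the difference quotient
    when c1 != c2, and the derivative p'(c1) in the confluent case."""
    if c1 != c2:
        return (p(c2) - p(c1)) / (c2 - c1)
    return dp(c1)

# For p(x) = x^2 we have p[c1, c2] = c1 + c2 in both cases:
p, dp = (lambda x: x * x), (lambda x: 2 * x)
print(divdif2(p, dp, 1, 3))  # -> 4.0  (= 1 + 3)
print(divdif2(p, dp, 2, 2))  # -> 4    (= p'(2))
```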

Are you ready for the leap to the general case? Setting

pc1:k (x) := a1 + a2(x − c1) + · · · + ak(x − c1) · · · (x − ck−1),

we find that

(4)   p(x) = pc1:k(x) + (x − c1) · · · (x − ck)q(x)

for some polynomial q whose precise details don’t matter. What matters is the factor (x − c1) · · · (x − ck)
which tells us that p and pc1:k ‘agree at c1, . . . , ck’ in the sense that their difference vanishes at c1, . . . , ck
counting multiplicities.

This, together with the fact that pc1:k is a polynomial of degree < k, determines pc1:k uniquely. Indeed,
if also r is a polynomial of degree < k for which

p(x) = r(x) + (x − c1) · · · (x − ck)s(x)
for some polynomial s, then, subtracting this equation from (4) and reorganizing, we find that

pc1:k (x) − r(x) = (x − c1) · · · (x − ck)t(x)

for some polynomial t multiplying that kth degree polynomial (x − c1) · · · (x − ck). However, on the left-hand
side, we have a polynomial of degree < k, and this makes sense only if t is the zero polynomial.

In other words, our polynomial pc1:k is the unique polynomial of degree < k that satisfies (4). In
particular, its leading coefficient, ak, depends only on p and on c1, . . . , ck. This entitles us to denote it by

p[c1, . . . , ck] := ak

and to observe that p[c1, . . . , ck] is symmetric in the cj, meaning that it is independent of the order in
which we write down the cj’s. E.g.,

p[1, 3, 1, 2] = p[1, 1, 2, 3] = p[3, 2, 1, 1] = p[2, 1, 3, 1] = · · · .
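For pairwise distinct points this symmetry can be checked numerically, using the divided-difference table the notes develop below. A Python sketch (helper name ours) computing the leading coefficient p[c1, . . . , ck] for two orderings of the same points:

```python
def leading_coef(c, p):
    """p[c1,...,ck] for pairwise distinct points c: the top entry of the
    divided-difference table, built column by column from difference quotients."""
    d = [p(x) for x in c]
    for k in range(1, len(c)):
        d = [(d[i + 1] - d[i]) / (c[i + k] - c[i]) for i in range(len(d) - 1)]
    return d[0]

p = lambda x: x**3 - x
# Same point set {1, 2, 3} in two different orders gives the same value:
print(leading_coef([1.0, 3.0, 2.0], p) == leading_coef([2.0, 1.0, 3.0], p))  # -> True
```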

Moreover, we can easily provide a formula for it in the special case that c1 = c2 = · · · = ck: In that
case, (4) becomes

p(x) = a1 + a2(x − c1) + · · · + ak(x − c1)^{k−1} + (x − c1)^k q(x),
and, differentiating this equation k − 1 times, we find that

p^{(k−1)}(x) = (k − 1)! ak + (x − c1)r(x)

for some polynomial r. Hence

p[c1, . . . , ck] = p^{(k−1)}(c1)/(k − 1)!,  if c1 = · · · = ck.

To compute ak = p[c1, . . . , ck] otherwise, we need a so-called divided difference table, and this table
comes about as follows.


We pay attention to the intermediate results in Horner’s Method. Specifically, let’s save them in an
array b, as in the following modified version:

b(n+1) = a(n+1);
for j=n:-1:1
  b(j) = (z-c(j))*b(j+1) + a(j);
end

Then

aj = { bj,                  if j = n + 1;
       bj + bj+1(cj − z),   if j ≤ n.

Let’s substitute these expressions for the aj in the Newton form for p = pc1:n+1. We obtain

p = b1 + b2(c1 − z)
+ (x − c1)(b2 + b3(c2 − z))
+ (x − c1)(x − c2)(b3 + b4(c3 − z))
+ · · ·
+ (x − c1) · · · (x − cn−1)(bn + bn+1(cn − z))
+ (x − c1) · · · (x − cn) bn+1.

Now notice that, except for b1, each bj occurs exactly twice. The first time, it is multiplied by

(x − c1) · · · (x − cj−2)(cj−1 − z).
The second time, it is multiplied by

(x − c1) · · · (x − cj−2)(x − cj−1).
Hence, on combining these two terms, we see that bj is actually multiplied by

(x − c1) · · · (x − cj−2)(x − z) = (x − z)(x − c1) · · · (x − cj−2).

In particular, b2 is multiplied by (x − z) and b1 is multiplied by 1. We conclude that

p(x) = pc1:n+1(x) = Σ_{j=1}^{n+1} bj ∏_{k=0}^{j−2} (x − ck),

with c0 := z. In other words, Nested Multiplication is really a conversion algorithm. We feed in the
coefficients a1, . . . , an+1, the centers c1, . . . , cn, and a number z, and obtain, in return, the coefficients
b1, . . . , bn+1 and the centers z, c1, · · · , cn−1 for the closely related but different Newton form for p,

p(x) = b1 + (x − c0)(b2 + (x − c1)(b3 + · · ·)) = Σ_j bj ∏_{k=1}^{j−1} (x − ck−1).

In particular, the last coefficient we compute, namely b1, is evidently the value of p at x = c0 = z, as we
knew all along.
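A runnable sketch of this conversion (Python, 0-based indexing, our own names): feed in the aj, the centers, and z; get back the bj, whose first entry is p(z), and which with the centers z, c1, . . . , c(n−1) reproduce p everywhere.

```python
def newton_to_newton(a, c, z):
    """Convert Newton coefficients a (for centers c1..cn) into coefficients b
    for the centers z, c1, ..., c(n-1), by saving Horner's intermediates."""
    n = len(a) - 1
    b = [0.0] * (n + 1)
    b[n] = a[n]
    for j in range(n - 1, -1, -1):
        b[j] = (z - c[j]) * b[j + 1] + a[j]
    return b

def newton_eval(a, c, z):
    """Nested multiplication, as in the notes' first code fragment."""
    v = a[-1]
    for aj, cj in zip(reversed(a[:-1]), reversed(c)):
        v = (z - cj) * v + aj
    return v

a, c, z = [1.0, 2.0, 3.0], [1.0, 2.0], 0.5
b = newton_to_newton(a, c, z)
assert b[0] == newton_eval(a, c, z)          # the last b computed is p(z)
for x in [0.0, 1.5, 4.0]:                    # both Newton forms agree
    assert abs(newton_eval(b, [z] + c[:-1], x) - newton_eval(a, c, x)) < 1e-12
print("conversion checks out")
```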

By the uniqueness of pc1:k proved earlier, this implies that

(5)   p[c0, c1, . . . , cj−1] = bj = (c0 − cj)bj+1 + aj = (c0 − cj)p[c0, c1, . . . , cj] + p[c1, . . . , cj].

Hence, for c0 ≠ cj, we obtain

(p[c0, . . . , cj−1] − p[c1, . . . , cj]) / (c0 − cj) = p[c0, . . . , cj].


Since p[ci, . . . , cj] does not depend on the order in which we write down its arguments, ci, . . . , cj, we
may write down the recurrence relation for divided differences just proved succinctly as:

(6)   p[M, a, b] = (p[M, a] − p[M, b]) / (a − b)

with the understanding that M is some sequence of points, and a and b are distinct points.
The recurrence relation permits calculation of a divided difference as a divided difference of two divided
differences each involving one less point, with the denominator the difference of the points that are in one
but not the other. In particular, if we know the numbers p[ci, . . . , ci+k−1] for all relevant i, then we can
compute from them the numbers

(7)   p[ci, . . . , ci+k] = (p[ci+1, . . . , ci+k] − p[ci, . . . , ci+k−1]) / (ci+k − ci),  if ci ≠ ci+k.

What if ci = ci+k? In that case, we could, of course, look for some two distinct numbers in the sequence
ci, . . . , ci+k and, if we find such, use the recipe (6) with a, b those two distinct numbers. But that would
force us to have in hand already divided differences involving k points other than the ones we assumed to
have. So, instead, we assume that the entries of the sequence c are so ordered that this case can only occur
when no such way out exists, i.e., when ci = · · · = ci+k. In that case, we do already have a quite different
recipe, namely

(8)   p[ci, . . . , ci+k] = p^{(k)}(ci)/k!,  if ci = · · · = ci+k.

Note that this case is guaranteed to occur when k = 0, i.e., when we want to ‘compute’ p[ci] for all i.
With this, we can organize the calculation of p[ci, . . . , ci+k] in a table, the so-called divided difference
table, in which p[ci, . . . , ci+k] occurs in column k, on the line ‘between’ p[ci, . . . , ci+k−1] and p[ci+1, . . . , ci+k].
We provide these divided differences in order, for k = 0, 1, 2, . . . , using the two recipes, (7) and (8), as needed.

If we start with some sequence (c1, . . . , cn+1), then we will, in the end, have available, in particular, the
divided differences p[c1, . . . , cj] for j = 1:n+1, i.e., exactly the first n + 1 coefficients in the Newton form for
p with those centers. If, moreover, p is a polynomial of degree ≤ n, then, by the uniqueness proved earlier,
the resulting polynomial pc1:n+1 must equal p.
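For pairwise distinct centers, the whole recovery fits in a few lines. The following Python sketch (function name ours) builds the table column by column via recipe (7) and reads off the Newton coefficients p[c1], p[c1, c2], . . . along its top edge:

```python
def divdif_table(c, values):
    """Given pairwise distinct points c and the values p(ci), build the
    divided-difference table column by column and return its top edge:
    [p[c1], p[c1,c2], ..., p[c1,...,c(n+1)]], the Newton coefficients."""
    d = list(values)              # column k = 0: the supplied values p(ci)
    coeffs = [d[0]]
    n = len(c) - 1
    for k in range(1, n + 1):     # recipe (7): difference quotients
        d = [(d[i + 1] - d[i]) / (c[i + k] - c[i]) for i in range(n + 1 - k)]
        coeffs.append(d[0])
    return coeffs

# Recover p(x) = x^2 + 1 from its values at 0, 1, 3:
c = [0.0, 1.0, 3.0]
coeffs = divdif_table(c, [x * x + 1 for x in c])
print(coeffs)  # -> [1.0, 1.0, 1.0], i.e. p = 1 + 1*(x-0) + 1*(x-0)(x-1)
```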
In other words, we have recovered the polynomial p of degree ≤ n from certain numerical information
about it, namely all the numbers we had to supply whenever we had to use the recipe (8). This is guaranteed
to occur when we start off, i.e., when we have to supply p(ci) for all i. Whether it happens also in later stages
depends entirely on the multiplicity with which a number occurs in the sequence c. Precisely, we will have
supplied p(z), p′(z), . . . , p^{(j)}(z) in case the number z occurs in the sequence c at least j + 1 times.

Here’s the easy way to think of it. Assuming still that ci−j = ci implies that ci−j = ci−j+1 = · · · = ci,
we will be exactly right if, for each i, we supply the number p(j)(ci), with j := ji := max{k : ci−k = ci}.

It follows that we can recover a polynomial of degree ≤ n from n + 1 such pieces of information about
it, and uniquely so.

But this, finally, also says that if we use n + 1 arbitrary numbers y1, . . . , yn+1 here, then the polynomial
constructed from the divided difference table entries will have values and derivatives as specified by these
numbers. In other words, we have information matching, or interpolation.

In particular, the numbers we supplied could be the corresponding information about some (sufficiently
smooth) function f . In that case, it has become standard to use the notation

f [ci, . . . , ci+k]

for the entry of the table earlier labeled p[ci, . . . , ci+k] and call it the divided difference of f at the
points ci, . . . , ci+k, or, a kth order divided difference of f .

If all the ci are pairwise distinct, the resulting polynomial is called the Lagrange interpolant to f
at the ci. If there are repetitions in the ci and, correspondingly, we also match some first (or even higher)
derivatives, it is called osculatory or Hermite interpolation.
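A small osculatory example, sketched in Python with our own names: interpolate f at the centers (c0, c0, c2), matching f(c0), f′(c0), and f(c2), using recipe (8) for the repeated pair and plain difference quotients otherwise.

```python
import math

def hermite3(f, df, c0, c2):
    """Osculatory (Hermite) interpolant at the centers (c0, c0, c2):
    matches f(c0), f'(c0), and f(c2). A sketch; the derivative df is supplied."""
    col0 = [f(c0), f(c0), f(c2)]               # column 0: values at the centers
    d_rep = df(c0)                             # f[c0, c0] = f'(c0), recipe (8)
    d_dis = (col0[2] - col0[1]) / (c2 - c0)    # f[c0, c2], ordinary quotient
    top = (d_dis - d_rep) / (c2 - c0)          # f[c0, c0, c2], recipe (7)
    a = [col0[0], d_rep, top]                  # Newton coefficients
    # Newton form with centers (c0, c0), evaluated by nested multiplication:
    return lambda x: a[0] + (x - c0) * (a[1] + (x - c0) * a[2])

p = hermite3(math.exp, math.exp, 0.0, 1.0)
print(p(0.0), p(1.0))  # matches e^0 = 1 and e^1 = 2.71828...
```

The resulting quadratic agrees with exp in value and slope at 0 and in value at 1.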
