
Final Answers
© 2000-2023 Gérard P. Michon, Ph.D.

Power Series

Related articles on this site:

Related Links (Outside this Site)

Complex Variables, Complex Analysis by John H. Mathews (2000).
Complex Variables, Contour Integration by Joceline Lega (1998).

Wikipedia : Power Series | Formal Power Series | Taylor Series | Analytic Continuation
The Bieberbach conjecture (1916) was proven by Louis de Branges in 1985.


Power Series and Analytic Continuations

Brook Taylor (1685-1731)
(2009-01-07) Taylor's Expansion (1712, 1715)
Smooth functions as sums of power series.

Brook Taylor (1685-1731) invented the calculus of finite differences and came up with the fundamental technique of integration by parts.

Nowadays, we call Taylor's Theorem several variants of the following expansion of a smooth function f about a regular point a, in terms of a polynomial whose coefficients are determined by the successive derivatives of the function at that point:

f (a + x) = f (a) + f '(a) x + f ''(a) x^2/2 + ... + f^(n)(a) x^n/n! + R_n(x)

A Taylor expansion about the origin (a = 0) is often called a Taylor-Maclaurin expansion, in honor of Colin Maclaurin (1698-1746) who focused on that special case in 1742.

Other variants of Taylor's theorem differ in the explicit expressions that can be given for the so-called remainder R_n.

Taylor published two versions of his theorem in 1715. In a letter to his friend John Machin (1680-1751) dated July 26, 1712, Taylor gave Machin credit for the idea. Several variants or precursors of the theorem had also been discovered independently by James Gregory (1638-1675), Isaac Newton (1643-1727), Gottfried Leibniz (1646-1716), Abraham de Moivre (1667-1754) and Johann Bernoulli (1667-1748).

The term Taylor series was apparently coined by Simon Lhuilier (1785).

Taylor's theorem

Taylor's theorem by Robin Whitty (Theorem of the Day #217).


(2019-11-25) Several expressions for the Taylor remainder R_n(x)
Due to Lagrange, Cauchy (1821), Young (1910) and Schlömilch (1847).

The aforementioned difference R_n(x) between the value of a function f and that of its Taylor polynomial (at order n) has a nice exact expression:

Taylor Remainder (Brook Taylor, 1715)

R_n(x) = ∫_0^x [ (x-t)^n / n! ] f^(n+1)(a+t) dt

Proof : If n > 0, we use Taylor's own integration by parts to obtain:

R_n(x) = R_(n-1)(x) - f^(n)(a) x^n / n!

By induction on n, the general formula then follows from the trivial case (n = 0), which is just the fundamental theorem of calculus:

R_0(x) = f (a+x) - f (a)   QED
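Taylor's integral remainder lends itself to a direct numerical check. Below is a minimal Python sketch (the names taylor_remainder and f_derivs are ours, and the midpoint rule is just one convenient quadrature):

```python
import math

def taylor_remainder(f_derivs, a, x, n, steps=10000):
    """Numerically integrate Taylor's integral remainder
    R_n(x) = integral from 0 to x of (x-t)^n / n! * f^(n+1)(a+t) dt
    by the midpoint rule.  f_derivs[k] must be the k-th derivative of f."""
    h = x / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * h
        total += (x - t)**n / math.factorial(n) * f_derivs[n + 1](a + t)
    return total * h

# Example: f = exp, whose derivatives are all exp.
derivs = [math.exp] * 5
a, x, n = 0.0, 1.0, 3
taylor_poly = sum(math.exp(a) * x**k / math.factorial(k) for k in range(n + 1))
remainder = taylor_remainder(derivs, a, x, n)
print(abs(math.exp(a + x) - (taylor_poly + remainder)))  # ~0, within quadrature error
```

The Taylor polynomial plus the integrated remainder recovers f(a+x) up to the (tiny) quadrature error.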

Taylor-Lagrange Formula :

Lagrange Remainder (Joseph-Louis Lagrange)

∃ θ, |θ| ≤ |x| :   R_n(x) = [ x^(n+1) / (n+1)! ] f^(n+1)(a+θ)

Proof : That's the result of applying the mean-value theorem to Taylor's original expression (1715) for R_n(x), as given in the previous section. QED

Taylor-Cauchy Formula

[ Come back later, we're still working on this one... ]

Taylor-Young formula (1910) : R_n(x) is negligible compared to x^n as x tends to 0. A formulation due to William Henry Young (1863-1942).

R_n(x) = o(x^n) as x → 0

[ Come back later, we're still working on this one... ]

Mathworld : Lagrange remainder | Cauchy remainder | Schlömilch remainder

Interpolation and Taylor's Theorem (Mathematics StackExchange, 2014-04-14).


Lagrange (1736-1813)
(2015-04-19) Basing calculus on Taylor's expansions (1772)
Lagrange's strict algebraic interpretation of differential calculus.

Taylor's theorem was brought to great prominence in 1772 by Joseph-Louis Lagrange (1736-1813) who declared it the basis for differential calculus (he made this part of his own lectures at Polytechnique in 1797).

Arguably, this was a rebuttal to the foundational concerns raised in 1734 (The Analyst) by George Berkeley (1685-1753), Bishop of Cloyne (1734-1753), about the infinitesimal underpinnings of Calculus.

The mathematical concepts behind differentiation and/or integration are so pervasive that they can be introduced or discussed outside of the historical context which originally gave birth to them, one century before Lagrange.

The starting point of Lagrange is an exact expression, valid for any polynomial f of degree n or less, in any commutative ring :

f (a + x) = f (a) + D_1 f (a) x + D_2 f (a) x^2 + ... + D_n f (a) x^n

In the ordinary interpretation of Calculus [over any field of characteristic zero] the following relations hold, for any polynomial f :

D_0 f (a) = f (a)     D_k f (a) = f^(k)(a) / k!

However, the definitions below make sense even when neither the reciprocal of k! nor higher-order derivatives are defined, over any commutative ring.

Lagrange's definitions of D_k f (a) are just based on the binomial theorem: if f is of degree n, then D_k f is simply a polynomial of degree n-k. No divisions are needed.

The following manipulations are limited to the case when f is a polynomial of degree at most n, so only finitely many terms are involved in the data and in the results. With infinitely many terms, convergence would not be guaranteed on either side.
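Lagrange's division-free definition of the coefficients D_k f (a) can be sketched in a few lines of Python, using only ring operations (additions and multiplications) and Pascal's triangle for the binomial coefficients; the function name shifted_coeffs is ours:

```python
def shifted_coeffs(coeffs, a):
    """Given f(x) = sum(coeffs[m] * x**m), return [D_0 f(a), D_1 f(a), ...]
    where f(a + x) = sum(D_k f(a) * x**k).  Only additions and products are
    used (binomial theorem, Pascal's triangle), so the computation is valid
    over any commutative ring -- demonstrated here with Python integers."""
    n = len(coeffs)
    C = [[1] * (m + 1) for m in range(n)]      # Pascal's triangle: C[m][k]
    for m in range(2, n):
        for k in range(1, m):
            C[m][k] = C[m - 1][k - 1] + C[m - 1][k]
    D = [0] * n
    for m, c in enumerate(coeffs):
        # c * (a + x)^m = c * sum_k C(m,k) * a^(m-k) * x^k
        for k in range(m + 1):
            D[k] += c * C[m][k] * a**(m - k)
    return D

# f(x) = x^3 - 2x + 5, shifted about a = 2:  f(2 + x) = 9 + 10x + 6x^2 + x^3
print(shifted_coeffs([5, -2, 0, 1], 2))  # [9, 10, 6, 1]
```

Note that D_1 f(2) = 10 is indeed f '(2) and D_2 f(2) = 6 is f ''(2)/2, as expected over the integers.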

[ Come back later, we're still working on this one... ]

"Théorie des fonctions analytiques contenant les principes du calcul différentiel, dégagés de toute considération d'infiniment petits ou d'évanouissants, de limites ou de fluxions et réduits à l'analyse algébrique des quantités finies" by Joseph-Louis Lagrange (1797) Journal de l'École polytechnique, 9, III, 52, p. 49

Lagrange's algebraic basis for differential calculus (48:26) by N.J. Wildberger (2013-08-14).


(2008-12-23) Radius of Convergence of a Complex Power Series
A complex power series converges inside a disk and diverges outside of it (the situation at different points of the boundary circle may vary).

That disk is called the disk of convergence. Its radius is the radius of convergence and its boundary is the circle of convergence.

The result advertised above is often called Abel's power series theorem. Although it was known well before him, Abel is credited for making this part of a general discussion which includes the status of points on the circumference of the circle of convergence. The main tool for that is another theorem due to Abel, discussed in the next section.
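Numerically, the radius can be estimated from the coefficients via the Cauchy-Hadamard formula R = 1 / limsup |a_n|^(1/n). A rough Python sketch (the function name, and the crude estimate of the limsup by a maximum over a tail of the sequence, are ours):

```python
def radius_of_convergence(a, terms=600):
    """Estimate R = 1 / limsup |a_n|^(1/n)  (Cauchy-Hadamard)
    by taking the maximum of |a_n|^(1/n) over a tail of the sequence a(n)."""
    tail = [abs(a(n)) ** (1.0 / n)
            for n in range(terms // 2, terms) if a(n) != 0]
    return 1.0 / max(tail) if tail else float('inf')

# Geometric series sum z^n has radius 1; sum z^n / 2^n has radius 2.
print(radius_of_convergence(lambda n: 1.0))      # ~1.0
print(radius_of_convergence(lambda n: 2.0**-n))  # ~2.0
```

For a genuine limsup the tail maximum is only a heuristic, but it is exact for well-behaved coefficient sequences like these.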

[ Come back later, we're still working on this one... ]

Radius of convergence


(2018-06-01) Stolz Sector
Slice of the disk of convergence with its apex on the boundary.

[ Come back later, we're still working on this one... ]

Stolz angle by Andrzej Kozlowski (Wolfram Demonstrations Project).
Stolz region. Question of Daniel answered by Robert Israel (StackExchange, 2012-09-02).

Otto Stolz (1842-1905, PhD 1864)


(2021-08-08) Composition of two (formal) power series.

[ Come back later, we're still working on this one... ]

Lagrange inversion formula | Composition of two power series (Math Stack Exchange, 2017-05-27). | Composition of Power Series by John Armstrong (The Unapologetic Mathematician).


(2019-12-06) Lagrange Inversion Formula
Lagrange-Bürmann Inversion Formula.

[ Come back later, we're still working on this one... ]

Lagrange inversion formula | Joseph-Louis Lagrange (1736-1813)
Lagrange-Bürmann formula | Hans Heinrich Bürmann (c.1770-1817)


Brian Keiffer (Yahoo! 2011-08-07) Formal properties of the exp series.
Defining exp (x) = Σ_n x^n/n! and e = exp (1), prove that exp (x) = e^x.

In their open disk of convergence (i.e., circular boundary excluded, unless it's at infinity) power series are absolutely convergent series. So, in that domain, the sum of the series is unchanged by modifying the order of the terms (commutativity) and/or grouping them together (associativity). This allows us to establish directly the following fundamental property (using the binomial theorem):

exp (x) exp (y) = exp (x+y)

Such manipulations are disallowed for convergent series that are not absolutely convergent (which is to say that the series formed by the absolute values of the terms diverges). Rearranging the terms of such a real series can make it converge to any limit whatsoever!
exp (x) exp (y) = ( Σ_(n≥0) x^n/n! ) ( Σ_(m≥0) y^m/m! ) = Σ_(n≥0) Σ_(m≥0) (x^n/n!) (y^m/m!)

= Σ_(n≥0) Σ_(k=0..n) x^k y^(n-k) / [ k! (n-k)! ] = Σ_(n≥0) (x+y)^n / n! = exp (x+y)

This lemma shows immediately that exp (-x) = (exp x)^(-1). Then, by induction on the absolute value of the integer n, we can establish that:

exp (n x) = (exp x)^n

With n = m and x = y/m, this gives exp (y) = (exp (y/m))^m. So :

exp (y/m) = (exp y)^(1/m)

Chaining those two results, we obtain, for any rational q = n/m :

exp (q y) = (exp y)^q

By continuity, the result holds for any real q = x. In particular, with y = 1:

exp (x) = (exp 1)^x = e^x   QED
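Truncated partial sums of the series already satisfy the functional equation to machine precision, which makes a quick numerical sanity check possible (a Python sketch; the name exp_series is ours):

```python
import math

def exp_series(x, terms=60):
    """Partial sum of the power series  sum over n >= 0 of x^n / n!"""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= x / (n + 1)   # next term: x^(n+1) / (n+1)!
    return total

x, y = 0.7, -1.3
print(exp_series(x) * exp_series(y) - exp_series(x + y))  # ~0 (machine precision)
print(exp_series(1.0), math.e)                            # both ~2.718281828...
```

Sixty terms are far more than needed here: the factorials make the tail of the series utterly negligible for arguments of modest size.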


(2008-12-23) Analytic Continuation (Weierstrass, 1842)
Power series that coincide wherever their disks of convergence overlap.

In the realm of real or complex numbers, two polynomials which coincide at infinitely many distinct points are necessarily equal (HINT: their difference, being a polynomial with infinitely many roots, must be the zero polynomial).

This result on polynomials doesn't have an immediate generalization to analytic functions, for the simple reason that there are analytic functions with infinitely many zeroes. The sine function is one example of an analytic function with infinitely many isolated zeroes.

However, an analytic function defined on a nonempty open domain can be extended in only one way to a larger open domain of definition which doesn't encircle any point outside the previous one. Such an extension of an analytic function is called an analytic continuation thereof.

Divergent Series :

Loosely speaking, analytic continuations can make sense of divergent series in a consistent way. Consider, for example, the classic summation formula for the geometric series, which converges when |z| < 1 :

1 + z + z^2 + z^3 + z^4 + ... + z^n + ... = 1 / (1-z)

The right-hand-side always makes sense, unless z = 1. It's thus tempting to equate it formally to the left-hand-side, even when the latter diverges! This viewpoint has been shown to be consistent. It makes perfect sense of the following "sums" of divergent series which may otherwise look like monstrosities (respectively obtained for z = -1, 2, 3) :

1 - 1 + 1 - 1 + ... + (-1)^n + ... = ½
1 + 2 + 4 + 8 + 16 + ... + 2^n + ... = -1
1 + 3 + 9 + 27 + 81 + ... + 3^n + ... = -½
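In code, such a continuation is just the closed form evaluated outside the disk of convergence (a trivial Python illustration; the function name is ours):

```python
def geometric_continuation(z):
    """Value assigned to 1 + z + z^2 + ... by its analytic continuation 1/(1-z):
    it agrees with the series inside |z| < 1 and extends it everywhere else,
    except at the pole z = 1."""
    return 1 / (1 - z)

print(geometric_continuation(-1))  # 0.5
print(geometric_continuation(2))   # -1.0
print(geometric_continuation(3))   # -0.5
```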

Analytic continuation | Proof of uniqueness | Identity theorem

Several complex variables | Sheaf cohomology

Divergent geometric series | 1 + 2 + 4 + 8 +... | 1 - 2 + 4 - 8 +... | Borel summation (1899)
Adding Past Infinity & Taming Infinity by Henry Reich (Minute Physics)
Visualizing analytic continuation (20:27) by Grant Sanderson (2016-12-09).
Analytic Continuation and the Zeta Function (49:33) by Zetamath (2021-12-16).


Dimitrina Stavrova (2008-12-22; e-mail) Decimated Power Series
What is the sum of 8^n / (3n)! over all natural integers n ?

Answer : (1/3) [ e^2 + 2 cos (√3) / e ] = 2.423641733185364535425...

That's a special case (for z = 2, k = 3, a_n = 1/n!) of this problem:

For an integer k and a known series f (z) = Σ_n a_n z^n , find the value of:

f_k (z) = Σ_n a_(kn) z^(kn)

The key is to introduce a primitive k-th root of unity, like ω = exp (2πi/k) :

1 + ω + ω^2 + ... + ω^(k-1) = 0     ω^k = 1

The quantity 1 + ω^j + ... + ω^((k-1)j) is equal to k when j is a multiple of k and vanishes otherwise. Matching the coefficients of a_j z^j , we obtain:

f (z) + f (ω z) + f (ω^2 z) + ... + f (ω^(k-1) z) = k f_k (z)

For f (z) = e^z this gives the advertised result as f_3 (2), in the form:

(1/3) [ exp (2) + exp (2ω) + exp (2ω^2) ] where ω = ½ (-1 + i√3)
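The three ways of computing the sum (direct series, roots-of-unity filter, closed form) can be compared numerically. A short Python sketch (the function names are ours):

```python
import cmath
import math

def decimated_exp_direct(z, nmax=20):
    """Direct sum of z^(3n) / (3n)! for n = 0 .. nmax-1 (converges very fast)."""
    return sum(z**(3 * n) / math.factorial(3 * n) for n in range(nmax))

def decimated_exp_filter(z, k=3):
    """Roots-of-unity filter: f_k(z) = (1/k) * sum_j exp(w^j * z),
    where w = exp(2*pi*i/k) is a primitive k-th root of unity."""
    w = cmath.exp(2j * cmath.pi / k)
    return sum(cmath.exp(w**j * z) for j in range(k)) / k

closed_form = (math.exp(2) + 2 * math.cos(math.sqrt(3)) / math.e) / 3
print(decimated_exp_direct(2.0))       # ~2.42364173318...
print(decimated_exp_filter(2.0).real)  # same value
print(closed_form)                     # same value
```

All three agree to machine precision, and the imaginary part left over by the filter is numerically zero, as it must be for a real series.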

On 2008-12-26, Dimitrina Stavrova wrote: [edited summary]
I am greatly impressed by the quick and accurate generalization of my question, which gave me a deeper understanding of the related material. Thank you for creating such a great site!
Dimitrina Stavrova, Ph.D.
Sofia, Bulgaria

Thanks for the kind words, Dimitrina.

Σ x^n / (3n)! in closed form


(2021-07-22) Periodic Decomposition of a Power Series
A slight generalization of the technique introduced above.

Instead of retaining only the terms of a power series whose indices are multiples of a given modulus k, we may wish to keep only the indices whose residue modulo k is a prescribed remainder r. Thus, we're now after:

f_(k,r) (z) = Σ_n a_(kn+r) z^(kn+r)

That can be worked out with our previous result (the special case r = 0) by applying it to the function z^(k-r) f (z). Using ω = exp (2πi/k), we have:

z^(k-r) f (z) + (ωz)^(k-r) f (ωz) + ... + (ω^(k-1) z)^(k-r) f (ω^(k-1) z) = k z^(k-r) f_(k,r) (z)

Dividing both sides by k z^(k-r) and using ω^k = 1, we obtain the desired result:

f_(k,r) (z) = (1/k) [ f (z) + ω^(-r) f (ωz) + ω^(-2r) f (ω^2 z) + ... + ω^(-(k-1)r) f (ω^(k-1) z) ]

In the example k = 4 with f = exp, we have ω = i and, therefore:

f_(4,r) (z) = ¼ [ e^z + (-i)^r e^(iz) + (-1)^r e^(-z) + i^r e^(-iz) ]

That translates into four equations, for r = 0, 1, 2 or 3:

f_(4,0) (z) = ¼ [ e^z + e^(iz) + e^(-z) + e^(-iz) ] = ½ [ ch z + cos z ]
f_(4,1) (z) = ¼ [ e^z - i e^(iz) - e^(-z) + i e^(-iz) ] = ½ [ sh z + sin z ]
f_(4,2) (z) = ¼ [ e^z - e^(iz) + e^(-z) - e^(-iz) ] = ½ [ ch z - cos z ]
f_(4,3) (z) = ¼ [ e^z + i e^(iz) - e^(-z) - i e^(-iz) ] = ½ [ sh z - sin z ]

The whole machinery may be overkill in this case, where the above four relations are fairly easy to obtain directly from the expansions of cos, ch, sin and sh. However, it's a good opportunity to introduce the methodology needed in less trivial cases involving other roots of unity...
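The general filter is short enough to test numerically; here is a Python sketch (the helper name f_kr is ours) checking the identity f_(4,1)(z) = ½ [ sh z + sin z ]:

```python
import cmath
import math

def f_kr(f, z, k, r):
    """Keep only the power-series terms of f whose index is r modulo k:
    f_{k,r}(z) = (1/k) * sum over j of w^(-j*r) * f(w^j * z),
    where w = exp(2*pi*i/k)."""
    w = cmath.exp(2j * cmath.pi / k)
    return sum(w**(-j * r) * f(w**j * z) for j in range(k)) / k

z = 0.8
print(f_kr(cmath.exp, z, 4, 1).real)     # ~0.802731...
print((math.sinh(z) + math.sin(z)) / 2)  # same value
```

The real parts match to machine precision, and the imaginary part vanishes, confirming that the filter extracts exactly the terms z^(4n+1)/(4n+1)!.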

Let's use this to compute the deformed exponential exp_q (z) when q = -1.

exp_q (z) = Σ_(n≥0) [ z^n / n! ] q^(n(n-1)/2)

When q = -1, the value of q^((n-1)n/2) depends only on what n is modulo 4:

n mod 4 :            0    1    2    3
(-1)^((n-1)n/2) :   +1   +1   -1   -1

Therefore, exp_(-1) (z) = f_(4,0) (z) + f_(4,1) (z) - f_(4,2) (z) - f_(4,3) (z). So:

exp_(-1) (z) = cos z + sin z = √2 sin (z + π/4)
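A direct numerical check of that identity takes only a few lines of Python (the name exp_q is ours):

```python
import math

def exp_q(z, q, terms=40):
    """Deformed exponential: sum over n of z^n / n! * q^(n*(n-1)/2)."""
    return sum(z**n / math.factorial(n) * q**(n * (n - 1) // 2)
               for n in range(terms))

z = 1.1
print(exp_q(z, -1))                              # ~1.3448...
print(math.cos(z) + math.sin(z))                 # same value
print(math.sqrt(2) * math.sin(z + math.pi / 4))  # same value
```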

The technique applies whenever q is a root of unity. For example, with q = i the series splits according to the residue of the index n modulo 8:

n mod 8 :          0    1    2    3    4    5    6    7
i^((n-1)n/2) :    +1   +1   +i   -i   -1   -1   -i   +i

exp_i (z) = f_(8,0) + f_(8,1) + i f_(8,2) - i f_(8,3) - f_(8,4) - f_(8,5) - i f_(8,6) + i f_(8,7)

where f_(8,r) = (1/8) Σ_(m=0..7) e^(-mr iπ/4) exp ( z e^(m iπ/4) )

After a fairly tedious computation, this boils down to:

exp_i (z) = ½ [ e^((7z+1) iπ/4) + e^(5z iπ/4) - e^((3z+1) iπ/4) + e^(z iπ/4) ]

The case k = 3 (ω = e^(2iπ/3)) is much simpler, entailing only a 3-way split:

n mod 3 :          0    1    2
ω^((n-1)n/2) :    +1   +1    ω

exp_ω (z) = f_(3,0) + f_(3,1) + ω f_(3,2)

f_(3,0) (z) = (1/3) [ e^z + e^(zω) + e^(zω*) ]
f_(3,1) (z) = (1/3) [ e^z + ω* e^(zω) + ω e^(zω*) ]
f_(3,2) (z) = (1/3) [ e^z + ω e^(zω) + ω* e^(zω*) ]

exp_ω (z) = (1/3) [ (2+ω) e^z + (1+2ω*) e^(zω) + (2+ω) e^(zω*) ]

Period m (along n) of exp ( n(n-1)πi/k ) (A022998) :
k :  1   2   3   4   5   6   7   8   9   10   11   12   13   14   15   16
m :  1   4   3   8   5   12  7   16  9   20   11   24   13   28   15   32

Numericana : Deformed exponential


Brook Taylor (2021-10-05) Finite-Difference Calculus (FDC)
Applying the methods of calculus to discrete sequences.

Difference Operator Δ (discrete derivative) :

Δ f (n) = f (n+1) - f (n)

Like the usual differential operator (d), this is a linear operator, as are all the iterated difference operators Δ^k, recursively defined for k ≥ 0 :

Δ^0 f = f
Δ^(k+1) f = Δ^k (Δ f ) = Δ (Δ^k f )

Unlike the differential operator d of infinitesimal calculus, the above difference operator Δ yields ordinary finite quantities whose products can't be neglected; there's a third term in the corresponding product rule :

Δ (uv) = (Δu) v + u (Δv) + (Δu) (Δv)
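That modified product rule is easy to verify on a concrete example (a Python sketch; the helper name delta is ours):

```python
def delta(f):
    """Forward difference operator: (delta f)(n) = f(n+1) - f(n)."""
    return lambda n: f(n + 1) - f(n)

u = lambda n: n * n        # u(n) = n^2
v = lambda n: 2 * n + 3    # v(n) = 2n + 3
uv = lambda n: u(n) * v(n)

n = 5
lhs = delta(uv)(n)
rhs = delta(u)(n) * v(n) + u(n) * delta(v)(n) + delta(u)(n) * delta(v)(n)
print(lhs, rhs)  # 215 215
```

Dropping the third term (Δu)(Δv), as one would in infinitesimal calculus, gives 193 instead of 215: the correction term really is needed.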

Falling Powers (falling factorials) :

The number of ways to pick a sequence of m objects out of n possible choices (allowing repetitions) is n^m, pronounced n to the power of m.

When objects already picked are disallowed, the result is denoted (n)_m (often typeset as n with an underlined exponent m) and called n to the falling power of m. It's the product of m decreasing factors:

(n)_m = n (n-1) (n-2) ... (n+1-m)

As usual, that's 1 when m = 0, because it's the product of zero factors. Falling powers are closely related to choice numbers:

C(n,m) = nCm = n! / [ (n-m)! m! ] = (n)_m / m!

Falling powers are to FDC what ordinary powers are to infinitesimal calculus, since:

Δ (n)_m = m (n)_(m-1)

Iterating this relation yields the pretty formula:

Δ^k (n)_m = (m)_k (n)_(m-k)
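The basic differentiation rule for falling powers can be verified with exact integer arithmetic (a Python sketch; the name falling is ours):

```python
def falling(n, m):
    """Falling power (n)_m = n (n-1) ... (n+1-m): product of m decreasing factors."""
    result = 1
    for i in range(m):
        result *= n - i
    return result

# Check  delta (n)_m = m * (n)_(m-1)  at n = 7, m = 4:
n, m = 7, 4
lhs = falling(n + 1, m) - falling(n, m)   # delta of the falling power, at n
rhs = m * falling(n, m - 1)               # m * (n)_(m-1)
print(lhs, rhs)  # 840 840
```

The empty product gives falling(n, 0) == 1, matching the convention stated above for m = 0.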

James Gregory (1638-1675)

Gregory-Newton forward-difference formula :

f (n) = Σ_(k≥0) C(n,k) Δ^k f (0)

When n is a natural integer, the right-hand side is a finite sum, as all binomial coefficients with k > n vanish.

Proof : By induction on n (the case n = 0 being trivial):

Assuming the formula holds for a given n, we apply it to D f and obtain:

f (n+1) - f (n) = Δ f (n) = Σ_(k≥0) C(n,k) Δ^(k+1) f (0) = Σ_(k≥0) C(n,k-1) Δ^k f (0)

Note the zero leading term (k = 0) in the re-indexed rightmost sum. We may add this finite sum termwise to the previous expansion of f (n) to obtain:

f (n+1) = Σ_(k≥0) [ C(n,k-1) + C(n,k) ] Δ^k f (0) = Σ_(k≥0) C(n+1,k) Δ^k f (0)

This says that the formula holds for n+1. QED

Falling powers make the above look like a Taylor-Maclaurin expansion:

f (n) = f (0) + Δf (0) (n)_1 + Δ^2 f (0) (n)_2 / 2 + ... + Δ^k f (0) (n)_k / k! + ...

When one Δ^k f vanishes identically, so do all the subsequent differences, and the above right-hand side gives f directly as a polynomial function of n.
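The whole interpolation scheme fits in a few lines of Python (the function names are ours): iterated differences of the initial samples feed the Gregory-Newton sum, which reconstructs a polynomial sequence exactly.

```python
from math import comb

def forward_differences(values):
    """Iterated forward differences at 0: returns [f(0), delta f(0), delta^2 f(0), ...]."""
    diffs, row = [], list(values)
    while row:
        diffs.append(row[0])
        row = [row[i + 1] - row[i] for i in range(len(row) - 1)]
    return diffs

def gregory_newton(values, n):
    """Evaluate f(n) = sum over k of C(n,k) * delta^k f(0),
    from the initial samples f(0), f(1), ..., f(len(values)-1)."""
    d = forward_differences(values)
    return sum(comb(n, k) * d[k] for k in range(len(d)))

# f(n) = n^3: the four samples 0, 1, 8, 27 determine the cubic exactly.
print(gregory_newton([0, 1, 8, 27], 5))   # 125
print(gregory_newton([0, 1, 8, 27], 10))  # 1000
```

Since the fourth difference of a cubic vanishes, four samples suffice and the extrapolation is exact, in line with the remark above.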

Encyclopedia of Mathematics | Wikipedia | MathWorld

Finite Differences? by John D. Cook (2009-02-01).

Calculus of Finite Differences by Andreas Klappenecker (2009-02-01).

Newton Gregory backward Interpolation Formula (13:42) by Tessy Cyriac (2020-04-18).

What comes next? (47:10) by Burkard Polster (Mathologer, 2021-10-02).
