Well, people have been inventing special functions ad nauseam. The list is quite literally endless, but we may attempt the beginning of a classification for those functions which are common enough to have a universally accepted name. Let's start with the truly elementary functions:
As advertised, the list is endless...
1) If ln(x-2) - 3 = ln(x+1), then ln((x-2)/(x+1)) = 3 = ln(e³), so we must have (x-2) = (x+1)e³, and x can only be equal to (2+e³)/(1-e³). However, this value of x happens to be negative (it's about -1.157), which makes it unacceptable, since both (x-2) and (x+1) should be positive (or else you can't take their logarithms). Therefore, the original equation does not have any solutions at all!
2) Rewrite sin(2x) sin(x) + cos(x) = 0 as 2 cos(x) sin(x) sin(x) + cos(x) = 0, or cos(x) [2 sin²(x) + 1] = 0. As the second factor cannot be zero, this equation boils down to cos(x) = 0, which has infinitely many solutions of the form x = (k+½)π, where k is any integer (positive or not).
Basically, the following relation is used:
sin(x) = x - x³/6 + x⁵/120 - x⁷/5040 + x⁹/362880 - ... + (-1)ᵏ x²ᵏ⁺¹/(2k+1)! + ...
To use this for actual computations, you've got to remember that x should be expressed in radians (1° = π/180 rad). In your example, x = 32° = 0.558505360638... rad. The series "converges" very rapidly:
After 1 term,  S = 0.55850536063818
After 2 terms, S = 0.52946976180816
After 3 terms, S = 0.52992261296708
After 4 terms, S = 0.52991924970365
After 5 terms, S = 0.52991926427444
After 6 terms, S = 0.52991926423312
After 7 terms, S = 0.52991926423332
(no change at this precision after this)
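As a quick sketch (in Python; the helper name is ours), the partial sums above can be reproduced by accumulating each term from the previous one, which avoids recomputing factorials:

```python
import math

def sin_partial_sums(x, terms):
    """Partial sums of sin(x) = x - x^3/3! + x^5/5! - ...  (x in radians)."""
    sums, total, term = [], 0.0, x
    for k in range(terms):
        total += term
        sums.append(total)
        # Next term of the alternating series: multiply by -x^2/((2k+2)(2k+3))
        term *= -x * x / ((2*k + 2) * (2*k + 3))
    return sums

x = math.radians(32)  # 32 degrees = 0.558505360638... rad
for i, s in enumerate(sin_partial_sums(x, 7), 1):
    print(f"After {i} term(s), S = {s:.14f}")
```

Running this reproduces the table above, term for term.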
Your computer and/or calculator uses this along with a technique called economization (the most popular variant being Chebyshev economization) which allows a polynomial of high degree (or any reasonable function) to be very well approximated by a polynomial of lower degree.
In the case of the sine function, the convergence of the above series is so good that economization only saves you a couple of multiplications for a given precision. In some other cases (like the atan function), it is quite indispensable.
Footnote about atan: The atan function has a nice Chebyshev expansion which allows one to bypass the intermediate step of a so-called Taylor expansion like the one above.
This is rather fortunate because the convergence of atan's
Taylor expansion is quite lousy when x is close to 1.
Modern atan routines use an economized polynomial for x between 0 and 1,
and reduce the computation of atan(x) to that of atan(1/x)
when x is above 1.
See the following article for more details...
Over a finite interval, it is always possible to approximate a continuous function with arbitrary precision by a polynomial of sufficiently high degree. In some cases [one example is the sine function in the previous article] truncation of the function's Taylor series works well enough. In other cases, the Taylor series may either converge too slowly or not at all (the function may not be analytic or, if it is analytic, the radius of convergence of its Taylor series may be too small to cover comfortably the desired interval).
If a good polynomial approximation of the continuous real function f(x) is desired over a finite interval, the following approach may be used and is in fact the most popular one. We may consider without loss of generality that the desired range of x is [-1,1] (if it's not, a linear change of variable will make it so). Thus, a new variable θ (whose range is [0,π]) can be introduced via the relation cos θ = x. Either variable is a decreasing function of the other.
The fundamental remark is that cos(nθ) is a polynomial function of cos(θ). In fact, either of the following relations defines a polynomial of degree n known as the Chebyshev polynomial [of the first kind] of degree n. The symbol "T" is conventionally used for these because of alternate transliterations from Russian, like Tchebycheff or Tchebychev, which are a better match for the Russian pronunciation (the spellings "Chebychev" and "Tchebyshev" also appear).
cos(nθ) = Tₙ(cos θ)   or   ch(nθ) = Tₙ(ch θ)   [ch = hyperbolic cosine]
The trigonometric formula cos((n+2)x) = 2 cos(x) cos((n+1)x) - cos(nx) translates into a simple recurrence relation which makes Chebyshev polynomials very easy to tabulate:  Tₙ₊₂(x) = 2x Tₙ₊₁(x) - Tₙ(x)
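For illustration, that recurrence tabulates coefficient lists directly (a Python sketch; the function name is ours):

```python
def chebyshev_T(n):
    """Coefficients (lowest degree first) of T_0 .. T_n, from the
    recurrence T_{k+2}(x) = 2x T_{k+1}(x) - T_k(x)."""
    polys = [[1], [0, 1]]              # T_0 = 1,  T_1 = x
    for _ in range(n - 1):
        a, b = polys[-2], polys[-1]
        c = [0] + [2*v for v in b]     # 2x * T_{k+1} (shift up one degree)
        for i, v in enumerate(a):      # ... minus T_k
            c[i] -= v
        polys.append(c)
    return polys

# T_4(x) = 8x^4 - 8x^2 + 1
print(chebyshev_T(4)[4])   # [1, 0, -8, 0, 8]
```

One can then check the defining identity numerically: evaluating Tₙ at cos θ does return cos(nθ).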
We must remark prominently that, if y² = x²-1 (y need not be real), then:
Tₙ(x) = [ (x+y)ⁿ + (x-y)ⁿ ] / 2
This is a consequence of de Moivre's relation (with x = cos θ and y = i sin θ):
[ cos θ + i sin θ ]ⁿ = [exp(iθ)]ⁿ = exp(inθ) = cos(nθ) + i sin(nθ)
Now, f(cos θ) is clearly an even function of θ which is continuous when f is. As such, it has a tame Fourier expansion which contains only cosines and translates into the so-called Chebyshev-Fourier expansion of f(x):
The last expression is a series which is always convergent. For "infinitely smooth" functions, it converges exponentially fast (as a function of n, the coefficient has to be smaller than the reciprocal of a polynomial of degree k+1, for any k, or else the Fourier series of the k-th derivative of f(cos θ) would not converge). This is much more than what can be said about a Taylor power series... A truncated Fourier-Chebyshev series is thus expected to give a much better approximation than a Taylor series truncated to the same order.
What is known as Chebyshev economization is often limited to the following dubious technique: Take a good polynomial approximant with many terms (possibly coming from a Taylor expansion) and express it as a linear combination of Chebyshev polynomials (whose coefficients may be obtained from the inversion formula below). This expression may be truncated at some low order to obtain a good approximation as a polynomial of lower degree.
A better approach, whenever possible, is to compute the exact Chebyshev expansion of the target function and to truncate that in order to obtain a good approximation by a polynomial of low degree... The following inversion formula can be used to obtain the Chebyshev expansion [watch out for the explicit halving of c₀] of an analytic function given by its Taylor expansion:
The above complete inversion formula (infinite sum) is occasionally handy, but one may also always obtain the coefficients cₙ via the Euler formulas, which give:
In at least one (important) case, we may even obtain the Chebyshev expansion directly by algebraic methods... Consider the arctangent function, which gives the angle in radians between -π/2 and π/2 whose tangent equals its given [real] argument. That function is variously abbreviated Arctg (Int'l/European), arctan (US), atg or atan (computerese). The following relation is true for small enough arguments. [It's true modulo π for unrestricted arguments, because of the formula giving tg(a+b) as (u+v)/(1-uv) if u and v are the respective tangents of a and b.] This may thus be considered an algebraic relation between formal power series:
Arctg( (u+v)/(1-uv) ) = Arctg(u) + Arctg(v)
With this in mind, we may as well use this formal identity for the complex numbers u = k [x + i√(1-x²)] and v = k [x - i√(1-x²)], so that 2kⁿ Tₙ(x) = uⁿ + vⁿ. This turns the RHS of the above identity directly into a Chebyshev expansion where the coefficient cₙ is simply the coefficient of the arctangent power series multiplied by 2kⁿ. On the other hand, the LHS becomes Arctg(2kx/(1-k²)). If we let k be √2-1, this boils down to Arctg(x) and we have:
That's [almost] all there is to it: We got the Chebyshev expansion at very little cost! How good is the convergence of this series? Well, we may first remark that it converges even if the magnitude of x exceeds unity. More precisely, when x is larger than 1, Tₙ(x) is asymptotically equal to half the n-th power of x+√(x²-1), a quantity which equals the reciprocal of √2-1 when x is √2. Therefore, the series converges if and only if the magnitude of x is less than (or equal to) √2.
More importantly, when the magnitude of x is not more than 1, a partial sum approximates the whole thing with an error smaller than the coefficient of the first discarded term. Suppose we want to use this to find a polynomial approximant of the arctangent function at a precision of about 13 significant digits (we need it only over the interval [-1,1], as we may obtain the arctangent of x for x>1 as π/2 minus the arctangent of 1/x). We find that for 2n+1 = 31, the relevant coefficient is about 0.88×10⁻¹³, so that the corresponding term is just about small enough to be dropped. The method will thus give the desired precision with an odd polynomial of degree 29, whose value can be computed using 16 multiplications and 14 additions. A similar accuracy would require about 10 000 000 000 000 operations with the "straight" Taylor series... Some economization, indeed!
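The expansion described above (each Chebyshev coefficient being the corresponding arctangent power-series coefficient multiplied by 2kⁿ, with k = √2-1) is easy to evaluate with the three-term recurrence. A Python sketch, with our own function name:

```python
import math

def atan_chebyshev(x, terms=16):
    """arctan(x) for |x| <= 1 via its Chebyshev expansion:
    sum over m of (-1)^m * 2*k^(2m+1)/(2m+1) * T_{2m+1}(x),  k = sqrt(2)-1."""
    k = math.sqrt(2) - 1
    t_prev, t_curr = 1.0, x            # T_0(x), T_1(x)
    total = 0.0
    for m in range(terms):
        n = 2*m + 1
        total += (-1)**m * 2 * k**n / n * t_curr
        # Advance two steps of T_{j+2} = 2x T_{j+1} - T_j (only odd T's are used)
        t_next = 2*x*t_curr - t_prev                   # T_{n+1}
        t_prev, t_curr = t_next, 2*x*t_next - t_curr   # T_{n+2}
    return total

print(abs(atan_chebyshev(1.0) - math.pi/4))
```

With 16 terms (degree 31), the error on [-1,1] is indeed at the 10⁻¹³-10⁻¹⁴ level announced above.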
The above "formal" computation gives the same results as the (unambiguous) relevant Euler formula for the coefficients of the Chebyshev expansion of the arctangent function. This may puzzle a critical reader, since the whole thing seems to work as long as the quantity 2k/(1-k²) is equal to unity, and this quadratic condition is true not only when k is √2-1, but also for the alternate root -(√2+1) as well. This latter value, however, leads to a formal Chebyshev series which diverges for any value of x...
The following intimidating definitions of the transcendental Gamma function hide its simple nature: Γ(z+1) is merely the generalization of the factorial function (z!) to all real or complex values of the number z [besides negative integers].
Γ(z) has an elementary expression only when z is either a positive integer n, or a positive or negative half-integer (½+n or ½-n):
In this, k! ("k factorial") is the product of all positive integers less than or equal to k, whereas k!! ("k double-factorial") is the product of all such integers which have the same parity as k, namely k(k-2)(k-4)... Note that k! is undefined (∞) when k is a negative integer (the Γ function is undefined at z = 0, -1, -2, -3, ... as it has a simple pole at z = -n with a residue of (-1)ⁿ/n!, for any natural integer n). However, the double factorial k!! may also be defined for negative odd values of k: the expression (-2n-1)!! = (-1)ⁿ/(2n-1)!! may be obtained through the recurrence relation (k-2)!! = k!!/k, starting with k = 1. In particular (-1)!! = 1, so that either of the above formulas does give Γ(1/2) = √π, with n = 0. (You may also notice that either relation holds for positive or negative values of n.)
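The half-integer case is easy to check numerically; a Python sketch (the function name is ours), using (2n-1)!! = 1·3·5···(2n-1):

```python
import math

def gamma_half(n):
    """Gamma(n + 1/2) = (2n-1)!! * sqrt(pi) / 2^n, for integer n >= 0."""
    dfact = 1
    for k in range(3, 2*n, 2):    # builds (2n-1)!! = 1*3*5*...*(2n-1)
        dfact *= k
    return dfact * math.sqrt(math.pi) / 2**n

print(gamma_half(0))   # Gamma(1/2) = sqrt(pi) = 1.7724538509...
```

For n = 0 the empty product gives (−1)!! = 1, consistent with the convention stated above.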
When the real 2x is not an integer, we do not know any expression of Γ(x) in terms of elementary functions:
Γ(1/3) = 2.67893853470774763365569294097467764412868937795730...
Γ(1/4) = 3.62560990822190831193068515586767200299516768288006...
Γ(1/5) = 4.59084371199880305320475827592915200343410999829340...
The real [little known] gem which I have to offer about numerical values of the Gamma function is the so-called "Lanczos approximation formula" [pronounced "LAHN-tsosh" and named after the Hungarian mathematician Cornelius Lanczos (1893-1974), who published it in 1964]. Its form is quite specific to the Gamma function whose values it gives with superb precision, even for complex numbers. The formula is valid as long as Re(z) [the real part of z] is positive. The nominal accuracy, as I recall, is stated for Re(z) > ½, but it's a simple application of the "reflection formula" (given below) to obtain the value for the rest of the complex plane with a similar accuracy. The Lanczos formula makes the Gamma function almost as straightforward to compute as a sine or a cosine. Here it is:
Γ(z) = [ 1 + C₁/z + C₂/(z+1) + ... + Cₙ/(z+n-1) + ε(z) ] × √(2π) (z+p-½)^(z-½) / e^(z+p-½)
ε(z) is a small error term whose value is bounded over the half-plane described above. The values of the coefficients Cᵢ depend on the choice of the integers p and n. For p=5 and n=6, the formula gives a relative error less than 2.2×10⁻¹⁰ with the following choice of coefficients: C₁ = 76.18009173, C₂ = -86.50532033, C₃ = 24.01409822, C₄ = -1.231739516, C₅ = 0.00120858003, and C₆ = -0.00000536382.
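For the record, here is a direct Python transcription of the formula with the p=5, n=6 coefficients above (restricted to real z > 0 in this sketch; replacing math by cmath would extend it to complex arguments with positive real part):

```python
import math

# Lanczos coefficients for p = 5, n = 6 (relative error below 2.2e-10)
C = [76.18009173, -86.50532033, 24.01409822,
     -1.231739516, 0.00120858003, -0.00000536382]

def lanczos_gamma(z):
    """Gamma(z) for real z > 0, via the Lanczos approximation."""
    series = 1.0 + sum(c / (z + i) for i, c in enumerate(C))
    t = z + 4.5                    # z + p - 1/2, with p = 5
    return math.sqrt(2*math.pi) * t**(z - 0.5) * math.exp(-t) * series

print(lanczos_gamma(5.0))   # Gamma(5) = 4! = 24, to about ten digits
```

Note how cheap this is: one square root, one power, one exponential and six divisions.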
I used this particular set of coefficients extensively for years (other sources may be used for confirmation) and stated so in my original article here. This prompted Paul Godfrey of Intersil Corp. to share a more precise set and his own method to compute any such sets (without the fear of uncontrolled rounding errors). Paul has kindly agreed to let us post his (copyrighted) notes on the subject here.
Some of the fundamental properties of the Gamma function are:
Other interesting remarks about the Gamma function include:
The Gamma Function (38 pages) by Emil Artin (1931; English translation by Michael Butler, 1964)
Taking the exponential of both sides makes it easy to solve for i:
t eᵗ = [I/i]^T
i = I / (t eᵗ)^(1/T)
To solve for t, you must use Lambert's W function, one of the more common "special" functions presented above: Apply W to both sides of the first of the above equations. By definition, W(t eᵗ) is equal to t. Therefore:
t = W( [I/i]^T )
This solution is valid for positive values of t (the original equation does not make sense for negative ones). By itself, the equation x = t exp(t) has 2 real solutions for t when x is between -1/e and 0 and no real solution when x is less than -1/e.
The radius of convergence of the Taylor series of W is 1/e (0.36787944...)
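Lacking a library routine, W can be evaluated on the principal branch by applying Newton's method to f(w) = w·eʷ - x. A sketch for x ≥ 0 (the function name is ours):

```python
import math

def lambert_w(x, tol=1e-14):
    """Principal branch W(x) for x >= 0: solves w * exp(w) = x by Newton."""
    w = math.log1p(x)              # reasonable starting guess for x >= 0
    for _ in range(100):
        ew = math.exp(w)
        step = (w*ew - x) / (ew*(w + 1))   # f(w)/f'(w)
        w -= step
        if abs(step) < tol:
            break
    return w

# Omega constant: W(1) = 0.5671432904...
print(lambert_w(1.0))
```

With it, the solution above reads t = lambert_w((I/i)**T) for the given positive quantities I, i and T.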
Thanks a lot for your kind, quick and learned answer.
It will be most useful to me. Best regards,
Louis Vlemincq, Transmission Specialist, BelcomLab.
BELGACOM / 2, rue Carli, 1140 Evere / Belgium
This can be used to define W everywhere except at the singular point z = -1/e by analytic continuation, as a multivalued function whose branch cuts are not trivially related.
Johann Heinrich Lambert (1728-1777) | MathWorld | Wikipedia | Omega constant | Gompertz-Makeham law
"Branch differences and Lambert W" by D.J. Jeffrey & J.E. Jankowski (2014).
Videos: "The Lambert W Function" (11:57) by Mathoma (2015-02-12).
"The Famous Equation x^2=2^x" (11:57) by Steve Chow (blackpenredpen, 2019-10-29).
The polylogarithm of order s is the analytic function of z defined by
Riemann's Zeta function is a special case: Liₛ(1) = ζ(s)
For s=2 and s=3, the names dilogarithm and trilogarithm are used. Other standard numerical prefixes are available if the need arises... Historically, the dilogarithm function was studied well before other polylogarithms, starting with the following formula due to Euler:
Dilogarithm Reflection Formula (Euler, 1768)
The dilogarithm is still sometimes called Spence's function to recognize the work published in 1809 by the Scottish mathematician William Spence (1777-1815). Spence was concerned with polylogarithms of all orders (integers only) which he called Logarithmic Transcendents.
Legendre posed some properties of dilogarithms as exercises (1811). Niels Abel (1802-1829) discussed dilogarithms at greater length in 1826. The name itself (German: bilogarithmische Function) was coined in 1828 by the Swedish mathematician Carl Johan Hill (né Rudelius, 1793-1875).
In December 1888, the Swiss mathematician Alfred Jonquière (1862-1899) presented the general case (for integer values of s) to the Royal Swedish Academy of Sciences, under the title: Ueber einige Transcendente, welche bei der wiederholten Integration rationaler Funktionen auftreten. Shortly thereafter, Jonquière extended his definition to allow all complex values of s in a note published in French in the Bulletin de la Société Mathématique de France, 17, pp. 142-152 (1889).
Besides polylogarithms, Alfred Jonquière is best remembered for a musical treatise: Grundriss der musikalischen Akustik (1898). He is unrelated to the family of the French geometer and naval officer Ernest de Jonquières (1820-1901) whose name is spelled with a trailing "s".
Thus, Jonquière's function (the general polylogarithm) can be considered to be a differentiable function of two complex variables, s and z, verifying the following recurrence relation:
Li₀(z) = z / (1-z)
Li₁(z) = - Log(1-z)
∂z Liₛ(z) = Liₛ₋₁(z) / z
The latter relation can be integrated unambiguously, using Liₛ(1) = ζ(s):
This isn't applicable to s=1 because both terms diverge.
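Inside the unit disk, the defining series itself is usable for numerical checks. A Python sketch (plain truncation, no convergence acceleration):

```python
import math

def polylog(s, z, terms=100):
    """Li_s(z) = sum over n >= 1 of z^n / n^s  (plain truncation, |z| < 1)."""
    return sum(z**n / n**s for n in range(1, terms + 1))

# Li_1(1/2) = -log(1 - 1/2) = log 2, and the classical dilogarithm value
# Li_2(1/2) = pi^2/12 - (log 2)^2 / 2
print(polylog(1, 0.5), math.log(2))
print(polylog(2, 0.5), math.pi**2/12 - math.log(2)**2/2)
```

At z = ½ the truncation error is below 2⁻¹⁰⁰, so both printed pairs agree to machine precision.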
Come back later, we're still working on this one...
Wikipedia | Brilliant | John D. Cook | Polylogarithmic identities
Kummer's function | Eduard Kummer (1810-1893) | Lerch transcendent | Mathias Lerch (1860-1922)
"Note sur la série Σ xⁿ/nˢ" by Niels Abel (1826, published posthumously).
"Note sur la série Σ xⁿ/nˢ" by Alfred Jonquière. Bulletin de la S.M.F. 17, pp. 142-152 (1889).
Video: "Dilogarithm" (8:28) by Steve Chow (blackpenredpen, 2019-12-04).
"From a certain viewpoint, the simplest entire function after the exponential function."
Georges Valiron (1884-1955), in 1938.
For less obvious expressions when q is a root of unity, see the technique presented elsewhere on this site. Such results include expressions like:
exp_i(z) = ½ [ e^((7z+1)iπ/4) + e^(5ziπ/4) - e^((3z+1)iπ/4) + e^(ziπ/4) ]
When q is a real between 0 and 1, a theorem due to Edmond Laguerre (1834-1886; X1853) says that all zeros are simple, real and negative.
The first column of the above table, x₀(q), has been the object of considerable attention. Alan Sokal (2011) found that 1 + 1/x₀(q) has an expansion consisting only of positive terms (he checked up to order 899):
"Sur le déplacement des zéros des fonctions entières par leur dérivation" (Uppsala PhD) by Martin Ålander (1914).
Lecture #1 and Lecture #2 by Alan Sokal (1955-) at Queen Mary (March 2011).
"The deformed exponential function" by Alan Sokal (Marc Kac seminar, Utrecht, 2011-06-10).
"An asymptotic formula for the zeros of the deformed exponential function" by Cheng Zhang (arXiv, 2015-01-12).
"Zeros of the deformed exponential function" by Liuquan Wang & Cheng Zhang (2018).
The name deformed exponential has often been hijacked in recent years. What follows coincides with the above only up to second order:
e_q(x) = 1 + x + q x²/2 + O(x³)
Constantino Tsallis presented the following in 1988 and 1994. It had been analyzed in 1964 (using the real parameter λ = 1-q) by the statisticians George E.P. Box (1919-2013) and David Cox (1924-) and also, in 1967, by Jan Havrda and František Charvát, as they named the concept of structural α-entropy. They all refer to:
e_q(x) = [ 1 + (1-q) x ]^(1/(1-q))
That expression is well-defined only when 1+(1-q)x is a positive real, although it does reduce to the ordinary exponential as q tends to 1.
That's e_{1-q}(x, 1), using the deformed exponential function of two variables of Miomir Stanković, Sladjana Marinković & Predrag Rajković (2011):
e_h(x, y) = [ 1 + h x ]^(y/h)
Extending to e₀(x, y) = e^(xy) by continuity in the neighborhood of h = 0.
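A minimal Python sketch of the Tsallis q-exponential (the function name is ours), with the q → 1 limit handled explicitly:

```python
import math

def q_exp(x, q):
    """Tsallis q-exponential [1 + (1-q)x]^(1/(1-q)); exp(x) in the limit q -> 1.
    Defined only where 1 + (1-q)x is a positive real."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    if base <= 0:
        raise ValueError("1 + (1-q)x must be positive")
    return base ** (1.0 / (1.0 - q))

print(q_exp(1.0, 0.5))   # [1 + 0.5]^2 = 2.25
```

For q = 0 this reduces to 1 + x, matching the second-order agreement noted above.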
Tsallis statistics | Tsallis entropy | Tsallis distribution | q-Gaussian | Constantino Tsallis (1943-)
A solution of a linear differential equation can only have fixed singularities at points where the coefficients of the equation are singular.
On the other hand, the solutions of a nonlinear differential equation may present other singularities which depend on the initial conditions. These are called movable singularities (also known as spontaneous or internal; French: singularité mobile).
Come back later, we're still working on this one...
Painlevé-type equations | Movable singularity | Painlevé transcendents
Émile Picard (1856-1941) | Paul Painlevé (1863-1933) | Bertrand Gambier (1879-1954)