
Gaussian integral

Integral of the Gaussian function, equal to sqrt(π)
This integral from statistics and physics is not to be confused with Gaussian quadrature, a method of numerical integration.
A graph of the function f(x) = e^{-x^2} and the area between it and the x-axis (i.e., over the entire real line), which is equal to √π.

The Gaussian integral, also known as the Euler–Poisson integral, is the integral of the Gaussian function f(x) = e^{-x^2} over the entire real line. Named after the German mathematician Carl Friedrich Gauss, the integral is

\int_{-\infty}^{\infty} e^{-x^2}\,dx = \sqrt{\pi}.

Abraham de Moivre originally discovered this type of integral in 1733, while Gauss published the precise integral in 1809,[1] attributing its discovery to Laplace. The integral has a wide range of applications. For example, with a slight change of variables it is used to compute the normalizing constant of the normal distribution. The same integral with finite limits is closely related to both the error function and the cumulative distribution function of the normal distribution. In physics this type of integral appears frequently, for example, in quantum mechanics, to find the probability density of the ground state of the harmonic oscillator. This integral is also used in the path integral formulation, to find the propagator of the harmonic oscillator, and in statistical mechanics, to find its partition function.

Although no elementary function exists for the error function, as can be proven by the Risch algorithm,[2] the Gaussian integral can be solved analytically through the methods of multivariable calculus. That is, there is no elementary indefinite integral for

\int e^{-x^2}\,dx,

but the improper integral

\int_{-\infty}^{\infty} e^{-x^2}\,dx

can be evaluated. The definite integral of an arbitrary Gaussian function is

\int_{-\infty}^{\infty} e^{-a(x+b)^2}\,dx = \sqrt{\frac{\pi}{a}}.
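The value can be checked numerically. Below is a minimal sketch in Python using NumPy and SciPy (not part of the article's derivation); the parameters a = 3.0 and b = -1.2 in the general form are arbitrary choices:

    import numpy as np
    from scipy.integrate import quad

    # Standard Gaussian integral: should equal sqrt(pi) ~ 1.7724538509
    val, err = quad(lambda x: np.exp(-x**2), -np.inf, np.inf)
    print(val, np.sqrt(np.pi))

    # General form exp(-a*(x+b)^2) with arbitrary a = 3.0, b = -1.2
    a, b = 3.0, -1.2
    val, err = quad(lambda x: np.exp(-a * (x + b)**2), -np.inf, np.inf)
    print(val, np.sqrt(np.pi / a))      # both ~1.0233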

Computation


By polar coordinates


A standard way to compute the Gaussian integral, the idea of which goes back to Poisson,[3] is to make use of the property that:

\left(\int_{-\infty}^{\infty} e^{-x^2}\,dx\right)^2
= \int_{-\infty}^{\infty} e^{-x^2}\,dx \int_{-\infty}^{\infty} e^{-y^2}\,dy
= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-(x^2+y^2)}\,dx\,dy.

Consider the function e^{-(x^2+y^2)} = e^{-r^2} on the plane \mathbb{R}^2, and compute its integral two ways:

  1. on the one hand, by double integration in the Cartesian coordinate system, its integral is a square: \left(\int e^{-x^2}\,dx\right)^2;
  2. on the other hand, by shell integration (a case of double integration in polar coordinates), its integral is computed to be π.

Comparing these two computations yields the integral, though one should take care about the improper integrals involved.

\begin{aligned}
\iint_{\mathbb{R}^2} e^{-(x^2+y^2)}\,dx\,dy
&= \int_0^{2\pi}\int_0^{\infty} e^{-r^2}\,r\,dr\,d\theta \\
&= 2\pi \int_0^{\infty} r e^{-r^2}\,dr \\
&= 2\pi \int_{-\infty}^{0} \tfrac{1}{2} e^{s}\,ds && (s = -r^2) \\
&= \pi \int_{-\infty}^{0} e^{s}\,ds \\
&= \pi \left[e^{s}\right]_{-\infty}^{0} \\
&= \pi\,(e^{0} - e^{-\infty}) \\
&= \pi\,(1 - 0) \\
&= \pi,
\end{aligned}

where the factor of r is the Jacobian determinant that appears because of the transform to polar coordinates (r dr dθ is the standard measure on the plane, expressed in polar coordinates), and the substitution involves taking s = −r^2, so ds = −2r dr.

Combining these yields

\left(\int_{-\infty}^{\infty} e^{-x^2}\,dx\right)^2 = \pi,

so

\int_{-\infty}^{\infty} e^{-x^2}\,dx = \sqrt{\pi}.
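As a numerical cross-check of this computation (a sketch, not part of the proof), the radial integral from the polar step and the square of the one-dimensional integral can both be evaluated with SciPy and compared to π:

    import numpy as np
    from scipy.integrate import quad

    # 2*pi * integral_0^inf r*exp(-r^2) dr from the polar-coordinate step
    radial, _ = quad(lambda r: r * np.exp(-r**2), 0, np.inf)
    print(2 * np.pi * radial)     # ~3.14159265...

    # Square of the one-dimensional Gaussian integral
    I, _ = quad(lambda x: np.exp(-x**2), -np.inf, np.inf)
    print(I**2)                   # ~3.14159265...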

Complete proof


To justify the improper double integrals and equating the two expressions, we begin with an approximating function:

I(a) = \int_{-a}^{a} e^{-x^2}\,dx.

If the integral \int_{-\infty}^{\infty} e^{-x^2}\,dx were absolutely convergent, then its Cauchy principal value, that is, the limit \lim_{a\to\infty} I(a), would coincide with \int_{-\infty}^{\infty} e^{-x^2}\,dx. To see that this is the case, consider that

\int_{-\infty}^{\infty} \left|e^{-x^2}\right| dx
< \int_{-\infty}^{-1} -x e^{-x^2}\,dx + \int_{-1}^{1} e^{-x^2}\,dx + \int_{1}^{\infty} x e^{-x^2}\,dx < \infty.

So we can compute \int_{-\infty}^{\infty} e^{-x^2}\,dx by just taking the limit \lim_{a\to\infty} I(a).

Taking the square of I(a) yields

\begin{aligned}
I(a)^2 &= \left(\int_{-a}^{a} e^{-x^2}\,dx\right)\left(\int_{-a}^{a} e^{-y^2}\,dy\right) \\
&= \int_{-a}^{a}\left(\int_{-a}^{a} e^{-y^2}\,dy\right) e^{-x^2}\,dx \\
&= \int_{-a}^{a}\int_{-a}^{a} e^{-(x^2+y^2)}\,dy\,dx.
\end{aligned}

Using Fubini's theorem, the above double integral can be seen as an area integral

\iint_{[-a,a]\times[-a,a]} e^{-(x^2+y^2)}\,d(x,y),

taken over a square with vertices {(−a, a), (a, a), (a, −a), (−a, −a)} on the xy-plane.

Since the exponential function is greater than 0 for all real numbers, it then follows that the integral taken over the square's incircle must be less than I ( a ) 2 {\displaystyle I(a)^{2}} {\displaystyle I(a)^{2}}, and similarly the integral taken over the square's circumcircle must be greater than I ( a ) 2 {\displaystyle I(a)^{2}} {\displaystyle I(a)^{2}}. The integrals over the two disks can easily be computed by switching from Cartesian coordinates to polar coordinates:

x = r\cos\theta, \qquad y = r\sin\theta

\mathbf{J}(r,\theta) = \begin{bmatrix} \dfrac{\partial x}{\partial r} & \dfrac{\partial x}{\partial \theta} \\[1em] \dfrac{\partial y}{\partial r} & \dfrac{\partial y}{\partial \theta} \end{bmatrix} = \begin{bmatrix} \cos\theta & -r\sin\theta \\ \sin\theta & r\cos\theta \end{bmatrix}

d(x,y) = \left|J(r,\theta)\right|\,d(r,\theta) = r\,d(r,\theta).

\int_0^{2\pi}\int_0^{a} r e^{-r^2}\,dr\,d\theta < I^2(a) < \int_0^{2\pi}\int_0^{a\sqrt{2}} r e^{-r^2}\,dr\,d\theta.

(See polar coordinate system for help with the transformation from Cartesian to polar coordinates.)

Integrating,

\pi\left(1 - e^{-a^2}\right) < I^2(a) < \pi\left(1 - e^{-2a^2}\right).

By the squeeze theorem, this gives the Gaussian integral

\int_{-\infty}^{\infty} e^{-x^2}\,dx = \sqrt{\pi}.
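The sandwich π(1 − e^{-a^2}) < I^2(a) < π(1 − e^{-2a^2}) can be illustrated numerically; the values of a below are arbitrary (a sketch, not part of the proof):

    import numpy as np
    from scipy.integrate import quad

    for a in (0.5, 1.0, 2.0, 4.0):
        I_a, _ = quad(lambda x: np.exp(-x**2), -a, a)
        lower = np.pi * (1 - np.exp(-a**2))
        upper = np.pi * (1 - np.exp(-2 * a**2))
        # Both inequalities hold, and I(a)^2 approaches pi as a grows
        print(a, lower < I_a**2 < upper, I_a**2)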

By Cartesian coordinates


A different technique, which goes back to Laplace (1812),[3] is the following. Let

y = xs, \qquad dy = x\,ds.

Since the limits on s as y → ±∞ depend on the sign of x, it simplifies the calculation to use the fact that e^{-x^2} is an even function, and, therefore, the integral over all real numbers is just twice the integral from zero to infinity. That is,

\int_{-\infty}^{\infty} e^{-x^2}\,dx = 2\int_0^{\infty} e^{-x^2}\,dx.

Thus, over the range of integration, x ≥ 0, and the variables y and s have the same limits. This yields

\begin{aligned}
I^2 &= 4\int_0^{\infty}\int_0^{\infty} e^{-(x^2+y^2)}\,dy\,dx \\
&= 4\int_0^{\infty}\left(\int_0^{\infty} e^{-(x^2+y^2)}\,dy\right)dx \\
&= 4\int_0^{\infty}\left(\int_0^{\infty} e^{-x^2(1+s^2)}\,x\,ds\right)dx.
\end{aligned}

Then, using Fubini's theorem to switch the order of integration:

\begin{aligned}
I^2 &= 4\int_0^{\infty}\left(\int_0^{\infty} e^{-x^2(1+s^2)}\,x\,dx\right)ds \\
&= 4\int_0^{\infty}\left[\frac{e^{-x^2(1+s^2)}}{-2(1+s^2)}\right]_{x=0}^{x=\infty} ds \\
&= 4\left(\frac{1}{2}\int_0^{\infty}\frac{ds}{1+s^2}\right) \\
&= 2\arctan(s)\Big|_0^{\infty} \\
&= \pi.
\end{aligned}

Therefore, I = \sqrt{\pi}, as expected.
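The order-switched double integral can be checked numerically with SciPy's dblquad (a sketch; the inner integral is taken over x, matching the order after Fubini's theorem):

    import numpy as np
    from scipy.integrate import dblquad

    # 4 * integral_{s=0}^{inf} integral_{x=0}^{inf} x*exp(-x^2*(1+s^2)) dx ds
    # dblquad integrates its first argument (here x) on the inside
    val, _ = dblquad(lambda x, s: x * np.exp(-x**2 * (1 + s**2)),
                     0, np.inf,    # outer limits, for s
                     0, np.inf)    # inner limits, for x
    print(4 * val, np.pi)          # both ~3.141592653589793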

In the Laplace approximation, we deal only with terms up to second order in the Taylor expansion, so we consider e^{-x^2} \approx 1 - x^2 \approx (1+x^2)^{-1}.

In fact, since (1+t)e^{-t} \le 1 for all t, we have the exact bounds

1 - x^2 \le e^{-x^2} \le (1+x^2)^{-1}.

Then we can take the bounds in the Laplace approximation limit:

\int_{[-1,1]} (1-x^2)^n\,dx \le \int_{[-1,1]} e^{-nx^2}\,dx \le \int_{[-1,1]} (1+x^2)^{-n}\,dx.

That is,

2\sqrt{n}\int_{[0,1]} (1-x^2)^n\,dx \le \int_{[-\sqrt{n},\sqrt{n}]} e^{-x^2}\,dx \le 2\sqrt{n}\int_{[0,1]} (1+x^2)^{-n}\,dx.

By trigonometric substitution, we compute those two bounds exactly: 2\sqrt{n}\,(2n)!!/(2n+1)!! and 2\sqrt{n}\,(\pi/2)\,(2n-3)!!/(2n-2)!!.

By taking the square root of the Wallis formula,

\frac{\pi}{2} = \prod_{n=1}^{\infty} \frac{(2n)^2}{(2n-1)(2n+1)},

we have

\sqrt{\pi} = 2\lim_{n\to\infty} \sqrt{n}\,\frac{(2n)!!}{(2n+1)!!},

the desired lower bound limit. Similarly we can get the desired upper bound limit. Conversely, if we first compute the integral with one of the other methods above, we would obtain a proof of the Wallis formula.
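The convergence of the two double-factorial bounds to √π can be observed numerically; the ratios are accumulated iteratively to avoid overflow (a sketch, with arbitrarily chosen values of n):

    import numpy as np

    def bounds(n):
        # lower bound: 2*sqrt(n)*(2n)!!/(2n+1)!!
        # upper bound: 2*sqrt(n)*(pi/2)*(2n-3)!!/(2n-2)!!
        lower_ratio = 1.0
        for k in range(1, n + 1):
            lower_ratio *= 2 * k / (2 * k + 1)
        upper_ratio = 1.0
        for k in range(1, n):
            upper_ratio *= (2 * k - 1) / (2 * k)
        return 2 * np.sqrt(n) * lower_ratio, 2 * np.sqrt(n) * (np.pi / 2) * upper_ratio

    for n in (10, 100, 1000, 10000):
        print(n, bounds(n), np.sqrt(np.pi))   # bounds close in on ~1.7724538509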

Relation to the gamma function


The integrand is an even function,

\int_{-\infty}^{\infty} e^{-x^2}\,dx = 2\int_0^{\infty} e^{-x^2}\,dx.

Thus, after the change of variable x = \sqrt{t}, this turns into the Euler integral

2\int_0^{\infty} e^{-x^2}\,dx = 2\int_0^{\infty} \frac{1}{2}\, e^{-t}\, t^{-\frac{1}{2}}\,dt = \Gamma\!\left(\frac{1}{2}\right) = \sqrt{\pi},

where \Gamma(z) = \int_0^{\infty} t^{z-1} e^{-t}\,dt is the gamma function. This shows why the factorial of a half-integer is a rational multiple of \sqrt{\pi}. More generally,

\int_0^{\infty} x^n e^{-a x^b}\,dx = \frac{\Gamma\left((n+1)/b\right)}{b\,a^{(n+1)/b}},

which can be obtained by substituting t = a x^b in the integrand of the gamma function to get \Gamma(z) = a^z b \int_0^{\infty} x^{bz-1} e^{-a x^b}\,dx.
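Both Γ(1/2) = √π and the more general formula can be checked numerically; the parameters n = 2, a = 1.5, b = 4 below are arbitrary choices (a sketch):

    import numpy as np
    from scipy.integrate import quad
    from scipy.special import gamma

    print(gamma(0.5), np.sqrt(np.pi))        # both ~1.7724538509

    # integral_0^inf x^n exp(-a*x^b) dx versus Gamma((n+1)/b) / (b * a^((n+1)/b))
    n, a, b = 2, 1.5, 4
    val, _ = quad(lambda x: x**n * np.exp(-a * x**b), 0, np.inf)
    print(val, gamma((n + 1) / b) / (b * a**((n + 1) / b)))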

Generalizations


The integral of a Gaussian function


The integral of an arbitrary Gaussian function is

\int_{-\infty}^{\infty} e^{-a(x+b)^2}\,dx = \sqrt{\frac{\pi}{a}}.

An alternative form is

\int_{-\infty}^{\infty} e^{-(ax^2 - bx + c)}\,dx = \sqrt{\frac{\pi}{a}}\, e^{\frac{b^2}{4a} - c}.

This form is useful for calculating expectations of some continuous probability distributions related to the normal distribution, such as the log-normal distribution, for example.
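The alternative form can be checked numerically as well; the values a = 2.0, b = 1.5, c = 0.7 are arbitrary choices with a > 0 (a sketch):

    import numpy as np
    from scipy.integrate import quad

    a, b, c = 2.0, 1.5, 0.7
    val, _ = quad(lambda x: np.exp(-(a * x**2 - b * x + c)), -np.inf, np.inf)
    print(val, np.sqrt(np.pi / a) * np.exp(b**2 / (4 * a) - c))   # both ~0.8246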

Complex form

Main article: Fresnel integral

\int_{-\infty}^{\infty} e^{\frac{1}{2} i t^2}\,dt = e^{i\pi/4}\sqrt{2\pi},

and more generally,

\int_{\mathbb{R}^N} e^{\frac{1}{2} i \mathbf{x}^T A \mathbf{x}}\,dx = \det(A)^{-\frac{1}{2}} \left(e^{i\pi/4}\sqrt{2\pi}\right)^N

for any positive-definite symmetric matrix A.
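The one-dimensional case can be cross-checked through the Fresnel integrals S and C, which both tend to 1/2: rescaling t = u√π gives ∫ e^{it^2/2} dt = 2√π (C(∞) + iS(∞)) = √π(1 + i) = √(2π) e^{iπ/4}. A sketch using SciPy's fresnel, with a large argument standing in for the limit (the cutoff 1e4 is arbitrary):

    import numpy as np
    from scipy.special import fresnel

    S, C = fresnel(1e4)                  # S(x), C(x) -> 1/2 as x -> infinity
    approx = 2 * np.sqrt(np.pi) * (C + 1j * S)
    exact = np.sqrt(2 * np.pi) * np.exp(1j * np.pi / 4)
    print(approx, exact)                 # both ~1.7725 + 1.7725j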

n-dimensional and functional generalization


Suppose A is a symmetric positive-definite (hence invertible) n × n precision matrix, which is the matrix inverse of the covariance matrix. Then,

\begin{aligned}
\int_{\mathbb{R}^n} \exp\left(-\frac{1}{2}\mathbf{x}^{\mathsf{T}} A \mathbf{x}\right) d^n\mathbf{x}
&= \int_{\mathbb{R}^n} \exp\left(-\frac{1}{2}\sum_{i,j=1}^{n} A_{ij} x_i x_j\right) d^n\mathbf{x} \\
&= \sqrt{\frac{(2\pi)^n}{\det A}} = \sqrt{\frac{1}{\det(A/2\pi)}} = \sqrt{\det\left(2\pi A^{-1}\right)}.
\end{aligned}

By completing the square, this generalizes to

\int_{\mathbb{R}^n} \exp\left(-\tfrac{1}{2}\mathbf{x}^{\mathsf{T}} A \mathbf{x} + \mathbf{b}^{\mathsf{T}}\mathbf{x} + c\right) d^n\mathbf{x}
= \sqrt{\det\left(2\pi A^{-1}\right)}\,\exp\left(\tfrac{1}{2}\mathbf{b}^{\mathsf{T}} A^{-1}\mathbf{b} + c\right).

This fact is applied in the study of the multivariate normal distribution.
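A two-dimensional numerical check of the determinant formula (a sketch; the positive-definite matrix A below is an arbitrary choice):

    import numpy as np
    from scipy.integrate import dblquad

    A = np.array([[2.0, 0.5],
                  [0.5, 1.0]])           # arbitrary symmetric positive-definite matrix

    def integrand(y, x):
        v = np.array([x, y])
        return np.exp(-0.5 * v @ A @ v)

    val, _ = dblquad(integrand, -np.inf, np.inf, -np.inf, np.inf)
    print(val, np.sqrt((2 * np.pi)**2 / np.linalg.det(A)))   # both ~4.7497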

Also,

\int x_{k_1} \cdots x_{k_{2N}} \exp\left(-\frac{1}{2}\sum_{i,j=1}^{n} A_{ij} x_i x_j\right) d^n x
= \sqrt{\frac{(2\pi)^n}{\det A}}\,\frac{1}{2^N N!}\sum_{\sigma \in S_{2N}} (A^{-1})_{k_{\sigma(1)} k_{\sigma(2)}} \cdots (A^{-1})_{k_{\sigma(2N-1)} k_{\sigma(2N)}},

where σ is a permutation of {1, ..., 2N} and the extra factor on the right-hand side is the sum over all combinatorial pairings of {1, ..., 2N} of N copies of A^{-1}.
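For N = 1 the sum over pairings collapses to (A^{-1})_{ij}; a two-dimensional sketch of this case (the matrix A and the indices k_1 = 0, k_2 = 1 are arbitrary choices):

    import numpy as np
    from scipy.integrate import dblquad

    A = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
    Ainv = np.linalg.inv(A)

    def integrand(y, x):
        # x_0 * x_1 * exp(-1/2 x^T A x) for the moment with k_1 = 0, k_2 = 1
        v = np.array([x, y])
        return x * y * np.exp(-0.5 * v @ A @ v)

    val, _ = dblquad(integrand, -np.inf, np.inf, -np.inf, np.inf)
    print(val, np.sqrt((2 * np.pi)**2 / np.linalg.det(A)) * Ainv[0, 1])   # both ~ -1.357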

Alternatively,[4]

\int f(\mathbf{x}) \exp\left(-\frac{1}{2}\sum_{i,j=1}^{n} A_{ij} x_i x_j\right) d^n\mathbf{x}
= \sqrt{\frac{(2\pi)^n}{\det A}}\,\left.\exp\left(\frac{1}{2}\sum_{i,j=1}^{n} \left(A^{-1}\right)_{ij} \frac{\partial}{\partial x_i}\frac{\partial}{\partial x_j}\right) f(\mathbf{x})\right|_{\mathbf{x}=0}

for some analytic function f, provided it satisfies some appropriate bounds on its growth and some other technical criteria. (It works for some functions and fails for others. Polynomials are fine.) The exponential over a differential operator is understood as a power series.

While functional integrals have no rigorous definition (or even a nonrigorous computational one in most cases), we can define a Gaussian functional integral in analogy to the finite-dimensional case.[citation needed] There is still the problem, though, that (2\pi)^{\infty} is infinite and that the functional determinant would in general also be infinite. This can be taken care of if we only consider ratios:

\begin{aligned}
&\frac{\displaystyle \int f(x_1)\cdots f(x_{2N}) \exp\left[-\iint \frac{1}{2} A(x_{2N+1}, x_{2N+2}) f(x_{2N+1}) f(x_{2N+2})\,d^d x_{2N+1}\,d^d x_{2N+2}\right] \mathcal{D}f}
{\displaystyle \int \exp\left[-\iint \frac{1}{2} A(x_{2N+1}, x_{2N+2}) f(x_{2N+1}) f(x_{2N+2})\,d^d x_{2N+1}\,d^d x_{2N+2}\right] \mathcal{D}f} \\
={}& \frac{1}{2^N N!} \sum_{\sigma \in S_{2N}} A^{-1}(x_{\sigma(1)}, x_{\sigma(2)}) \cdots A^{-1}(x_{\sigma(2N-1)}, x_{\sigma(2N)}).
\end{aligned}

In the DeWitt notation, the equation looks identical to the finite-dimensional case.

n-dimensional with linear term


If A is again a symmetric positive-definite matrix, then (assuming all are column vectors)

\begin{aligned}
\int \exp\left(-\frac{1}{2}\sum_{i,j=1}^{n} A_{ij} x_i x_j + \sum_{i=1}^{n} b_i x_i\right) d^n\mathbf{x}
&= \int \exp\left(-\tfrac{1}{2}\mathbf{x}^{\mathsf{T}} A \mathbf{x} + \mathbf{b}^{\mathsf{T}}\mathbf{x}\right) d^n\mathbf{x} \\
&= \sqrt{\frac{(2\pi)^n}{\det A}}\,\exp\left(\tfrac{1}{2}\mathbf{b}^{\mathsf{T}} A^{-1}\mathbf{b}\right).
\end{aligned}
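A two-dimensional check of the linear-term formula (a sketch; A and b below are arbitrary choices):

    import numpy as np
    from scipy.integrate import dblquad

    A = np.array([[2.0, 0.5],
                  [0.5, 1.0]])           # arbitrary symmetric positive-definite matrix
    b = np.array([0.3, -0.2])            # arbitrary linear-term vector

    def integrand(y, x):
        v = np.array([x, y])
        return np.exp(-0.5 * v @ A @ v + b @ v)

    val, _ = dblquad(integrand, -np.inf, np.inf, -np.inf, np.inf)
    expected = np.sqrt((2 * np.pi)**2 / np.linalg.det(A)) * np.exp(0.5 * b @ np.linalg.inv(A) @ b)
    print(val, expected)                 # both ~5.07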

Integrals of similar form


\int_0^{\infty} x^{2n} e^{-x^2/a^2}\,dx = \sqrt{\pi}\,\frac{a^{2n+1}(2n-1)!!}{2^{n+1}}

\int_0^{\infty} x^{2n+1} e^{-x^2/a^2}\,dx = \frac{n!}{2}\,a^{2n+2}

\int_0^{\infty} x^{2n} e^{-bx^2}\,dx = \frac{(2n-1)!!}{b^n 2^{n+1}} \sqrt{\frac{\pi}{b}}

\int_0^{\infty} x^{2n+1} e^{-bx^2}\,dx = \frac{n!}{2 b^{n+1}}

\int_0^{\infty} x^{n} e^{-bx^2}\,dx = \frac{\Gamma\left(\frac{n+1}{2}\right)}{2 b^{\frac{n+1}{2}}}

where n is a positive integer.

An easy way to derive these is by differentiating under the integral sign.

\begin{aligned}
\int_{-\infty}^{\infty} x^{2n} e^{-\alpha x^2}\,dx
&= (-1)^n \int_{-\infty}^{\infty} \frac{\partial^n}{\partial\alpha^n} e^{-\alpha x^2}\,dx \\
&= (-1)^n \frac{\partial^n}{\partial\alpha^n} \int_{-\infty}^{\infty} e^{-\alpha x^2}\,dx \\
&= \sqrt{\pi}\,(-1)^n \frac{\partial^n}{\partial\alpha^n} \alpha^{-\frac{1}{2}} \\
&= \sqrt{\frac{\pi}{\alpha}}\,\frac{(2n-1)!!}{(2\alpha)^n}.
\end{aligned}

One could also integrate by parts and find a recurrence relation to solve this.
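Two of the formulas above can be checked numerically (a sketch; the parameter values are arbitrary):

    import numpy as np
    from scipy.integrate import quad
    from scipy.special import factorial2
    from math import factorial

    # Even moment over the whole line: sqrt(pi/alpha) * (2n-1)!! / (2*alpha)^n
    alpha, n = 1.7, 3
    val, _ = quad(lambda x: x**(2 * n) * np.exp(-alpha * x**2), -np.inf, np.inf)
    print(val, np.sqrt(np.pi / alpha) * factorial2(2 * n - 1) / (2 * alpha)**n)

    # Odd moment over the half line: n! / (2 b^(n+1))
    b = 1.7
    val, _ = quad(lambda x: x**(2 * n + 1) * np.exp(-b * x**2), 0, np.inf)
    print(val, factorial(n) / (2 * b**(n + 1)))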

Higher-order polynomials


Applying a linear change of basis shows that the integral of the exponential of a homogeneous polynomial in n variables may depend only on SL(n)-invariants of the polynomial. One such invariant is the discriminant, zeros of which mark the singularities of the integral. However, the integral may also depend on other invariants.[5]

Exponentials of other even polynomials can be solved numerically using series. These may be interpreted as formal calculations when there is no convergence. For example, the solution to the integral of the exponential of a quartic polynomial is[citation needed]

\int_{-\infty}^{\infty} e^{a x^4 + b x^3 + c x^2 + d x + f}\,dx
= \frac{1}{2} e^{f} \sum_{\substack{n,m,p = 0 \\ n+p \equiv 0 \pmod 2}}^{\infty}
\frac{b^n}{n!}\frac{c^m}{m!}\frac{d^p}{p!}\,
\frac{\Gamma\left(\frac{3n+2m+p+1}{4}\right)}{(-a)^{\frac{3n+2m+p+1}{4}}}.

The n + p ≡ 0 (mod 2) requirement is because the integral from −∞ to 0 contributes a factor of (−1)^{n+p}/2 to each term, while the integral from 0 to +∞ contributes a factor of 1/2 to each term. These integrals turn up in subjects such as quantum field theory.
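As a numerical sketch of the quoted series, take b = d = f = 0 so that only the m-sum survives (the terms with n = p = 0); a = −1 and c = 0.5 are arbitrary choices with a < 0:

    import numpy as np
    from scipy.integrate import quad
    from scipy.special import gamma
    from math import factorial

    a, c = -1.0, 0.5
    lhs, _ = quad(lambda x: np.exp(a * x**4 + c * x**2), -np.inf, np.inf)

    # Series with b = d = f = 0: (1/2) * sum_m c^m/m! * Gamma((2m+1)/4) / (-a)^((2m+1)/4)
    rhs = 0.5 * sum(c**m / factorial(m) * gamma((2 * m + 1) / 4) / (-a)**((2 * m + 1) / 4)
                    for m in range(60))
    print(lhs, rhs)                      # both ~2.187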


References


Citations

  1. Stahl, Saul (April 2006). "The Evolution of the Normal Distribution" (PDF). MAA.org. Archived from the original (PDF) on January 25, 2016. Retrieved May 25, 2018.
  2. Cherry, G. W. (1985). "Integration in Finite Terms with Special Functions: the Error Function". Journal of Symbolic Computation. 1 (3): 283–302. doi:10.1016/S0747-7171(85)80037-7.
  3. Lee, Peter M. "The Probability Integral" (PDF).
  4. "Reference for Multidimensional Gaussian Integral". Stack Exchange. March 30, 2012.
  5. Morozov, A.; Shakirov, Sh. (2009). "Introduction to integral discriminants". Journal of High Energy Physics. 2009 (12): 002. arXiv:0903.2595. Bibcode:2009JHEP...12..002M. doi:10.1088/1126-6708/2009/12/002.
