Max Planck (1858-1947)

Final Answers
© 2000-2023 Gérard P. Michon, Ph.D.

Numerical Constants

It can be of no practical use to know that Pi is irrational,
but if we can know, it surely would be intolerable not to know.

Ted Titchmarsh (1899-1963)


Related Links (Outside this Site)

Mathematical Constants :

Numbers, Constants and Computation by Xavier Gourdon and Pascal Sebah.
Constants and Records of Computation by Pascal Sebah (2010-08-12).
Records for the computation of constants by Simon Plouffe (June 2000).
Constants by Eric W. Weisstein | Constants by Stanislav Sýkora
Some products of rational functions over the primes by Gerhard Niklasch (2002).
Earliest Uses of Symbols for Constants by Jeff Miller
Quotes about constants

Mathematical Constants by Steven R. Finch [dedicated to Philippe Flajolet 1948-2011]

Physical Constants :

Latest CODATA values of the fundamental physical constants (NIST)
Adjusting the Values of the Fundamental Constants, Mohr & Taylor (2001).
Bureau International des Poids et Mesures (BIPM).
Universal [Fine Structure] Constant Might Not Be Constant (2005-04-11)

Videos :

Dimensionless Physical Constants and Large Number Hypothesis by Paul Dirac.
Does the Gravitational Constant Vary? (53:49) by Paul Dirac (1979).
Could gravity vary with time? (6:09) by Freeman Dyson (2016-09-05).
Are the Fundamental Constants Changing? (14:51) Matt O'Dowd (2017-09-28).


Fundamental Mathematical Constants


(2003-07-26) 0
Zero is a number like any other, only more so...

Zero is probably the most misunderstood number. Even the imaginary number i is probably better understood, (because it's usually introduced only to comparatively sophisticated audiences). It took humanity thousands of years to realize what a great mathematical simplification it was to have an ordinary number used to indicate "nothing", the absence of anything to count... The momentous introduction of zero metamorphosed the ancient Indian system of numeration into the familiar decimal system we use today.

The counting numbers start with 1, but the natural integers start with 0... Most mathematicians prefer to start the indexing of the terms in a sequence with zero, if at all possible. Physicists do that too, in order to mark the origin of a continuous quantity: If you want to measure 10 periods of a pendulum, say "0" when you see it cross a given point from left to right (say) and start your stopwatch. Keep counting each time the same event happens again and stop your timepiece when you reach "10", for this will mark the passing of 10 periods. If you don't want to use zero in that context, just say something like "Umpf" when you first press your stopwatch; many do... Just a joke!

A universal tradition, which probably predates the introduction of zero by a few millennia, is to use counting numbers (1,2,3,4...) to name successive intervals of time; a newborn baby is "in its first year", whereas a 24-year old is in his 25th. When applied to calendars, this unambiguous tradition seems to disturb more people than it should. Since the years of the first century are numbered 1 to 100, the second century goes from 101 to 200, and the twentieth century consists of the years 1901 to 2000. The third millennium starts with January 1, 2001. Quantum mechanics was born in the nineteenth century (with Planck's explanation for the blackbody law, on 1900-12-14).

For some obscure reason, many people seem to have a mental block about some ordinary mathematics applied to zero. A number of journalists, who should have known better, once questioned the simple fact that zero is even. Of course it is: Zero certainly qualifies as a multiple of two (it's zero times two). Also, in the integer sequence, any even number is surrounded by two odd ones, just like zero is surrounded by the odd integers -1 and +1... Nevertheless, we keep hearing things like: "Zero, should be an exception, an integer that's neither even nor odd." Well, why on Earth would anyone want to introduce such unnatural exceptions where none is needed?

What about 0⁰ ? Well, anything raised to the power of zero is equal to unity and a closer examination would reveal that there's no need to make an exception for zero in this case either: Zero to the power of zero is equal to one! Any other "convention" would invalidate a substantial portion of the mathematical literature (especially concerning common notations for polynomials and/or power series).

A related discussion involves the factorial of zero (0!) which is also equal to 1. However, most people seem less reluctant to accept this one, because the generalization of the factorial function (involving the Gamma function) happens to be continuous about the origin...
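As a quick sanity check, most computing environments adopt the same conventions; here is a minimal Python illustration (Python's own choices, of course, not a proof):

    import math

    print(0 ** 0)             # 1 : the empty-product convention
    print(math.factorial(0))  # 1 : 0! is an empty product too
    # Evaluating the polynomial 3 + 5x at x = 0 as a sum of c * x**k relies on 0**0 == 1 :
    print(sum(c * 0 ** k for k, c in enumerate([3, 5])))   # 3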


(2003-07-26) 1
The unit number to which all nonzero numbers refer.

(2003-07-26) π = 3.141592653589793238462643383279502884+
Pi is the ratio of the perimeter of a circle to its diameter.

The symbol π for the most famous transcendental number was introduced in a 1706 textbook by William Jones (1675-1749) reportedly because it's the first letter of the Greek verb perimetrein ("to measure around") from which the word "perimeter" is derived. Euler popularized the notation after 1736. It's not clear whether Euler knew of the previous usage pioneered by Jones.

Historically, ancient mathematicians did convince themselves that LR/2 was the area of the surface generated by a segment of length R when one of its extremities (the "apex") is fixed and the other extremity has a trajectory of length L (which remains perpendicular to that segment).

The record shows that they did this for planar geometry (in which case the trajectory is a circle) but the same reasoning would apply to nonplanar trajectories as well (any curve drawn on the surface of sphere centered on the apex will do).

They reasoned that the trajectory (the circle) could be approximated by a polygonal line with many small sides. The surface could then be seen as consisting of many thin triangles whose heights were very nearly equal to R, whereas the base was very nearly a portion of the trajectory. As the area of each triangle is R/2 times such a portion, the area of the whole surface is R/2 times the length of the entire trajectory [QED?].

Of course, this type of reasoning was made fully rigorous only with the advent of infinitesimal calculus, but it did convince everyone of the existence of a single number π which would give both the perimeter (2πR) and the surface area (πR²) of a circle of radius R...

The ancient problem of squaring the circle asked for a ruler and compass construction of a square having the same area as a circle of given diameter. Such a thing would constitute a proof that π is constructible, which it's not. Therefore, it's not possible to square the circle...

π isn't even algebraic (i.e., it's not the root of any polynomial with integer coefficients). All constructible numbers are algebraic but the converse doesn't hold. For example, the cube root of two is algebraic but not constructible, which is to say that there's no solution to another ancient puzzle known as the Delian problem (or duplication of the cube). A number which is not algebraic is called transcendental.

In 1882, π was shown to be transcendental by C.L. Ferdinand von Lindemann (1852-1939) using little more than the tools devised 9 years earlier by Charles Hermite to prove the transcendence of e (1873).

Lindemann was the advisor of at least 49 doctoral students. The three earliest ones had stellar academic careers and scientific achievements: David Hilbert (1885), Minkowski (1885) and Sommerfeld (1891).

π was proved irrational much earlier (1761) by Lambert (1728-1777).

Pi Day 2018

Since 1988, Pi Day is celebrated worldwide on March 14 (3-14 is the beginning of the decimal expansion of Pi and it's also the Birthday of Albert Einstein, 1879-1955). This geeky celebration was the brainchild of the physicist Larry Shaw (1939-2017). The thirtieth Pi Day was celebrated by Google with a Doodle on their home page, on 2018-03-14.

On that fateful Day, Stephen Hawking (1942-2018) died at the age of 76.

Expansion of Pi as a Continued Fraction | Mnemonics for pi | Wikipedia
Video : The magic and mystery of pi by Norman J. Wildberger (2013, dubious "conclusion").


(2003-07-26) √2 = 1.414213562373095048801688724209698+
Root 2. The diagonal of a square of unit side. Pythagoras' Constant.
He is unworthy of the name of man who is ignorant of the fact
that the diagonal of a square is incommensurable with its side.

Plato (427-347 BC)

When they learned about the irrationality of √2, the Pythagoreans sacrificed 100 oxen to the gods (a so-called hecatomb)... The followers of Pythagoras (c. 569-475 BC) kept this sensational discovery a secret to be revealed to the initiated mathematikoi only.

At least one version of a dubious legend says that the man who disclosed that dark secret was thrown overboard and perished at sea.

The martyr may have been Hippasus of Metapontum and the death sentence—reportedly handed out by Pythagoras himself—may have been a political retribution for starting a rival sect, whether or not the schism revolved around the newly discovered concept of irrationality. Eight centuries later, Iamblicus reported that Hippasus had drowned because of his publication of the construction of a dodecahedron inside a sphere (something construed as a sort of community secret).

Hippasus of Metapontum is credited with the classical proof (ca. 500 BC) which is summarized below. It is based on the fundamental theorem of arithmetic (i.e., the unique factorization of any integer into primes).

The square of any fraction features an even number of prime factors both in the numerator and in the denominator. Those cannot cancel pairwise to yield a single prime, like 2, in lowest terms. QED

The irrationality of the square root of 2 may also be proved very nicely using the method of infinite descent, without any notion of divisibility !

Video : What was up with Pythagoras? by Vi Hart (2012-06-12)


(2013-07-17) √3 = 1.732050807568877293527446341505872+
Root 3. Diagonal of a cube of unit side. Constant of Theodorus.

Theodorus taught mathematics to Plato, who reported that he was teaching about the irrationality of the square root of all integers besides perfect squares "up to 17", before 399 BC. Of course, the theorem of Theodorus is true without that artificial restriction (which Theodorus probably imposed for pedagogical purposes only). Once the conjecture is made, the truth of the general theorem is fairly easy to establish.

Elsewhere on this site, we give a very elegant short modern proof of the general theorem, by the method of infinite descent. A more pedestrian approach, probably used by Theodorus, is suggested below...

There's also a partial proof which settles only the cases below 17. Some students of the history of mathematics jumped to the conclusion that this must have been the (lost) reasoning of Theodorus (although this guess flies in the face of the fact that the Greek words used by Plato do mean "up to 17" and not "up to 16"). Let's present that weak argument, anachronistically, in the vocabulary of congruences, for the sake of brevity:

If q is an odd integer with a rational square root expressed in lowest terms as x/y, then:

q y² = x²

Because q is odd, so are both sides (or else x and y would have a common even factor). Therefore, the two odd squares are congruent to 1 modulo 8 and q must be too. Below 17, the only possible values of q are 1 and 9 (both of which are perfect squares). QED
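For readers who want to see the congruence at work, here is a small Python check (an illustration only; the names are mine):

    # Every odd square is congruent to 1 modulo 8, so q y^2 = x^2 (with q, x, y odd)
    # forces q itself to be congruent to 1 modulo 8.
    odd_square_residues = {(k * k) % 8 for k in range(1, 20001, 2)}
    print(odd_square_residues)                          # {1}
    print([q for q in range(3, 17, 2) if q % 8 == 1])   # only q = 9 survives below 17 (besides the trivial q = 1)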

This particular argument doesn't settle the case of q = 17 (which Theodorus was presenting in class as solved) and it's not much simpler (if at all) than a discussion based on a full factorization of both sides (leading to a complete proof by mere generalization of the method which had established the irrationality of the square root of 2, one century earlier).

Therefore, my firm opinion is that Theodorus himself knew very well that his theorem was perfectly general, because he had proved it so... The judgement of history that the square root of 3 was the second number proved to be irrational seems fair. So does the naming of that constant and the related theorem after Theodorus of Cyrene (465-398 BC).


(2003-07-26) φ = 1.61803398874989484820458683436563811772+
The diagonal of a regular pentagon of unit side: φ = (1+√5) / 2

φ² = 1 + φ

This ubiquitous number is variously known as the Golden Number, the Golden Section, the Golden Mean, the Divine Proportion or the Fibonacci Ratio (because it's the limit of the ratio of consecutive terms in the Fibonacci sequence). It's the aspect ratio of a rectangle whose semiperimeter is to the larger side what the larger side is to the smaller one.
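A two-line numerical illustration of both properties just mentioned (a Python sketch, not part of the original text):

    phi = (1 + 5 ** 0.5) / 2
    print(phi, phi * phi - (1 + phi))   # 1.618033988749895 and ~0 : phi^2 = 1 + phi, up to rounding

    a, b = 1, 1                         # consecutive Fibonacci numbers
    for _ in range(40):
        a, b = b, a + b
    print(b / a)                        # the ratio approaches phi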

The Greek symbol φ (phi) is the initial of Φειδίας (Pheidias), the name of the Great Pheidias (c.480-430 BC) who created the Statue of Zeus at Olympia (c.435 BC) the fourth oldest of the Seven Wonders of the Ancient World. Pheidias also created the sculptures on the Parthenon, whose architecture embodied the golden ratio, half a century after Pythagoras described it.

The 5 Fifth Roots of Unity | Continued Fraction | Wythoff's Game

Metallic means | Beyond the Golden Ratio (14:46) by Gabe Perez-Giz (PBS Infinite Series, 2018-01-25).


(2003-07-26) e = 2.718281828459045235360287471352662497757+
The base of an exponential function equal to its own derivative: Σ 1/n!

Some US students call this number the Andrew Jackson number, because of an obscure way to memorize its first 16 digits: 2.718281828459045.

Among many other things, e is the limit of (1 + 1/n)ⁿ as n tends to infinity. Its continued fraction expansion is surprisingly simple:

[2; 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, 1, 1, 10, 1, 1, 12, 1, 1, ... 2n+2, 1, 1, ... ]
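The pattern can be checked numerically; the short Python sketch below (names mine) extracts the first partial quotients from a rational approximation of e:

    import math
    from fractions import Fraction

    e_approx = sum(Fraction(1, math.factorial(k)) for k in range(30))  # error < 1/30!
    cf, x = [], e_approx
    for _ in range(19):
        a = x.numerator // x.denominator   # integer part
        cf.append(a)
        x = 1 / (x - a)                    # continued-fraction step
    print(cf)   # [2, 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, 1, 1, 10, 1, 1, 12, 1]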

e is called Euler's number (not to be confused with Euler's constant γ ).

e was first proved transcendental by Charles Hermite (1822-1901) in 1873.

The letter e may now no longer be used to denote
anything other than this positive universal constant.

Edmund Landau (1877-1938)

The Invention of Logarithms | Mnemonics for e


(2014-05-15) 1-1/e = 0.632120558828557678404476229838539...
Rise time and fixed-point probability: 1/1! - 1/2! + 1/3! - 1/4! + 1/5! - ...

Every electrical engineer knows that the time constant of a first-order linear filter is the time it takes to reach 63.2% of a sudden level change.

For example, to measure a capacitor C with an oscilloscope, use a known resistor R and feed a square wave to the input of the basic first-order filter formed by R and C. Assuming the period of the wave is much larger than RC, the value of RC is equal to the time it takes the output to change by 63.2% of the peak-to-peak amplitude on every transition.

The "rise-time" which can be given automatically by modern oscillosopes is defined as the time it takes a signal to rise from 10% to 90% of its peak-to-peak amplitude. It's good to know that the RC time constant is about 45.5% of that for the above signal (it sure beats messing around with cursors just to measure a capacitor). For example, the rise time of a sinewave is 29.52% of its period (the reader may want to check that the exact number is asin(0.8)/p).

Proof : If the time constant of a first-order lowpass filter is taken as the unit of time, then its response to a unit step will be 1-exp(-t) at time t. That's 10% at time ln(10/9) and 90% at time ln(10). The rise time is the interval between those two times, namely ln(9) or nearly 2.2. The reciprocal of that is about 45.512%. More precisely:

"10-90 Rise Time" = ln(9) RC = (2.1972245773362...) RC

The time constant (RC) of a first-order lowpass filter is 45.5% of its rise time.
Rise times, by waveform (expressed in the time unit shown in brackets):
  RC-filtered long-period squarewave [time unit: RC] :  0 to 63.212% = 1 ;  10-90% = ln 9 ≈ 2.1972 ;  0-100% = n/a
  Sinewave [time unit: period] :  0 to 63.212% = ¼ + asin(1-1/e)/2π ≈ 0.3589 ;  10-90% = asin(0.8)/π ≈ 0.2952 ;  0-100% = 0.5
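The squarewave row can be reproduced in a few lines of Python (a sketch under the unit-step model used in the proof above):

    import math

    RC = 1.0                                # time constant of the first-order filter
    t10 = RC * math.log(10 / 9)             # step response 1 - exp(-t/RC) reaches 10%
    t90 = RC * math.log(10)                 # ... and 90%
    rise = t90 - t10                        # 10-90% rise time = RC ln 9
    print(rise, RC / rise)                  # 2.1972... and 0.45512... (the 45.5% figure)
    print(math.asin(0.8) / math.pi)         # ~0.2952 : the sine-wave 10-90% rise time, in periods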

Probability of existence of a fixed point :

The number 63.212...% is also famously known as the probability that a permutation of many elements will have at least one fixed point (i.e., an element equal to its image). Technically, it's only the limit of that as the number of elements tends to infinity. However, the convergence is so rapid that the difference is negligible. The exact probability for n elements is:

1/1! - 1/2! + 1/3! - 1/4! + 1/5! - 1/6! + ... - (-1)ⁿ / n!

With n = 10, for example, this is 28319 / 44800 = 0.63212053571428... (which approximates the limit to a precision smaller than 37 ppb).

A random self-mapping (not necessarily bijective) of a set of n points will have at least one fixed point with a probability that tends slowly to that same limit when n tends to infinity. The exact probability is:

1 - ( 1 - 1/n )ⁿ

For n = 10, this is 0.6513215599, which exceeds the limit by about 0.0192 (roughly 3% in relative terms).
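Both exact formulas are easy to check in Python (a quick sketch, using exact rational arithmetic for the permutation case):

    import math
    from fractions import Fraction

    n = 10
    p_perm = sum(Fraction((-1) ** (k + 1), math.factorial(k)) for k in range(1, n + 1))
    print(p_perm, float(p_perm))     # 28319/44800 = 0.6321205357142857

    p_map = 1 - (1 - 1 / n) ** n     # random self-mapping of n points
    print(p_map, 1 - math.exp(-1))   # ~0.65132 versus the limit 0.63212...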

Bandwidth of a signal from its rise-time, by Eric Bogatin (2013-11-09).
Wikipedia : Rise time


(2003-07-26) ln 2 = 0.693147180559945309417232121458176568+
The alternating sum 1 - 1/2 + 1/3 - 1/4 + 1/5 - 1/6 + 1/7 - 1/8 +...
or the straight sum ½ + (½)²/2 + (½)³/3 + (½)⁴/4 + (½)⁵/5 + ...

Both expressions come from the Mercator series : ln 2 = -H(-1) = H(½)

where H(x) = x + x²/2 + x³/3 + x⁴/4 + ... + xⁿ/n + ... = - ln(1-x)

The first few decimals of this pervasive constant are worth memorizing!
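Either series gives the constant quickly; here is the rapidly-converging H(½) version in Python (a minimal sketch):

    import math

    ln2 = sum(0.5 ** n / n for n in range(1, 60))   # Mercator series at x = 1/2
    print(ln2, math.log(2))                         # both print 0.6931471805599...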

The Logarithmic Constant: Log 2 by Xavier Gourdon and Pascal Sebah.


(2011-08-24) log 2 = 0.3010299956639811952137388947245-

When many actual computations used decimal logarithms, every engineer memorized the 5-digit value (0.30103) and trusted it to 8-digit precision.

If decibels (dB) are used, a power factor of 2 thus corresponds to 3 dB or, more precisely, 3.0103 dB.

To a filter designer, the attenuation of a first-order filter is quoted as 6 dB per octave which means that amplitudes change by a factor of 2 when frequencies change by an octave (which is a factor of 2 in frequency). A second-order low-pass filter would have an ultimate slope of 12 dB per octave, etc.
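The decibel figures quoted above are just decimal logarithms in disguise (a quick Python check):

    import math

    print(10 * math.log10(2))   # ~3.0103 dB : a power ratio of 2
    print(20 * math.log10(2))   # ~6.0206 dB : an amplitude ratio of 2 (one octave, at 6 dB per octave)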


(2003-07-26) γ = 0.577215664901532860606512090082402431+
The limit of [1 + 1/2 + 1/3 + 1/4 + ... + 1/n] - ln(n) , as n → ∞

The Euler-Mascheroni constant is named after Leonhard Euler (1707-1783) and Lorenzo Mascheroni (1750-1800). It's also known as Euler's constant (as opposed to Euler's number e ). It's arguably best defined as a slope in the Gamma function:

γ = - Γ'(1)

The previous sum can be recast as the partial sum of a convergent series, by introducing telescoping terms. The general term of that series (for n ≥ 2) is:

1/n - ln(n) + ln(n-1) = 1/n + ln(1-1/n) = - Σ_{p≥2} 1/(p nᵖ)

Therefore, since terms in absolutely convergent series can be reordered:

1 - γ = Σ_{n≥2} Σ_{p≥2} 1/(p nᵖ) = Σ_{p≥2} Σ_{n≥2} 1/(p nᵖ)
Therefore, using the zeta function: 1 - γ = Σ_{p≥2} ( ζ(p) - 1 ) / p
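That last series converges geometrically (its terms behave like 2⁻ᵖ/p), so it's a practical way to compute γ; a short sketch using the mpmath library (assumed available):

    import mpmath

    mpmath.mp.dps = 40
    gamma = 1 - sum((mpmath.zeta(p) - 1) / p for p in range(2, 200))
    print(gamma)          # 0.577215664901532860606512090082402431...
    print(+mpmath.euler)  # mpmath's built-in value, for comparison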

The constant γ was calculated to 16 digits by Euler in 1781. The symbol γ is due to Mascheroni, who gave 32 digits in 1790 (his other claim to fame is the Mohr-Mascheroni theorem). Only the first 19 of Mascheroni's digits were correct. The mistake was only spotted in 1809 by Johann von Soldner (the eponym of another constant) who obtained 24 correct decimals...

In 1878, the thing was worked out to 263 decimal places by the astronomer John Couch Adams (1819-1892) who had almost discovered Neptune as a young man (in 1846).

In 1962, gamma was computed electronically to 1271 digits by D.E. Knuth, then to 3566 digits by Dura W. Sweeney (1922-1999) with a new approach.

7000 digits were obtained in 1974 (W.A. Beyer & M.S. Waterman) and 20 000 digits in 1977 (by R.P. Brent, using Sweeney's method). Teaming up with Edwin McMillan (1907-1991; Nobel 1951) Brent would produce more than 30 000 digits in 1980.

Alexander J. Yee, a 19-year old freshman at Northwestern University, made UPI news (on 2007-04-09) for his computation of 116 580 041 decimal places in 38½ hours on a laptop computer, in December 2006. Reportedly, this broke a previous record of 108 million digits, set in 47 hours and 36 minutes of computation (from September 23 to 26, 1999) by the Frenchmen Xavier Gourdon (X1989) and Patrick Demichel.

Unbeknownst to Alex Yee and the record books (kept by Gourdon and Sebah) that record had been shattered earlier (with 2 billion digits) by Shigeru Kondo and Steve Pagliarulo. Competing against that team, Alexander J. Yee and Raymond Chan have since computed about 30 billion digits of γ (and also of Log 2) as of 2009-03-13. Kondo and Yee then collaborated to produce 1 trillion digits of √2 in 2010. Later that year, they computed 5 trillion digits of π, breaking the previous record of 2.7 trillion digits of π (2009-12-31) held by the Frenchman Fabrice Bellard (X1993, born in 1972).

Everybody's guess is that γ is transcendental but this constant has not even been proven irrational yet...

Charles de la Vallée-Poussin (1866-1962), made a baron in 1928, is best known for having given an independent proof of the Prime Number Theorem in 1896, at the same time as Jacques Hadamard (1865-1963). In 1898, he investigated the average fraction by which the quotient of a positive integer n by a lesser prime falls short of an integer. Vallée-Poussin proved that this tends to γ for large values of n (and not to ½, as might have been guessed).

The Euler constant: γ by Xavier Gourdon and Pascal Sebah (2004)

The mystery of 0.577 (10:02) by Tony Padilla (Numberphile, 2016-10-05).


(2003-07-26) G = 0.91596559417721901505460351493238411+
Catalan's Constant, alternating sum of the reciprocal odd squares. β(2)

G = β(2) = 1 - 1/9 + 1/25 - 1/49 + ... + (-1)ⁿ/(2n+1)² + ...


This is named after Eugène Catalan (1814-1894; X1833).

Catalan's name has also been given to the Catalan solids (the duals of the Archimedean solids) and the famous integer sequence of Catalan numbers.

Dirichlet Beta Function (β)


(2003-07-26) ζ(3) = 1.20205690315959428539973816151144999+
Apéry's Constant, the sum of the reciprocal cubes: Σ 1/n³ (A002117)
Apéry's incredible proof appears to be a mixture of miracles and mysteries.
Alfred Jacobus van der Poorten (1942-2010)

What caused the admiration of Alf van der Poorten is the proof of the irrationality of ζ(3) by the French mathematician Roger Apéry (1916-1994) in 1977. That proof is based on an equation featuring a rapidly-converging series:

ζ(3)  =  (5/2)  Σ_{k≥1}  (-1)^(k-1) / [ k³ C(2k,k) ]        where C(2k,k) denotes a central binomial coefficient.

The reciprocal of Apéry's constant 1/ζ(3) is equally important: (A088453)

1/ζ(3) = 0.831907372580707468683126278821530734417...

It is the density of cubefree integers (see A160112) and the probability that three random integers are relatively prime. That constant also appears in the expression of the average energy of a thermal photon.
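The coprimality interpretation is easy to test empirically; a rough Monte-Carlo sketch in Python (the sample size and the bound are arbitrary choices of mine):

    import math, random
    import mpmath

    mpmath.mp.dps = 20
    print(1 / mpmath.zeta(3))      # 0.83190737258070746868...

    random.seed(1)
    N, hits = 200000, 0
    for _ in range(N):
        a, b, c = (random.randrange(1, 10 ** 6) for _ in range(3))
        if math.gcd(math.gcd(a, b), c) == 1:
            hits += 1
    print(hits / N)                # close to 0.8319 (sampling noise of a few parts per thousand)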

Average Energy of a Thermal Photon | Experimental Mathematics and Integer Relations

Apéry's constant by Jacob Krol (Student project, Fall 2016).


(2003-07-26) i is the basic imaginary number: i² = -1
If +1 is one step forward, i is a step sideways to the left...

Many people who should know better (including brilliant physicists like Steven Weinberg or Leonard Susskind) have not been able to resist the temptation of "defining" i as √(-1) to avoid a more proper introduction.

Such a shortcut must be avoided unless one is prepared to give up the most trusted properties of the square root function, including:

√(xy) = √x √y

If you are not convinced that the square root function (and its familiar symbol) should be strictly limited to nonnegative real numbers, just consider what the above relation would mean with x = y = -1.
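Floating-point complex arithmetic makes the failure concrete, because the principal branch is used throughout (a small Python illustration):

    import cmath

    x = y = -1
    print(cmath.sqrt(x * y))               # sqrt(1)  -> (1+0j)
    print(cmath.sqrt(x) * cmath.sqrt(y))   # 1j * 1j  -> (-1+0j) : the identity sqrt(xy) = sqrt(x) sqrt(y) breaks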

Neither of the two complex numbers (i and -i) whose square is -1 can be described as the "square root of -1". The square root function cannot be defined as a continuous function over the domain of complex numbers. Continuity can be rescued if the domain of the function is changed to a strange beast consisting of two properly connected copies (Riemann sheets) of the complex plane sharing the same origin. Such considerations do not belong in an introduction to complex numbers. Neither does the deceptive square-root symbol (√).

Idiot's Guide to Complex Numbers


Exotic Mathematical Constants

These important mathematical constants are much less pervasive than the above ones...

(2008-04-13) 2^(1/3) = 1.25992104989487316476721060727822835+
The Delian constant is the scaling factor which doubles a volume.

The cube root of 2 is much less commonly encountered than its square root (1.414...). There's little need to remember that it's roughly equal to 1.26 but it can be useful (e.g., a 5/8" steel ball weighs almost twice as much as a 1/2" one).

The fact that this quantity cannot be constructed "classically" (i.e., with ruler and compass alone) shows that there's no "classical" solution to the so-called Delian problem whereby the Athenians were asked by the Oracle of Apollo at Delos to resize the altar of Apollo to make it "twice as large".

The Delian constant has also grown to be a favorite example of an algebraic number of degree 3 (arguably, it's the simplest such number). Thus, its continued fraction expansion (CFE) has been under considerable scrutiny... There does not seem to be anything special about it, but the question remains theoretically open whether it's truly normal or not (by contrast, the CFE of any algebraic number of degree 2 is periodic ).

In Western music theory, the chromatic octave (the interval which doubles the frequency of a tone) is subdivided into 12 equal intervals (semitones). That's to say: three equal steps of four semitones each result in a doubling of the frequency. An interval of four semitones is known as a major third. Three consecutive major thirds correspond to a doubling of the frequency. Thus, the Delian constant (1.259921...) is the frequency ratio corresponding to a major third.
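In equal temperament a semitone is a ratio of 2^(1/12), so four of them give exactly the cube root of 2 (a tiny numerical sketch; the 440 Hz reference pitch is just an example):

    semitone = 2 ** (1 / 12)
    print(semitone ** 4, 2 ** (1 / 3))   # both print ~1.25992104989487... : a major third
    print(440 * 2 ** (1 / 3))            # ~554.37 Hz, the equal-tempered major third above A440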

A Delian brick is a cuboid with sides proportional to 1, 2^(1/3) and 2^(2/3).

That term was coined by Ed Pegg on 2018-06-19. A planar cut across the middle of its longest side splits a Delian brick into two Delian bricks.

That's the 3-D equivalent of a √2 aspect ratio for rectangles, on which is based the common A-series of paper sizes (as are the B-series, used for some playing cards, and the C-series for envelopes.)

The Delian Brick and other 3D self-similar dissections by Ed Pegg (2018-07-03)


(2009-02-08) G = 0.834626841674073186281429732799046808994-
Gauss's constant (G) is the reciprocal of agm(1,√2)

On May 30, 1799, Carl Friedrich Gauss found the following expression to be the reciprocal of the arithmetic-geometric mean between 1 and √2.


G  =  (2/π) ∫₀¹ dx / √(1-x⁴)  =  B(¼,½) / (2π)  =  Γ(¼)² / (2π)^(3/2)  =  0.83462684...

It's also the reciprocal of the Wallis integral of order ½ : I_{1/2} = 1/G.

The continued fraction expansion (CFE) of G is: (A053002)

G = [ 0; 5, 21, 3, 4, 14, 1, 1, 1, 1, 1, 3, 1, 15, 1, 3, 8, 36, 1, 2, 5, 2, 1, 1 ... ]
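Both the AGM definition and the Gamma-function expression above are easy to evaluate with mpmath (assumed available; a sketch, not part of the original text):

    import mpmath

    mpmath.mp.dps = 30
    G = 1 / mpmath.agm(1, mpmath.sqrt(2))
    print(G)   # 0.834626841674073186281429732799...
    print(mpmath.gamma(0.25) ** 2 / (2 * mpmath.pi * mpmath.sqrt(2 * mpmath.pi)))   # same value, Gamma-function form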

The symbol G is also used for Catalan's constant which is best denoted β(2) whenever there is any risk of confusion.

Wikipedia : Gauss's constant


(2015-07-12) Rayleigh factor: 1.219669891266504454926538847465+
Conventional coefficient pertaining to the diffraction limit on resolution.

This is equal to the first zero of the J1 Bessel function divided by π. It is commonly approximated as 1.22 or 1.220.

This coefficient appears in the formula which gives the limit θ of the angular resolution of a perfect lens of diameter D for light of wavelength λ :

θ = 1.220 λ / D

This precise coefficient is arrived at theoretically by using Rayleigh's criterion which states that two points of light (e.g., distant stars) can't be distinguished if their angular separation is less than the radius of their Airy disks (the angular radius of the first dark circle in the interference pattern described theoretically by George Airy in 1835).
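The stated value can be retrieved from the first zero of J1 with SciPy, assuming that library is available (a two-line sketch):

    import math
    from scipy.special import jn_zeros

    j1_first_zero = jn_zeros(1, 1)[0]     # first positive zero of the Bessel function J1
    print(j1_first_zero / math.pi)        # 1.2196698912665045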

The precise value of the factor to use is ultimately a matter of convention about what constitutes optical distinguishability. The theoretical criterion on which the above formula is based was originally proposed by Rayleigh for sources of equal magnitudes. It has proved more appealing than all other considerations, including the empirical Dawes' limit, which ignores the relevance of wavelength. Dawes' limit would correspond to a coefficient of about 1.1 at a wavelength of 507 nm (most relevant to the scotopic astronomical observations used by Dawes).

Note that the digital deconvolution of images allows finer resolutions than what the above classical formula implies.


(2003-07-30) B₁ = 0.26149721284764278375542683860869585905+
The limit of [1/2 + 1/3 + 1/5 + 1/7 + 1/11 + 1/13 + ... + 1/p] - ln(ln p)

This is often called Mertens' constant in honor of the number theorist Franz Mertens (1840-1927). It is to the sequence of primes what Euler's constant is to the sequence of integers. It's sometimes also called Kronecker's constant or the Reciprocal Prime Constant.

Proposals have been made to name this constant after Charles de la Vallée-Poussin (1866-1962) and/or Jacques Hadamard (1865-1963), the two mathematicians who first proved (independently) the Prime Number Theorem, in 1896.


(2006-06-15) Artin's Constant : C = 0.373955813619202288054728+
The product of all the factors [ 1 - 1/(q²-q) ] for prime values of q.

For any prime p besides 2 and 5, the decimal expansion of 1/p has a period at most equal to p-1 (since only this many different nonzero "remainders" can possibly show up in the long division process). Primes yielding this maximal period are called long primes [to base ten] by recreational mathematicians and others. The number 10 is a primitive root modulo such a prime p, which is to say that the first p-1 powers of 10 are distinct modulo p (the cycle then repeats, by Fermat's little theorem). Putting a = 10, this is equivalent to the condition:

a^((p-1)/d) ≠ 1 (modulo p) for any prime factor d of (p-1).

For a given prime p, there are φ(p-1) satisfactory values of a (modulo p), where φ is Euler's totient function. Conversely, for a given integer a, we may investigate the set of long primes to base a...


It seems that the proportion C(a) of such primes (among all prime numbers) is equal to the above numerical constant C, for many values of a (including negative ones) and that it's always a rational multiple of C. The precise conjecture tabulated below originated with Emil Artin (1898-1962) who communicated it to Helmut Hasse in September 1927.

Neither -1 nor a quadratic residue can be a primitive root modulo p > 3. Hence, the table's first row is as stated.

Artin's conjecture for primitive roots (1927) first refined by Dick Lehmer
(For a given "base" a, use the earliest applicable case, in the order listed.)
Base a → proportion C(a) of primes p for which a is a primitive root:

  • a = -1 or a = b² :  C(a) = 0.
  • a = b^k :  C(a) = v(k) C(b), where v is multiplicative and v(qⁿ) = q(q-2)/(q²-q-1) for prime q.
  • sf(a) mod 4 = 1 (see notation below*) :  C(a) = C Π_{q | sf(a), q prime} [ 1 - 1/(1+q-q²) ]
  • Otherwise, C(a) = C = 0.3739558136192022880547280543464164151116... This last case applies to all integers, positive (A085397) or negative (A120629) that are not perfect powers and whose squarefree part isn't congruent to 1 modulo 4, namely:
2, 3, 6, 7, 10, 11, 12, 14, 15, 18, 19, 22, 23, 24, 26, 28, 30, 31, 34, 35, 38, 39, 40 ...
-2, -4, -5, -6, -9, -10, -13, -14, -16, -17, -18, -20, -21, -22, -24, -25, -26, -29, -30, -33 ...

(*) In the above, sf (a) is the squarefree part of a, namely the integer of least magnitude which makes the product a sf (a) a square. The squarefree part of a negative integer is the opposite of the squarefree part of its absolute value.

The conjecture can be deduced from its special case about prime values of a, which states the density is C unless a is 1 modulo 4, in which case it's equal to:

[ (a² - a) / (a² - a - 1) ] C

In 1984, Rajiv Gupta and M. Ram Murty showed Artin's conjecture to be true for infinitely many values of a. In 1986, David Rodney ("Roger") Heath-Brown proved nonconstructively that there are at most 2 primes for which it fails... Yet, we don't know about any single value of a for which the result is certain!


(2003-07-30) μ = 1.451369234883381050283968485892027449493+
Ramanujan-Soldner constant, zero of the logarithmic integral: li(μ) = 0

This number is named after Johann von Soldner (1766-1833) and Srinivasa Ramanujan (1887-1920). It's also called Soldner's constant.

μ is the only positive root of the logarithmic integral function "li" (which shouldn't be confused with the older capitalized offset logarithmic integral "Li", still used by number theorists when x is large: Li x = li x - li 2 ).

li x  =  PV ∫ dt/ln t (from 0 to x)  =  PV ∫ dt/ln t (from μ to x)  =  Ei (ln x)

Li x  =  PV ∫ dt/ln t (from 2 to x)  =  li x - li 2

li 2 = 1.0451637801174927848445888891946131365226155781512...

The above integrals must be understood as Cauchy principal values whenever the singularity at t = 1 is in the interval of integration...

This last caveat fully applies to Li when x isn't known to be large. The ad-hoc definition of Li was made by Euler (1707-1783) well before Cauchy (1789-1857) gave a proper definition for the principal value of an integral.

Nowadays, there would be no reason to use the Eulerian logarithmic integral (capitalized Li) except for compatibility with the tradition that some number theorists have kept to this day. Even in the realm of number theory, I advocate the use of the ordinary logarithmic integral (lowercase li) possibly with the second definition given above (where the Soldner constant 1.451... is the lower bound of integration). That second definition avoids bickering about principal values when the argument is greater than one (the domain used by number theorists) although students may wonder at first about the origin of the "magical" constant. Wonderment is a good thing.

The function li is also called integral logarithm (French: logarithme intégral).


(2017-11-25) Landau-Ramanujan constant (Landau, 1908)
K = 0.7642236535892206629906987312500923281167905413934...
Defined by Landau and expressed as an integral by Ramanujan.

Asymptotically, the density of integers below x expressible as the sum of two squares is inversely proportional to the square root of the natural logarithm of x. The coefficient of proportionality is, by definition, the Landau-Ramanujan constant.

Ramanujan expressed as an integral the constant so defined by Landau.

Edmund Landau (1877-1938) | Srinivasa Ramanujan (1887-1920) | A064533 | Wikipedia | MathWorld


(2004-02-19) W(1) = 0.567143290409783872999968662210355550-
For no good reason, this is sometimes called the Omega constant.

It's the solution of the equation x = e^(-x)   or, equivalently, x = ln(1/x)
In other words, it's the value at point 1 of Lambert's W function.

The value of that constant could be obtained by iterating the function e^(-x), but the convergence is very slow. It's much better to iterate the function:

f (x) = (1+x) / (1 + e^x)

This has the same fixed-point but features a zero derivative there, so that the convergence is quadratic (the number of correct digits is roughly doubled with each iteration). This fast approach is an example of Newton's method.
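Here is that quadratic iteration in a few lines of Python (a sketch; five steps are already overkill at machine precision):

    import math

    x = 0.5
    for _ in range(5):
        x = (1 + x) / (1 + math.exp(x))   # Newton step for x - exp(-x) = 0
    print(x)                              # 0.5671432904097838
    print(x - math.exp(-x))               # residual ~ 0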


(2003-07-30) The two Feigenbaum constants rule the onset of chaos:
δ = 4.669201609102990671853203820466201617258185577475769-
α = -2.502907875095892822283902873218215786381271376727150-

What's known as the [first] Feigenbaum constant is the "bifurcation velocity" (δ) which governs the geometric onset of chaos via period-doubling in iterative sequences (with respect to some parameter which is used linearly in each iteration, to damp a given function having a quadratic maximum). This universal constant was unearthed in October 1975 by Mitchell J. Feigenbaum (1944-2019). The related "reduction parameter" (α) is the second Feigenbaum constant...

Feigenbaum Constant by Eric W. Weisstein (MathWorld) | Mathematical Constants by Steven R. Finch.

4.669... The Feigenbaum constant (18:54) by Ben Sparks (Numberphile, 2017-01-16).
This equation will change how you see the world (18:38) by Derek Muller (Veritasium, 2020-01-29).


(2021-06-21) Fransén-Robinson constant (inverse gamma integral).
F = 2.80777024202851936522150118655777293230808592093019829+
F  =  ∫₀^∞ dx / Γ(x)  =  e + ∫₀^∞ e^(-x) dx / [ π² + (Log x)² ]

Historically, this constant was successively computed...

A058655 (decimals) | A046943 (continued fraction) | Wikipedia | Weisstein | Gamma function

Difference between e and the Fransén-Robinson constant. Jack D'Aurizio (MathStackExchange, 2016-05-27).

Fransén-Robinson Adventure (12:39, 15:28, 26:37) by Jens Fehlau (Flammable Maths, July 2019).


(2021-08-06) Landau's Constant (upper bound is conjectured accurate).
L = 0.54325896534297670695272829530061323113886329375835699-
0.5  =  1/2  <  L  ≤  Γ(1/3) Γ(5/6) / Γ(1/6)  =  0.5432589653429767...

Edmund Landau (1877-1938) | Bloch's theorem | André Bloch (1893-1948)


(2021-07-30) Bloch's Constant (upper bound is conjectured accurate).
B = 0.47186165345268178487446879361131614907701262173944324+
0.4330127...  =  √3 / 4  <  B  ≤  Γ(1/3) Γ(11/12) / [ Γ(1/4) (1+√3)^(1/2) ]  =  0.47186165345...

It's conjectured that the above upper bound, using the Gamma function, is actually the true value of Bloch's constant, but this hasn't been proved yet. When André Bloch (1893-1948) originally stated his theorem, he merely stated that the universal constant B he introduced was no less than 1/72.

Bloch's theorem (1925)

Consider the space S of all schlicht functions (holomorphic injections on the open disk D of radius 1 centered on 0). The largest disk contained in f (D) has a radius which is no less than a certain universal positive constant B.

Bloch's constant is defined as the largest value of B for which the theorem holds. Originally, Bloch only proved that B ≥ 1/72.

Encyclopedia of Mathematics

Les théorèmes de M. Valiron sur les fonctions entières et la théorie de l'uniformisation André Bloch (1925).

Ueber die Blochsche Konstante by L.V. Ahlfors and H. Grunsky Math. Z. , 42, 671-673 (1937).

The Bloch constant of bounded harmonic mappings by Flavia Colonna (MathStackExchange, 2016-05-27).


Some Third-Tier Mathematical Constants

The neat examples in this section seem unrelated to more fundamental constants... They're also probably useless outside of the specific context in which they've popped up.

(2016-01-19) Gelfond's Constant: e^π = 23.1406926327792690...
Raising this transcendental number to the power of i gives e^(iπ) = -1.

Because i is irrational but not transcendental, the Gelfond-Schneider theorem implies that Gelfond's constant is transcendental.

Wikipedia : Gelfond's constant | Gelfond-Schneider theorem (1934) | Alexander Gelfond (1906-1968)


(2004-05-22) Brun's Constant: B₂ = 1.90216058321 (26)
Sum of the reciprocals of [pairs of] twin primes:
(1/3+1/5) + (1/5+1/7) + (1/11+1/13) + (1/17+1/19) + (1/29+1/31) + ...

This constant is named after the Norwegian mathematician who proved the sum to be convergent, in 1919: Viggo Brun (1885-1978).

The scientific notation used above and throughout Numericana indicates a numerical uncertainty by giving an estimate of the standard deviation (σ). This estimate is shown between parentheses to the right of the least significant digit (expressed in units of that digit). The magnitude of the error is thus stated to be less than this with a probability of 68.27% or so.

Thomas R. Nicely, professor of mathematics at Lynchburg College, started his computation of Brun's constant in 1993. He made headlines in the process, by uncovering a flaw in the Pentium microprocessor's arithmetic, which ultimately forced a costly (475ドルM) worldwide recall by Intel.

Usually, mathematicians have to shoot somebody to get this much publicity.
Dr. Thomas R. Nicely (quoted in The Cincinnati Enquirer)

Nicely kept updating his estimate of Brun's constant for a few years until 2010 or so, at which point he was basing his computation on the exact number of twin primes found below 1.6×10¹⁵. Because he felt a general audience could not be expected to be familiar with the aforementioned standard way scientists report uncertainties, Nicely chose to report the so-called 99% confidence level, which is three times as big. (More precisely, ±3σ is a 99.73% confidence level.) The following expressions thus denote the same value, with the same uncertainty:

1.90216 05832 09 ± 0.00000 00007 81
1.90216 05832 09 (260) [ updated: 2009-10-05 ]

As a three-digit precision on the uncertainty is usually considered overkill, most scientists would advertise that as 1.90216058321(26).

Brun's theorem | A065421 (Decimal expansion) | A065421 (Continued fraction)


(2003-08-05) 3.359885666243177553172011302918927179688905+
Prévost's Constant: Sum of the reciprocals of the Fibonacci numbers.

1/1 + 1/1 + 1/2 + 1/3 + 1/5 + 1/8 + 1/13 + 1/21 + 1/34 + 1/55 + 1/89 + ...

The sum of the reciprocals of the Fibonacci numbers was proved irrational by Marc Prévost, in the wake of Roger Apéry's celebrated proof of the irrationality of ζ(3), which has been known as Apéry's constant ever since.

The attribution to Prévost was reported by François Apéry (son of Roger Apéry) in 1996: See The Mathematical Intelligencer, vol. 18 #2, pp. 54-61: Roger Apéry, 1916-1994: A Radical Mathematician available online (look for "Prevost", halfway down the page).

The question of the irrationality of the sum of the reciprocals of the Fibonacci numbers was formally raised by Paul Erdös and may still be erroneously listed as open, despite the proof of Marc Prévost (Université du Littoral Côte d'Opale).


(2003-08-05) 0.73733830336929...
Grossman's Constant. [Not known much beyond the above accuracy.]

A 1986 conjecture of Jerrold W. Grossman (which was proved in 1987 by Janssen & Tjaden) states that the following recurrence defines a convergent sequence for only one value of x, which is now called Grossman's Constant:


a₀ = 1 ; a₁ = x ; aₙ₊₂ = aₙ / (1 + aₙ₊₁)

Similarly, there's another constant, first investigated by Michael Somos in 2000: for values of x above it, the following quadratic recurrence diverges (below it, there's convergence to a limit that's less than 1). Its value is 0.39952466709679947- (where the terminal "7-" stands for something probably close to "655").

a₀ = 0 ; a₁ = x ; aₙ₊₂ = aₙ₊₁ ( 1 + aₙ₊₁ - aₙ )

Early releases from Michael Somos contained a typo in the digits underlined above ("666" instead of "66") which Somos corrected when we pointed this out to him (2001-11-24). However, the typo still remained for several years (until 2004-04-13) in a MathSoft online article whose original author (Steven Finch) was no longer working at MathSoft at the time when a first round of notifications was sent out.


(2003-08-06) 262537412640768743.9999999999992500725971982-
Ramanujan's number: exp(π √163) is almost an integer.

The attribution of this irrational constant to Ramanujan was made by Simon Plouffe, as a monument to a famous 1975 April fools column by Martin Gardner in Scientific American (Gardner wrote that this constant had been proved to be an integer, as "conjectured by Ramanujan" in 1914 [sic!] ).

Actually, this particular property of 163 was first noticed in 1859 by Charles Hermite (1822-1901). It doesn't appear in Ramanujan's relevant 1914 paper.

There are reasons why the expression exp(π√n) should be close to an integer for specific integral values of n. In particular, when n is a large Heegner number (43, 67 and 163 are the largest Heegner numbers). The value n = 58, which Ramanujan did investigate in 1914, is also most interesting. Below are the first values of n for which exp(π√n) is less than 0.001 away from an integer:

 25: 6635623.999341134233266+
 37: 199148647.999978046551857-
 43: 884736743.999777466034907-
 58: 24591257751.999999822213241+
 67: 147197952743.999998662454225-
 74: 545518122089.999174678853550-
148: 39660184000219160.000966674358575+
163: 262537412640768743.999999999999250+
232: 604729957825300084759.999992171526856+
268: 21667237292024856735768.000292038842413-
522: 14871070263238043663567627879007.999848726482795-
652: 68925893036109279891085639286943768.000000000163739-
719: 3842614373539548891490294277805829192.999987249566012+
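Any entry in the list above can be reproduced with enough working precision; for example, with the mpmath library (assumed available):

    import mpmath

    mpmath.mp.dps = 40
    x = mpmath.exp(mpmath.pi * mpmath.sqrt(163))
    print(x)                   # 262537412640768743.999999999999250072597...
    print(mpmath.nint(x) - x)  # distance to the nearest integer, about 7.5e-13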

Kurt Heegner (1893-1965)

Ramanujan's Constant and its Cousins by Titus Piezas III (2005-01-14)

163 and Ramanujan's Number (11:29) by Alex Clark (Numberphile, 2012-03-02).


(2003-08-09) 1.1319882487943...
Viswanath's constant was computed to 8 decimals in 1999.

In 1960, Hillel Furstenberg and Harry Kesten showed that, for a certain class of random sequences, geometric growth was almost always obtained, although they did not offer any efficient way to compute the geometric ratio involved in each case. The work of Furstenberg and Kesten was used in the research that earned the 1977 Nobel Prize in Physics for Philip Anderson, Neville Mott, and John van Vleck. This had a variety of practical applications in many domains, including lasers, industrial glasses, and even copper spirals for birth control...

At UC Berkeley in 1999, Divakar Viswanath investigated the particular random sequences in which each term is either the sum or the difference of the two previous ones (a fair coin is flipped to decide whether to add or subtract). As stated by Furstenberg and Kesten, the absolute values of the numbers in almost all such sequences tend to have a geometric growth whose ratio is a constant. Viswanath was able to compute this particular constant to 8 decimals.

Currently, more than 14 significant digits are known (see A078416).


(2012-07-01) Copeland-Erdös Number: 0.23571113171923293137...
Concatenating the digits of the primes forms a normal number.

Borel defined a normal number (to base ten) as a real number whose decimal expansion is completely random, in the sense that all sequences of digits of a prescribed length are equally likely to occur at a random position in the decimal expansion.

It is well-known that almost all real numbers are normal in that sense (which is to say that the set of the other real numbers is contained in a set of zero measure). Pi is conjectured to be normal but this is not known for sure.

It is actually surprisingly difficult to define explicitly a number that can be proven to be normal. So far, all such numbers have been defined in terms of a peculiar decimal expansion. The simplest of those is Champernowne's Constant whose decimal expansion is obtained by concatenating the digits of all the integers in sequence. This number was proved to be decimally normal in 1933, by David G. Champernowne (1912-2000) as an undergraduate.

0.1234567891011121314151617181920212223242526272829303132...

In 1935, Besicovitch showed that the concatenation of all squares is normal:

0.1491625364964811001211441691962252562893243614004414845...

Champernowne had conjectured (in 1933) that a normal number would also be formed by concatenating the digits of all the primes:

0.2357111317192329313741434753596167717379838997101103107...

In 1946, that conjecture was proved by Arthur H. Copeland (1898-1970) and Paul Erdös (1913-1996) and this last number was named after them.

Note on Normal Numbers (1946) | Copeland-Erdös Number (Wikipedia) | Copeland-Erdös Constant


The 6+1 Basic Dimensionful Physical Constants ( Proleptic SI )

The Newtonian constant of gravitation is the odd one out, but each of the other 6 constants below either has an exact value defining one of the 7 basic physical units in terms of the SI second (the unit of time) or could play such a role in the near future... (The term "proleptic" in the title is a reminder that this may be wishful thinking.)

Some other set of independent constants could have been used to define the 7 basic units (for example, a conventional value of the electron's charge could replace the conventional permeability of the vacuum) but the following one was chosen after careful considerations. For the most part, it has already been enacted officially as part of the SI system ("de jure" values are pending for Planck's constant, Avogadro's number and Boltzmann's constant).

The number of physical dimensions is somewhat arbitrary. We argue that temperature ought to be an independent dimension, whereas the introduction of the mole is more of a practical convenience than an absolute necessity. A borderline case concerns radiation measurements: We have included the so-called luminous units (candela, lumen, etc.) through the de jure mechanical equivalent of light, but have left out ionizing radiation which is handled by other proper SI units (sievert, gray, etc.). Yet, both cases have a similarly debatable biological basis: Either the response of a "standard" human retina (under photopic conditions) or damage to some "average" living tissue.

On the other hand, the very important and very fundamental Gravitational Constant (G) does not make this list... With 7 dimensions and an arbitrary definition of one unit (the second) there's only room for 6 basic constants, and G was crowded out. Other systems can be designed where G has first-class status, but there's a price to pay: In the Astronomical System of Units, a precise value of G is obtained at the expense of an imprecise kilogram ! To design a system of units where both G and the kilogram have precise values would require a major breakthrough (e.g., a fundamental expression for the mass of the electron).

(2003-07-26) c = 299792458 m/s Einstein's Constant
The speed of light in a vacuum. [Exact, by definition of the meter (m)]

In April 2000, Kenneth Brecher (of Boston University) produced experimental evidence, at an unprecedented level of accuracy, which supports the main tenet of Einstein's Special Theory of Relativity, namely that the speed of light (c) does not depend on the speed of the source.

Brecher was able to claim a fabulous accuracy of less than one part in 10²⁰, improving the state-of-the-art by 10 orders of magnitude! Brecher's conclusions were based on the study of the sharpness of gamma ray bursts (GRB) received from very distant sources: In such explosive events, gamma rays are emitted from points of very different [vectorial] velocities. Even minute differences in the speeds of these photons would translate into significantly different times of arrival, after traveling over immense cosmological distances. As no such spread is observed, a careful analysis of the data translates into the fabulous experimental accuracy quoted above in support of Einstein's theoretical hypothesis.

Because a test that aims at confirming SR must necessarily be evaluated in the context of theories incompatible with SR, there will always be room for fringe scientists to remain unconvinced by Brecher's arguments (e.g., Robert S. Fritzius, 2002).

When he announced his results at the April 2000 APS meeting in Long Beach (CA), Brecher declared that the constant c appears "even more fundamental than light itself" and he urged his colleagues to give it a proper name and start calling it Einstein's constant. The proposal was well received and has only been gaining momentum ever since, to the point that the "new" name seems now fairly well accepted.

Since 1983, the constant c has been used to define the meter in terms of the second, by enacting as exact the above value of 299792458 m/s.

Where does the symbol "c" come from?

Historically, "c" was used for a constant which later came to be identified as the speed of electromagnetic propagation multiplied by the square root of 2 (this would be cÖ2, in modern terms). This constant appeared in Weber's force law and was thus known as "Weber's constant" for a while.

On at least one occasion, in 1873, James Clerk Maxwell (who normally used "V" to denote the speed of light) adjusted the meaning of "c" to let it denote the speed of electromagnetic waves instead.

In 1894, Paul Drude (1863-1906) made this explicit and was instrumental in popularizing "c" as the preferred notation for the speed of electromagnetic propagation. However, Drude still kept using the symbol "V" for the speed of light in an optical context, because the identification of light with electromagnetic waves was not yet common knowledge: Electromagnetic waves had first been observed in 1888, by Heinrich Hertz (1857-1894). Einstein himself used "V" for the speed of light and/or electromagnetic waves as late as 1907.

c may also be called the celerity of light: [Phase] celerity and [group] speed are normally two different things, but they coincide for light in a vacuum.

For more details, see: Why is c the symbol for the speed of light? by Philip Gibbs

History of the speed of light (16:32) by Paul Looyen (High School Physics Explained, 2019-04-19).

About the speed of light (24:25, 11:48) by Rebecca Smethurst (Dec. 2019).

Why No One Has Measured The Speed Of Light (19:04) by Derek Muller (Veritasium, 2020-10-31).


(2003-07-26) μ₀ = 4π×10⁻⁷ N/A² = 1.256637061435917295... μH/m
Magnetic permeability of the vacuum. [Definition of the ampere (A)]

The relation ε₀ μ₀ c² = 1 and the exact value of c yield an exact SI value, with a finite decimal expansion, for Coulomb's constant (in Coulomb's law):

1 / (4πε₀)  =  8.9875517873681764 × 10⁹  ≈  9 × 10⁹ N·m²/C²

Consequently, the electric constant (dielectric permittivity of the vacuum) has a known infinite decimal expansion, derived from the above:

ε₀ = 8.85418781762038985053656303171... × 10⁻¹² F/m
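A quick Python check of these two numbers, using the conventional exact value μ₀ = 4π×10⁻⁷ assumed throughout this entry (since the 2019 SI redefinition, μ₀ is a measured quantity):

    import math
    from decimal import Decimal

    c = 299792458                              # m/s, exact by definition of the meter
    k = Decimal(c * c) / Decimal(10 ** 7)      # Coulomb's constant = mu0 c^2 / (4 pi) = 1e-7 c^2
    print(k)                                   # 8987551787.3681764  (exact, finite decimal)
    print(1 / (4 * math.pi * float(k)))        # ~8.8541878176e-12 F/m, the electric constant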


(2003-08-10) Planck's Constant(s): h and h/2π
Quantum of action: h = 6.62607015×10⁻³⁴ J/Hz (fixed in 2019)
Quantum of spin: h/2π = 1.054571817646156391262428...×10⁻³⁴ J·s/rad

A photon of frequency ν has an energy hν where h is Planck's constant. Using the pulsatance ω = 2πν, this is ħω where ħ is Dirac's constant.

The constant ħ = h/2π is actually known under several names:

  • Dirac's constant.
  • The reduced Planck constant.
  • The rationalized Planck constant.
  • The quantum of angular momentum.
  • The quantum of spin (although some spins are half-multiples of this).

The constant ħ is pronounced either "h-bar" or (more rarely) "h-cross". It is equal to unity in the natural system of units of theoreticians (h is 2π). The spins of all particles are multiples of ħ/2 = h/4π (an even multiple for bosons, an odd multiple for fermions).

There's a widespread belief that the letter h initially meant Hilfsgrösse ("auxiliary parameter" or, literally, "helpful quantity" in German) because that's the neutral way Max Planck (1858-1947) introduced it, in 1900.

Units :

As noted at the outset, the actual numerical value of Planck's constant depends on the units used. This, in turn, depends on whether we choose to express the rate of change of a periodic phenomenon directly as the change with time of its phase expressed in angular units (pulsatance) or as the number of cycles per unit of time (frequency). The latter can be seen as a special case of the former when the angular unit of choice is a complete revolution (i.e., a "cycle" or "turn" of 2p radians).

A key symptom that angular units ought to be involved in the measurement of spin is that the sign of a spin depends on the conventional orientation of space (it's an axial quantity).

Likewise, angular momentum and the dynamic quantity which induces a change in it (torque) are axial properties normally obtained as the cross-product of two radial vectors. One good way to stress this fact is to express torque in Joules per radian (J/rad) when obtained as the cross-product of a distance in meters (m) and a force in newtons (N).

1 N·m = 1 J/rad = 2π J/cycle = 2π W/Hz = (π/30) W/rpm

Note that torque and spectral power have the same physical dimension.

Evolution from measured to defined values :

Current technology of the watt balance (which compares an electromagnetic force with a weight) is almost able to measure Planck's constant with the same precision as the best comparisons with the International prototype of the kilogram, the only SI unit still defined in terms of an arbitrary artifact. It is thus likely that Planck's constant could be given a de jure value in the near future, which would amount to a new definition of the SI unit of mass.

Resolution 7 of the 21st CGPM (October 1999) recommends "that national laboratories continue their efforts to refine experiments that link the unit of mass to fundamental or atomic constants with a view to a future redefinition of the kilogram". Although precise determinations of Avogadro's constant were mentioned in the discussion leading up to that resolution, the watt balance approach was considered more promising. It's also more satisfying to define the kilogram in terms of the fundamental Planck constant, rather than make it equivalent to a certain number of atoms in a silicon crystal. (Incidentally, the mass of N identical atoms in a crystal is slightly less than N times the mass of an isolated atom, because of the negative energy of interaction involved.)

In 1999, Peter J. Mohr and Barry N. Taylor proposed to define the kilogram in terms of an equivalent frequency ν = 1.35639274×10^50 Hz, which would make h equal to c^2/ν, or 6.626068927033756019661385...×10^-34 J/Hz.
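
A one-line check of that equivalence (my own sketch, using only the figures quoted here): it's just E = mc^2 = hν applied to a one-kilogram mass.

    c = 299792458.0           # speed of light, m/s (exact)
    nu_kg = 1.35639274e50     # Mohr & Taylor's "kilogram frequency", Hz

    print(c**2 / nu_kg)       # about 6.6260689e-34 J/Hz, the value of h quoted above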

Instead, it would probably be better to assign h or [rather] h/2π a rounded decimal value de jure. This would make the future definition of the kilogram somewhat less straightforward, but would facilitate actual usage when the utmost precision is called for. To best fit the "kilogram frequency" proposed by Mohr and Taylor, the de jure value of ħ would have been:

1.054571623×10^-34 J.s/rad

However, a mistake which was corrected with the 2010 CODATA set makes that value substantially incompatible with our best experimental knowledge. Currently (2011) the simplest candidate for a de jure definition is:

ħ = 1.0545717×10^-34 J.s/rad

Note: " ħ " is how your browser displays UNICODE's "h-bar" (&#295;).

In 2018, an exact value of h will define the kilogram :

The instrument which will perform the defining measurement is the Watt Balance invented in 1975 by Bryan Kibble (1938-2016). In 2016, the metrology community decided to rename the instrument a Kibble balance, in his honor (in a unanimous decision by the CCU = Consultative Committee for Units).

The Watt-Balance & Redefining the Kilogram (9:49) Bryan Kibble, Tony Hartland & Ian Robinson (2013).
Planck's Constant and the Origin of Quantum Mechanics (15:15) Matt O'Dowd (2016年06月22日).
How we're Redefining the Kilogram (9:49) by Derek Muller (2017年07月12日).


(2003年08月10日) Boltzmann's Constant k = 1.3806488(13)×10^-23 J/K
Defining entropy and/or relating temperature to energy.

Named after Ludwig Boltzmann (1844-1906), the constant k = R/N is the ratio of the ideal gas constant (R) to Avogadro's number (N).

Boltzmann's constant is currently a measured quantity. However, it would be sensible to assign it a de jure value that would serve as an improved definition of the unit of thermodynamic temperature, the kelvin (K), which is currently defined in terms of the temperature of the triple point of water (i.e., 273.16 K = 0.01°C, both expressions being exact by definition).

History :

What's now known as Boltzmann's relation was first formulated by Boltzmann in 1877. It gives the entropy S of a system known to be in one of W equiprobable states. Following Abraham Pais, Eric W. Weisstein reports that Max Planck first used the constant k in 1900.

Epitaph of Ludwig Boltzmann (1844-1906), engraved above his grave at the Zentralfriedhof in Wien (Group 14C, Number 1):

S = k ln (W)
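
As a small numerical illustration (using the CODATA 2010 figures tabulated further down this page), k can be recovered as R/N and plugged into Boltzmann's relation; the value of W below is an arbitrary example.

    from math import log

    R  = 8.3144621        # molar gas constant, J/(K.mol)  (CODATA 2010)
    Na = 6.02214129e23    # Avogadro's number, 1/mol       (CODATA 2010)

    k = R / Na            # Boltzmann's constant, J/K
    print(k)              # about 1.3806488e-23 J/K

    W = 10**20            # number of equiprobable microstates (arbitrary example)
    print(k * log(W))     # entropy S = k ln(W), in J/K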

The constant k became known as Boltzmann's constant around 1911 (Boltzmann had died in 1906) under the influence of Planck. Before that time, Lorentz and others had named the constant after Planck !

Philosophy of Statistical Mechanics by Lawrence Sklar (2001)


(2003年08月10日) Avogadro Number = Avogadro's Constant
Number of things per mole of stuff : 6.02214129(27)×10^23/mol
In January 2011, the IAC argued for 6.02214082(18)×10^23/mol

The constant is named after the Italian physicist Amedeo Avogadro (1776-1856) who formulated what is now known as Avogadro's Law, namely:

At the same temperature and [low] pressure, equal volumes of different gases contain the same number of molecules.

The current definition of the mole states that there are as many countable things in a mole as there are atoms in 12 grams of carbon-12 (the most common isotope of carbon).

Keeping this definition and giving a de jure value to the Avogadro number would effectively constitute a definition of the unit of mass. Alternatively, the above definition could be dropped, so that a de jure value given to Avogadro's number would constitute a proper definition of the mole, which would then be only approximately equal to 12 g of carbon-12 (or 27.97697027(23) g of silicon-28).
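
A minimal sketch (mine) of why fixing Avogadro's number while keeping the carbon-12 definition would pin down the kilogram: a mole of carbon-12 must then weigh exactly 12 g (CODATA 2010 figures; the tiny crystal binding-energy correction mentioned above is ignored).

    Na = 6.02214129e23     # Avogadro's number, 1/mol        (CODATA 2010)
    u  = 1.660538921e-27   # atomic mass unit (dalton), kg   (CODATA 2010)

    # A carbon-12 atom weighs exactly 12 u, so a mole of carbon-12 weighs:
    print(Na * 12 * u)     # about 0.012 kg, i.e. the 12 g required by the definition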

In spite of the sheer beauty of those isotopically-enriched single-crystal polished silicon spheres manufactured for the International Avogadro Coordination (IAC), it would certainly be much better for many generations of physicists yet to come to let a de jure value of Planck's constant define the future kilogram... (The watt-balance approach is more rational but less politically appealing, or so it seems.)

(2003年07月26日) 683 lm/W (lumen per watt) at 540 THz
The "mechanical equivalent of light". [Definition of the candela (cd)]

The frequency of 540 THz (5.4×10^14 Hz) corresponds to yellowish-green light. This translates into a wavelength of about 555.1712185 nm in a vacuum, or about 555.013 nm in the air, which is usually quoted as 555 nm.

This frequency, sometimes dubbed "the most visible light", was chosen as a basis for luminous units because it corresponds to a maximal combined sensitivity for the cones of the human retina (the receptors which allow normal color vision under bright-light photopic conditions).

The situation is quite different under low-light scotopic conditions, where human vision is essentially black-and-white (due to rods, not cones) with a peak response around a wavelength of 507 nm.
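
A short sketch (mine) converting the defining frequency into the vacuum wavelength quoted above, and turning a radiant power into a luminous flux at that frequency; the 0.1 W input is an arbitrary example.

    c  = 299792458.0         # speed of light in vacuum, m/s (exact)
    nu = 540e12              # defining frequency, Hz

    print(1e9 * c / nu)      # vacuum wavelength: about 555.17 nm

    K_cd = 683.0             # lm/W at 540 THz (defines the candela)
    radiant_power = 0.1      # W of monochromatic 540 THz light (arbitrary example)
    print(K_cd * radiant_power)   # luminous flux: 68.3 lm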

Brightness by Rod Nave | The Power of Light | Luminosity Function


(2007年10月25日) The ultimate dimensionful constant...
Newton's constant of gravitation: G = 6.674×10^-11 m^3/kg.s^2

Assuming the above evolutions [ 1, 2, 3 ] come to pass, the SI scheme would define every unit in terms of de jure values of fundamental constants, using only one arbitrary definition for the unit of time (the second). There would be no need for that remaining arbitrary definition if the Newtonian constant of gravitation (the remaining fundamental constant) was given a de jure value.

There's no hope of ever measuring the constant of gravitation directly with enough precision to allow a metrological definition of the unit of time (the SI second) based on such a measurement.

However, if our mathematical understanding of the physical world progresses well beyond its current state, we may eventually be able to find a theoretical expression for the mass of the electron in terms of G. This would equate the determination of G to a measurement of the mass of the electron. Possibly, that could be done with the required metrological precision...


Fundamental Physical Constants


Here are a few physical constants of significant metrological importance, with the most precisely known ones listed first. For the utmost in precision, this is roughly the order in which they should be either measured or computed.

One exception is the magnetic moment of the electron expressed in Bohr magnetons: 1.00115965218076(27). That number is a difficult-to-compute function of the fine-structure constant (α), which is actually known with a far lesser relative precision. However, that "low" precision pertains only to a small corrective term away from unity, so the overall precision is much better.

The list starts with numbers that are known exactly (no uncertainty whatsoever) simply because of the way SI units are currently defined. Such exact numbers include the speed of light (c) in meters per second (cf. SI definition of the meter) or the vacuum permeability (μ0) in henries per meter (or, equivalently, newtons per square ampère, see SI definition of the ampere).

In this table, an equation between square brackets denotes a definition of an experimental quantity in terms of fundamental constants known with a lesser precision. On the other hand, unbracketed equations normally yield not only the value of the quantity but also the uncertainty on it (from the uncertainties on the products or ratios of the constants involved). Recall that the worst-case relative uncertainty on a product of independent factors is very nearly the sum of the relative uncertainties on those factors. The same holds for a product of positive factors that are increasing functions of each other (e.g., the relative uncertainties on a square and a cube are respectively two and three times the relative uncertainty on the number itself). The reader may want to use such considerations to establish that the relative uncertainties on the Bohr radius, the Compton wavelength and the "classical radius of the electron" are very nearly in the ratios 1:2:3. (HINT: The uncertainty on the fine-structure constant is much larger than the uncertainty on Rydberg's constant.) Another good exercise is to use the tabulated formula to compute Stefan's constant and the uncertainty on it (a sketch of that exercise appears right after the table).
Except as noted, all values are derived from CODATA 2010.
Δx/x          Physical Constants (sorted by relative uncertainty)
0             Einstein's Constant : c = 299792458 m/s (speed of light, SI 1983)
              Permeability of the Vacuum : μ0 = 4π × 10^-7 H/m   [ε0 μ0 c^2 = 1]
              Ampere's Constant : μ0/4π = 1/(4πε0 c^2) = 10^-7 N/A^2 = 10^-7 H/m
              Coulomb's Constant : 1/(4πε0) = 8.9875517873681764 × 10^9 V.m/C
              Mass of a Lone Carbon-12 Atom, in daltons : 12 u = 12 Da
              Cesium-133 Hyperfine Frequency : 9192.63177 MHz
2.6×10^-13    Electron Moment in Bohr Magnetons : μe = -1.00115965218076(27) μB
6.3×10^-13    Protium Hyperfine Frequency : νH = 1420.4057517667(9) MHz
              Protium Hyperfine Wavelength : λH = 21.106114054179(13) cm
5.0×10^-12    Rydberg Constant : R∞ [ = me c α^2 / 2h ] = 10973731.568539(55) /m
              Rydberg Frequency : c R∞ = 3289.841960364(17) THz
1.5×10^-11    Mass of an Alpha Particle, in daltons : mα = 4.001506179125(62) u
              4He Atom : mα + 2me - (24.58739+54.41531) eV/c^2 = 4.002603254131(63) u
              Mohr et al. gave 4.002603254153(63) at odds with CODATA 2006 in the next-to-last digit (5 instead of 3).
3.8×10^-11    Mass of a Deuteron, in daltons : md = 2.013553212712(77) u
              Mass of Deuterium : md + me - 13.6020 eV/c^2 = 2.01410177791(78) u
6.0×10^-11    Heliocentric Gravitational Constant : 1.32712440042(8)×10^20 m^3/s^2
8.9×10^-11    Mass of a Proton, in daltons : mp = 1.007276466812(90) u
              Mass of Protium : mp + me - 13.5983 eV/c^2 = 1.007825032123(90) u
3.2×10^-10    von Klitzing Resistance : RK [ = h/q^2 ] = 25812.8074434(84) Ω
              Fine-Structure Constant : α = Z0 / 2RK = 1 / 137.035999074(44)
              Bohr Radius : a0 = α / 4πR∞ = 0.52917721092(17)×10^-10 m
4.0×10^-10    Mass of an Electron, in daltons : me = 5.4857990946(22)×10^-4 u
4.1×10^-10    Proton / Electron Mass Ratio : mp / me = 1836.15267245(75)
4.2×10^-10    Mass of a Neutron, in daltons : mn = 1.00866491600(43) u
8.2×10^-10    Mass of a Triton, in daltons : mt = 3.0155007134(25) u
              Mass of Tritium : mt + me - 13.6032 eV/c^2 = 3.0160492787(25) u
8.3×10^-10    Mass of a Helion, in daltons : mh = mα' = 3.0149322468(25) u
              3He Atom : mh + 2me - (24.58629+54.41287) eV/c^2 = 3.0160293218(25) u
6.5×10^-10    Compton Wavelength : λc = α^2 / 2R∞ = 2.4263102389(16)×10^-12 m
9.7×10^-10    Classical Electron Radius : re = α^3 / 4πR∞ = 2.8179403267(27)×10^-15 m
8.1×10^-9     Electron / Proton Magnetic Ratio : μe / μp = -658.2106848(54)
2.2×10^-8     Magnetic Flux Quantum : Φ0 [ = h/2q ] = 2.067833758(46)×10^-15 Wb
              Elementary Charge : q = 2 Φ0 / RK = 1.602176565(35)×10^-19 C
              Faraday's Constant : F [ = q Na ] = 96485.3365(21) C/mol
              Bohr Magneton : μB [ = q h / 4πme ] = 9.27400968(20)×10^-24 J/T
              Nuclear Magneton : μN [ = q h / 4πmp ] = 5.05078353(11)×10^-27 J/T
              Electron Charge / Mass : -q / me = -1.758820088(38)×10^11 C/kg
              Mass Defect per eV, in daltons : 1 eV/c^2 = 1.073544150(24)×10^-9 u
              Rydberg Voltage : hc R∞ / q = ½ (me/q) c^2 α^2 = 13.60569253(30) V
              Tritium Ionization : (hc R∞/q) / (1 + me/mt) = 13.60321783(30) V
              Deuterium Ionization : (hc R∞/q) / (1 + me/md) = 13.60198675(30) V
              Protium Ionization : (hc R∞/q) / (1 + me/mp) = 13.59828667(30) V
              Ionization of 4He : 2 Φ0 (5945204223(42) MHz) = 24.58738798(54) V
              The correction factor for He-3 would be about 0.999955147 yielding approximately 24.586285 V for He-3.
              Second IP of 4He+ : 4 (hc R∞/q) / (1 + me/mα) = 54.4153101(12) V
              Experimental / relativistic : 54.417760 V
              Second IP of 3He+ : 4 (hc R∞/q) / (1 + me/mα') = 54.4128695(12) V
              Extrapolation : 54.415319 V
4.4×10^-8     Planck's Constant : h = q^2 RK = 6.62606957(29)×10^-34 J/Hz
              Avogadro's Number : Na = F / q = 6.02214129(27)×10^23 /mol
              Atomic Mass Unit : u = (1 g/mol) / Na = 1.660538921(73)×10^-27 kg
              Mass of an Electron : me = 9.10938291(40)×10^-31 kg
              Rydberg Energy : hc R∞ = me c^2 α^2 / 2 = 2.179872171(96)×10^-18 J
              Mass of a Proton : mp = 1.672621777(74)×10^-27 kg
              Mass of a Neutron : mn = 1.674927351(74)×10^-27 kg
9.1×10^-7     Boltzmann Constant : k = 1.3806488(13)×10^-23 J/K
              Thermal Voltage Constant : k / q = R / F = 86.173324(78) μV/K
              Molar Gas Constant : R = Na k = 8.3144621(75) J/K/mol
2.0×10^-6     Richardson Constant : A0 = 4π k^2 q me / h^3 = 1.2017321(24) MA/m^2.K^2
3.6×10^-6     Stefan Constant : σ = 2π^5 k^4 / 15 h^3 c^2 = 5.670373(21)×10^-8 W/m^2.K^4
1.0×10^-4     Newtonian Constant of Gravitation : G = 6.67384(80)×10^-11 m^3/kg.s^2
              The CODATA value was downgraded from 6.67428(67) in 2006 to 6.67384(80) in 2010 but combining the 2 most precise pre-2006 measurements would yield a value of 6.67425(14). CODATA went from 6.6720(41) in 1973 to an overly optimistic 6.67259(85) in 1986, then 6.673(10) in 1998 and 6.6742(10) in 2002.
              Solar Mass : (G.S)/G = 1.98855(24)×10^30 kg
8.0×10^-4     Radius of a Proton (RMS charge) : Rp = 0.84184(67)×10^-15 m
              In July 2010, Randolf Pohl & al. found that the 2006 CODATA value of 0.8768(69) fm was 4% too large! This was too late to revise the 2010 CODATA value of 0.8775(51) fm which "stands" officially, for now.
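
Here is a sketch of the Stefan-constant exercise suggested before the table, using the tabulated CODATA 2010 values; the crude worst-case sum of relative uncertainties slightly overshoots the tabulated figure, which is based on correlated CODATA data.

    from math import pi

    # CODATA 2010 values and relative uncertainties, as tabulated above.
    k, rel_k = 1.3806488e-23, 9.1e-7     # Boltzmann constant, J/K
    h, rel_h = 6.62606957e-34, 4.4e-8    # Planck constant, J/Hz
    c        = 299792458.0               # speed of light, m/s (exact)

    sigma = 2 * pi**5 * k**4 / (15 * h**3 * c**2)   # Stefan's constant, W/(m^2.K^4)
    print(sigma)                                    # about 5.6704e-8

    # Worst-case relative uncertainty: 4 powers of k and 3 powers of h
    # contribute; pi and c are exact.
    print(4 * rel_k + 3 * rel_h)                    # about 3.8e-6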

Carl Sagan once needed an "obvious" universal length as a basic unit in a graphic message intended for [admittedly very unlikely] extra-terrestrial decoders. That famous picture was attached to the two space probes (Pioneer 10 and 11, launched in 1972 and 1973) which would become the first man-made objects ever to leave the Solar System.

Sagan chose one of the most prevalent lengths in the Cosmos, namely the wavelength of 21 cm corresponding to the hyperfine spin-flip transition of neutral hydrogen (isolated hydrogen atoms do pervade the Universe).

Hydrogen Line : ν = 1420.4057517667(9) MHz,  λ = 21.106114054179(13) cm

Back in 1970, the value of the hyperfine "spin-flip" transition frequency of the ground state of atomic hydrogen (protium) had already been measured with superb precision by Hellwig et al. :

1420.405751768(2) MHz.

This was based on a direct comparison with the hyperfine frequency of cesium-133, carried out at NBS (now NIST). In 1971, Essen et al. pushed the frontiers of precision even further; their result has not been improved upon since. It stood for nearly 40 years as the most precise measurement ever performed (the value of the magnetic moment of the electron expressed in Bohr magnetons is now known with slightly better relative precision).

1420.4057517667(9) MHz

Three years earlier (in 1967) a new definition of the SI second had been adopted based on cesium-133, for technological convenience. Now, the world is almost ripe for a new definition of the unit of time based on hydrogen, the simplest element. Such a new definition might have much better prospects of being ultimately tied to the theoretical constants of Physics in the future.
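
A tiny check (not part of the original text) that the tabulated frequency and wavelength of the hydrogen line are consistent:

    c = 299792458.0               # speed of light, m/s (exact)
    nu_H = 1420.4057517667e6      # protium hyperfine frequency, Hz (Essen et al., 1971)

    print(100 * c / nu_H)         # about 21.106114 cm: the famous 21 cm line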

A similar hyperfine "spin-flip" transition is observed for the 3He+ ion, which is another system consisting of a single electron orbiting a fermion. Like the proton, the helion has a spin of 1/2 in its ground state (unlike the proton, it also exists in a rare excited state of spin 3/2). The corresponding frequency was measured to be:

8665.649905(50) MHz   E. N. Fortson, F. G. Major and H. G. Dehmelt, Phys. Rev. Lett., vol. 16, pp. 221-225 (1966).
8665.649867(10) MHz   Hans A. Schuessler, E. N. Fortson and H. G. Dehmelt, Phys. Rev., vol. 187, pp. 5-38 (1969).

A very common microscopic yardstick is the equilibrium bond length in a hydrogen molecule (i.e., the average distance between the two protons in an ordinary molecule of hydrogen). It is not yet tied to the above fundamental constants and is only known with modest experimental precision:

0.7414 Å = 7.414×10^-11 m

CODATA recommended value for the physical constants: 2010 (2012年03月15日)
CODATA recommended value for the physical constants: 2006 (2008年06月06日)
by Peter J. Mohr, Barry N. Taylor & David B. Newell

Measurement of the Unperturbed Hydrogen Hyperfine Transition Frequency by Helmut Hellwig et al.
IEEE Transactions on Instrumentation and Measurements, Vol. IM-19, No. 4, November 1970.

"The atomic hydrogen line at 21 cm has been measured to a precision of 0.001 Hz"
by L. Essen, R. W. Donaldson, M. J. Bangham, and E. G. Hope, Nature (London) 229, 110 (1971).

Hydrogen-like Atoms by James F. Harrison (Chemistry 883, Fall 2008, Michigan State University)


Primary Conversion Factors


Below are the statutory quantities which allow exact conversions between various physical units in different systems:

  • 149597870700 m to the au: Astronomical unit of length. (2012)
    Enacted by the International Astronomical Union on August 31, 2012. This is the end of a long road which began in 1672, when Jean-Dominique Cassini (1625-1712) proposed a unit equal to the mean distance between the Earth and the Sun. This was recast as the radius of the circular trajectory of a tiny mass that would orbit an isolated solar mass in one "year" (first an actual sidereal year, then a fixed approximation thereof, known as the Gaussian year).
    This also gives an exact metric equivalence for the parsec (pc) unit, defined as 648000 au / π. (The obscure siriometer introduced in 1911 by Carl Charlier (1862-1934) for interstellar distances is 1 Mau = 1.495978707×10^17 m, or about 4.848 pc.)
  • 25.4 mm to the inch: International inch. (1959)
    Enacted by an international treaty, effective January 1, 1959. This gives the following exact metric equivalences for other units of length: 1 ft = 0.3048 m, 1 yd = 0.9144 m, 1 mi = 1609.344 m
  • 39.37 "US survey" inches to the meter: "US Survey" inch. (1866, 1893)
    This equivalence is now obsolete, except in some records of the US Coast and Geodetic Survey. The International units defined in 1959 are exactly 2 ppm smaller than their "US Survey" counterparts (the ratio is 999998/1000000).
  • 1 lb = 0.45359237 kg: International pound. (1959)
    Enacted by an international treaty, effective January 1, 1959. This gives the following exact metric equivalences for other customary units of mass: 1 oz = 28.349523125 g, 1 ozt = 31.1034768 g, 1 gn = 64.79891 mg, since there are 7000 gn to the lb, 16 oz to the lb, and 480 gn to the troy ounce (ozt).
  • 231 cubic inches to the Winchester gallon: U.S. Gallon. (1707, 1836)
    This is now tied to the 1959 International inch, which makes the [Winchester] US gallon equal to exactly 3.785411784 L.
  • 4.54609 L to the Imperial gallon: U.K. Gallon. (1985)
    This is the latest and final metric equivalence for a unit proposed in 1819 (and effectively introduced in 1824) as the volume of 10 lb of water at 62°F.
  • 9.80665 m/s^2: Standard acceleration of gravity. (1901)
    Multiplying this by a unit of mass gives a unit of force equal to the weight of that mass under standard conditions approximately equivalent to those that would prevail at 45° of latitude on Earth, at sea-level. The value was enacted by the third CGPM in 1901. 1 kgf = 9.80665 N and 1 lbf = 4.4482216152605 N.
  • 101325 Pa = 1 atm: Normal atmospheric pressure. (1954)
    As enacted by the 10th CGPM in 1954, the atmosphere unit (atm) is exactly 760 Torr. It's only approximately 760 mmHg, because of the following specification for the mmHg and other units of pressure based on the conventional density of mercury.
  • 13595.1 g/L (or kg/m^3): Conventional density of mercury.
    This makes 760 mmHg equal a pressure of (0.76)(13595.1)(9.80665) or exactly 101325.0144354 Pa (this and a few other exact equivalences are checked numerically in the sketch after this list), which was rounded down in 1954 to give the official value of the atm stated above. The torr (whose symbol is capitalized: Torr) was then defined as 1/760 of the rounded value, which makes the mmHg very slightly larger than the torr, although both are used interchangeably in practice. The mmHg is based on this conventional density (which is close to the actual density of mercury at 0°C) regardless of whatever the actual density of mercury may be under the prevailing temperature at the time measurements are taken. Beware of what apparently authoritative sources may say on this subject...
  • 999.972 g/L (or kg/m^3): Conventional density of "water".
    This is the conventional conversion factor between so-called relative density and absolute density. This is also the factor to use for units of pressure expressed as heights of a water column (just like the above conventional density of mercury is used for similar purposes to obtain temperature-independent pressure units). This density is clearly very close to that of natural water at its densest point. However, it's best considered to be a conventional conversion factor.

    The above number can be traced to the 1904 work of the Swiss-born French metrologist Charles E. Guillaume (1861-1938; Nobel 1920). Guillaume had joined the BIPM in 1883 and would be its director from 1915 to 1936. From 1901 (3rd CGPM) to 1964 (12th CGPM), the liter was (unfortunately) not defined as a cubic decimeter, but instead as the volume of 1 kg of water in its densest state under 1 atm of pressure (which indicates a temperature of about 3.984°C). Guillaume measured that volume to be 1000.028 cc, which is equivalent to the above conversion factor (to a 9-digit accuracy).
    The above conventional density remains universally adopted in spite of the advent of "Standard Mean Ocean Water" (SMOW) whose density can be slightly higher: SMOW around 3.98°C is about 999.975 g/L.

    The original batch of SMOW came from seawater collected by Harmon Craig on the equator at 180 degrees of longitude. After distillation, it was enriched with heavy water to make the isotopic composition match what would be expected of undistilled seawater (distillation changes the isotopic composition, because lighter molecules are more volatile). In 1961, Craig tied SMOW to the NBS-1 sample of meteoric water originally collected from the Potomac River by the National Bureau of Standards (now NIST). For example, the ratio of Oxygen-18 to Oxygen-16 in SMOW was 0.8% higher than the corresponding ratio in NBS-1. This "actual" SMOW is all but exhausted, but water closely matching its isotopic composition has been made commercially available, since 1968, by the Vienna-based IAEA (International Atomic Energy Agency) under the name of VSMOW or "Vienna SMOW".
  • 4.184 J to the calorie (cal): Thermochemical calorie. (1935)
    This is currently understood as the value of a calorie, unless otherwise specified (the 1956 "IST" calorie described below is slightly different). Watch out! The kilocalorie (1 kcal = 1000 cal) was dubbed "Calorie" or "Cal" [capital "C"] in dietetics before 1969 (it still is, at times).
  • 2326 J/kg = 1 Btu/lb: IST heat capacity of water, per °F. (1956)
    This defines the IT or IST ("International [Steam] Tables") flavor of the Btu ("British Thermal Unit") in SI units, once the lb/kg ratio is known. That value was adopted in July 1956 by the 5th International Conference on the Properties of Steam, which took place in London, England. The subsequent definition of the pound as 0.45359237 kg (effective since January 1, 1959) makes the official Btu equal to exactly 1055.05585262 J. The rarely used centigrade heat unit (chu) is defined as 1.8 Btu (exactly 1899.100534716 J).
    The additional relation 1 cal/g = 1 chu/lb has been used to introduce a dubious "IST calorie" of exactly 4.1868 J competing with the above thermochemical calorie of 4.184 J, used by the scientific community since 1935. Beware of the bogus conversion factor of 4.1868 J/cal which has subsequently infected many computers and most handheld calculators with conversion capabilities...
    The Btu was apparently introduced by Michael Faraday (before 1820?) as the quantity of heat required to raise one pound (lb) of water from 63°F to 64°F. This deprecated definition is roughly compatible with the modern one (and it remains mentally helpful) but it's metrologically inferior.
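
A minimal sketch (mine) checking a few of the exact equivalences listed above, strictly from the statutory conversion factors:

    inch   = 0.0254          # m      (international inch, 1959)
    pound  = 0.45359237      # kg     (international pound, 1959)
    g_std  = 9.80665         # m/s^2  (standard acceleration of gravity, 1901)
    rho_Hg = 13595.1         # kg/m^3 (conventional density of mercury)

    print(1000 * 231 * inch**3)     # US gallon in liters -> 3.785411784 (exact statutory value)
    print(0.76 * rho_Hg * g_std)    # 760 conventional mmHg in Pa -> 101325.0144354
    print(2326 * pound)             # IST Btu in joules -> 1055.05585262 (exact statutory value)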

Dimensionless Physical Constants

Embedded into physical reality are a few nontrivial constants whose values do not depend on our chosen system of measurement units. Examples include the ratios of the masses of all elementary particles to the mass of the electron. Arguably, one of the ultimate goals of theoretical physics is to explain those values.

Other such unexplained constants have a mystical flair to them.


(2018年06月02日) "Galileo's constant" : Case closed!
A constant Galileo once had to measure is now known perfectly.

Galileo detected the simultaneity of two events by ear. When two bangs were less than about 11 ms apart, he heard a single sound and considered the two events simultaneous. That's probably why he chose that particular duration as his unit of time, which he called a tempo (plural tempi). The precise definition of the unit was in terms of a particular water-clock which he was using to measure longer durations.

Using a simple pendulum of length R, he would produce a bang one quarter-period after the release by having a metal gong just underneath the pivot point. On the other hand, he could also release a ball in free fall from a height H over another gong. Releasing the two things simultaneously, he could tell whether the two durations were equal (within the aforementioned precision) and adjust either length until they were.

Galileo observed that the ratio R/H was always the same and he measured the value of that constant as precisely as he could. Nowadays, we know the ideal value of that constant:

R/H = 8/π^2 = 0.8105694691387021715510357...

This much can be derived in any freshman physics class using the elementary principles established by Newton after Galileo's death.
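
Here is a sketch of that freshman derivation as a numerical check, under the same idealizations (small-amplitude pendulum, no air resistance); the values of g and H below are arbitrary, since both cancel out of the ratio.

    from math import pi, sqrt

    g = 9.80665                 # any value of gravity works: it cancels out
    H = 1.0                     # drop height, m (arbitrary)
    R = (8 / pi**2) * H         # pendulum length given by the ideal formula

    t_fall    = sqrt(2 * H / g)                # free-fall time from height H
    t_quarter = (2 * pi * sqrt(R / g)) / 4     # quarter period of a small-amplitude pendulum

    print(t_fall, t_quarter)    # both about 0.4516 s: the two bangs coincide
    print(8 / pi**2)            # 0.8105694691387022...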

Any experimental discrepancy can be explained by the smallish effects neglected to obtain the above ideal formula (e.g., air resistance, friction, finite size of the bob, substantial amplitude).

Thus, Galileo's results can now be used backwards to estimate how good his experimental methods were. (Indeed, they were as good as can be expected when simultaneity is appreciated by ear.)

The dream of some theoretical physicists is now to advance our theories to the point that the various dimensionless physical constants which are now mysterious to us can be explained as easily as what I've called Galileo's constant here (for shock value).


(2018年06月02日) Sommerfeld's fine-structure constant (1916)
α = 0.0072973525664(17) = 1 / 137.035999139(31)

Combining Planck's constant (h) with the two electromagnetic constants and/or the speed of light (recall that ε0 μ0 c^2 = 1), there's essentially only one way to obtain a quantity whose dimension is the square of an electric charge. The ratio of the square of the charge of an electron to that quantity is a pure dimensionless number known as Sommerfeld's constant or the fine-structure constant :

α = μ0 c e^2 / 2h = e^2 / 2hcε0 = 1 / 137.035999...

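For a numerical check (my own sketch, using the CODATA 2010 values tabulated earlier on this page; the heading above quotes the slightly newer 2014 figures):

    from math import pi

    mu0 = 4 * pi * 1e-7      # vacuum permeability, H/m (exact in the pre-2019 SI)
    c   = 299792458.0        # speed of light, m/s (exact)
    e   = 1.602176565e-19    # elementary charge, C    (CODATA 2010)
    h   = 6.62606957e-34     # Planck's constant, J/Hz (CODATA 2010)

    alpha = mu0 * c * e**2 / (2 * h)     # fine-structure constant
    print(alpha, 1 / alpha)              # about 0.00729735 and 137.036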

The value of this constant has captured the imagination of many generations of physicists, professionals and amateurs alike. Many wild guesses have been made, often based on little more than dubious numerology.

In 1948, Edward Teller (1908-2003) suggested that the electromagnetic interaction might be weakening in cosmological time and he ventured the guess that the fine-structure constant could be inversely proportional to the logarithm of the age of the Universe. This proposal was demolished by Dennis Wilkinson (1918-2013) using ordinary mineralogy, which shows that the rate of alpha-decay for U-238 could not have varied by much more than 10% in a billion years (that rate is extremely sensitive to the exact value of the fine-structure constant). Teller's proposal was further destroyed by precise measurements from the fossil reactors at Oklo (Gabon) which show that the fine-structure constant had essentially the same value as today two billion years ago.

Fine-structure constant | Arnold Sommerfeld (1868-1951)

On the Change of Physical Constants by Edward Teller, Physical Review, 73, 801 (1948年04月01日).


(2016年11月17日) A large number W relates electricity to gravity.
Paul Dirac tried to link it to other dimensionless physical constants.

In Newtonian terms, the electrostatic force and the gravitational force between two electrons both vary inversely as the square of the distance between them. Therefore, their ratio is a dimensionless constant W equal to the square of the electron charge-to-mass quotient multiplied by Coulomb's constant divided by the gravitational constant, namely:

W = [ 1.758820024(11)×10^11 ]^2 × ( 8.9875517873681764×10^9 ) / ( 6.67408(31)×10^-11 ) = 4.16575(20)×10^42

In 1919, Hermann Weyl (1885-1955) remarked that the radius of the Universe and the radius of an electron would be exactly in the above ratio if the mass of the Universe was to gravitational energy what the mass of an electron is to electromagnetic energy (using, for example, the electrostatic argument leading to the classical radius of the electron).


In 1937, Dirac singled out the interactions between an electron and a proton instead, which led him to ponder a quantity equal to the above divided by the proton-to-electron mass ratio :

w = W / (mp/me) = 2.26874(11)×10^39
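
A short sketch (mine) reproducing both numbers from the values quoted in this section and in the table above:

    q_over_me  = 1.758820024e11        # electron charge-to-mass quotient, C/kg (as quoted above)
    k_coulomb  = 8.9875517873681764e9  # Coulomb's constant, N.m^2/C^2 (exact)
    G          = 6.67408e-11           # Newtonian constant of gravitation, m^3/(kg.s^2)
    mp_over_me = 1836.15267245         # proton-to-electron mass ratio

    W = q_over_me**2 * k_coulomb / G   # electric / gravitational force ratio for two electrons
    print(W)                           # about 4.166e42, matching the 4.16575(20)e42 quoted above

    w = W / mp_over_me                 # Dirac's version, for an electron and a proton
    print(w)                           # about 2.269e39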

In 1966, E. Pascual Jordan (1902-1980) used Dirac's "variable gravity" cosmology to argue that the Earth had doubled in size since the continents were formed, thus advocating a very misguided alternative to plate tectonics (or continental drift).

Large Number Hypothesis and Dirac's cosmology.
Expanding Earth and declining gravity: a chapter in the recent history of geophysics Helge Kragh (2015).
Audio : Dimensionless Physical Constants and Large Number Hypothesis by Paul Dirac.
Video : Could gravity vary with time? (6:09) by Freeman Dyson (Web of Stories).

Lecture by Paul Dirac (1:10:12) in Christchurch, New Zealand (1975).
