Zero is probably the most misunderstood number. Even the imaginary number i is probably better understood (because it's usually introduced only to comparatively sophisticated audiences). It took humanity thousands of years to realize what a great mathematical simplification it was to have an ordinary number used to indicate "nothing", the absence of anything to count... The momentous introduction of zero metamorphosed the ancient Indian system of numeration into the familiar decimal system we use today.
The counting numbers start with 1, but the natural integers start with 0... Most mathematicians prefer to start with zero the indexing of the terms in a sequence, if at all possible. Physicists do that too, in order to mark the origin of a continuous quantity: If you want to measure 10 periods of a pendulum, say "0" when you see it cross a given point from left to right (say) and start your stopwatch. Keep counting each time the same event happens again and stop your timepiece when you reach "10", for this will mark the passing of 10 periods. If you don't want to use zero in that context, just say something like "Umpf" when you first press your stopwatch; many do... Just a joke!
A universal tradition, which probably predates the introduction of zero by a few millennia, is to use counting numbers (1, 2, 3, 4...) to name successive intervals of time; a newborn baby is "in its first year", whereas a 24-year old is in his 25th. When applied to calendars, this unambiguous tradition seems to disturb more people than it should. Since the years of the first century are numbered 1 to 100, the second century goes from 101 to 200, and the twentieth century consists of the years 1901 to 2000. The third millennium starts with January 1, 2001. Quantum mechanics was born in the nineteenth century (with Planck's explanation for the blackbody law, on December 14, 1900).
For some obscure reason, many people seem to have a mental block about some ordinary mathematics applied to zero. A number of journalists, who should have known better, once questioned the simple fact that zero is even. Of course it is: Zero certainly qualifies as a multiple of two (it's zero times two). Also, in the integer sequence, any even number is surrounded by two odd ones, just like zero is surrounded by the odd integers -1 and +1... Nevertheless, we keep hearing things like: "Zero should be an exception, an integer that's neither even nor odd." Well, why on Earth would anyone want to introduce such unnatural exceptions where none is needed?
What about 0⁰? Well, anything raised to the power of zero is equal to unity, and a closer examination would reveal that there's no need to make an exception for zero in this case either: Zero to the power of zero is equal to one! Any other "convention" would invalidate a substantial portion of the mathematical literature (especially concerning common notations for polynomials and/or power series).
A related discussion involves the factorial of zero (0!) which is also equal to 1. However, most people seem less reluctant to accept this one, because the generalization of the factorial function (involving the Gamma function) happens to be continuous about the origin...
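For what it's worth, most programming languages adopt the same two conventions. A quick check in Python:

    import math
    print(0**0)               # 1   (integer exponentiation: 0^0 = 1)
    print(math.factorial(0))  # 1   (0! = 1)
    print(math.gamma(1))      # 1.0 (0! = Gamma(1), continuous at the origin)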
The symbol π for the most famous transcendental number was introduced in a 1706 textbook by William Jones (1675-1749), reportedly because it's the first letter of the Greek verb perimetrein ("to measure around") from which the word "perimeter" is derived. Euler popularized the notation after 1736. It's not clear whether Euler knew of the previous usage pioneered by Jones.
Historically, ancient mathematicians did convince themselves that LR/2 was the area of the surface generated by a segment of length R when one of its extremities (the "apex") is fixed and the other extremity has a trajectory of length L (which remains perpendicular to that segment).
The record shows that they did this for planar geometry (in which case the trajectory is a circle) but the same reasoning would apply to nonplanar trajectories as well (any curve drawn on the surface of a sphere centered on the apex will do).
They reasoned that the trajectory (the circle) could be approximated by a polygonal line with many small sides. The surface could then be seen as consisting of many thin triangles whose heights were very nearly equal to R, whereas the base was very nearly a portion of the trajectory. As the area of each triangle is R/2 times such a portion, the area of the whole surface is R/2 times the length of the entire trajectory [QED?].
Of course, this type of reasoning was made fully rigorous only with the advent of infinitesimal calculus, but it did convince everyone of the existence of a single number π which would give both the perimeter (2πR) and the surface area (πR²) of a circle of radius R...
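In modern (LaTeX) notation, the triangle argument is just the following limit, a sketch of the rigorous version:

\[
A \;=\; \lim_{n\to\infty}\ \sum_{i=1}^{n} \frac{R}{2}\,\Delta\ell_i
  \;=\; \frac{R}{2}\,L
\qquad\text{so that } L = 2\pi R \text{ yields } A = \pi R^2 .
\]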
The ancient problem of squaring the circle asked for a ruler and compass construction of a square having the same area as a circle of given diameter. Such a thing would constitute a proof that π is constructible, which it's not. Therefore, it's not possible to square the circle...
π isn't even algebraic (i.e., it's not the root of any polynomial with integer coefficients). All constructible numbers are algebraic but the converse doesn't hold. For example, the cube root of two is algebraic but not constructible, which is to say that there's no solution to another ancient puzzle known as the Delian problem (or duplication of the cube). A number which is not algebraic is called transcendental.
In 1882, π was shown to be transcendental by C.L. Ferdinand von Lindemann (1852-1939) using little more than the tools devised 9 years earlier by Charles Hermite to prove the transcendence of e (1873).
Lindemann was the advisor of at least 49 doctoral students. The three earliest ones had stellar academic careers and scientific achievements: David Hilbert (1885), Minkowski (1885) and Sommerfeld (1891).
π was proved irrational much earlier (1761) by Lambert (1728-1777).
Since 1988, Pi Day has been celebrated worldwide on March 14 (3-14 is the beginning of the decimal expansion of π and it's also the birthday of Albert Einstein, 1879-1955). This geeky celebration was the brainchild of the physicist Larry Shaw (1939-2017). Google celebrated the thirtieth Pi Day with a Doodle on their home page, on 2018-03-14.
On that fateful Day, Stephen Hawking (1942-2018) died at the age of 76.
Expansion of Pi as a Continued Fraction | Mnemonics for pi | Wikipedia
Video: The magic and mystery of pi by Norman J. Wildberger (2013, dubious "conclusion").
When they learned about the irrationality of √2, the Pythagoreans sacrificed 100 oxen to the gods (a so-called hecatomb)... The followers of Pythagoras (c. 569-475 BC) kept this sensational discovery a secret, to be revealed to the initiated mathematikoi only.
At least one version of a dubious legend says that the man who disclosed that dark secret was thrown overboard and perished at sea.
The martyr may have been Hippasus of Metapontum and the death sentence, reportedly handed out by Pythagoras himself, may have been a political retribution for starting a rival sect, whether or not the schism revolved around the newly discovered concept of irrationality. Eight centuries later, Iamblichus reported that Hippasus had drowned because of his publication of the construction of a dodecahedron inside a sphere (something construed as a sort of community secret).
Hippasus of Metapontum is credited with the classical proof (ca. 500 BC) which is summarized below. It is based on the fundamental theorem of arithmetic (i.e., the unique factorization of any integer into primes).
The square of any fraction features an even number of prime factors both in the numerator and in the denominator. Those cannot cancel pairwise to yield a single prime, like 2, in lowest terms. QED
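In symbols (a compact LaTeX restatement of that argument):

\[
\sqrt{2} = \frac{x}{y} \;\Longrightarrow\; x^2 = 2\,y^2 ,
\]

where the exponent of 2 is even in the prime factorization of the left-hand side but odd in that of the right-hand side, contradicting unique factorization.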
The irrationality of the square root of 2 may also be proved very nicely using the method of infinite descent, without any notion of divisibility!
Video: What was up with Pythagoras? by Vi Hart (2012-06-12)
Theodorus taught mathematics to Plato, who reported that he was teaching about the irrationality of the square roots of all integers besides perfect squares "up to 17", before 399 BC. Of course, the theorem of Theodorus is true without that artificial restriction (which Theodorus probably imposed for pedagogical purposes only). Once the conjecture is made, the truth of the general theorem is fairly easy to establish.
Elsewhere on this site, we give a very elegant short modern proof of the general theorem, by the method of infinite descent. A more pedestrian approach, probably used by Theodorus, is suggested below...
There's also a partial proof which settles only the cases below 17. Some students of the history of mathematics jumped to the conclusion that this must have been the (lost) reasoning of Theodorus (although this guess flies in the face of the fact that the Greek words used by Plato do mean "up to 17" and not "up to 16"). Let's present that weak argument, anachronistically, in the vocabulary of congruences, for the sake of brevity:
If q is an odd integer with a rational square root expressed in lowest terms as x/y, then:
q y² = x²
Because q is odd, so are both sides (or else x and y would have a common even factor). Therefore, the two odd squares are congruent to 1 modulo 8 and q must be too. Below 17, the only possible values of q are 1 and 9 (both of which are perfect squares). QED
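Both facts invoked above are easy to check numerically; a minimal sketch in Python:

    # Every odd square is congruent to 1 modulo 8...
    print({(2*k + 1)**2 % 8 for k in range(10000)})     # {1}
    # ... so, below 17, an odd q = 1 (mod 8) can only be 1 or 9:
    print([q for q in range(1, 17, 2) if q % 8 == 1])   # [1, 9]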
This particular argument doesn't settle the case of q = 17 (which Theodorus was presenting in class as solved) and it's not much simpler (if at all) than a discussion based on a full factorization of both sides (leading to a complete proof by mere generalization of the method which had established the irrationality of the square root of 2, one century earlier).
Therefore, my firm opinion is that Theodorus himself knew very well that his theorem was perfectly general, because he had proved it so... The judgement of history that the square root of 3 was the second number proved to be irrational seems fair. So does the naming of that constant and the related theorem after Theodorus of Cyrene (465-398 BC).
φ² = 1 + φ
This ubiquitous number is variously known as the Golden Number, the Golden Section, the Golden Mean, the Divine Proportion or the Fibonacci Ratio (because it's the limit of the ratio of consecutive terms in the Fibonacci sequence). It's the aspect ratio of a rectangle whose semiperimeter is to the larger side what the larger side is to the smaller one.
The Greek letter φ (phi) is the initial of Φειδίας, the name of the great Pheidias (c. 480-430 BC) who created the Statue of Zeus at Olympia (c. 435 BC), the fourth oldest of the Seven Wonders of the Ancient World. Pheidias also created the sculptures on the Parthenon, whose architecture embodied the golden ratio, half a century after Pythagoras described it.
The 5 Fifth Roots of Unity | Continued Fraction | Wythoff's Game | Metallic means
Beyond the Golden Ratio (14:46) by Gabe Perez-Giz (PBS Infinite Series, 2018-01-25).
Some US students call this number the Andrew Jackson number, because of an obscure way to memorize its first 16 digits: 2.718281828459045.
Among many other things, e is the limit of (1 + 1/n)ⁿ as n tends to infinity. Its continued fraction expansion is surprisingly simple:
[2; 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, 1, 1, 10, 1, 1, 12, 1, 1, ... 2n+2, 1, 1, ... ]
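The pattern is easy to check numerically, by folding a truncation of that continued fraction back into an ordinary fraction; a sketch in Python:

    # Fold [2; 1,2,1, 1,4,1, 1,6,1, ...] from the right into a rational number.
    from fractions import Fraction
    import math
    terms = [2]
    for k in range(1, 12):
        terms += [1, 2*k, 1]
    x = Fraction(terms[-1])
    for a in reversed(terms[:-1]):
        x = a + 1/x
    print(float(x), math.e)   # 2.718281828459045  2.718281828459045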
e is called Euler's number (not to be confused with Euler's constant γ).
e was first proved transcendental by Charles Hermite (1822-1901) in 1873.
Every electrical engineer knows that the time constant of a first-order linear filter is the time it takes to reach 63.2% of a sudden level change.
For example, to measure a capacitor C with an oscilloscope, use a known resistor R and feed a square wave to the input of the basic first-order filter formed by R and C. Assuming the period of the wave is much larger than RC, the value of RC is equal to the time it takes the output to change by 63.2% of the peak-to-peak amplitude on every transition.
The "rise-time" which can be given automatically by modern oscillosopes is defined as the time it takes a signal to rise from 10% to 90% of its peak-to-peak amplitude. It's good to know that the RC time constant is about 45.5% of that for the above signal (it sure beats messing around with cursors just to measure a capacitor). For example, the rise time of a sinewave is 29.52% of its period (the reader may want to check that the exact number is asin(0.8)/p).
Proof: If the time constant of a first-order lowpass filter is taken as the unit of time, then its response to a unit step will be 1 − exp(−t) at time t. That's 10% at time ln(10/9) and 90% at time ln(10). The rise time is the interval between those two times, namely ln(9), or nearly 2.2. The reciprocal of that is about 45.512%. More precisely:
"10-90 Rise Time" = ln(9) RC = (2.1972245773362...) RC
The number 63.212...% is also famously known as the probability that a permutation of many elements will have at least one fixed point (i.e., an element equal to its image). Technically, it's only the limit of that as the number of elements tends to infinity. However, the convergence is so rapid that the difference is negligible. The exact probability for n elements is:
1/1! − 1/2! + 1/3! − 1/4! + 1/5! − 1/6! + ... − (−1)ⁿ/n!
With n = 10, for example, this is 28319 / 44800 = 0.63212053571428... (which approximates the limit to a precision smaller than 37 ppb).
A random self-mapping (not necessarily bijective) of a set of n points will have at least one fixed point with a probability that tends slowly to that same limit when n tends to infinity. The exact probability is:
1 − (1 − 1/n)ⁿ
For n = 10, this is 0.6513215599, which is 1.92% more than the limit.
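Both exact formulas above are easy to evaluate; a quick check in Python:

    from math import factorial, e
    n = 10
    # Permutations: probability of at least one fixed point.
    print(sum((-1)**(k+1) / factorial(k) for k in range(1, n+1)))  # 0.63212053571...
    # Arbitrary self-mappings: same question.
    print(1 - (1 - 1/n)**n)                                        # 0.6513215599
    print(1 - 1/e)                                                 # 0.63212055882... (common limit)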
Bandwidth of a signal from its rise-time, by Eric Bogatin (2013-11-09) | Wikipedia: Rise time
Both expressions come from the Mercator series: ln 2 = −H(−1) = H(½)
where H(x) = x + x²/2 + x³/3 + x⁴/4 + ... + xⁿ/n + ... = −ln(1−x)
The first few decimals of this pervasive constant are worth memorizing!
The Logarithmic Constant: Log 2 by Xavier Gourdon and Pascal Sebah.
When many actual computations used decimal logarithms, every engineer memorized the 5-digit value (0.30103) and trusted it to 8-digit precision.
If decibels (dB) are used, a power factor of 2 thus corresponds to 3 dB or, more precisely, 3.0103 dB.
To a filter designer, the attenuation of a first-order filter is quoted as 6 dB per octave which means that amplitudes change by a factor of 2 when frequencies change by an octave (which is a factor of 2 in frequency). A second-order low-pass filter would have an ultimate slope of 12 dB per octave, etc.
The Euler-Mascheroni constant is named after Leonhard Euler (1707-1783) and Lorenzo Mascheroni (1750-1800). It's also known as Euler's constant (as opposed to Euler's number e). It's arguably best defined as a slope in the Gamma function:
γ = −Γ′(1)
That's also the limit, as n tends to infinity, of Hₙ − ln(n), where Hₙ = 1 + 1/2 + ... + 1/n is the n-th harmonic number. That sum can be recast as the partial sum of a convergent series, by introducing telescoping terms. The general term of that series (for n ≥ 2) is:
1/n − ln(n) + ln(n−1) = 1/n + ln(1 − 1/n) = − Σp≥2 1/(p nᵖ)
Therefore, since the terms of an absolutely convergent series can be reordered:
1 − γ = Σn≥2 Σp≥2 1/(p nᵖ) = Σp≥2 Σn≥2 1/(p nᵖ)
Therefore, using the zeta function:
1 − γ = Σp≥2 (ζ(p) − 1) / p
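Since ζ(p) − 1 shrinks roughly like 2⁻ᵖ, that last series converges geometrically and gives an easy way to compute γ; a sketch using the mpmath library (assumed available):

    from mpmath import mp, zeta, euler
    mp.dps = 30
    print(1 - sum((zeta(p) - 1)/p for p in range(2, 120)))
    print(+euler)   # mpmath's built-in Euler-Mascheroni constant, for comparison
    # both print 0.577215664901532860606512090082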
The constant γ was calculated to 16 digits by Euler in 1781. The symbol γ is due to Mascheroni, who gave 32 digits in 1790 (his other claim to fame is the Mohr-Mascheroni theorem). Only the first 19 of Mascheroni's digits were correct. The mistake was only spotted in 1809 by Johann von Soldner (the eponym of another constant) who obtained 24 correct decimals...
In 1878, the thing was worked out to 263 decimal places by the astronomer John Couch Adams (1819-1892) who had almost discovered Neptune as a young man (in 1846).
In 1962, gamma was computed electronically to 1271 digits by D.E. Knuth, then to 3566 digits by Dura W. Sweeney (1922-1999) with a new approach.
7000 digits were obtained in 1974 (W.A. Beyer & M.S. Waterman) and 20 000 digits in 1977 (by R.P. Brent, using Sweeney's method). Teaming up with Edwin McMillan (1907-1991; Nobel 1951) Brent would produce more than 30 000 digits in 1980.
Alexander J. Yee, a 19-year old freshman at Northwestern University, made UPI news (on 2007-04-09) for his computation of 116 580 041 decimal places in 38½ hours on a laptop computer, in December 2006. Reportedly, this broke a previous record of 108 million digits, set in 47 hours and 36 minutes of computation (from September 23 to 26, 1999) by the Frenchmen Xavier Gourdon (X1989) and Patrick Demichel.
Unbeknownst to Alex Yee and the record books (kept by Gourdon and Sebah), that record had been shattered earlier (with 2 billion digits) by Shigeru Kondo and Steve Pagliarulo. Competing against that team, Alexander J. Yee and Raymond Chan have since computed about 30 billion digits of γ (and also of Log 2) as of 2009-03-13. Kondo and Yee then collaborated to produce 1 trillion digits of √2 in 2010. Later that year, they computed 5 trillion digits of π, breaking the previous record of 2.7 trillion digits of π (2009-12-31) held by the Frenchman Fabrice Bellard (X1993, born in 1972).
Everybody's guess is that γ is transcendental, but this constant has not even been proven irrational yet...
Charles de la Vallée-Poussin (1866-1962), made a baron in 1928, is best known for having given an independent proof of the Prime Number Theorem in 1896, at the same time as Jacques Hadamard (1865-1963). In 1898, he investigated the average fraction by which the quotient of a positive integer n by a lesser prime falls short of an integer. Vallée-Poussin proved that this tends to γ for large values of n (and not to ½, as might have been guessed).
The Euler constant: γ by Xavier Gourdon and Pascal Sebah (2004)
The mystery of 0.577 (10:02) by Tony Padilla (Numberphile, 2016-10-05).
G = β(2) = 1 − 1/9 + 1/25 − 1/49 + ... + (−1)ⁿ/(2n+1)² + ...
This is named after Eugène Catalan (1814-1894; X1833).
Catalan's name has also been given to the Catalan solids (the duals of the Archimedean solids) and the famous integer sequence of Catalan numbers.
What caused the admiration of Alf van der Poorten is the proof of the irrationality of ζ(3) by the French mathematician Roger Apéry (1916-1994) in 1977. That proof is based on an equation featuring a rapidly-converging series:
ζ(3) = (5/2) Σn≥1 (−1)ⁿ⁻¹ / (n³ C(2n,n))
The reciprocal of Apéry's constant 1/ζ(3) is equally important: (A088453)
1/ζ(3) = 0.831907372580707468683126278821530734417...
It is the density of cubefree integers (see A160112) and the probability that three random integers are relatively prime. That constant also appears in the expression of the average energy of a thermal photon.
Average Energy of a Thermal Photon | Experimental Mathematics and Integer Relations
Apéry's constant by Jacob Krol (student project, Fall 2016).
Many people who should know better (including brilliant physicists like Steven Weinberg or Leonard Susskind) have not been able to resist the temptation of "defining" i as √(−1) to avoid a more proper introduction.
Such a shortcut must be avoided unless one is prepared to give up the most trusted properties of the square root function, including:
√(xy) = √x √y
If you are not convinced that the square root function (and its familiar symbol) should be strictly limited to nonnegative real numbers, just consider what the above relation would mean with x = y = −1.
Neither of the two complex numbers (i and -i) whose square is -1 can be described as the "square root of -1". The square root function cannot be defined as a continuous function over the domain of complex numbers. Continuity can be rescued if the domain of the function is changed to a strange beast consisting of two properly connected copies (Riemann sheets) of the complex plane sharing the same origin. Such considerations do not belong in an introduction to complex numbers. Neither does the deceptive square-root symbol (Ö).
The cube root of 2 is much less commonly encountered than its square root (1.414...). There's little need to remember that it's roughly equal to 1.26 but it can be useful (e.g., a 5/8" steel ball weighs almost twice as much as a 1/2" one).
The fact that this quantity cannot be constructed "classically" (i.e., with ruler and compass alone) shows that there's no "classical" solution to the so-called Delian problem whereby the Athenians were asked by the Oracle of Apollo at Delos to resize the altar of Apollo to make it "twice as large".
The Delian constant has also grown to be a favorite example of an algebraic number of degree 3 (arguably, it's the simplest such number). Thus, its continued fraction expansion (CFE) has been under considerable scrutiny... There does not seem to be anything special about it, but the question remains theoretically open whether it's truly normal or not (by contrast, the CFE of any algebraic number of degree 2 is periodic ).
In Western music theory, the chromatic octave (the interval which doubles the frequency of a tone) is subdivided into 12 equal intervals (semitones). An interval of four semitones is known as a major third, and three consecutive major thirds add up to an octave (a doubling of the frequency). Thus, in equal temperament, the Delian constant (1.259921...) is the frequency ratio corresponding to a major third.
A Delian brick is a cuboid with sides proportional to 1, 2^(1/3) and 2^(2/3).
That term was coined by Ed Pegg on 2018-06-19. A planar cut across the middle of its longest side splits a Delian brick into two Delian bricks.
That's the 3-D equivalent of a √2 aspect ratio for rectangles, on which is based the common A-series of paper sizes (as are the B-series, used for some playing cards, and the C-series for envelopes).
The Delian Brick and other 3D self-similar dissections by Ed Pegg (2018-07-03)
On May 30, 1799, Carl Friedrich Gauss observed that the reciprocal of the arithmetic-geometric mean between 1 and √2 equals the ratio of the lemniscate constant to π (a relation he first verified numerically, before proving it).
It's also the reciprocal of the Wallis integral of order ½: I½ = 1/G.
The continued fraction expansion (CFE) of G is: (A053002)
G = [ 0; 5, 21, 3, 4, 14, 1, 1, 1, 1, 1, 3, 1, 15, 1, 3, 8, 36, 1, 2, 5, 2, 1, 1 ... ]
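The quadratically convergent AGM iteration makes this constant very easy to compute; a minimal sketch:

    from math import sqrt
    a, b = 1.0, sqrt(2.0)
    for _ in range(5):               # the AGM converges quadratically
        a, b = (a + b)/2, sqrt(a*b)
    print(1/a)                       # 0.8346268416740731... (Gauss's constant)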
The symbol G is also used for Catalan's constant which is best denoted β(2) whenever there is any risk of confusion.
Wikipedia : Gauss's constant
This is equal to the first zero of the J₁ Bessel function divided by π. It is commonly approximated as 1.22 or 1.220.
This coefficient appears in the formula which gives the limit θ of the angular resolution of a perfect lens of diameter D for light of wavelength λ:
θ = 1.220 λ / D
This precise coefficient is arrived at theoretically by using Rayleigh's criterion which states that two points of light (e.g., distant stars) can't be distinguished if their angular separation is less than the diameter of their Airy disks (the diameter of the first dark circle in the interference pattern described theoretically by George Airy in 1835).
The precise value of the factor to use is ultimately a matter of convention about what constitutes optical distinguishability. The theoretical criterion on which the above formula is based was originally proposed by Rayleigh for sources of equal magnitudes. It has proved more appealing than all other considerations, including the empirical Dawes' limit, which ignores the relevance of wavelength. Dawes' limit would correspond to a coefficient of about 1.1 at a wavelength of 507 nm (most relevant to the scotopic astronomical observations used by Dawes).
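For instance, a sketch of the formula in use (all numbers are hypothetical examples):

    from math import pi
    wavelength = 550e-9                    # m (green light)
    D = 0.100                              # m (a 100 mm objective)
    theta = 1.220 * wavelength / D         # radians
    print(theta)                           # ~6.71e-06 rad
    print(theta * (180/pi) * 3600)         # about 1.38 arcseconds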
Note that the digital deconvolution of images allows finer resolutions than what the above classical formula implies.
This is often called Mertens' constant in honor of the number theorist Franz Mertens (1840-1927). It is to the sequence of primes what Euler's constant is to the sequence of integers. It's sometimes also called Kronecker's constant or the Reciprocal Prime Constant.
Proposals have been made to name this constant after Charles de la Vallée-Poussin (1866-1962) and/or Jacques Hadamard (1865-1963), the two mathematicians who first proved (independently) the Prime Number Theorem, in 1896.
For any prime p besides 2 and 5, the decimal expansion of 1/p has a period at most equal to p-1 (since only this many different nonzero "remainders" can possibly show up in the long division process). Primes yielding this maximal period are called long primes [to base ten] by recreational mathematicians and others. The number 10 is a primitive root modulo such a prime p, which is to say that the first p-1 powers of 10 are distinct modulo p (the cycle then repeats, by Fermat's little theorem). Putting a = 10, this is equivalent to the condition:
a^((p−1)/d) ≠ 1 (modulo p) for any prime factor d of (p−1).
For a given prime p, there are φ(p−1) satisfactory values of a (modulo p), where φ is Euler's totient function. Conversely, for a given integer a, we may investigate the set of long primes to base a...
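The defining condition above translates directly into code; a sketch using sympy (assumed available) to list the long primes below 100:

    from sympy import isprime, primefactors

    def is_long_prime(p, a=10):
        # p is a long prime to base a when a is a primitive root modulo p.
        if not isprime(p) or a % p == 0:
            return False
        return all(pow(a, (p - 1)//d, p) != 1 for d in primefactors(p - 1))

    print([p for p in range(3, 100) if is_long_prime(p)])
    # [7, 17, 19, 23, 29, 47, 59, 61, 97]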
It seems that the proportion C(a) of such primes (among all prime numbers) is equal to the above numerical constant C, for many values of a (including negative ones) and that it's always a rational multiple of C. The precise conjecture tabulated below originated with Emil Artin (1898-1962) who communicated it to Helmut Hasse in September 1927.
Neither -1 nor a quadratic residue can be a primitive root modulo p > 3. Hence, the table's first row is as stated.
(*) In the above, sf(a) is the squarefree part of a, namely the integer of least magnitude which makes the product a·sf(a) a square. The squarefree part of a negative integer is the opposite of the squarefree part of its absolute value.
The conjecture can be deduced from its special case about prime values of a, which states the density is C unless a is 1 modulo 4, in which case it's equal to:
[ (a² − a) / (a² − a − 1) ] C
In 1984, Rajiv Gupta and M. Ram Murty showed Artin's conjecture to be true for infinitely many values of a. In 1986, David Rodney ("Roger") Heath-Brown proved nonconstructively that there are at most 2 primes for which it fails... Yet, we don't know about any single value of a for which the result is certain!
This number is named after Johann von Soldner (1766-1833) and Srinivasa Ramanujan (1887-1920). It's also called Soldner's constant.
μ is the only positive root of the logarithmic integral function "li" (which shouldn't be confused with the older capitalized offset logarithmic integral "Li", still used by number theorists when x is large: Li x = li x − li 2).
li 2 = 1.0451637801174927848445888891946131365226155781512...
The above integrals must be understood as Cauchy principal values whenever the singularity at t = 1 is in the interval of integration...
This last caveat fully applies to Li when x isn't known to be large. The ad-hoc definition of Li was made by Euler (1707-1783) well before Cauchy (1789-1857) gave a proper definition for the principal value of an integral.
Nowadays, there would be no reason to use the Eulerian logarithmic integral (capitalized Li) except for compatibility with the tradition that some number theorists have kept to this day. Even in the realm of number theory, I advocate the use of the ordinary logarithmic integral (lowercase li) possibly with the second definition given above (where the Soldner constant 1.451... is the lower bound of integration). That second definition avoids bickering about principal values when the argument is greater than one (the domain used by number theorists) although students may wonder at first about the origin of the "magical" constant. Wonderment is a good thing.
The function li is also called the integral logarithm (French: logarithme intégral).
Asymptotically, the density of integers below x expressible as the sum of two squares is inversely proportional to the square root of the natural logarithm of x. The coefficient of proportionality is, by definition, the Landau-Ramanujan constant.
Ramanujan expressed as an integral the constant so defined by Landau.
Edmund Landau (1877-1938) | Srinivasa Ramanujan (1887-1920) | A064533 | Wikipedia | MathWorld
It's the solution of the equation x = e⁻ˣ or, equivalently, x = ln(1/x). In other words, it's the value at point 1 of Lambert's W function.
The value of that constant could be obtained by iterating the function e⁻ˣ, but the convergence is very slow. It's much better to iterate the function:
f(x) = (1+x) / (1+eˣ)
This has the same fixed point but features a zero derivative there, so that the convergence is quadratic (the number of correct digits is roughly doubled with each iteration). This fast approach is an example of Newton's method.
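A minimal sketch of that iteration in Python:

    from math import exp
    x = 0.5
    for _ in range(6):
        x = (1 + x) / (1 + exp(x))   # one Newton step for x = exp(-x)
        print(x)
    # converges to 0.5671432904097838... (the omega constant, W(1))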
What's known as the [first] Feigenbaum constant is the "bifurcation velocity" (δ) which governs the geometric onset of chaos via period-doubling in iterative sequences (with respect to some parameter which is used linearly in each iteration, to damp a given function having a quadratic maximum). This universal constant was unearthed in October 1975 by Mitchell J. Feigenbaum (1944-2019). The related "reduction parameter" (α) is the second Feigenbaum constant...
Feigenbaum Constant by Eric W. Weisstein (MathWorld) | Mathematical Constants by Steven R. Finch.
4.669... The Feigenbaum constant (18:54) by Ben Sparks (Numberphile, 2017-01-16).
This equation will change how you see the world (18:38) by Derek Muller (Veritasium, 2020-01-29).
Historically, this constant (the Fransén-Robinson constant, the integral of 1/Γ(x) over the positive reals, or 2.8077702420...) was successively computed to ever greater precision...
A058655 (decimals) | A046943 (continued fraction) | Wikipedia | Weisstein | Gamma function
Difference between e and the Fransén-Robinson constant by Jack D'Aurizio (MathStackExchange, 2016-05-27).
Fransén-Robinson Adventure (12:39, 15:28, 26:37) by Jens Fehlau (Flammable Maths, July 2019).
Edmund Landau (1877-1938) | Bloch's theorem | André Bloch (1893-1948)
Consider the space S of all schlicht functions (holomorphic injections on the open disk D of radius 1 centered on 0). Bloch's theorem states that the largest disk contained in f(D) has a radius which is no less than a certain universal positive constant B. Bloch's constant is defined as the largest value of B for which that theorem holds. When André Bloch (1893-1948) originally stated his theorem, he merely proved that B is no less than 1/72. It's conjectured that the upper bound of Ahlfors and Grunsky (1937), which is expressed in terms of the Gamma function, is actually the true value of Bloch's constant, but this hasn't been proved yet.
Encyclopedia of Mathematics
Les théorèmes de M. Valiron sur les fonctions entières et la théorie de l'uniformisation by André Bloch (1925).
Ueber die Blochsche Konstante by L.V. Ahlfors and H. Grunsky, Math. Z., 42, 671-673 (1937).
The Bloch constant of bounded harmonic mappings by Flavia Colonna (MathStackExchange, 2016-05-27).
Because i is irrational but not transcendental (it's algebraic), the Gelfond-Schneider theorem implies that Gelfond's constant, e^π = (−1)^(−i) = 23.140692632..., is transcendental.
Wikipedia : Gelfond's constant | Gelfond-Schneider theorem (1934) | Alexander Gelfond (1906-1968)
This constant is named after the Norwegian mathematician who proved the sum to be convergent, in 1919: Viggo Brun (1885-1978).
The scientific notation used above and throughout Numericana indicates a numerical uncertainty by giving an estimate of the standard deviation (σ). This estimate is shown between parentheses to the right of the least significant digit (expressed in units of that digit). The magnitude of the error is thus stated to be less than this with a probability of 68.27% or so.
Thomas R. Nicely, professor of mathematics at Lynchburg College, started his computation of Brun's constant in 1993. He made headlines in the process, by uncovering a flaw in the Pentium microprocessor's arithmetic, which ultimately forced a costly (475ドルM) worldwide recall by Intel.
Nicely kept updating his estimate of Brun's constant for a few years until 2010 or so, at which point he was basing his computation on the exact number of twin primes found below 1.6×10¹⁵. Because he felt a general audience could not be expected to be familiar with the aforementioned standard way scientists report uncertainties, Nicely chose to report the so-called 99% confidence level, which is three times as big. (More precisely, ±3σ is a 99.73% confidence level.) As a three-digit precision on the uncertainty is usually considered overkill, most scientists would advertise his result as:
1.90216058321(26)
Brun's theorem | A065421 (Decimal expansion) | A065421 (Continued fraction)
1/1 + 1/1 + 1/2 + 1/3 + 1/5 + 1/8 + 1/13 + 1/21 + 1/34 + 1/55 + 1/89 + ...
The sum of the reciprocals of the Fibonacci numbers was proved irrational by Marc Prévost, in the wake of Roger Apéry's celebrated proof of the irrationality of ζ(3), which has been known as Apéry's constant ever since.
The attribution to Prévost was reported by François Apéry (son of Roger Apéry) in 1996: See The Mathematical Intelligencer, vol. 18 #2, pp. 54-61: Roger Apéry, 1916-1994: A Radical Mathematician available online (look for "Prevost", halfway down the page).
The question of the irrationality of the sum of the reciprocals of the Fibonacci numbers was formally raised by Paul Erdős and may still be erroneously listed as open, despite the proof of Marc Prévost (Université du Littoral Côte d'Opale).
A 1986 conjecture of Jerrold W. Grossman (which was proved in 1987 by Janssen & Tjaden) states that the recurrence a₁ = 1, a₂ = x, aₙ₊₂ = aₙ₊₁ / (1 + aₙ) defines a convergent sequence for only one value of x, which is now called Grossman's Constant.
Similarly, there's another constant, first investigated by Michael Somos in 2000, above which value of x the following quadratic recurrence diverges (below it, there's convergence to a limit that's less than 1): 0.39952466709679947- (where the terminal "7-" stands for something probably close to "655").
a₀ = 0 ; a₁ = x ; aₙ₊₂ = aₙ₊₁ (1 + aₙ₊₁ − aₙ)
Early releases from Michael Somos contained a typo in the above digits ("666" instead of "66") which Somos corrected when we pointed this out to him (2001-11-24). However, the typo still remained for several years (until 2004-04-13) in a MathSoft online article whose original author (Steven Finch) was no longer working at MathSoft at the time when a first round of notifications was sent out.
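A quick sketch of the quadratic recurrence in action, on either side of the critical value:

    def last_term(x, n=60):
        a, b = 0.0, x
        for _ in range(n):
            a, b = b, b * (1 + b - a)
            if b > 1e9:
                return float('inf')     # clearly divergent
        return b

    print(last_term(0.39))   # converges to a limit below 1
    print(last_term(0.45))   # diverges (inf)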
The attribution of this irrational constant to Ramanujan was made by Simon Plouffe, as a monument to a famous 1975 April fools column by Martin Gardner in Scientific American (Gardner wrote that this constant had been proved to be an integer, as "conjectured by Ramanujan" in 1914 [sic!] ).
Actually, this particular property of 163 was first noticed in 1859 by Charles Hermite (1822-1901). It doesn't appear in Ramanujan's relevant 1914 paper.
There are reasons why the expression exp(π√n) should be close to an integer for specific integral values of n, in particular when n is a large Heegner number (43, 67 and 163 are the largest Heegner numbers). The value n = 58, which Ramanujan did investigate in 1914, is also most interesting. Below are the first values of n for which exp(π√n) is less than 0.001 away from an integer:
25: 6635623.999341134233266+
37: 199148647.999978046551857-
43: 884736743.999777466034907-
58: 24591257751.999999822213241+
67: 147197952743.999998662454225-
74: 545518122089.999174678853550-
148: 39660184000219160.000966674358575+
163: 262537412640768743.999999999999250+
232: 604729957825300084759.999992171526856+
268: 21667237292024856735768.000292038842413-
522: 14871070263238043663567627879007.999848726482795-
652: 68925893036109279891085639286943768.000000000163739-
719: 3842614373539548891490294277805829192.999987249566012+
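Any of those entries is easy to check with arbitrary-precision arithmetic; a sketch with mpmath (assumed available):

    from mpmath import mp, exp, pi, sqrt
    mp.dps = 40
    for n in (58, 163):
        print(n, exp(pi * sqrt(n)))
    # 58  24591257751.99999982221324...
    # 163 262537412640768743.99999999999925...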
Kurt Heegner (1893-1965)
Ramanujan's Constant and its Cousins by Titus Piezas III (2005-01-14)
163 and Ramanujan's Number (11:29) by Alex Clark (Numberphile, 2012-03-02).
In 1960, Hillel Furstenberg and Harry Kesten showed that, for a certain class of random sequences, geometric growth was almost always obtained, although they did not offer any efficient way to compute the geometric ratio involved in each case. The work of Furstenberg and Kesten was used in the research that earned the 1977 Nobel Prize in Physics for Philip Anderson, Neville Mott, and John van Vleck. This had a variety of practical applications in many domains, including lasers, industrial glasses, and even copper spirals for birth control...
At UC Berkeley in 1999, Divakar Viswanath investigated the particular random sequences in which each term is either the sum or the difference of the two previous ones (a fair coin is flipped to decide whether to add or subtract). As stated by Furstenberg and Kesten, the absolute values of the numbers in almost all such sequences tend to have a geometric growth whose ratio is a constant. Viswanath was able to compute this particular constant to 8 decimals.
Currently, more than 14 significant digits are known (see A078416).
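The constant is easy to approach (slowly) by direct simulation; a crude Monte-Carlo sketch:

    import math, random
    random.seed(1)
    N = 100000
    a, b, shift = 1.0, 1.0, 0.0
    for _ in range(N):
        a, b = b, b + random.choice((-1.0, 1.0)) * a   # random sum or difference
        m = max(abs(a), abs(b))
        if m > 1e100:                # rescale to avoid overflow
            a, b, shift = a/m, b/m, shift + math.log(m)
    growth = math.exp((shift + math.log(max(abs(a), abs(b)))) / N)
    print(growth)                    # about 1.13 (Viswanath's 1.13198824...)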
Borel defined a normal number (to base ten) as a real number whose decimal expansion is completely random, in the sense that all sequences of digits of a prescribed length are equally likely to occur at a random position in the decimal expansion.
It is well-known that almost all real numbers are normal in that sense (which is to say that the set of the other real numbers is contained in a set of zero measure). Pi is conjectured to be normal but this is not known for sure.
It is actually surprisingly difficult to define explicitly a number that can be proven to be normal. So far, all such numbers have been defined in terms of a peculiar decimal expansion. The simplest of those is Champernowne's Constant whose decimal expansion is obtained by concatenating the digits of all the integers in sequence. This number was proved to be decimally normal in 1933, by David G. Champernowne (1912-2000) as an undergraduate.
0.1234567891011121314151617181920212223242526272829303132...
In 1935, Besicovitch showed that the concatenation of all squares is normal:
0.1491625364964811001211441691962252562893243614004414845...
Champernowne had conjectured (in 1933) that a normal number would also be formed by concatenating the digits of all the primes:
0.2357111317192329313741434753596167717379838997101103107...
In 1946, that conjecture was proved by Arthur H. Copeland (1898-1970) and Paul Erdős (1913-1996), and this last number was named after them.
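All three expansions are trivial to generate; a sketch (sympy assumed available, for the primes):

    from sympy import prime
    print("0." + "".join(str(n)        for n in range(1, 20)))   # Champernowne
    print("0." + "".join(str(n*n)      for n in range(1, 15)))   # squares (Besicovitch)
    print("0." + "".join(str(prime(n)) for n in range(1, 15)))   # Copeland-Erdos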
Note on Normal Numbers (1946) | Copeland-Erdös Number (Wikipedia) | Copeland-Erdös Constant
In April 2000, Kenneth Brecher (of Boston University) produced experimental evidence, at an unprecedented level of accuracy, which supports the main tenet of Einstein's Special Theory of Relativity, namely that the speed of light (c) does not depend on the speed of the source.
Brecher was able to claim a fabulous accuracy of less than one part in 10²⁰, improving the state-of-the-art by 10 orders of magnitude! Brecher's conclusions were based on the study of the sharpness of gamma ray bursts (GRB) received from very distant sources: In such explosive events, gamma rays are emitted from points of very different [vectorial] velocities. Even minute differences in the speeds of these photons would translate into significantly different times of arrival, after traveling over immense cosmological distances. As no such spread is observed, a careful analysis of the data translates into the fabulous experimental accuracy quoted above in support of Einstein's theoretical hypothesis.
Because a test that aims at confirming SR must necessarily be evaluated in the context of theories incompatible with SR, there will always be room for fringe scientists to remain unconvinced by Brecher's arguments (e.g., Robert S. Fritzius, 2002).
When he announced his results at the April 2000 APS meeting in Long Beach (CA), Brecher declared that the constant c appears "even more fundamental than light itself" and he urged his colleagues to give it a proper name and start calling it Einstein's constant. The proposal was well received and has only been gaining momentum ever since, to the point that the "new" name seems now fairly well accepted.
Since 1983, the constant c has been used to define the meter in terms of the second, by enacting as exact the above value of 299792458 m/s.
Historically, "c" was used for a constant which later came to be identified as the speed of electromagnetic propagation multiplied by the square root of 2 (this would be cÖ2, in modern terms). This constant appeared in Weber's force law and was thus known as "Weber's constant" for a while.
On at least one occasion, in 1873, James Clerk Maxwell (who normally used "V" to denote the speed of light) adjusted the meaning of "c" to let it denote the speed of electromagnetic waves instead.
In 1894, Paul Drude (1863-1906) made this explicit and was instrumental in popularizing "c" as the preferred notation for the speed of electromagnetic propagation. However, Drude still kept using the symbol "V" for the speed of light in an optical context, because the identification of light with electromagnetic waves was not yet common knowledge: Electromagnetic waves had first been observed in 1888, by Heinrich Hertz (1857-1894). Einstein himself used "V" for the speed of light and/or electromagnetic waves as late as 1907.
c may also be called the celerity of light: [Phase] celerity and [group] speed are normally two different things, but they coincide for light in a vacuum.
For more details, see:
Why is c the symbol for the speed of light? by Philip Gibbs
History of the speed of light (16:32) by Paul Looyen (High School Physics Explained, 2019-04-19).
About the speed of light (24:25, 11:48) by Rebecca Smethurst (Dec. 2019).
Why No One Has Measured The Speed Of Light (19:04) by Derek Muller (Veritasium, 2020-10-31).
The relation ε₀μ₀c² = 1 and the exact value of c yield an exact SI value, with a finite decimal expansion, for Coulomb's constant (in Coulomb's law):
1/(4πε₀) = 10⁻⁷ c² = 8987551787.3681764 N·m²/C² (exactly)
Consequently, the electric constant (dielectric permittivity of the vacuum) has a known infinite decimal expansion, derived from the above:
ε₀ = 8.85418781762038985053656303171... × 10⁻¹² F/m
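Both values follow from elementary arithmetic on the (pre-2019) defining constants; a sketch:

    from math import pi
    c    = 299792458            # m/s (exact)
    mu0  = 4*pi*1e-7            # H/m (exact, by the old definition of the ampere)
    k    = 1e-7 * c**2          # Coulomb's constant = mu0 c^2 / (4 pi)
    eps0 = 1 / (mu0 * c**2)     # electric constant
    print(k)                    # 8987551787.368176   (exactly 8987551787.3681764)
    print(eps0)                 # 8.854187817620389e-12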
A photon of frequency ν has an energy hν, where h is Planck's constant. Using the pulsatance ω = 2πν, this is ħω, where ħ = h/2π is Dirac's constant, also known as the reduced Planck constant.
The constant ħ is pronounced either "h-bar" or (more rarely) "h-cross". It is equal to unity in the natural system of units of theoreticians (in which h is 2π). The spins of all particles are multiples of ħ/2 = h/4π (an even multiple for bosons, an odd multiple for fermions).
There's a widespread belief that the letter h initially meant Hilfsgrösse ("auxiliary parameter" or, literally, "helpful quantity" in German) because that's the neutral way Max Planck (1858-1947) introduced it, in 1900.
As noted at the outset, the actual numerical value of Planck's constant depends on the units used. This, in turn, depends on whether we choose to express the rate of change of a periodic phenomenon directly as the change with time of its phase expressed in angular units (pulsatance) or as the number of cycles per unit of time (frequency). The latter can be seen as a special case of the former when the angular unit of choice is a complete revolution (i.e., a "cycle" or "turn" of 2π radians).
A key symptom that angular units ought to be involved in the measurement of spin is that the sign of a spin depends on the conventional orientation of space (it's an axial quantity).
Likewise, angular momentum and the dynamic quantity which induces a change in it (torque) are axial properties normally obtained as the cross-product of two radial vectors. One good way to stress this fact is to express torque in Joules per radian (J/rad) when obtained as the cross-product of a distance in meters (m) and a force in newtons (N).
1 N·m = 1 J/rad = 2π J/cycle = 2π W/Hz = (π/30) W/rpm
Note that torque and spectral power have the same physical dimension.
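A quick numerical sanity check of that conversion chain (a minimal sketch):

    from math import pi
    torque = 1.0                 # N.m, i.e. J/rad
    print(2*pi * torque)         # J/cycle, or W/Hz : 6.283...
    print(2*pi/60 * torque)      # W/rpm : 0.10471975... (= pi/30)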
Current technology of the watt balance (which compares an electromagnetic force with a weight) is almost able to measure Planck's constant with the same precision as the best comparisons with the International prototype of the kilogram, the only SI unit still defined in terms of an arbitrary artifact. It is thus likely that Planck's constant could be given a de jure value in the near future, which would amount to a new definition of the SI unit of mass.
Resolution 7 of the 21st CGPM (October 1999) recommends "that national laboratories continue their efforts to refine experiments that link the unit of mass to fundamental or atomic constants with a view to a future redefinition of the kilogram". Although precise determinations of Avogadro's constant were mentioned in the discussion leading up to that resolution, the watt balance approach was considered more promising. It's also more satisfying to define the kilogram in terms of the fundamental Planck constant, rather than make it equivalent to a certain number of atoms in a silicon crystal. (Incidentally, the mass of N identical atoms in a crystal is slightly less than N times the mass of an isolated atom, because of the negative energy of interaction involved.)
In 1999, Peter J. Mohr and Barry N. Taylor proposed to define the kilogram in terms of an equivalent frequency ν = 1.35639274×10⁵⁰ Hz, which would make h equal to c²/ν, or 6.626068927033756019661385...×10⁻³⁴ J/Hz.
Instead, it would probably be better to assign h or [rather] h/2π a rounded decimal value de jure. This would make the future definition of the kilogram somewhat less straightforward, but would facilitate actual usage when the utmost precision is called for. To best fit the "kilogram frequency" proposed by Mohr and Taylor, the de jure value of ħ would have been:
1.054571623×10⁻³⁴ J·s/rad
However, a mistake which was corrected with the 2010 CODATA set makes that value substantially incompatible with our best experimental knowledge. Currently (2011) the simplest candidate for a de jure definition is:
ħ = 1.0545717×10⁻³⁴ J·s/rad
Note: " ħ " is how your browser displays UNICODE's "h-bar" (ħ).
The instrument which will perform the defining measurement is the Watt Balance invented in 1975 by Bryan Kibble (1938-2016). In 2016, the metrology community decided to rename the instrument a Kibble balance, in his honor (in a unanimous decision by the CCU = Consultative Committee for Units).
The Watt-Balance & Redefining the Kilogram (9:49) by Bryan Kibble, Tony Hartland & Ian Robinson (2013).
Planck's Constant and the Origin of Quantum Mechanics (15:15) by Matt O'Dowd (2016-06-22).
How we're Redefining the Kilogram (9:49) by Derek Muller (2017-07-12).
Named after Ludwig Boltzmann (1844-1906), the constant k = R/N is the ratio of the ideal gas constant (R) to Avogadro's number (N).
Boltzmann's constant is currently a measured quantity. However, it would be sensible to assign it a de jure value that would serve as an improved definition of the unit of thermodynamic temperature, the kelvin (K) which is currently defined in terms of the temperature of the triple point of water (i.e., 273.16 K = 0.01°C, both expressions being exact by definition ).
What's now known as Boltzmann's relation was first formulated by Boltzmann in 1877. It gives the entropy S of a system known to be in one of W equiprobable states. Following Abraham Pais, Eric W. Weisstein reports that Max Planck first used the constant k in 1900.
S = k ln(W)
The constant k became known as Boltzmann's constant around 1911 (Boltzmann had died in 1906) under the influence of Planck. Before that time, Lorentz and others had named the constant after Planck!
Philosophy of Statistical Mechanics by Lawrence Sklar (2001)
The constant is named after the Italian physicist Amedeo Avogadro (1776-1856) who formulated what is now known as Avogadro's Law, namely:
At the same temperature and [low] pressure, equal volumes of different gases contain the same number of molecules.
The current definition of the mole states that there are as many countable things in a mole as there are atoms in 12 grams of carbon-12 (the most common isotope of carbon).
Keeping this definition and giving a de jure value to the Avogadro number would effectively constitute a definition of the unit of mass. Rather, the above definition could be dropped, so that a de jure value given to Avogadro's number would constitute a proper definition of the mole, which would then be only approximately equal to 12 g of carbon-12 (or 27.97697027(23) g of silicon-28).
In spite of the sheer beauty of those isotopically-enriched single-crystal polished silicon spheres manufactured for the International Avogadro Coordination (IAC), it would certainly be much better for many generations of physicists yet to come to let a de jure value of Planck's constant define the future kilogram... (The watt-balance approach is more rational but less politically appealing, or so it seems.)
The frequency of 540 THz (5.4×10¹⁴ Hz) corresponds to yellowish-green light. This translates into a wavelength of about 555.1712185 nm in a vacuum, or about 555.013 nm in the air, which is usually quoted as 555 nm.
This frequency, sometimes dubbed "the most visible light", was chosen as a basis for luminous units because it corresponds to a maximal combined sensitivity for the cones of the human retina (the receptors which allow normal color vision under bright-light photopic conditions).
The situation is quite different under low-light scotopic conditions, where human vision is essentially black-and-white (due to rods not cones ) with a peak response around a wavelength of 507 nm.
Brightness by Rod Nave | The Power of Light | Luminosity Function
Assuming the above evolutions [ 1, 2, 3 ] come to pass, the SI scheme would define every unit in terms of de jure values of fundamental constants, using only one arbitrary definition for the unit of time (the second). There would be no need for that remaining arbitrary definition if the Newtonian constant of gravitation (the remaining fundamental constant) was given a de jure value.
There's no hope of ever measuring the constant of gravitation directly with enough precision to allow a metrological definition of the unit of time (the SI second) based on such a measurement.
However, if our mathematical understanding of the physical world progresses well beyond its current state, we may eventually be able to find a theoretical expression for the mass of the electron in terms of G. This would equate the determination of G to a measurement of the mass of the electron. Possibly, that could be done with the required metrological precision...
Here are a few physical constants of significant metrological importance, with the most precisely known ones listed first. For the utmost in precision, this is roughly the order in which they should be either measured or computed.
One exception is the magnetic moment of the electron expressed in Bohr magnetons: 1.00115965218076(27). That number is a difficult-to-compute function of the fine structure constant (α) which is actually known with a far lesser relative precision. However, that "low" precision pertains to a small corrective term away from unity and the overall precision is much better.
The list starts with numbers that are known exactly (no uncertainty whatsoever) simply because of the way SI units are currently defined. Such exact numbers include the speed of light (c) in meters per second (cf. SI definition of the meter) or the vacuum permeability (m0 ) in henries per meter (or, equivalently, newtons per squared ampère, see SI definition of the ampere).
In this table, an equation between square brackets denotes a definition of an experimental quantity in terms of fundamental constants known with a lesser precision. On the other hand, unbracketed equations normally yield not only the value of the quantity but also the uncertainty on it (from the uncertainties on products or ratios of the constants involved).
Recall that the worst-case uncertainty on a product of independent factors is very nearly the sum of the uncertainties on those factors. So is the uncertainty on a product of positive factors that are increasing functions of each other (e.g., the relative uncertainties on a square and a cube are respectively two and three times larger than the relative uncertainty on the number itself). The reader may want to use such considerations to establish that the uncertainties on the Bohr radius, the Compton wavelength and the "classical radius of the electron" are respectively proportional to 1, 2 and 3. (HINT: The uncertainty on the fine-structure constant is much larger than the uncertainty on Rydberg's constant.) Another good exercise is to use the tabulated formula to compute Stefan's constant and the uncertainty on it.
Except as noted, all values are derived from CODATA 2010.
Carl Sagan once needed an "obvious" universal length as a basic unit in a graphic message intended for [admittedly very unlikely] extra-terrestrial decoders. That famous picture was attached to the two space probes (Pioneer 10 and 11, launched in 1972 and 1973) which would become the first man-made objects ever to leave the Solar System.
Sagan chose one of the most prevalent lengths in the Cosmos, namely the wavelength of 21 cm corresponding to the hyperfine spin-flip transition of neutral hydrogen (isolated hydrogen atoms do pervade the Universe).
Hydrogen line: 1420.4057517667(9) MHz = 21.106114054179(13) cm
Back in 1970, the value of the hyperfine "spin-flip" transition frequency of the ground state of atomic hydrogen (protium) had already been measured with superb precision by Hellwig et al. :
1420.405751768(2) MHz.
This was based on a direct comparison with the hyperfine frequency of cesium-133, carried out at NBS (now NIST). In 1971, Essen et al pushed the frontiers of precision to a level that has not been equaled since then. Their results stood for nearly 40 years as the most precise measurement ever performed (the value of the magnetic moment of the electron expressed in Bohr magnetons is now known with slightly better precision).
1420.4057517667(9) MHz
Three years earlier (in 1967) a new definition of the SI second had been adopted based on cesium-133, for technological convenience. Now, the world is almost ripe for a new definition of the unit of time based on hydrogen, the simplest element. Such a new definition might have much better prospects of being ultimately tied to the theoretical constants of Physics in the future.
A similar hyperfine "spin-flip" transition is observed for the ³He⁺ ion, which is another system consisting of a single electron orbiting a fermion. Like the proton, the helion has a spin of ½ in its ground state (unlike the proton, it also exists in a rare excited state of spin 3/2). The corresponding frequency was measured to be about 8665.65 MHz.
A very common microscopic yardstick is the equilibrium bond length in a hydrogen molecule (i.e., the average distance between the two protons in an ordinary molecule of hydrogen). It is not yet tied to the above fundamental constants and it's only known at modest experimental precision:
0.7414 Å = 7.414×10⁻¹¹ m
CODATA recommended values for the physical constants: 2010 (2012-03-15)
CODATA recommended values for the physical constants: 2006 (2008-06-06) by Peter J. Mohr, Barry N. Taylor & David B. Newell
Measurement of the Unperturbed Hydrogen Hyperfine Transition Frequency by Helmut Hellwig et al., IEEE Transactions on Instrumentation and Measurements, Vol. IM-19, No. 4, November 1970.
"The atomic hydrogen line at 21 cm has been measured to a precision of 0.001 Hz" by L. Essen, R. W. Donaldson, M. J. Bangham, and E. G. Hope, Nature (London) 229, 110 (1971).
Hydrogen-like Atoms by James F. Harrison (Chemistry 883, Fall 2008, Michigan State University)
Below are the statutory quantities which allow exact conversions between various physical units in different systems:
This also gives an exact metric equivalence for the parsec (pc) unit, defined as 648000/π au. (The obscure siriometer, introduced in 1911 by Carl Charlier (1862-1934) for interstellar distances, is 1 Mau = 1.495978707×10¹⁷ m, or about 4.848 pc.)
The above conventional density remains universally adopted in spite of the advent of "Standard Mean Ocean Water" (SMOW) whose density can be slightly higher: SMOW around 3.98°C is about 999.975 g/L.
The original batch of SMOW came from seawater collected by Harmon Craig on the equator at 180 degrees of longitude. After distillation, it was enriched with heavy water to make the isotopic composition match what would be expected of undistilled seawater (distillation changes the isotopic composition, because lighter molecules are more volatile). In 1961, Craig tied SMOW to the NBS-1 sample of meteoric water originally collected from the Potomac River by the National Bureau of Standards (now NIST). For example, the ratio of Oxygen-18 to Oxygen-16 in SMOW was 0.8% higher than the corresponding ratio in NBS-1. This "actual" SMOW is all but exhausted, but water closely matching its isotopic composition has been made commercially available, since 1968, by the Vienna-based IAEA (International Atomic Energy Agency) under the name of VSMOW or "Vienna SMOW".
The additional relation 1 cal/g = 1 chu/lb has been used to introduce a dubious "IST calorie" of exactly 4.1868 J, competing with the above thermochemical calorie of 4.184 J, used by the scientific community since 1935. Beware of the bogus conversion factor of 4.1868 J/cal, which has subsequently infected many computers and most handheld calculators with conversion capabilities...
The Btu was apparently introduced by Michael Faraday (before 1820?) as the quantity of heat required to raise one pound (lb) of water from 63°F to 64°F. This deprecated definition is roughly compatible with the modern one (and it remains mentally helpful) but it's metrologically inferior.
Embedded into physical reality are a few nontrivial constants whose values do not depend on our chosen system of measurement units. Examples include the ratios of the masses of all elementary particles to the mass of the electron. Arguably, one of the ultimate goals of theoretical physics is to explain those values.
Other such unexplained constants have a mystical flair to them.
Galileo detected the simultaneity of two events by ear. When two bangs were less than about 11 ms apart he heard a single sound and considered the two events simultaneous. That's probably why he chose that particular duration as his unit of time which he called a tempo (plural tempi). The precise definition of the unit was in terms of a particular water-clock which he was using to measure longer durations.
Using a simple pendulum of length R, he would produce a bang one quarter-period after the release, by having a metal gong just underneath the pivot point. On the other hand, he could also release a ball in free fall from a height H over another gong. Releasing the two things simultaneously, he could tell if the two durations were equal (within the aforementioned precision) and adjust either length until they were.
Galileo observed that the ratio R/H was always the same and he measured the value of that constant as precisely as he could. Nowadays, we know the ideal value of that constant:
R/H = 8/π² = 0.8105694691387021715510357...
This much can be derived in any freshman physics class using the elementary principles established by Newton after Galileo's death.
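Here is that derivation, in LaTeX notation (assuming a small-amplitude pendulum, so that the usual period formula applies):

\[
\frac{T}{4}=\frac{\pi}{2}\sqrt{\frac{R}{g}}
\qquad\text{and}\qquad
H=\frac{g}{2}\left(\frac{T}{4}\right)^{2}=\frac{\pi^{2}R}{8}
\qquad\Longrightarrow\qquad
\frac{R}{H}=\frac{8}{\pi^{2}}
\]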
Any experimental discrepancy can be explained by the smallish effects neglected to obtain the above ideal formula (e.g., air resistance, friction, finite size of the bob, substantial amplitude).
Thus, Galileo's results can now be used backwards to estimate how good his experimental methods were. (Indeed, they were as good as can be expected when simultaneity is appreciated by ear.)
The dream of some theoretical physicists is now to advance our theories to the point that the various dimensionless physical constants which are now mysterious to us can be explained as easily as what I've called Galileo's constant here (for shock value).
Combining Planck's constant (h) with the two electromagnetic constants and/or the speed of light (recall that ε₀μ₀c² = 1), there's essentially only one way to obtain a quantity whose dimension is the square of an electric charge. The ratio of the square of the charge of an electron to that quantity is a pure dimensionless number known as Sommerfeld's constant or the fine-structure constant:
α = μ₀ce²/2h = e²/2hcε₀ = 1/137.035999...
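A numerical check, using the exact post-2019 SI values of e, h and c together with the measured CODATA 2018 value of ε₀ (a sketch; these values are newer than the CODATA 2010 snapshot used elsewhere on this page):

    e    = 1.602176634e-19      # C (exact since 2019)
    h    = 6.62607015e-34       # J/Hz (exact since 2019)
    c    = 299792458            # m/s (exact)
    eps0 = 8.8541878128e-12     # F/m (measured, CODATA 2018)
    alpha = e**2 / (2*h*c*eps0)
    print(alpha, 1/alpha)       # 0.0072973525...  137.035999...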
The value of this constant has captured the imagination of many generations of physicists, professionals and amateurs alike. Many wild guesses have been made, often based on little more than dubious numerology.
In 1948, Edward Teller (1908-2003) suggested that the electromagnetic interaction might be weakening in cosmological time and he ventured the guess that the fine-structure constant could be inversely proportional to the logarithm of the age of the Universe. This proposal was demolished by Dennis Wilkinson (1918-2013) using ordinary mineralogy, which shows that the rate of alpha-decay for U-238 could not have varied by much more than 10% in a billion years (that rate is extremely sensitive to the exact value of the fine-structure constant). Teller's proposal was further destroyed by precise measurements from the fossil reactors at Oklo (Gabon) which show that the fine-structure constant had essentially the same value as today two billion years ago.
Fine-structure constant | Arnold Sommerfeld (1868-1951)
On the Change of Physical Constants by Edward Teller, Physical Review, 73, 801 (1948-04-01).
In Newtonian terms, the electrostatic force and the gravitational force between two electrons both vary inversely as the square of the distance between them. Therefore, their ratio is a dimensionless constant W equal to the square of the electron charge-to-mass quotient multiplied by Coulomb's constant divided by the gravitational constant, namely:
W = k e² / (G me²) ≈ 4.17×10⁴²
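Plugging in rounded CODATA values gives the size of that enormous ratio (a sketch):

    k  = 8.9875517873681764e9   # Coulomb's constant, N.m^2/C^2
    e  = 1.602176634e-19        # elementary charge, C
    G  = 6.674e-11              # gravitational constant, N.m^2/kg^2
    me = 9.1093837015e-31       # electron mass, kg
    print(k * e**2 / (G * me**2))   # about 4.17e42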
In 1919, Hermann Weyl (1885-1955) remarked that the radius of the Universe and the radius of an electron would be exactly in the above ratio if the mass of the Universe was to gravitational energy what the mass of an electron is to electromagnetic energy (using, for example, the electrostatic argument leading to the classical radius of the electron).
In 1937, Dirac singled out the interactions between an electron and a proton instead, which led him to ponder a quantity equal to the above divided by the proton-to-electron mass ratio (about 2.27×10³⁹):
In 1966, E. Pascual Jordan (1902-1980) used Dirac's "variable gravity" cosmology to argue that the Earth had doubled in size since the continents were formed, thus advocating a very misguided alternative to plate tectonics (or continental drift).
Large Number Hypothesis and Dirac's cosmology.
Expanding Earth and declining gravity: a chapter in the recent history of geophysics by Helge Kragh (2015).
Audio: Dimensionless Physical Constants and Large Number Hypothesis by Paul Dirac.
Does the Gravitational Constant Vary? (53:49) by Paul Dirac (1979).
Video: Could gravity vary with time? (6:09) by Freeman Dyson (Web of Stories, 2016-09-05).
Are the Fundamental Constants Changing? (14:51) by Matt O'Dowd (2017-09-28).
Lecture by Paul Dirac (1:10:12) in Christchurch, New Zealand (1975).