Yes, always! The chances of winning are doubled by switching this way.
The reason is that the first guess is right with probability 1/3, whereas one of the other two choices is correct with probability 2/3. The rules are actually a disguise for a simple choice between two alternatives: Would you rather win if the prize is behind the door you first picked (1/3) or win if it's not (2/3)?
Before making national news in 1991, as chronicled below, this puzzle appeared under several other guises, including Bertrand's Box Paradox, introduced by Joseph Bertrand (1822-1900; X1839) in Calcul des probabilités (1889), and the Three Prisoners Problem, which Martin Gardner presented in 1959 in his Scientific American column (and again in his 1961 book More Mathematical Puzzles). The popularity of Monty Hall's TV gameshow Let's Make a Deal revived interest in the problem, about which Steve Selvin wrote a letter to the editor entitled "A Problem in Probability", published in the February 1975 issue of The American Statistician (with a follow-up in August 1975).
Most notably, the problem was posed by a reader of the famous puzzle columnist Marilyn vos Savant (Parade Magazine, September 9, 1990), whose correct answer started a flood of objections. The problem became known as the Monty Hall Problem, in honor of the popular host of the TV show Let's Make a Deal (which ran 4500 times from 1963 to 1990). The craze culminated in a front-page article of the Sunday New York Times, on July 21, 1991. A number of people just could not believe the above answer was correct (they were wrong). Fewer people remarked that the above rules were not directly applicable to the situation in Let's Make a Deal (they were right). Actually, it's quite interesting to investigate how different "rules" affect the conclusion:
We can't begin to address the issue if we don't have a clue about the host's obligations. For example, the producers may ask him to flip a fair coin in order to decide whether or not to proceed as above (in such a case the contestant should still take the opportunity to switch whenever available, but his overall probability of winning is reduced from 2/3 to 1/2). It may also be known to the contestant that the "hostile" host only gives the opportunity to switch when the initial guess was right (in which case the contestant should clearly always decline any such "opportunity" and will win with probability 1/3). A generalized analysis may be carried out along the following lines:
Suppose the host offers the opportunity to switch with probability x when the first guess is right and with probability y when it is wrong. A contestant who always switches will win with probability (1-x)/3 + 2y/3. The first term is the probability that the contestant was right and was not allowed to switch, whereas the second term is the probability that he was wrong and could switch. It's better to switch whenever that probability is larger than 1/3 (which is the probability of winning when one never switches). In other words, a contestant should switch whenever 2y is at least equal to x.
A host may want to keep his show interesting by having (clever) contestants react differently (or indifferently) to the switching opportunity. The above demonstrates that such a host should allow good guesses to be changed twice as often as he does for bad guesses (so that x = 2y). This makes the contestant win with probability 1/3, whether he decides to switch or not.
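Those formulas are easy to check by simulation. Here is a short Python sketch (the function name, the trial count and the use of pseudo-random sampling are illustrative choices, not part of the original argument):

import random

def always_switch_win_rate(x, y, trials=200_000):
    # x = probability the host offers a switch when the first guess is right,
    # y = probability he offers it when the first guess is wrong (as defined above).
    wins = 0
    for _ in range(trials):
        guess_is_right = (random.randrange(3) == 0)   # first pick hits the prize with probability 1/3
        offered = random.random() < (x if guess_is_right else y)
        if offered:
            wins += not guess_is_right    # switching wins exactly when the first guess was wrong
        else:
            wins += guess_is_right        # no offer: the contestant keeps the first pick
    return wins / trials

print(always_switch_win_rate(1, 1))      # classic rules: about 2/3
print(always_switch_win_rate(0.5, 0.5))  # coin-flipping host: about 1/2
print(always_switch_win_rate(2/3, 1/3))  # x = 2y: about 1/3, as claimed above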
Before the warden's revelation, the probability that A would survive was clearly 1/3, as the governor's "random" choice was presumably not biased in favor of anyone. The confirmation of the execution of either one of the other two prisoners does not change that. It's irrelevant to the fate of A, whose probability of survival remains 1/3.
However, things are looking brighter for C who is now known to survive with probability 2/3, since he escapes execution whenever A does not.
This is another version of the infamous "paradox" discussed in the previous article. However, this presentation of the so-called Three Prisoners Paradox predates the equivalent Monty Hall Paradox by several decades, as it was introduced by Martin Gardner in the October 1959 issue of Scientific American. A recent account may be found in Gardner's own 2001 book: The Colossal Book of Mathematics, W.W. Norton & Company, New York, ISBN 0-393-02023-1.
Answer : 120 ways.
This is the factorial of 5 (namely, 5! = 5×4×3×2×1); the number of ways the 5 other children may be placed to the left of some arbitrarily chosen kid.
5 ways to choose the kid placed first to the left, 4 ways to choose the second, 3 ways for the third, 2 ways for the fourth, 1 way for the last...
Given a first row of 4 distinct letters, only two squares exist where each letter appears once in each row, in each column, and in either diagonal:
There are 576 = (4!)^2 choices for the first row in a Bachet square (4! = 24 ways to choose the order of the suits and 4! = 24 choices for the values). Given the first row, we obtain two Bachet squares, (using either the first pattern for suits and the second one for values, or vice versa). All told: 1152 Bachet squares.
Claude-Gaspard Bachet was the first occupant (1634) of the 13th seat at the Académie Française, although he did not attend the founding ceremonies, on account of ill health. The modern records of the Academy alphabetize him as "Méziriac (Bachet de)". Here are a few notorious occupants of his seat: Jean Racine (1672), Octave Feuillet (1862), Pierre Loti (1891), Paul Claudel (1946), Maurice Schumann (1974), Pierre Messmer (1999) and Simone Veil (2008).
C(n,p) is usually pronounced "n choose p". It is simply the number of ways to pick p objects among n different ones. It's so commonly used when counting things that we need a special notation for it. In handwriting or in print, C(n,p) is often written as a pair of parentheses enclosing an n above a p. For example, C(10,5) may be written as a 10 above a 5 between parentheses, without anything else within the parentheses:
How do we obtain this value of C(10,5)? Well, let's first count how many ways you could pick the 5 elements in order: You have 10 choices for the first element, but then there are only 9 unpicked elements left, so you only have 9 choices for the second element, 8 for the third, 7 for the fourth and 6 for the fifth. All told, you have 10×9×8×7×6 = 30240 ways to choose an ordered sequence of 5 objects if you have 10 to choose from. This is not quite the answer we want, because we're after the number of ways you could get a given set of 5 elements irrespective of the order in which you pick them... Read on.
Let's count the number of ways to order 5 things. It looks very much like what we did before: You have 5 ways to pick the first, 4 ways to pick the second, 3 ways to pick the third, 2 ways to pick the fourth, and just 1 possibility left for the fifth and last. All told, you have 1×2×3×4×5 = 120 ways to order 5 elements.
Each of your 30240 sequences of 5 objects is thus part of one (and only one) collection of 120 sequences in which the same objects appear, in some order. The number of such collections is clearly 30240/120, or 252. Therefore, there are 252 different ways to pick 5 unordered elements among 10: C(10,5)=252.
Make sure you understand the above example before going any further...
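If you want to double-check by machine, Python's math module does the same bookkeeping (an illustrative sketch):

from math import comb, perm

ordered = perm(10, 5)       # 10*9*8*7*6 = 30240 ordered picks of 5 out of 10
orderings = perm(5, 5)      # 5! = 120 ways to order any 5 chosen objects
print(ordered // orderings, comb(10, 5))    # 252 252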
Instead of writing 1×2×3×4×...×(n-1)×n, you may write n! (it's called a factorial, but there's nothing to it; it's just shorthand). The factorial notation makes it easy to express what we just told you about: What's the number of ways you can order p elements? Well, p!, of course...
The number of ways to pick an ordered sequence of p elements from n possible ones is obtained with the method illustrated by the above example:
n (n-1) (n-2) ... (n-p+1) = n! / (n-p)!
Think about it: The left-hand side has p factors and the right-hand side is a product of n factors divided by (n-p) factors so there are only p of them left...
Using factorials, C(n,p) is the above divided by p!, which means that:
C(n,p) = n! / [ p! (n-p)! ]
If all these things are new to you, it's normal to be confused at first. Study the above and everything will be clear in the end.
The choice numbers C(n,p) are also called binomial coefficients because they appear in the following formula for the n-th power of a binomial. The summation is understood to entail all integral values of p from 0 to infinity. However, when n is an integer, the sum includes only n+1 terms, because C(n,p) is zero when p is greater than n.
(1+x)^n = Σ_p C(n,p) x^p
To establish the validity of that formula, we simply remark that xp appears as many times in the expansion of the product of n factors on the left-hand side as there are ways to choose the term x (instead of the constant term 1) in p of those factors.
This was first done by Omar Khayyám (1048-1131). To denote the unknown in his statements, Khayyam used the Arabic term transliterated shay or xay ("the thing"). This became the "x" of our current usage, exemplified by the above formula!
Isaac Newton (1643-1727) remarked that Khayyam's formula remains valid even when the exponent n is not an integer, provided the right-hand side is duly interpreted as an infinite sum (a power series that converges when |x| < 1) and C(n,p) is properly defined in the following way, which doesn't require n to be an integer:
C(n,p) = n (n-1) (n-2) ... (n-p+1) / p!
C(n+1,p+1) = C(n,p) + C(n,p+1)
This could be shown by using the above expression for C(n,p), but there's an easier approach... Just notice that you can pick (p+1) objects among (n+1) in one of two ways: Either you pick the last object or you don't! There are C(n,p) ways to do the first thing and C(n,p+1) ways to do the second. QED
With this formula, you can build a table of the choice numbers without any multiplications: write the coefficients C(n,p) in a table where n is the row number (lowest on top) and p is the column number (lowest to the left). Each coefficient is then the sum of the two numbers above it (one is directly on top, the other is to the left of that one).
p:   0  1   2   3    4    5    6    7    8    9   10   11  12  13
 0:  1
 1:  1  1
 2:  1  2   1
 3:  1  3   3   1
 4:  1  4   6   4    1
 5:  1  5  10  10    5    1
 6:  1  6  15  20   15    6    1
 7:  1  7  21  35   35   21    7    1
 8:  1  8  28  56   70   56   28    8    1
 9:  1  9  36  84  126  126   84   36    9    1
10:  1 10  45 120  210  252  210  120   45   10    1
11:  1 11  55 165  330  462  462  330  165   55   11    1
12:  1 12  66 220  495  792  924  792  495  220   66   12   1
13:  1 13  78 286  715 1287 1716 1716 1287  715  286   78  13   1
14:  1 14  91 364 1001 2002 3003 3432 3003 2002 1001  364  91  14  1
15:  1 15 105 455 1365 3003 5005 6435 6435 5005 3003 1365 455 105 15  1
16:  1 16 120 560 1820 4368 8008 ...
17:  1 17 136 680 2380 6188 ...
18:  1 18 153 816 3060 8568 ...
19:  1 19 171 969 3876 ...
Note that C(n,n) is 1, so is C(n,0) (there's only one way to pick no object at all). More generally, C(n,p) = C(n,n-p) because there are as many ways to pick p objects as there are ways to discard that many.
The above triangular table formed by the binomial coefficients is an ancient mathematical object whose origins have been traced to the second century BC (in India). It may well be much older than that.
It's associated with different mathematicians in different languages. Local versions of Stigler's Law of Eponymy apply: The Persians call it Khayyám's triangle although they know it was published a century before Khayyam, by Al Karaji (953-1029). Likewise, the Chinese call it the triangle of Yang Hui, although Yang Hui himself attributed it to Jia Xian (c.1010-c.1070).
In the West, it's usually called Pascal's triangle but it's known to the Italians as Tartaglia's triangle (Niccolò Tartaglia published the first six rows in 1556). Albert Girard (1595-1632) called it the triangle of extraction when he published it in 1629 (when Pascal was just 6 years old). This emphasized the rôle of the coefficients in the algorithm of extraction of an n-th root, which was known to the medieval Chinese. Pascal himself just called it le triangle arithmétique.
Pascal's triangle has many interesting properties. Here are some of them:
It's not difficult to prove that sums of the successive antidiagonals form the celebrated Fibonacci sequence :
1, 1, 1+1 = 2, 1+2 = 3, 1+3+1 = 5, 1+4+3 = 8, 1+5+6+1 = 13, 1+6+10+4 = 21, 1+7+15+10+1 = 34, 1+8+21+20+5 = 55 (A000045)
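A short Python check of those antidiagonal sums (an illustrative sketch):

from math import comb

def antidiagonal_sum(n):
    # sum of C(n-k, k) over the k-th entries of the n-th antidiagonal
    return sum(comb(n - k, k) for k in range(n // 2 + 1))

print([antidiagonal_sum(n) for n in range(10)])   # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]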
Count the number of odd entries in line number n. This gives you the sequence shown below, known as Gould's sequence. It was apparently introduced in 1961 by Henry W. Gould (b. 1928) but Manfred R. Schroeder (b. 1926) has also called it the Dress Sequence, after Andreas Dress (b. 1938).
1, 2, 2, 4, 2, 4, 4, 8, 2, 4, 4, 8, 4, 8... (A001316)
Every term in it is a power of 2. The exponent of that power of 2, for row number n, turns out to be the number of "1" bits in the binary expansion of n (A000120). For example, the number n = 1000000000 (one billion) is expressed in binary as 111011100110101100101000000000, which has 13 ones in it. Therefore, the number of odd entries in that distant row is:
2^13 = 8192
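This is easy to check by machine for small rows (a Python sketch):

from math import comb

def odd_entries(n):
    return sum(comb(n, k) % 2 for k in range(n + 1))

print([odd_entries(n) for n in range(14)])          # 1, 2, 2, 4, 2, 4, 4, 8, 2, 4, 4, 8, 4, 8
print(all(odd_entries(n) == 2 ** bin(n).count("1") for n in range(200)))   # True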
Pascal's Triangle by Casandra Monroe (Numberphile, 2017年03月10日).
Project Euler: Problem 242 | Sierpinski Triangle
The ordinary choice number C(n,p) can be seen as the number of ways to put n objects into either a red box or a blue box so that they contain respectively p and n-p objects. Note that:
C(n,p) = C(n,n-p) and C(n,p) = 0 when either p < 0 or n < p.
Likewise, the number of ways to distribute n objects into three boxes so that the first two contain respectively p and q objects is denoted C(n,p,q). Such a distribution can be obtained by picking p objects among n for the first box and q objects among the remaining n-p for the second box (the rest goes into the third box). Clearly, the result doesn't depend on the order in which the boxes are filled and it can be expressed symmetrically:
C(n,p,q) = C(n,p) C(n-p,q) = C(n,q) C(n-q,p) = C(n,p+q) C(p+q,p)
C(n,p,q) = n! / [ (n-p-q)! p! q! ]
More generally, the number of ways n distinct objects can be distributed among k distinct boxes of fixed sizes (n_1, ..., n_k) is:
C(n, n_1, ..., n_{k-1}) = n! / [ n_1! n_2! ... n_k! ]   where n_1 + n_2 + ... + n_k = n
Such a quantity is called a multichoice number.
Consider putting markers into an array so that the leftmost square is marked and a total of p squares are left unmarked. If the array has n+p squares, there are n markers. Because the position of one marker is imposed, the number of ways to do so is:
C(n+p-1 , n-1) = C(n+p-1 , p)
Such arrays are clearly in one-to-one correspondence with the repartitions of p identical balls into n numbered bins: We just associate bin number k with the k-th marker from the left. The number of balls in the bin is the number of unmarked squares to the right of its assigned marker (until either the next marker or the right end of the array, whichever comes first). QED
To put it in a nutshell: there are C(n+p-1,p) ways to put p identical balls into n numbered bins.
This simple remark governs the statistical behavior of elementary particles, according to quantum logic. Loosely speaking, the above bins are quantum states and there are two possibilities: either any number of particles may share a quantum state (such particles are called bosons, and the above count applies) or no two particles may occupy the same state (such particles are called fermions, and p of them can occupy n states in only C(n,p) ways).
Answer : C ( n+p-1 , p ) as explained above.
Dice Theorem (The many misconceptions in the "footnote on dice" are debunked here.)
For n different flavors, the answer is the following choice number:
C(n+2,3) = n(n+1)(n+2)/6
That's 84 when n = 7, or 5456 when n = 31.
That's a thinly disguised special case of the previous problem.
To provide an alternate explanation in concrete terms, let's consider that the flavors are listed in alphabetical order on the menu. A weird waitress insists that you may not name any flavor more than once but are allowed to use the words FIRST and LAST (at most once each) to refer respectively to the first or the last flavor you picked explicitly. Each type of sundae can be so named in a unique way. This is equivalent to picking 3 different "flavors" among n+2 (including the two bogus "flavors" FIRST and LAST) which can be done in C(n+2,3) ways. QED
Now, about Linda's pattern (which is indeed pretty). It's fairly easy to prove its correctness using, for example, the formulas for sums of consecutive integers and squares of consecutive integers. However, there's a better way, using straight combinatorics (which Linda may or may not have used to discover the pattern):
To name a sundae, you may first choose your "middle" flavor from the menu. You are then allowed to pick one flavor below the middle one (including itself) and one flavor above it (including itself), for a total of 3 flavors, allowing any repetitions.
Thus, if your middle choice is kth on the list, you have k (n+1-k) possible choices; k choices for the lower flavor and (n+1-k) for the upper one. The sum over all possible middle choices (from k=1 to k=n) is the total number of possible sundaes. That's also Linda's "pattern":
1·n + 2·(n-1) + 3·(n-2) + ... + n·1 = C(n+2,3)
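Both counts, and Linda's pattern, can be checked by brute force for small n (a Python sketch):

from itertools import combinations_with_replacement
from math import comb

for n in (7, 31):
    brute = sum(1 for _ in combinations_with_replacement(range(n), 3))   # multisets of 3 flavors
    linda = sum(k * (n + 1 - k) for k in range(1, n + 1))
    print(brute, comb(n + 2, 3), linda)    # 84 84 84, then 5456 5456 5456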
The number of different types of shirts to choose from is:
n = 3 x 5 = 15
To specify a package of p = 4 shirts, you must select p of those n types, allowing repetitions. It's like putting p identical balls into n numbered bins. The number of ways to do so is:
C(n+p-1 , p) = C(15+4-1 , 4) = C(18,4) = 3060
That's the correct numerical answer to a silly question. (What sane retailer would ever bundle together shirts of different sizes?)
As is often the case, there are several ways to count the same thing and this may lead to interesting identities. For example, we may build a duplicates-allowed choice of p items of n possible types by first choosing a certain number q+1 of different types [for some q between 0 and p-1], which can be done in C(n,q+1) ways. Once this is done, we have C(p-1,q) possibilities to cast the chosen types on the p items. (HINT: You may do so by choosing at which q "intervals" in an ordered lineup of p items the item type is seen to "increase".) All told:
C(n+p-1,p) = C(n,1) C(p-1,0) + C(n,2) C(p-1,1) + ... + C(n,p) C(p-1,p-1)
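A quick numerical check of that identity, for the shirt example above (n = 15, p = 4), in Python:

from math import comb

n, p = 15, 4
lhs = sum(comb(n, q + 1) * comb(p - 1, q) for q in range(p))
print(lhs, comb(n + p - 1, p))   # 3060 3060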
I am dedicating this to my high-school teacher, Mr. Leterrier [also a khôlleur at Taupe Laplace] who posed this nice question to my high-school class in 1972 or 1973 (time flies). I still remember finding the answer through some awkward method, just like the rest of the class did.
Only after simplifying the result did it occur to me that there ought to be an elegant solution. Give it a fair shot before peeking!
Sets with few ordinary lines (2013) by Terence Tao.
1, 1, 2, 5, 14, 42, 132, 429, 1430, 4862, 16796, 58786... (A000108).
Starting with n = 0, the nth Catalan number is C(2n,n) / (n+1).
This quantity often occurs in combinatorics. For example:
Binary trees | Noncrossing partitions | Catalan Numbers | Eugène Catalan (1814-1894; X1833)
Two (two?) minutes for... Catalan numbers (French, 15:48) by Jerome-Cottanceau "El Jj" (Blog, 2017年02月07日).
With p=0.015 and q=1-p=0.985, exactly k spacers are defective with probability C(200,k) p^k q^(200-k). In particular:
No spacer is defective (k=0) with probability q^200, which is about 4.86683%.
Exactly one is defective (k=1) with probability 200 p q^199, or about 14.823%.
Exactly 2 are defective with probability 19900 p^2 q^198, or about 22.46%.
Adding the above three results, we see that 2 or fewer spacers are defective with a probability of about 42.15%. Therefore, 3 or more are defective with a probability of about 57.85%.
Incidentally, the most likely number of defective spacers is k=3 (with a probability of about 22.574%), after which the probabilities decrease rapidly: 16.93% for k=4, 10.1% for k=5, 5% for k=6, 2.11% for k=7, 0.76% for k=8, 0.25% for k=9, 0.073% for k=10, etc. In a sample of n spacers, the average number of bad spacers is n times the probability of a bad spacer. Here, that means 200×0.015, which is also 3.
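All of those numbers are easy to reproduce with a few lines of Python (an illustrative sketch):

from math import comb

p, q, n = 0.015, 0.985, 200
probs = [comb(n, k) * p**k * q**(n - k) for k in range(n + 1)]
print(probs[0], probs[1], probs[2])                       # about 0.0487, 0.1482, 0.2246
print(1 - sum(probs[:3]))                                 # about 0.5785 (3 or more defective)
print(max(range(n + 1), key=probs.__getitem__), n * p)    # most likely k = 3, mean = 3.0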
As usual, observe that "2 children have the same birthday" when 3, 4, or 5 do... Let's first suppose that no one is born on February 29th:
If we assume each kid has one of 365 equiprobable birthdays, it's easier to compute the probability that they all have different birthdays: The second kid has a birthday different from the first one with probability (364/365). Knowing that the first two birthdays are different, the third birthday is different from these two with probability (363/365), etc. All told, the probability of having 5 different birthdays is 364×363×362×361/365^4. The probability that (at least) two kids share the same birthday is therefore 1 - 364×363×362×361/365^4, or 481626601/17748900625, which is about 2.71% (0.0271355737).
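Exact arithmetic confirms that fraction (a two-line Python check):

from fractions import Fraction

p_distinct = Fraction(364 * 363 * 362 * 361, 365**4)
print(1 - p_distinct, float(1 - p_distinct))   # 481626601/17748900625  and about 0.0271355737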
Now, let's bring leap years and February 29 into the picture. We may even afford the luxury of using the full Gregorian calendar (century years are not leap years except when divisible by 400; 2000 was a leap year, 1900 was not). This is appropriate only when we do not know at what time the family lived (for a 20th or 21st century family the Julian odds of exactly one leap year in 4 are more appropriate since 2000 was a leap year). The Julian probability of having one's birthday on Feb. 29 is q = 1/1461 (1 leap year in 4 years). The corresponding Gregorian probability is q = 97/146097 (97 leap years in 400 years).
Using either value for q, we may state that none of the 5 kids were born on February 29 with a probability (1-q)^5 for which the above analysis applies, so two kids have an identical birthday other than Feb. 29 with a probability (481626601/17748900625)(1-q)^5. To this, we should add the probability that at least 2 kids were born on Feb. 29, which is 1-(1-q)^5-5q(1-q)^4.
All told, this gives a probability of 180043909073061/6656578833083301 or about 2.7047512782 % in the Julian case (limited to 20th/21st century), whereas the Gregorian case (from recent history to far into the future) corresponds to a probability of about 2.7050013288 %. (That's exactly 1800420599383851642470257 chances in 66558954341646251642470257.) To summarize, the desired probability is about:
No, it does not. Heck, it doesn't even apply to a random group of real people, because maternity wards are notoriously busier at certain times of the year... Various additional statistical biases apply to siblings. One major observation is that twins are not so rare, especially among large families. A minor observation is that the same woman cannot give birth in March and June of the same year... For completeness, two people born on the same day of the same year may have the same mother, even if they're not twins. (Guess how.)
By itself, the possibility of twins (in about 2% of live births) overshadows and invalidates the above result when applied to siblings.
As far as the laws of mathematics refer to reality, they are not certain, and as far as they are certain, they do not refer to reality.
Albert Einstein (1879-1955)
cov(X,X) is the variance of X (i.e., the square of its standard deviation ).
By definition, the covariance of two random variables vanishes if and only if the mathematical expectation of their product is the product of their respective mathematical expectations. Such variables are said to be uncorrelated; statistically independent variables are always uncorrelated, but the converse need not hold.
The notion can be illustrated by the example of two random variables X and Y obtained from three independent variables (A, B, U) and two constants (x,y). Without loss of generality, we'll assume that the variance of U is 1.
X = A + x U
Y = B + y U
The covariance of X and Y is then seen to be cov(X,Y) = xy.
To compute that quickly, we introduce another independent variable U' (distributed just like U) and define Y'=B+yU' (thus, cov(X,Y') = 0).
cov(X,Y) = cov(X,Y) - cov(X,Y') = xy E(U^2) - xy E(UU') = xy
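A quick simulation confirms this (a Python sketch; standard normal variables are just one convenient choice of independent A, B, U with Var(U) = 1):

import random

def covariance_demo(x, y, trials=200_000):
    xs, ys = [], []
    for _ in range(trials):
        a, b, u = random.gauss(0, 1), random.gauss(0, 1), random.gauss(0, 1)
        xs.append(a + x * u)      # X = A + xU
        ys.append(b + y * u)      # Y = B + yU
    mx, my = sum(xs) / trials, sum(ys) / trials
    return sum((xi - mx) * (yi - my) for xi, yi in zip(xs, ys)) / trials

print(covariance_demo(2.0, 3.0))   # close to x*y = 6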
If n independent random samples are either equal to 1 (with probability p) or to 0 (with probability 1-p), the sum of their values is a random variable which is said to have a binomial distribution; its average is np and its variance is np(1-p). Here, this means 200×0.47×0.53, or 49.82. (The standard deviation is the square root of this, roughly 7.0583.)
The explanation for this simple formula repays study. Here it is:
To obtain a binomial distribution (both in theory and in practice), you may consider the sum Y of n independent random variables (X_1, X_2, ... X_n), each of which being equal to 1 with probability p and to 0 with probability q = 1-p. The sum is equal to k with probability C(n,k) p^k (1-p)^(n-k). The "binomial" distribution derives its name from the binomial coefficients C(n,k) which appear in this expression.
The average ("expectation" or "expected value") E(Y) of a sum being the sum of the averages, we have simply E(Y)=np. (The expectation E is a linear function.) With n=200 and p=47%, the sum Y will have an expected value of E(Y)=94.
The variance V(Y) of Y is the expectation of the random variable (Y-E(Y))^2, namely V(Y) = E([Y-E(Y)]^2) = E(Y^2 - 2E(Y)Y + E(Y)^2) = E(Y^2) - E(Y)^2. (The last equality in this relation is known as Koenig's theorem. It is obtained simply by noticing that E(Y) is just a constant number. The expectation of a constant number is itself and the expectation of a random variable multiplied by any constant number is simply that constant number multiplied by the expectation of the random variable.)
For two independent random variables A and B, the relation E(AB)=E(A)E(B) holds (which is not generally the case for dependent variables). You may consider the variance of the sum A+B: V(A+B) = E([A+B]^2) - [E(A+B)]^2. V(A+B) may thus be expressed as E(A^2 + B^2 + 2AB) - [E(A)+E(B)]^2. Since we have E(AB)=E(A)E(B) for statistically independent variables, this boils down to E(A^2) + E(B^2) - E(A)^2 - E(B)^2, which you may recognize as equal to V(A)+V(B). In other words, if A and B are independent, V(A+B) = V(A) + V(B).
Therefore, the variance of the sum Y is n times the variance of each independent term X.
What is the variance of X? Well, as X is either 0 or 1, we have X^2 = X and E(X^2) = E(X) = p, so that V(X) = E(X^2) - E(X)^2 = p - p^2 = pq.
All told, V(Y) = npq, as advertised.
Both formulas are correct but they apply to different situations. Basically, you use N when you know the theoretical average of the sample (for example, you may know it's zero because of some symmetry in the phenomenon), whereas you use N-1 when you are working with an estimate of the average, obtained by averaging the sample at hand.
This latter use of (N-1) comes from the fact that you want what's called an "unbiased" estimator of the standard deviation, namely you want the calculations to yield a result which does not have a systematic built-in error...
Consider n samples X1, X2, ... Xn obtained from an unknown random variable X. The expected value of the sum is the sum of the expected values. Therefore, if you want to estimate the mean of X, your best shot is to divide by n the sum of the samples. Call this the "sample average". It's an "unbiased" estimate of the mean in the sense that the expected value of the "sample average" equals the true mean of the random variable X. So far so good.
Now, if you want to estimate the standard deviation but do not know the true mean of X, you'll have to use the best estimate of the mean at your disposal, namely the sample average. The problem is that this sample average is obtained from the sample itself and is therefore not independent of each individual sample. The quadratic deviation of X1 from the sample average is:
( X_1 - (X_1 + ... + X_n)/n )^2
The expected value of that quantity may be computed from the fact that all the samples are independent and have the same expected value E(X). We obtain:
(1-1/n)^2 E(X^2) + (n-1)/n^2 E(X^2) - 2(1-1/n)(n-1)/n E(X)^2 + (n-1)(n-2)/n^2 E(X)^2
This boils down to (n-1)/n [E(X^2) - E(X)^2]. Now, the quantity E(X^2) - E(X)^2 equals the square of the standard deviation s. In other words, the square of the deviation of X_1 from the sample average is expected to be (n-1) s^2 / n.
What's true of X_1 also applies to X_2, ..., X_n and, therefore, the sum of the n quadratic deviations has an expected value of (n-1) s^2.
That's why you have to divide the sum by (n-1) instead of n when you're working with the sample average because the true mean is unknown.
When the true mean is used, there are no such complications from interdependencies of the quantities involved and the divisor n should be used.
In other words, we may estimate the square s^2 of the standard deviation of a random variable X from a set of n samples X_1, ..., X_n using either of the following unbiased formulas:
[ (X_1 - m)^2 + ... + (X_n - m)^2 ] / n     or     [ (X_1 - A)^2 + ... + (X_n - A)^2 ] / (n-1)
In this, m is the true mean of X, whereas A = (X1+...+Xn)/n is an estimate of m based on the sample at hand.
The relevant nontrivial factor 1-1/n is known as Bessel's correction.
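A small simulation makes the bias visible (a Python sketch; the standard normal distribution, sample size and trial count are arbitrary illustrative choices):

import random

def average_variance_estimate(n, divisor, trials=50_000):
    # average of sum((Xi - sample average)^2) / divisor, for a variable of true variance 1
    acc = 0.0
    for _ in range(trials):
        xs = [random.gauss(0, 1) for _ in range(n)]
        a = sum(xs) / n
        acc += sum((x - a) ** 2 for x in xs) / divisor
    return acc / trials

n = 5
print(average_variance_estimate(n, n))      # about 0.8 = (n-1)/n : biased low
print(average_variance_estimate(n, n - 1))  # about 1.0 : unbiased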
Proof : If Y ≥ a > 0, then 1 ≤ Y/a. So, the function which equals 1 when Y ≥ a and 0 elsewhere has a lesser mathematical expectation than Y/a. QED
Markov's inequality | Andrey Markov (1856-1922) inherited it from Chebyshev (his teacher).
Proof : Apply Markov's inequality to the random variable Y = (X-m)^2 :
P( |X-m| ≥ ks ) = P( Y ≥ k^2 s^2 ) ≤ E(Y) / (k^2 s^2) = 1/k^2   QED
The French mathematician Irénée-Jules Bienaymé (1796-1878; X1814) was a personal friend of Pafnuty Chebyshev (1821-1894) whose work he translated from Russian into French. However, the inequality which now bears both names was actually first published by Bienaymé in 1853. Chebyshev would use it only 14 years later, in a generalized proof of the Law of large numbers.
P(A∪B∪C) = P(A) + P(B) + P(C) - P(A∩B) - P(B∩C) - P(C∩A) + P(A∩B∩C)
You may prove this using the relation P(A∪E) = P(A) + P(E) - P(A∩E), with E = B∪C, namely:
P(A∪B∪C) = P(A) + P(B∪C) - P( (A∩B) ∪ (A∩C) ).
The last two terms may be expanded using the formula for the probability of a union again:
P(A∪B∪C) = P(A) + P(B) + P(C) - P(B∩C) - P(A∩B) - P(A∩C) + P( (A∩B) ∩ (A∩C) ).
The last term of this expression is clearly P(A∩B∩C).
This identity is a special case of a relation which is worth committing to memory: The probability of a union of n events is equal to the sum of their individual probabilities, minus the sum of the probabilities of all pairwise intersections, plus the sum of the probabilities of all triple intersections, and so forth, with alternating signs.
This is easily proved by induction; the formula for n+1 events is derived from the formula for n events just as we went from 2 to 3 events in the above.
This type of computation is often called an inclusion-exclusion enumeration. It holds either for the probability of events, or for the number of elements in sets. Here are a few great examples:
Among equiprobable permutations of n elements, the probability that p given elements are fixed points is (n-p)! / n! (since there are (n-p)! ways to choose a permutation of the elements other than those fixed points). Thus, the sum of the probabilities of all such events is simply 1/p! (namely, the above probability multiplied into the C(n,p) ways to choose p elements).
Clearly, there's at least one fixed point when point i is fixed for some i between 1 and n. The probability of the union of those n nonexclusive events is thus readily obtained by inclusion-exclusion enumeration using the remark presented in the previous paragraph: The probability that there is at least one fixed point in a random permutation of n elements is thus:
1/1! - 1/2! + 1/3! - 1/4! + ... - (-1)^n / n!
Except for very small values of n, that expression is extremely close to a number which all electrical engineers memorize as 63.2% for daily use:
1-1/e = 0.632120558828557678404476229838539...
This is to say that the probability that there are no fixed points in a permutation of many elements is very nearly equal to:
1 / e = 0.367879441171442321595523770161460867445811131...
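For a modest n, this is easy to confirm by brute force (a Python sketch; n = 8 is an arbitrary choice):

from itertools import permutations
from math import factorial, e

n = 8
total = factorial(n)
with_fixed_point = sum(any(v == i for i, v in enumerate(p)) for p in permutations(range(n)))
print(with_fixed_point / total)                                   # about 0.6321
print(sum((-1) ** (p + 1) / factorial(p) for p in range(1, n + 1)))  # same alternating sum
print(1 - 1 / e)                                                  # 0.6321205588...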
The reader may want to use the same reasoning to obtain the probability that a function (not necessarily bijective) from {1...n} to {1...n} has no fixed point (using the fact that p given elements are fixed points with probability 1/n^p ). The result is a binomial expansion which turns out to be equal to:
( 1 - 1/n )^n
Of course, that expression is more readily established by observing that a function without fixed points is merely obtained by choosing an image among (n-1) acceptable points for each of the n elements.
This simple argument makes inclusion-exclusion enumeration look awkward. However, there was no counterpart to this shortcut in the previous case involving permutations (i.e., bijective functions).
The limit of that expression as n gets large happens to be also equal to 1/e, but the convergence is much slower than in the case of permutations:
( 1 - 1/n )^n = e^( n Log(1-1/n) ) = (1/e) [ 1 - 1/(2n) - 5/(24n^2) - ... ]
What is the number of permutations of the integers 1 to 100 which leave exactly 3 of the 25 primes unchanged? Answer : [by inclusion-exclusion]
C(25,3) [ 97! - C(22,1) 96! + C(22,2) 95! - ... + C(22,22) 75! ]
The probability that a random permutation has the above property is equal to that 156-digit integer divided by 100! (namely: 0.00188785484103034...).
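A short Python computation (an illustrative sketch) confirms both the size of that integer and the quoted probability:

from math import comb, factorial

count = comb(25, 3) * sum((-1) ** j * comb(22, j) * factorial(97 - j) for j in range(23))
print(len(str(count)))           # 156 digits
print(count / factorial(100))    # about 0.00188785484103...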
What is the number of squarefree integers not exceeding N ?
Let's exclude from the total count (N) all the multiples of the squares of primes, using a straightforward inclusion-exclusion approach:
For q^2 no greater than N, we're dealing with floor(N/q^2) multiples of q^2. That number is either excluded (minus sign) when q is a squarefree integer with an odd number of prime factors, or included (plus sign) if q is a squarefree integer with an even number of prime factors. If q is the multiple of a square, it's simply ignored, because the multiples of q^2 are already dealt with in the above scheme (as multiples of the square of a squarefree divisor of q). All told, the desired result may be very neatly expressed using the Möbius function μ, which is precisely equal to -1 in the first case and +1 in the second (it's 0 for a multiple of a square):
Σ over q from 1 to floor(√N) of μ(q) floor(N/q^2)
For N = 2^50, the value of this expression (namely, 684465067343069) can be computed in minutes (assuming μ to be decently implemented).
The value of μ for millions of integers can be generated extremely fast as a large array of two-bit entries (4 possible codes stand for the 3 legitimate values -1, 0 and +1, and a "not yet computed" placeholder, if needed). This involves a slight modification of the Sieve of Eratosthenes which can be carried out without any multiplications or divisions, as described elsewhere on this site.
A computation based directly on the following naive expression would be hopeless (it involves 1125899906842624 terms, instead of 33554432):
Σ over n from 1 to N of μ(n)^2
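For modest values of N, the whole computation takes only a few lines of Python (an illustrative sketch using a plain sieve, not the multiplication-free sieve mentioned above):

from math import isqrt

def mobius_table(limit):
    mu = [1] * (limit + 1)
    is_prime = [True] * (limit + 1)
    for p in range(2, limit + 1):
        if is_prime[p]:
            for m in range(p, limit + 1, p):
                if m > p:
                    is_prime[m] = False
                mu[m] = -mu[m]               # one more prime factor: flip the sign
            for m in range(p * p, limit + 1, p * p):
                mu[m] = 0                    # multiples of a square get mu = 0
    return mu

def squarefree_count(N):
    q_max = isqrt(N)
    mu = mobius_table(q_max)
    return sum(mu[q] * (N // (q * q)) for q in range(1, q_max + 1))

print([squarefree_count(10 ** n) for n in range(7)])
# [1, 7, 61, 608, 6083, 60794, 607926]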
The numbers of squarefree integers not exceeding 10^n (for n = 0, 1, 2, ... 17) form the sequence A071172. Their digits reveal the string of digits (A059956) which is just the decimal expansion of the ultimate density of squarefree integers among natural integers (about 60.79% of integers are squarefree), namely:
6 / π^2 = 0.607927101854026628663... = (1 - 1/2^2) (1 - 1/3^2) (1 - 1/5^2) ... (1 - 1/p^2) ...
Beautiful math (2016) & UCLA Math Distinguished Lecture (2015年05月21日) by Manjul Bhargava (1974-).
Probability that two integers are coprime (10:31) by Peyam R. Tabrizian (Dr Peyam, 2021年05月18日).
The same approach would yield (even more efficiently) the number of cubefree integers not exceeding N, which is given by the formula:
Σ over q from 1 to floor(N^(1/3)) of μ(q) floor(N/q^3)
In the above, N^(1/3) actually stands for the largest integer whose cube doesn't exceed N. To obtain this precise value with ordinary floating-point arithmetic requires some care. A "safe" expression would be:
⌊ N^(1/3) ⌋ = floor( (N+0.5)^(1/3) )
In particular, when N is equal to the successive powers of 10 we obtain the following sequence (A160112), in whose digits appears the decimal expansion of the density of cubefree integers (namely, the reciprocal of Apéry's constant).
1, 9, 85, 833, 8319, 83190, 831910, 8319081, 83190727, 831907372, 8319073719, 83190737244, 831907372522, 8319073725828, 83190737258105, 831907372580692, 8319073725807178, 83190737258070643, 831907372580707771, 8319073725807074649, 83190737258070746354, 831907372580707468393, 8319073725807074687961, 83190737258070746868817, 831907372580707468685592, 8319073725807074686838076, 83190737258070746868310774, 831907372580707468683127869, 8319073725807074686831257909, 83190737258070746868312631845 ...
1/ζ(3) = 0.831907372580707468683126278821530734417... (A088453)
Squarefree Numbers: A071172, A053462, A143658 | Cubefree Numbers: A160112, A160113
Counting Square-Free Numbers by Jakub Pawlewicz (2011年07月25日).
Inclusion-Exclusion Principle by Robin Whitty (Theorem of the Day #221).
There are C(52,5) = 2598960 different poker hands and each of them is dealt with the same probability. [The choice number C(52,5) is pronounced "52 choose 5" and is equal to the number of ways 5 objects can be picked among 52 when the order is irrelevant, as explained above.]
Therefore, it's only a matter of counting how many such hands belong to a given type to determine the probability of that type (namely, the ratio of that number to 2598960, the total number of possible hands). When the probability of an event is the fraction P = x / (x + y) , its so-called odds are said to be either x to y in favor or y to x against. We pay tribute to this popular way of expressing probabilities in the list below (which assumes some familiarity with poker hands). Note that there are normally 10 different "heights" for a straight and that the ace (A) belongs to the lowest (A,2,3,4,5) and the highest (10,J,Q,K,A), which is traditionally called a Royal Flush if all cards belong to the same suit.
A word of explanation for the last count. "High Card" means that the hand is "none of the above" (two such hands would be compared highest card first to decide who wins). These hands are uniquely determined from two independent choices: First, pick 5 distinct values in one of the [C(13,5)-10] ways which aren't straights, then assign suits in one of the [4^5 - 4] ways that aren't flushes.
The total of all of the above is 2598960 = C(52,5), because every hand is of one and only one of the 10 types listed. You may want to check this...
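If you'd rather let a machine do that checking, the counts of the 10 types can be recomputed with a few lines of Python (an illustrative sketch using the standard combinatorial formulas, not the original table):

from math import comb

counts = {
    "royal flush":     4,
    "straight flush":  9 * 4,                                   # excluding royal flushes
    "four of a kind":  13 * 48,
    "full house":      13 * comb(4, 3) * 12 * comb(4, 2),
    "flush":           4 * (comb(13, 5) - 10),                  # excluding straight flushes
    "straight":        10 * 4**5 - 40,                          # excluding straight flushes
    "three of a kind": 13 * comb(4, 3) * comb(12, 2) * 4**2,
    "two pair":        comb(13, 2) * comb(4, 2)**2 * 44,
    "one pair":        13 * comb(4, 2) * comb(12, 3) * 4**3,
    "high card":       (comb(13, 5) - 10) * (4**5 - 4),
}
for name, n in counts.items():
    print(f"{name:16} {n:10} {n / comb(52, 5):.6%}")
print(sum(counts.values()), comb(52, 5))    # both equal 2598960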
Should your own local rules disallow the 10th straight sequence (A,2,3,4,5), the above counts for straights and/or flushes should be changed (and the "High Card" count should be modified as well), replacing 4, 36, 5108, 10200 and 1302540 respectively by 4, 32, 5112, 9180 and 1303560 = (C(13,5)-9)(4^5-4).
Let's first count how many ways there are to get a royal flush (A,K,Q,J,10 in the same suit) when you are dealt q cards. Among the C(52,q) hands of q cards, the following number contain a royal flush (recall that C(n,p) is 0 for a negative p):
C(4,1)C(52-5,q-5) - C(4,2)C(52-10,q-10)
+ C(4,3)C(52-15,q-15) - C(4,4)C(52-20,q-20)
The first term corresponds to a choice of one of the 4 possible royal flushes followed by q-5 cards among the 47 remaining cards. This would count more than once any hand with more than one royal flush in it, so a negative adjustment must be made... What you must do is a proper "inclusion-exclusion" enumeration (as presented above) which is where the 3 additional terms come from.
The same thing goes for straight flushes (let's consider a royal flush to be a particular case of a straight flush) except the alternating terms are more complex, and more numerous [as many as 20 straight flushes could coexist in a hand of q = 26 cards]. Clearly, the "naive" 40 C(47,2) count (for q = 7) tallies more than once any hand containing more than one straight flush...
Below is a correct enumeration. The n-th square bracket corresponds to the total number of hands containing n straight flushes. The bracket contains up to 4n-3 terms, each corresponding to a given total number of cards used by the n straight flushes under consideration (this number can be anything from 4+n to 5n). The coefficients in front of the binomial numbers add up to C(40,n):
[ 40 C(52-5,q-5) ] - [ 36 C(52-6,q-6) + 32 C(52-7,q-7) + 28 C(52-8,q-8) + 24 C(52-9,q-9) + 660 C(52-10,q-10) ] + [ 32 C(52-7,q-7) + 56 C(52-8,q-8) + ... ] - [ 28 C(52-8,q-8) + ... ] + ...
For example, the second of the above brackets corresponds to two given ("featured") simultaneous straight flushes (there may be more, we don't care): Among the C(40,2) = 780 such featured pairs, 36 use 6 cards, 32 use 7 cards, 28 use 8 cards, 24 use 9 cards, and 660 pairs are disjoint and use 10 cards. Many of the terms cancel out (we're missing a shortcut here) so the formula can be greatly simplified, but there's no need for sophistication in the case where q = 7, since the terms already given explicitly include all the nonzero ones. For q = 7, the thing boils down to the advertised result:
40 C(47,2) - 36 C(46,1) = 41584
The enumeration presented in the previous article is only easy for small hands or very large ones: For example, in a "hand" of q = 45 cards there's always at least one straight flush. Only one hand of 44 cards is without a straight flush.
However, no such shortcut applies to the case q = 26. We'll use a simple computer program to do the actual counting fairly efficiently, in two steps:
First, let's consider the C(13,q) hands consisting of q cards of the same suit. Let's call N(q) the number of these which contain at least one straight. There are only 8192 different such hands for q between 0 and 13 (that's 2 to the power of 13) so it's easy to tabulate N(q) in a fraction of a second with a computer.
To do so, we assign each hand a pattern of 13 bits (corresponding to the integers from 0 to 8191) with the most significant bit set if we hold an Ace, the next one for a King, and so on, down to the least significant bit for a deuce. The number 31 (i.e, 5 consecutive bits set to 1) multiplied by any power of two, from 1 to 256, gives a so-called mask corresponding to one of the 9 regular straights, while the mask for the tenth straight (A,2,3,4,5) is 4096+15 = 4111. The tiny table below may thus be built by counting the number q of nonzero bits in each of the 8192 patterns whose "bitwise and" with at least one of the 10 acceptable masks is equal to that particular mask. Here's a piece of UBASIC code to do just that:
dim N(13)
for I=0 to 8191
  Q=bitcount(I)
  if 4111=bitand(4111,I) then *TALLY
  M=31
  while M<8192
    if M=bitand(M,I) then *TALLY
    M=2*M
  wend: goto *SKIP
  *TALLY: N(Q)=N(Q)+1
  *SKIP:
next I
(Table: number of combinations of q cards from the same suit containing a straight, N(q), for q = 0 to 13.)
Now, for the entire pack of 52 cards, we may consider the 4 nonexclusive possibilities of having straight clubs, straight hearts, straight diamonds, or straight spades... Thus, a straightforward inclusion-exclusion enumeration gives the desired result as the sum of the following expressions [of four terms] over all the nonnegative indices q1 q2 q3 q4 whose sum is equal to q:
C(4,1) N(q1) C(13,q2) C(13,q3) C(13,q4)
- C(4,2) N(q1) N(q2) C(13,q3) C(13,q4)
+ C(4,3) N(q1) N(q2) N(q3) C(13,q4)
- C(4,4) N(q1) N(q2) N(q3) N(q4)
Such terms are zero when an index is more than 13 or when the function N appears with an argument below 5, so there are relatively few nonzero terms to add (or subtract): For q = 26, the 4 above types of terms are nonzero in 1209, 709, 334 and 84 cases, respectively. A grand total of 2336 nonzero terms... A priori, C(q+3,3) ordered sets of indices of sum q, and 4 terms to each set could have meant 14616 terms for q = 26. All told, 255774304012724 hands contain a straight flush out of 495918532948104 possible hands of 26 cards. Thus, the probability to have a straight [or royal] flush among 26 cards is:
9134796571883 / 17711376176718 ≈ 0.5157587124082935...
q : hands of q cards containing a straight flush : probability (%)
 4 : 0 : 0
 5 : 40 : 0.001539077
 6 : 1844 : 0.009057633
 7 : 41584 : 0.031082810
 8 : 611340 : 0.081237077
 9 : 6588116 : 0.179069883
10 : 55482100 : 0.350708060
11 : 380126920 : 0.629310354
12 : 2177910310 : 1.055294393
13 : 10644616240 : 1.676281723
14 : 45049914588 : 2.546680140
15 : 167011924492 : 3.726795587
16 : 547315800984 : 5.281342552
17 : 1597026077496 : 7.277207845
18 : 4173458163098 : 9.780325218
19 : 9813490226056 : 12.851547041
20 : 20841357619302 : 16.541465273
21 : 40096048882028 : 20.884249209
22 : 70043238025686 : 25.890741583
23 : 111299858400784 : 31.541291220
24 : 161084365419139 : 37.779089838
25 : 212534592698472 : 44.505092157
26 : 255774304012724 : 51.575871241
27 : 280828262657348 : 58.805898612
28 : 281310102228657 : 65.975612201
29 : 257052287198104 : 72.846103901
30 : 214209339496536 : 79.180215104
31 : 162748148903084 : 84.768275047
32 : 112708315365631 : 89.454857917
33 : 71138289660672 : 93.161256080
34 : 40921413681848 : 95.897626995
35 : 21453967174040 : 97.759817712
36 : 10250154666092 : 98.909218234
37 : 4460732889232 : 99.539237677
38 : 1766089532348 : 99.837373263
39 : 634724725868 : 99.954515344
40 : 206360120020 : 99.990654664
41 : 60402956200 : 99.998720874
42 : 15820006672 : 99.999889077
43 : 3679075192 : 99.999994346
44 : 752538149 : 99.999999867
45 : 133784560 : 100
The above summation can be simplified into the following expression, which saves multiplications. It involves only 508 nonzero terms for q = 26, and never more than 1195 nonzero terms for any q (this maximum is reached for q = 35):
Σ_i N(i) [ 4 C(39,q-i) - Σ_j N(j) [ 6 C(26,q-i-j) - Σ_k N(k) [ 4 C(13,q-i-j-k) - N(q-i-j-k) ] ] ]
We could also use the elegant approach presented below and introduce the following polynomial S(x) which encodes the result of the above "first step":
S(x) = 10 x^5 + 71 x^6 + 217 x^7 + 371 x^8 + 384 x^9 + 234 x^10 + 77 x^11 + 13 x^12 + x^13
     = x^5 (1+x) (10 + 61 x + 156 x^2 + 215 x^3 + 169 x^4 + 65 x^5 + 12 x^6 + x^7)
The number of combinations of q cards containing at least one straight [or royal] flush is then equal to the coefficient of x^q in the following expression:
4 S(x) (1+x)^39 - 6 S(x)^2 (1+x)^26 + 4 S(x)^3 (1+x)^13 - S(x)^4
This expression may be rewritten in a much more compact form:
(1+x)^52 - [ (1+x)^13 - S(x) ]^4
The hands without a straight flush are thus enumerated by [ (1+x)^13 - S(x) ]^4.
This is easy to prove: The inside of the bracket counts the combinations of cards of the same suit without a straight. A combination from the whole pack without a straight flush is just a juxtaposition of four such combinations; one in each suit.
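Here is a short Python sketch that expands that last expression exactly and recovers some of the counts quoted above (41584 for q = 7, and the q = 26 and q = 45 entries of the table):

from math import comb

def poly_mul(a, b):
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

S = [0] * 5 + [10, 71, 217, 371, 384, 234, 77, 13, 1]   # S(x), as listed above
T = [comb(13, k) - S[k] for k in range(14)]             # (1+x)^13 - S(x)
no_straight_flush = [1]
for _ in range(4):
    no_straight_flush = poly_mul(no_straight_flush, T)  # [ (1+x)^13 - S(x) ]^4
for q in (7, 26, 45):
    without = no_straight_flush[q] if q < len(no_straight_flush) else 0
    print(q, comb(52, q) - without)   # 41584, 255774304012724, 133784560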
Similarly, the formula given above for the number of combinations of q cards containing at least one royal flush is equal to the coefficient of x^q in:
(1+x)^52 - [ (1+x)^13 - x^5 (1+x)^8 ]^4
So, the following difference counts hands with straight flushes but no royal ones:
[ (1+x)^13 - x^5 (1+x)^8 ]^4 - [ (1+x)^13 - S(x) ]^4
This is equal to:
36 x^5 + 1656 x^6 + 37260 x^7 + 546480 x^8 + 5874656 x^9 + 49346350 x^10 + 337176880 x^11 + ... + 490000 x^46 + 25000 x^47 + 625 x^48.
These coefficients appear on the second line of the table in the next article...
Among q cards, there are 13 nonexclusive ways to have at least one 4-of-a-kind hand (4 aces, 4 kings ... 4 deuces). Therefore (cf. previous article) the number of such q-cards combinations is the coefficient of xq in the polynomial:
(1+x)^52 - [ (1+x)^4 - x^4 ]^13
However, when q is 8 or more, there's a possibility of having both a 4-of-a-kind and a straight flush, in which case the existence of the latter overshadows the former (according to the careful wording of the above question).
For example, the coefficient of x^8 in the above (i.e., 2529462) includes 200 straight (or royal) flushes: Each such hand is obtained from any of the 40 possible straight flushes by singling out one of its 5 cards [and including all other cards of its kind]. So, only 2529262 eight-card hands are counted as 4-of-a-kind.
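That coefficient is easy to verify directly by inclusion-exclusion on the ranks (a one-line Python check):

from math import comb
# 13 choices of the quadrupled value, C(48,4) ways to complete the 8-card hand,
# minus the C(13,2) hands containing two quadruplets (counted twice):
print(13 * comb(48, 4) - comb(13, 2))   # 2529462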
Some of the data tabulated below may thus be surprising at first. For example, with 8 cards or more, a full house is more likely than just three-of-a-kind...
Some of the above entries courtesy of Bill Butler | The Wizard of Odds
With 11 cards, what's really difficult is to avoid getting at least a pair, for you must hold AKQJ9876432 (the only set of 11 different values which does not contain a straight) without holding more than 4 cards of any suit. There are as many ways to do this as there are arrangements of 11 suits where no suit appears more than 4 times. As explained elsewhere on this page, this amounts to:
The coefficient of x^11 / 11! in ( 1 + x + x^2/2! + x^3/3! + x^4/4! )^4
This coefficient is 11! times 25/432, which is 2310000 (whereas there are 4^11, or 4194304, unrestricted arrangements). All told, there are thus 2310000 combinations of 11 cards that do not include a pair or better. Out of a total of C(52,11) = 60403728840 equiprobable combinations, this is a probability of only 0.000038242672... With 12 cards, this probability would be zero, since you would always hold either a straight or several cards of the same kind.
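Summing multinomial coefficients over all acceptable suit distributions gives the same count; here is a short Python check (an illustrative sketch):

from math import factorial

total = 0
for a in range(5):            # number of cards in each of the first three suits (at most 4)
    for b in range(5):
        for c in range(5):
            d = 11 - a - b - c
            if 0 <= d <= 4:   # fourth suit must also hold at most 4 of the 11 values
                total += factorial(11) // (factorial(a) * factorial(b) * factorial(c) * factorial(d))
print(total, 4 ** 11)         # 2310000 4194304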
Let's first count the unrestricted arrangements of the letters in CONSTANTINOPLE. There are 14 letters (3 N's, 2 O's, 2 T's, and a single copy of each of the 7 letters C,S,A,I,P,L,E). If all these letters were different (say the N's are colored red, white or blue, whereas both the O's and the T's are either red or blue), you would have 14! different arrangements. To each actual arrangement of the (uncolored) letters correspond exactly 3!2!2! arrangements of the colored letters. Therefore, there are 14!/(3!2!2!) or 3632428800 possible arrangements of the (uncolored) letters in CONSTANTINOPLE. So far so good.
What happens when you don't allow adjacent vowels? You have 5 vowels and 9 consonants. In how many ways can you put P vowels in 2P-1+K positions?
Well, if K is negative you can't do it. If K is zero, you've got only one choice (think about it). In general, you have C(K+P,P) choices, because that's the number of ways you could place the vowels in P+K positions if you did not have any restriction. With the restriction, each vowel is in effect occupying two spaces (itself and the position to its right) except for the last one which is only occupying one space. So, you may as well mentally ignore the P-1 "dead spaces" and see the exact correspondence between the two situations.
Here, you have P=5 and K=5, which means C(10,5) = 252 ways of choosing the positions of the vowels. Once this is done, you have 9!/(3!2!) choices for placing the consonants (since there are 3 identical N's and 2 identical T's) and 5!/2! choices for the vowels (there are two O's). All told, you have 252 × 9! × 5! / 24 = 457228800 possibilities, as advertised.
Incidentally, the probability that no two vowels are adjacent in an arrangement of P vowels among N letters (N = 2P-1+K) depends only on N and P, because it's the same as the probability of having no adjacent marks when P squares are randomly marked in a row of N squares. (Putting equiprobable arrangements of vowels in the marked squares and equiprobable arrangements of consonants in the unmarked ones won't change the odds.) This probability is:
C(N-P+1,P) / C(N,P)
With P=5 and N=14, this ratio is simply 252/2002, or 18/143 (about 12.59%) which is an easier way to find the ratio of the above pair of large numbers...
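As a sanity check, the whole computation fits in a few lines of Python (an illustrative sketch):

from fractions import Fraction
from math import comb, factorial

vowel_positions = comb(10, 5)                               # 252
consonants = factorial(9) // (factorial(3) * factorial(2))  # 9!/(3!2!) : three N's, two T's
vowels = factorial(5) // factorial(2)                       # 5!/2! : two O's
print(vowel_positions * consonants * vowels)                # 457228800
print(Fraction(comb(10, 5), comb(14, 5)))                   # 18/143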
The 9 letters include: 5 S's, 2 E's, 1 P, 1 O.
Let's use this question to present a very interesting general approach...
To make a selection, you have to choose a certain number of S's from 0 to the maximum allowed, which is 5 (the fact that you will eventually use at most 4 S's is irrelevant at this point), then you choose the number of E's from 0 to 2, and the numbers of P's and O's each from 0 to 1.
The four numbers you pick must add up to 4. Well, the coefficient of x^4 in the following expression is obtained with exactly the same breakdown, so both quantities are equal (think about it)!
(1 + x + x^2 + x^3 + x^4 + x^5) (1 + x + x^2) (1+x) (1+x)
In the expansion of this polynomial, the coefficient of x^p is thus the number of selections of p letters taken from POSSESSES:
1 + 4x + 8x^2 + 11x^3 + 12x^4 + 12x^5 + 11x^6 + 8x^7 + 4x^8 + x^9
In particular, the coefficient of x^4 is 12, the answer to question (a).
Now, let's try to count arrangements with the same polynomial idea. Well, if all the letters were different, you could rearrange each of the above basic selections in 4! different ways. They are not different, though, and that means that for each set of k identical letters in a basic selection, you've overestimated the true count of arrangements by a factor of k! All told, you'll see that the number of possible arrangements is 4! times the coefficient of x^4 in the following polynomial (it's more convenient to describe this as "the coefficient of x^4/4!"):
(1 + x + x^2/2! + x^3/3! + x^4/4! + x^5/5!) (1 + x + x^2/2!) (1+x) (1+x)
Now, expand this polynomial and you get:
1 + 4x + 14x^2/2! + 43x^3/3! + 115x^4/4! + 266x^5/5! + 543x^6/6! + 987x^7/7! + 1512x^8/8! + 1512x^9/9!
In particular, the coefficient of x^4/4! is 115 and that's the answer to your second question.
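Those two expansions are easy to reproduce by machine; here is a Python sketch that multiplies the factors exactly (using fractions for the exponential generating function):

from fractions import Fraction
from math import factorial

def poly_mul(a, b):
    c = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

# One factor per distinct letter of POSSESSES: 5 S's, 2 E's, 1 P, 1 O.
selections = [Fraction(1)]
for m in (5, 2, 1, 1):
    selections = poly_mul(selections, [Fraction(1)] * (m + 1))
print(selections[4])                      # 12 selections of 4 letters

arrangements = [Fraction(1)]
for m in (5, 2, 1, 1):
    arrangements = poly_mul(arrangements, [Fraction(1, factorial(k)) for k in range(m + 1)])
print(arrangements[4] * factorial(4))     # 115 arrangements of 4 letters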
Incidentally, notice that, if you had an unlimited supply of the 4 letters, the above polynomial would become a series equal to the product of 4 identical series, each equal to exp(x). Therefore, the whole thing would simply be the series for exp(4x), namely:
1 + (4x) + (4x)^2/2! + ... + (4x)^n/n! + ...
This is one (devious) way to prove that there are 4^n arrangements of n letters from an "alphabet" of only 4. (4 choices for the 1st position, 4 for the 2nd, 4 for the 3rd, 4 for the 4th, etc.)
I encourage you to do the same thing to count basic selections (not arrangements) of n letters of 4 possible different kinds (whose supply is unlimited). What, then, is the coefficient of x^n in 1/(1-x)^4 ? [Answer: C(n+3,3)]
The answer is 30492. The approach introduced above shows that this is the coefficient of x^19 in the expansion of ( 1 + x + x^2 + ... + x^9 )^6.
In UBASIC: print coeff(((_X^10-1)\(_X-1))^6,19)
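In Python, the same coefficient can be extracted by multiplying the six polynomial factors explicitly (an illustrative sketch):

coeffs = [1]
for _ in range(6):                      # six factors (1 + x + ... + x^9)
    new = [0] * (len(coeffs) + 9)
    for i, c in enumerate(coeffs):
        for d in range(10):
            new[i + d] += c
    coeffs = new
print(coeffs[19])    # 30492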
Equivalently, you could ask about the number of ways to flip a coin n times without ever getting p consecutive tails. Same problem.
Let's call a bit sequence [of zeroes and ones] without p consecutive zeroes "satisfactory" and let F(n) be the number of such satisfactory strings of length n.
Since all sequences of length less than p are satisfactory, we have F(n) = 2^n, for n < p. On the other hand, if n is at least equal to p, a satisfactory sequence starts with exactly k zeroes (k = 0, 1, ... p-1), followed by a "one" bit and then any satisfactory sequence of length (n-k-1). Therefore, the following relation holds, which defines F(n) recursively:
F(n) = F(n-1) + F(n-2) + ... + F(n-p)
In particular, for p=2, we're counting the number of flip sequences where tails never occurs twice in a row, and obtain as F(n) the Fibonacci number of rank (n+2). For p=3, it's the so-called "Tribonacci" number of rank (n+3). For p=4, we have "Tetranacci" numbers, for p=5 "Pentanacci" numbers, etc. Note that, for p=1, F(n) = 1, because there's only one string of n bits without zeroes...
Number of strings of n bits without p consecutive zeroes : (A048887)
As there are 1024 equiprobable ways to flip a fair coin 10 times, the probability you'll do so without ever getting tails twice in a row is 144/1024 (about 14%).
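The recurrence is straightforward to evaluate by machine (a Python sketch):

def F(n, p):
    # number of n-bit strings with no run of p consecutive zeroes
    if n < p:
        return 2 ** n
    return sum(F(n - k, p) for k in range(1, p + 1))

print(F(10, 2), F(10, 2) / 2 ** 10)     # 144, about 0.14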
Losing sequences are endless. Winning sequences include:
H, THH, TTHHH, THTHHH, TTTHHHH, TTHTHHHH, ...
Theoretically, you never have to concede defeat: Even if you have already flipped 1000 tails, there's a very slim chance that you will flip 1001 heads in a row, starting with the next throw. In practice, it would be silly not to give up after accumulating 30 tails or so... As such an artificial bound tends to infinity, the probability of winning tends rapidly to a value of 0.7112119...
Let's analyze the situation without assuming that the coin is a fair one: p is the probability of heads and q = 1-p is the probability of tails.
We may bypass the elementary considerations of the above introduction by dealing directly with the uncountable set of infinite sequences of flips. The subset of all such sequences sharing a prescribed prefix consisting of n heads and m tails has a probability measure equal to p^n q^m.
Let's call P(n) the probability of the set of all strings that start with a winning string with n tails, stripped of its n+1 trailing heads. Such a string with n+1 tails is uniquely obtained by appending, to a string of the same type with n tails, a string consisting of at most n consecutive heads followed by one final tails. Therefore:
P(n+1) = P(n) ( q + pq + p^2 q + ... + p^n q ) = P(n) ( 1 - p^(n+1) )
So, P(0) = 1, P(1) = (1-p), P(2) = (1-p)(1-p^2), etc.
The probability of the set of winning strings with n tails being p^(n+1) P(n), we obtain the following expression for the overall probability of winning:
f (p) = Σ over n ≥ 0 of p^(n+1) P(n) = Σ over n ≥ 0 of p^(n+1) (1-p)(1-p^2)...(1-p^n)
The above probability of winning may also be expressed as:
f (p) = p [ 1 + p (1-p) [ 1 + p (1-p^2) [ 1 + p (1-p^3) [ 1 + p (1-p^4) [ ... ] ]]]]
As advertised, for a fair coin (p = ½) that expression is equal to:
f (½) = 0.711211904913397578721100278070769219911088...
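Numerically, the sum converges very quickly; a few lines of Python (an illustrative sketch) reproduce that value:

def heads_in_a_row_win_probability(p, terms=200):
    # f(p) = sum over n of p^(n+1) (1-p)(1-p^2)...(1-p^n), truncated after 'terms' terms
    total, partial_product = 0.0, 1.0
    for n in range(terms):
        total += p ** (n + 1) * partial_product
        partial_product *= 1 - p ** (n + 1)
    return total

print(heads_in_a_row_win_probability(0.5))   # 0.711211904913...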
This is the greatest site ever. I've forwarded your link to others to whom I've asked the same question.
Thanks for the kind words, Michael.
The above function f has an interesting expansion as a power series :
f (x) = x + x^2 - x^5 - x^7 + x^12 + x^15 - x^22 - x^26 + x^35 + x^40 - x^51 - x^57 + x^70 + x^77 - x^92 - x^100 + x^117 + x^126 - x^145 - x^155 + x^176 + x^187 - x^210 - x^222 + x^247 + x^260 - ...
The exponents of x are the generalized pentagonal numbers (A001318). The terms are grouped by pairs, with alternating signs. More precisely, the k-th pair consists of the exponents k(3k-1)/2 and k(3k+1)/2 and carries the sign (-1)^(k+1).
That formula yields a fast way to compute f (x) and a peculiar decimal expansion for f (0.1) that features only 4 digits (0, 1, 8 and 9) namely:
0.1099899000010009999998999900000000100000999999999989999990000000000001...
Incidentally: ½ = f ( 0.3705997158021455417170957127716712...)
If you only allow quarters, dimes, nickels and pennies, the answer is 242. If you also allow dollar coins and half-dollars, the answer is 293.
Note that the number of ways you can obtain a sum of 100 by adding 25,10,5, and 1 is the coefficient of x100 in the expression:
(1 + x^25 + x^50 + ...) (1 + x^10 + x^20 + x^30 + ...) (1 + x^5 + x^10 + x^15 + ...) (1 + x + x^2 + x^3 + x^4 + ...)
also equal to: 1 / [ (1-x^25)(1-x^10)(1-x^5)(1-x) ]
The practical way to do it is via a small computer program. In BASIC, this may be something as simple as the following (which prints "242"):
s = 0
FOR q = 100 TO 0 STEP -25
  FOR d = q TO 0 STEP -10
    FOR n = d TO 0 STEP -5
      s = s + 1
    NEXT n
  NEXT d
NEXT q
PRINT s
It's easy to change the above to account for half-dollars (the result is 292). Add the unique way corresponding to a single dollar coin for a total of 293.
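For instance, a sketch along the same lines (two extra loops, one for dollar coins and one for half-dollars) prints 293 directly:

s = 0
FOR w = 100 TO 0 STEP -100     ' amount left after the dollar coins
  FOR h = w TO 0 STEP -50      ' ... after the half-dollars
    FOR q = h TO 0 STEP -25    ' ... after the quarters
      FOR d = q TO 0 STEP -10  ' ... after the dimes
        FOR n = d TO 0 STEP -5 ' ... after the nickels (the rest is pennies)
          s = s + 1
        NEXT n
      NEXT d
    NEXT q
  NEXT h
NEXT w
PRINT s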
The above simple-minded program can be made far more efficient if we notice that there are 1+floor(p/5) ways to make change for an amount of p cents with nickels and pennies only (that's what the inner loop computes). The same idea is easy to extend to get rid of the next innermost loop by counting the ways to make change with pennies, nickels and dimes; there are (1+d)(1-d+n) such ways, where n=floor(p/5) and d=floor(p/10). The following program is indeed much faster for large values of p (well beyond p=100):
INPUT "Amount in cents p=";p s=0 FOR j = p TO 0 STEP -25 s = s + (1 + INT(j/5))*(1 - INT(j/5) + INT(j/10)) NEXT j PRINT s;"ways; with pennies, nickels, dimes & quarters"
It's more difficult to get rid of this "quarter counting" loop with a closed formula, but it can be done (see below). When we are faced with a large number of weird denominations it may not be practical to derive such closed form formulas. On the other hand, programs like the above tend to be too slow [their running time is roughly proportional to the result, which grows exponentially with the number of denominations]. A more efficient approach is based on the actual computation of the product of the polynomials described above. This may be done with a number of additions which is roughly proportional to the number of denominations, and also to the square of the amount p...
Given below is a BASIC implementation of that approach. As is, the program is written for pennies, nickels, dimes and quarters, but you may have different and/or additional denominations by changing only the first line of the program to define other coins (in terms of the number of pennies they stand for). Two polynomials are used in the form of arrays (A and B). For each new coin denomination, Polynomial B is updated to reflect the number of ways to make change with such coins and/or the previously considered ones, knowing that the (previously computed) Polynomial A tells how many ways change could be made without the new denomination... At the end, Array B contains all the answers for any amount p, up to the built-in limit M defined at the beginning of the program (M=200 is just an example here). It's then a simple matter to display interactively the answer B(p) for any desired amount p:
N=3: DIM c(N): c(1) = 5: c(2) = 10: c(3) = 25
M=200: DIM A(M),B(M)
FOR i = 0 TO M: B(i) = 1: NEXT i
FOR d = 1 TO N
  FOR i = 0 TO M: A(i) = B(i): B(i) = 0: NEXT i
  FOR j = 0 TO M STEP c(d)
    FOR k = 0 TO M-j
      B(j+k) = B(j+k) + A(k)
    NEXT k
  NEXT j
NEXT d
Main: INPUT "Amount in cents p=";p
PRINT B(p); "ways!"
GOTO Main
Let p be the amount in cents (¢) for which we have to make change. Let n be floor(p/5), which is the number of whole nickels (5¢) contained in that amount. Similarly, we'll call d, q, h and w the numbers of dimes (10¢), quarters (25¢), half-dollars (50¢) and whole dollars (100¢) contained in the amount p.
The number of ways to make change using only pennies and nickels is n+1, because all you have to do is decide on a number of nickels from 0 to n.
If you're allowed dimes as well, you first choose a number i of dimes from 0 to d, and are left with (p-10i) cents for which you must give change using only pennies and nickels. Since floor((p-10i)/5) is n-2i, you have n-2i+1 ways to do that. The total number of ways to make change is thus the sum of (n-2i+1) when i goes from 0 to d, which works out to be:
(n-d+1)(d+1)  ways to give change for p cents using pennies, nickels & dimes,
where n = floor(p/5) and d = floor(p/10).
The situation is more complicated when quarters are allowed as well:
If we use an even number 2j of quarters, we have to make change for p-50j cents without quarters and have (n-d-5j+1)(d-5j+1) ways to do so...
With an odd number 2j+1 of quarters, we're left with (p-50j-25) pennies. Since floor((p-50j-25)/10) can be shown to be equal to (n-d-5j-3), we have (d-5j-1)(n-d-5j-2) ways to complete the transaction.
When q = 2h, we sum up the first expression for j going from 0 to h and the second one for j between 0 and h-1. When q = 2h+1, on the other hand, we sum the second one also up to j = h, which gives formally an additional term equal to (d-5h-1)(n-d-5h-2) . Thus, we obtain an expression valid when q is either 2h or 2h+1 by adding (q-2h)(d-5h-1)(n-d-5h-2) to the expression obtained for q = 2h. The formidable result shows that there are 133333423333351000001 ways to make change for a million dollars in pennies, nickels, dimes and quarters:
(h/6) [ 100 h^2 + 6 d (2n-2d-1) - 15 h (2n-1) - 7 ]   +   (n-d+1)(d+1)   +   (q-2h)(d-5h-1)(n-d-5h-2)
Eliminating h [using q = 2h + (q mod 2) ] we obtain the following expression. (The last term is 0 or ±1/8 and may thus be discarded if the result is rounded to the nearest integer.)
(q/24) [ 50 q^2 + 12 d (2n-2d-1) - 15 q (2n-1) - 14 ]   +   (n-d+1)(d+1)   +   (q mod 2) [ 2(n-2d) - 1 ] / 8
This formula was worked out and posted here (immediately) on February 8, 2004. This is the earliest source for it. The relevant permanent link is http://www.numericana.com/answer/counting.htm#uschange.
How many ways to make change for a googol dollars? (30:39) by Burkard Polster (Mathologer, 2021-01-23).
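As a sanity check, the following BASIC sketch (with variable names as in the derivation above) compares that closed formula against the simple quarter-counting loop, for every amount up to 500 cents; it should report no mismatch:

FOR p = 0 TO 500
  n = INT(p/5): d = INT(p/10): q = INT(p/25)
  f = (q/24) * (50*q*q + 12*d*(2*n-2*d-1) - 15*q*(2*n-1) - 14)
  f = f + (n-d+1)*(d+1) + (q MOD 2) * (2*(n-2*d) - 1) / 8
  f = INT(f + 0.5)               ' round, to absorb floating-point error
  s = 0
  FOR j = p TO 0 STEP -25        ' reference value, from the quarter-counting loop above
    s = s + (1 + INT(j/10)) * (1 - INT(j/10) + INT(j/5))
  NEXT j
  IF f <> s THEN PRINT "Mismatch at p ="; p
NEXT p
PRINT "Done."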
A simple efficient program can be quickly designed as above, which gives foolproof answers when N is less than the size M of a reasonable array.
Mint=7: DIM c(Mint)
c(1)=2: c(2)=5: c(3)=10: c(4)=20: c(5)=50
c(6)=100: c(7)=200
M=200: DIM A(M),B(M)
FOR i = 0 TO M: B(i) = 1: NEXT i
FOR d = 1 TO Mint
  FOR i = 0 TO M: A(i) = B(i): B(i) = 0: NEXT i
  FOR j = 0 TO M STEP c(d)
    FOR k = 0 TO M-j
      B(j+k) = B(j+k) + A(k)
    NEXT k
  NEXT j
NEXT d
Main: INPUT "Amount in pence N=";N
PRINT B(N); "ways!"
GOTO Main
However, mathematics is the art of patterns and we may want to go after the same type of closed formulas which were derived for American money at the end of the previous section. That would allow values of N in the zillions.
To enforce a better computational discipline, we shall consider that the supply of various coins is limited. Let's introduce the notation:
N_i = min { floor(N/i) , number of available i-coins }
Assuming N_1 = N, there's just one way to make change for N with pennies only, isn't there? (With N_1 < N, there would be no way to make change.)
Using 1 and 2 denominations (pennies and tuppences) you may choose to use from 0 to N_2 tuppences and the balance in pennies only. There are 1+N_2 ways to do so (still assuming N_1 = N).
If you also have shillings (5p, five pence coins) you may use any number s of them, between 0 and N_5. You are then left with an amount N-5s for which the previous formula applies. So, the total number of possibilities is the sum, for s running from 0 to N_5, of the previous count applied to the remaining amount N-5s.
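Assuming an ample supply of every coin, that sum is easily evaluated by a short BASIC sketch in the style of the earlier programs:

INPUT "Amount in pence N="; N
s = 0
FOR j = N TO 0 STEP -5         ' j = amount left after the 5p coins
  s = s + 1 + INT(j / 2)       ' 1 + floor(j/2) ways with pennies and tuppences
NEXT j
PRINT s; "ways with pennies, tuppences and 5p coins"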
There are 252 decreasing sequences of 5 digits (and 2002 nonincreasing ones).
To build such a sequence, you must choose 5 digits among the 10 possible ones. Once you've done this, you've got no choice but to lay them out in decreasing order. Therefore, this can be done in C(10,5)=252 ways.
If you were interested in a nonincreasing sequence instead, you would allow several adjacent copies of the same digit. This is only slightly more difficult to count; there are C(14,5)=2002 possibilities.
Why is that? Well, consider a row of 9+5=14 squares of which 5 are "occupied" and 9 are "empty". In each occupied square, just write the total number of empty squares to its right. The 5 numbers you wrote are between 0 and 9 and their sequence is nonincreasing. Conversely, each nonincreasing sequence of 5 digits may occupy (under the above rules) 5 of 14 such squares in a unique way; place them right to left to see that you have no choice in the matter. Therefore, there are just as many nonincreasing sequences of 5 digits as there are ways to pick 5 squares among 5+9. That's C(14,5)=2002.
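Both counts are small enough to confirm by brute force; here's a BASIC sketch (not needed for the argument) which simply enumerates all 100000 strings of 5 digits:

s1 = 0: s2 = 0
FOR a = 9 TO 0 STEP -1
 FOR b = 9 TO 0 STEP -1
  FOR c = 9 TO 0 STEP -1
   FOR d = 9 TO 0 STEP -1
    FOR e = 9 TO 0 STEP -1
     IF a > b AND b > c AND c > d AND d > e THEN s1 = s1 + 1
     IF a >= b AND b >= c AND c >= d AND d >= e THEN s2 = s2 + 1
    NEXT e
   NEXT d
  NEXT c
 NEXT b
NEXT a
PRINT s1; "decreasing,"; s2; "nonincreasing"

It prints 252 and 2002, as expected.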
Because of the musical example, we may call an element of 2^G/G a scale. Mathematically, the empty scale is the equivalence class containing only the empty set. The dubious musical counterpart would be the scale of pure silence.
This entails the equivalence relation which says that two subsets A and B of G are equivalent when the following holds (using Minkowski addition):
∃ x ∈ G ,   A + x = B
A fundamental remark is that if #A is coprime with the order m of G, then there are exactly m sets equivalent to A. Otherwise, this is not necessarily so.
An n-tonic key is related to exactly 12 other keys by transposition and to at most n keys by modal change. Some pairs of keys are related both ways.
Musicians usually call this a "key", as in C-major or A-minor (respectively abbreviating "major mode in the key of C" or "minor mode in the key of A"). However, because they use "key", "scale" or "mode" almost interchangeably, I prefer to use slightly unusual locutions which are unambiguous. For example, I view the major and minor modes as modes of the diatonic scale, which is an unambiguous term (there are seven such modes, commonly called modes of the major scale). The present discussion requires precision. You may go back to an approximate jargon later, if you must.
One way to choose a rooted mode is to first choose a tonic, then pick some of the 11 notes at most 11 semitones above it.
This method shows that there are just 2^11 = 2048 rooted modes in a given key (that's indeed the number of subsets contained in a set of 11 elements).
The tonic (root) itself could be tuned to any frequency, standard or not. This is utterly irrelevant to this first part of our discussion. Most humans can only perceive musical intervals. Very few of us ever had perfect pitch (an ability which is always lost gradually by the age of 50 or so, anyway).
Included in this count are some pathological scales like the trivial scale, which contains only the tonic itself. It's arguably the scale played by a single percussion element, always at the same pitch or lack thereof (e.g., a triangle).
Musicians classify a scale according to the number n of tones in it (including the root), using standard numerical prefixes. The number of rooted n-tonic modes is just a straight choice number: C(11, n-1).

Those numbers form the eleventh line of Pascal's triangle. We have:

1, 11, 55, 165, 330, 462, 462, 330, 165, 55, 11, 1   (for n = 1 to 12)

whose sum is 2^11 = 2048.
The binary code of a chromatic scale is a string of 12 bits (binary digits; 0 or 1) starting with a leftmost "1" corresponding to the root note, which is always present. For example the common C-major scale in the Western tradition is 101011010101 (you'll recognize this pattern as the layout of the seven white keys (1) and the five black keys (0) on a piano).
The more compact interval structure of a chromatic scale is obtained from its binary code by listing for each "1" bit the number of bit positions between itself and the next "1" bit to its right. Thus, in the example of the major scale (binary code 101011010101) we obtain the interval structure 2212221, knowing that to the right of the rightmost bit in any binary code, a "1" bit is always understood.
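Here's a small BASIC sketch (the trailing "1" bit is supplied explicitly, as explained above) which reads the interval structure off a binary code:

code$ = "101011010101"            ' 12-bit code of a scale; the leftmost "1" is the root
gap = 0: struct$ = ""
FOR i = 2 TO 13                   ' position 13 stands for the implied "1" after the last bit
  gap = gap + 1
  IF i = 13 THEN bit$ = "1" ELSE bit$ = MID$(code$, i, 1)
  IF bit$ = "1" THEN struct$ = struct$ + LTRIM$(STR$(gap)): gap = 0
NEXT i
PRINT struct$                     ' prints 2212221 for the major scale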
English-speaking musicians often read the interval structure of a musical scale using the locutions "half-step", abbreviated H, for 1 (1 semitone is a half-step) and "whole step", abbreviated W, for 2 (2 semitones make a whole step). Alternately, some use "semitone" (S) for 1 and "tone" (T) for 2. They may abbreviate the above interval structure of the major scale as either WWHWWWH or TTSTTTS.

That system breaks down for scales featuring intervals of 3 semitones or more (the most prominent of those is the harmonic minor scale). A one-character abbreviation for an interval of three semitones is needed. Some people use "+", others use "WH", which requires the abbreviations of intervals to be separated by commas or dashes to lift the obvious ambiguity. I can't even suggest abbreviating the proper scientific name for "one tone and a half", namely sesquitone, as "S", because that wouldn't help the tone/semitone people.

What I recommend for any extensive discussion of musical scales, following Francesco Balena and others, is to retain the above numerical version of the interval structure, which is just as compact and easy to read as its alphabetical counterparts, if not more so. Its natural extension to the larger intervals of exotic scales requires little or no explanation. For a few pathological scales, we may need a single-character digit following "9". This is normally "A"; we can't possibly need "B" (think about it). However, "0" is also acceptable in this context, because it's otherwise unused and more widely recognized as a "digit". That would be my recommendation and that's what I will use in Numericana if/when the need arises. Thus, in our scale structures, "0" stands for "ten".
Musicians call a rooted scale a key and routinely print it on sheet music. Composers may put the keys in their titles, often followed by an identifying number for prolific composers who wrote several. (E.g., Many works of Mozart are called "Piano Concerto in C major".)
In music theory, we study scales independently of roots. A scale is the n-th mode of another if it includes the same tones but starts at the n-th note (n-th degree of the base scale). For example, the natural minor scale is just mode VI of the major scale (that's the reason why A-minor and C-major use just the white keys on the piano). Conversely, the major scale is the third mode of the minor scale. If two heptatonic scales are modes of each other and the latter is the k-th mode of the former (k > 1), then the former is the (9-k)th mode of the latter. (The formula is n+2-k for n-tonic scales.) Roman numerals are used to identify the modes of an n-tonic scale, starting with I for the scale itself, without offset.
For most exotic scales, nobody ever bothers to give a different name to a scale and its first mode. In the fundamental example of the major scale however, nomenclature is clarified by calling the first mode Ionian.
If we were not prevented from doing so by tradition and daily usage, it would be better to identify a mode of an n-tonic scale by its offset, from 0 to n-1. Offsets are simply added modulo n. This makes mincemeat of rarely-asked questions like: In a heptatonic scale (n=7) what's mode VI of mode II of mode IV? The answer is III = 2+1 because:
(6-1) + (2-1) + (4-1) = 9 = 2 (mod 7)
Important modes are often called scales by themselves. Their modes are the same as those of the original scale, with a different numbering. The most basic example is the natural minor scale (called Aeolian when considered as the sixth mode of the major scale). Another example is the Double-harmonic minor scale (Hungarian minor or Gypsy minor scale) which is the fourth mode of the Double-harmonic major scale (Byzantine or Gypsy major scale).
When n is coprime with 12, an n-tonic scale (in particular pentatonic or heptatonic) can't be invariant by a nontrivial transposition (i.e., by a number of semitones not divisible by 12) and it will have exactly n modes. Otherwise, some n-tonic scales will have fewer than n modes (that's the case for some hexatonic or octatonic scales).
Unfortunately, most practicing musicians pay little or no attention to the purity of their jargon and many of them will sometimes use words like scale, key, root and mode almost interchangeably. This makes teaching difficult, especially when some teachers don't even attempt to clear the confusion.
The above enumeration is of relatively little help. Instead, we must first recognize that keys (rooted scales) which are modes of the same heptatonic scale are played with the same keys on the piano. For example, C-Major, A-Minor and F-Lydian are played using only the white keys.
There are C(12,7) = 792 different ways to choose a set of 7 pitch-classes among the 12 possible ones. Each can be transposed in exactly 12 different ways within the same scale. Therefore, there are 792/12 = 66 different heptatonic scales, one of which is the major scale (the diatonic scale, of which the natural minor scale is just a mode).
This way of counting only applies when all n-tonic scales have exactly n modes, which is only the case when n is coprime with 12, as remarked above. In this case, not surprisingly, we find as many scales as there are rooted scales.
For other values of n (2,3,4,6,8,9,10) only the first step of the previous enumeration holds: There are always C(12,n) ways to pick n≥1 chromatic tones. That's a total of 4095 possible nonempty sets of allowed tones.

4095 = 2^12 - 1
Those can be classified as follows, as explained below:

Number of tones (n) :     1    2    3    4    5    6    7    8    9   10   11   12
Distinct n-tonic scales : 1    6   19   43   66   80   66   43   19    6    1    1

That's a total of 351 nonempty scales. The above results can be obtained as follows:
When n is 1, 5, 7 or 11, there are as many scales as rooted scales. Done.
When n = 2, there are 5 scales with 2 modes each and a single symmetrical scale with only one mode, for a total of 6 different ditonic scales. In the symmetrical scale, the two tones are separated by an interval of 6 semitones (a tritone) which is notoriously unpleasant.
Come back later, we're still working on this one...
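In the meantime, the whole classification can be checked by brute force. The following BASIC sketch encodes a pitch-class set as a 12-bit mask (as in the binary codes above), keeps the smallest of its 12 rotations as the representative of a scale, and tallies scales by number of tones; it should print the counts 1, 6, 19, 43, 66, 80, 66, 43, 19, 6, 1, 1:

DIM cnt(12)
FOR m = 1 TO 4095                           ' every nonempty set of pitch classes
  r = m: small = m
  FOR k = 1 TO 11                           ' find the smallest of the 12 rotations
    r = ((r * 2) MOD 4096) + (r \ 2048)     ' rotate the 12-bit mask by one semitone
    IF r < small THEN small = r
  NEXT k
  IF small = m THEN                         ' count each scale (orbit) exactly once
    n = 0
    FOR b = 0 TO 11
      IF (m AND 2 ^ b) <> 0 THEN n = n + 1  ' n = number of tones in the set
    NEXT b
    cnt(n) = cnt(n) + 1
  END IF
NEXT m
FOR n = 1 TO 12: PRINT n; cnt(n): NEXT n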
Messiaen coined this term to indicate that some scales are invariant by a limited transposition of less than 12 semitones. This is crucial in the proper enumeration of scales and their modes.
Come back later, we're still working on this one...
Synthetic scales (1907) | Synthetic modes by Ferruccio Busoni (1866-1924).
Limited Transpositions (1944) by Olivier Messiaen (1908-1992).
Olivier Messiaen's Modes of Limited Transposition (15:37) by Rick Beato (2017-04-08).
The Scale Omnibus: 399 scales in 12 keys by Francesco Balena (2014-06-08).
AllTheScales.org by William Zeitler (1954-).
A study of scales by Ian Ring.
1. There are C(52,4) equiprobable ways of choosing 4 positions for the aces in the (ordered) piles and 13^4 ways of choosing one position in each pile. The probability that each pile has one ace is thus 13^4/C(52,4) = 2197/20825, or about 0.105498199... just under 10.55%.
2. There are C(100,10) ways of picking 10 ICs and C(25,2) C(75,8) ways of picking 2 bad ones and 8 good ones, for a probability of C(25,2) C(75,8) / C(100,10) = 11002861125/37631107514, or about 29.24%.
3. The probability of exactly k passengers showing up is C(370,k) (0.95)^k (0.05)^(370-k). Add these up from k=351 to k=370 and you'll get the probability that more than 350 passengers will show up, which is just under 60.7264% (0.6072639447).
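If you'd rather not sum that binomial tail by hand, here's a BASIC sketch (double-precision variables, binomial coefficients built incrementally) that does it:

p# = 0.95: total# = 0
FOR k = 351 TO 370
  c# = 1                            ' C(370,k), built as the product of (k+i)/i for i = 1 to 370-k
  FOR i = 1 TO 370 - k
    c# = c# * (k + i) / i
  NEXT i
  total# = total# + c# * p# ^ k * (1 - p#) ^ (370 - k)
NEXT k
PRINT total#                        ' about 0.6072639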
11 / 221 is your answer:
In a 52 card deck, there are 12 face cards. There are C(52,2) equiprobable combinations of 2 cards among which only C(12,2) are pairs of face cards. The probability of getting a pair of face cards is thus C(12,2)/C(52,2) which is (12 × 11)/(52 × 51) = 11/221, or about 0.04977... In other words, there are "11 chances in 221" (exact result); a probability of about 4.977%.
If you were to put the first card back in the deck before the second draw, the probability of getting a second face card would be higher, since you'd improve the chances of getting a "good" card on the second draw whenever you were successful the first time around. (The result would be the probability of getting a face card twice in two independent events, namely (3/13)^2, or about 5.325%.)
11025 for a square grid of side N=14. This has been asked/answered many times for other values of N (N=8 for a chessboard).
My solution is to count the number of possible diagonals of such rectangles: Each diagonal is uniquely determined by a pair of extremities. The number of such pairs is the product of (N+1)^2 (choices for the first point) and N^2 (choices for the second point, not on the row or the column of the first) divided by 2 (because there are two ordered pairs for every unordered pair). As there are 2 diagonals per rectangle, the above product is simply twice the number of rectangles. Thus, the number of rectangles is N^2 (N+1)^2 / 4, which is 11025 for N=14 (and 1296 for N=8, by the way).
The answer is 204 for a regular 8 by 8 checkerboard. It's N(N+1)(2N+1)/6 for an N by N checkerboard. Here's one way to prove this:
A square of size P is uniquely determined by its lower left corner, which may be picked anywhere on the (N-P+1) by (N-P+1) grid at the lower left of the chessboard. The total number of squares is therefore N^2 + (N-1)^2 + (N-2)^2 + ... + 2^2 + 1^2, which is equal to N(N+1)(2N+1)/6. Namely, for N = 8:

64 + 49 + 36 + 25 + 16 + 9 + 4 + 1 = 204
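Both formulas are easy to check by brute force; here's a BASIC sketch which enumerates all sub-rectangles of an N by N board by their corners:

INPUT "N="; N
r = 0: s = 0
FOR x1 = 0 TO N - 1                ' lower-left corner (x1,y1) on the grid of lattice points
 FOR y1 = 0 TO N - 1
  FOR x2 = x1 + 1 TO N             ' upper-right corner (x2,y2)
   FOR y2 = y1 + 1 TO N
    r = r + 1
    IF x2 - x1 = y2 - y1 THEN s = s + 1
   NEXT y2
  NEXT x2
 NEXT y1
NEXT x1
PRINT r; "rectangles, including"; s; "squares"

For N = 8, it prints 1296 rectangles and 204 squares; for N = 14, 11025 rectangles.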
Philip Brocoum presents this as a warm-up exercise which was actually practised by his former improv troupe at MIT, until nobody screamed. (2006-01-26)
As each of the 10 people has 3 choices, there are only 3^10 = 59049 equiprobable possibilities. By direct counting, only 4406 of these involve no direct "eye contact". So, the probability (p) that no one will scream is about 7.46 %. The expected number (1/p) of times it would take to repeat the experience until nobody screams is thus slightly more than 13.4.
For an even number (n) of people, there are 3^n scenarios. The number of those for which nobody screams is given by the second row of the following table, whose third row gives the expected number of tries needed to obtain a silent one.

People (n) :           4       6       8      10
Silent scenarios :    30     156     826    4406
Expected tries :     2.7    4.67    7.94   13.40
Brocoum's problem can be usefully generalized to Hamiltonian cubic graphs. For Brocoum's original poser, Max Alekseyev found (on 2008-06-13) that the number of silent configurations for a circle of 2n people (A141221) is given by the following recurrence relation, for n>1 (see the full proof elsewhere).
a(n+4) = 8 a(n+3) - 16 a(n+2) + 10 a(n+1) - a(n)
It's also fully defined by a third-order recurrence (cf. A141385):

a(n+3) = 7 a(n+2) - 9 a(n+1) + a(n) - 2,   starting with   a(2) = 30 ;  a(3) = 156 ;  a(4) = 826
The sequence is best started at n=2 (a circle of 4 people) as the rules of the game do not really apply to a "circle" of only two people. (Alekseyev's analysis does not apply to a circle of less than 4 people either.)
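The count of 4406 quoted above can be reproduced by brute force, assuming (as the generalization to cubic graphs suggests) that each person's three choices are the left neighbor, the right neighbor and the person directly across. A BASIC sketch:

np = 10: total = 0
DIM t(np - 1): DIM ch(np - 1)
FOR m = 0 TO 3 ^ np - 1                  ' each scenario is a number in base 3
  x = m
  FOR i = 0 TO np - 1: ch(i) = x MOD 3: x = x \ 3: NEXT i
  FOR i = 0 TO np - 1                    ' t(i) = the person that i looks at
    IF ch(i) = 0 THEN t(i) = (i + 1) MOD np
    IF ch(i) = 1 THEN t(i) = (i + np - 1) MOD np
    IF ch(i) = 2 THEN t(i) = (i + np / 2) MOD np
  NEXT i
  silent = 1
  FOR i = 0 TO np - 1
    IF t(t(i)) = i THEN silent = 0       ' mutual eye contact: somebody screams
  NEXT i
  total = total + silent
NEXT m
PRINT total                              ' prints 4406 for np = 10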
In all questions that involve random variables which may assume a continuous range of values, we must carefully specify the probability distribution involved. This is often more delicate than in the discrete case. Here, it suffices to say that if 0<a<b<1, a "random number" between 0 and 1 falls between a and b with probability b-a. This is not a statement to be proven, it's a description of the Lebesgue probability measure on the unit segment (the so-called uniform probability distribution), which we choose to assume. With another choice of distribution, we would obtain other results. That's all.
Well, with this distribution, the average distance from a random point in a segment to either of its extremities is half the length of the segment. [The reader is encouraged to prove this.] If we consider a fixed point x on the unit segment [0,1], a random point y will be less than x with probability x, and it will be more than x with probability (1-x). The distance between x and y averages x/2 in the former case, and (1-x)/2 in the latter. The overall average distance to point x is obtained as the weighted average of the two cases (each case is weighted according to its own probability). It is thus equal to:
x^2/2 + (1-x)^2/2  =  x^2 - x + 1/2  =  d/dx [ x^3/3 - x^2/2 + x/2 ]
If x is a random number uniformly distributed between 0 and 1, the average of the above quantity is equal to its integral from 0 to 1, namely 1/3. Therefore:
The average distance between two points [uniformly] picked at
random on a segment is one third the length of the segment.
A nice introduction to continuous random variables and/or elementary calculus...
It's much more difficult to work out the average distance between two random points on a unit square, given by Mark R. Diamond (2001-12-11):
[ 2 + √2 + 5 ln(1+√2) ] / 15  =  0.52140543316472067833...
Video : Distance between Two Random Points in a Square by Presh Talwalkar (2016-07-03).
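Both averages lend themselves to a quick Monte-Carlo check (a sketch, not a proof; RND is BASIC's uniform generator on [0,1)):

RANDOMIZE TIMER
n& = 200000: s1# = 0: s2# = 0
FOR i& = 1 TO n&
  s1# = s1# + ABS(RND - RND)                ' distance between two random points of a segment
  dx# = RND - RND: dy# = RND - RND
  s2# = s2# + SQR(dx# * dx# + dy# * dy#)    ' distance between two random points of a square
NEXT i&
PRINT "Segment:"; s1# / n&, "Square:"; s2# / n&

The averages printed should be close to 1/3 = 0.3333... and 0.52140..., respectively.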
Since all points on the surface of a sphere of radius R are equivalent, the average distance between two random points is the same as the average distance from a fixed point to a random one (assuming, of course, that the probability of a set is proportional to its spherical area).
π R / 2
HINT : That result is obtained immediately by noticing a symmetry about the equator (which lies at a distance πR/2 from either pole) as far as distances to the pole are concerned.
4 R / 3
Let's assume that the random starting position is uniformly distributed within the unit square. This is to say that the probability that the random point is within some patch is proportional to the area of that patch (actually, the probability is equal to the area since the total area of the square is unity).
Let's first assume that we are only allowed to jump along a direction at an angle θ from a side of the square (WLG, θ is between 0 and 45°). A jump of length D stays within the unit square when the starting point is in a rectangle whose area is:
( 1 - | D cos θ | ) ( 1 - | D sin θ | )
That quantity is thus the probability that we stay within the square when the direction of the jump is prescribed. If the direction is random (isotropically distributed) the desired probability is the average of the above, namely: